Thursday, June 2, 2011

Are Computers Isomorphic to Humans?

      Let me preface this essay by admitting that artificial intelligence is an old problem, and acknowledging that better minds have attacked the question before me (and certainly better minds will come after). Hence I can in no way expect to resolve the query at hand, only to jot down a few thoughts on the matter, seeking to move myself a bit closer to understanding truths which may ultimately be beyond human reach. Perhaps it is just as well that most “objective truth” is, at the end of the day, of this nature – for it may well be that we find meaning only by searching for it. Since I have probably crushed my own credibility enough on the matter at hand, 'tis high time to move on to actually getting lost in the fog, eh?
      No discussion of artificial intelligence would be complete without mentioning the ill-fated Alan Turing. His test, proposed at the very dawn of the computer era, is nonetheless still one of the most respected procedures for determining whether or not a computer has “intelligence.” If you are not familiar with the Turing Test, I will try to explain it concisely: a judge reads typewritten messages from two subjects (one a computer, one a human) and attempts to distinguish which subject is the computer and which the human. Supposedly, if the judge guesses wrong about half the time or more, we can at that point say that computers have intelligence, or something close enough to it. Now granted, this idea for a test is not particularly novel (when one thinks of proving a thing sentient, what could be more natural than to compare the thing against himself?), but it has served as a good base to build thought experiments from. And, hey, it has become something of a tradition to hold such tests as a sideshow at computer conventions, especially those involving artificial intelligence. Anyway, let us move on to discussing some of the theoretical conclusions arrived at in attempting to build machines that could pass the test, as well as some of the problems inherent to the approach.
      First and foremost, the kinds of answers a computer might give back to user input, as Turing hypothetically considered them, are pretty much as relevant today as they were in his time. Certainly, a subject with the capacity to do arithmetic correctly at lightning speed would arouse suspicions that said subject was of more metallic origins. Yet, as Turing pointed out, and as anyone familiar with programming “easier” settings for video games could tell you: it is quite simple to program a computer to occasionally give wrong answers, and to wait any given amount of time before replying. So clearly, making a computer behave “as poorly” at math as your typical human is not particularly difficult – and having learned this, we have largely set such behaviors aside as irrelevant to determining how intelligent computers really are. Indeed, we were probably barking up the wrong tree from the start by trying to define intelligence as the ability to make mistakes. Or perhaps I should say “appear to make mistakes.” More on that in a bit.
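The engineered-fallibility point is easy to make concrete. Below is a minimal Python sketch of my own (an illustration, not anything from Turing); the function name and the error_rate and max_delay parameters are arbitrary inventions:

```python
import random
import time

def humanlike_arithmetic(a, b, error_rate=0.1, max_delay=0.0):
    """Return a + b, but occasionally give a plausibly wrong answer and
    pause before replying, as a human might. Toy illustration only."""
    time.sleep(random.uniform(0, max_delay))  # simulate "thinking" time
    if random.random() < error_rate:
        # an off-by-a-little mistake, never off by zero
        return a + b + random.choice([-2, -1, 1, 2])
    return a + b
```

With error_rate set to zero the answers are perfect; raise it and the subject starts to look fallibly human, which is exactly why such behavior tells the judge nothing.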
For now, let us consider how a computer might answer a question of aesthetic beauty. Suppose you showed the computer a painting and asked if it were beautiful or not. What kind of answer would give the computer away? To paraphrase the author of the main text we have been reading in this class (Gödel, Escher, Bach by Douglas Hofstadter), does the computer have a large enough soul to appreciate Bach? We could simply give the computer a learning algorithm (such as a Support Vector Machine or a Neural Network) and a ton of examples of various types of music and artwork, letting it form a rough aesthetic scale for itself. The computer could then turn around and spit out an answer about where on the scale a given piece of art fell, and have it be a rough enough consideration to pass muster with our Turing Test judge. (For a rough estimate is all humans really give about such a nebulous thing as “beauty” anyway). What other hurdles must a computer surmount to pass a Turing Test?
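To sketch the learn-an-aesthetic-scale idea without committing to a real SVM or neural network, here is a toy Python stand-in: a nearest-centroid scorer over invented numeric features (say, symmetry and color contrast). Everything here – the features, the ratings, the scoring rule – is a made-up placeholder for the real machinery:

```python
def train_scale(examples):
    """examples: list of (feature_vector, rating) pairs, rating in [0, 1].
    Returns a scoring function -- a crude 'aesthetic scale' that rates a
    new piece by how close it sits to the liked vs. disliked centroids."""
    liked = [f for f, r in examples if r >= 0.5]
    disliked = [f for f, r in examples if r < 0.5]

    def centroid(vecs):
        n = len(vecs)
        return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

    c_like, c_dislike = centroid(liked), centroid(disliked)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def score(features):
        d_like = dist(features, c_like)
        d_dislike = dist(features, c_dislike)
        # 1.0 means "as beautiful as the liked examples", 0.0 the opposite
        return d_dislike / (d_like + d_dislike + 1e-12)

    return score
```

Rough as it is, a scorer of this sort already gives the vague, graded judgments about “beauty” that a Turing Test judge would expect from a human.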
      How about language and communication? Surely a computer that gave responses sounding repetitive or canned would give itself away. Or, alternatively, a computer that could not piece together the intended meaning of a given phrase could never return useful commentary on that phrase, never mind a meaningful response to the message. Yet, assuming our judge has an infinite amount of time to keep asking questions (English providing a framework for forming an infinite number of different meaningful sentences), it stands to reason that a computer can only have a finite number of potential responses. And, while a meaningful sentence-forming algorithm seems possible hypothetically, humanity has yet to produce one that actually works well enough to imagine it fooling anyone for very long. Likewise, we have yet to produce a parsing algorithm guaranteed to parse any and every meaningful sentence, which is best illustrated by the fact that we still code in “Do What I Say” mode instead of “Do What I Mean” mode. Which is to say, programming languages are still not in the same category as natural language.
      Now, one could write really complex code such as what Mr. Hofstadter demonstrated in his dialogue between the MIT graduate student and his pet AI. But I submit that even code like that would have problems dealing with an infinite number of different queries from a judge. For a more in-depth example, consider compilers. Compilers require their input to fit specific syntactic rules, along with a small number of semantic constraints. The only way to guarantee a compiler could interpret, and take appropriate action on, any given string would be to allow it to “look ahead” at upcoming symbols in that string without bound – but any real implementation has only a limited amount of lookahead available. At the very least, I cannot say I have seen an algorithm capable of either interpreting or generating an infinite number of different meaningful phrases. Or more simply, there is no algorithm that can keep it up indefinitely. At some point the computer will produce an unnatural enough sentence or phrase, or respond in a completely unexpected way to a misinterpreted sentence, and be found out by our judge.
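The limited-lookahead point can be made concrete with a toy recursive-descent parser in Python: every parsing decision below commits on a single token of lookahead (LL(1)), and anything outside its tiny grammar is simply rejected. This is my own illustrative sketch, not a claim about any particular compiler:

```python
import re

def parse_expr(text):
    """Evaluate integers, +, *, and parentheses with an LL(1)
    recursive-descent parser: each decision peeks at one token only."""
    tokens = re.findall(r"\d+|[+*()]", text)
    pos = 0

    def peek():  # the single token of lookahead
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError("unexpected token: %r" % (tok,))
        pos += 1
        return tok

    def expr():  # expr -> term ('+' term)*
        value = term()
        while peek() == "+":
            eat("+")
            value += term()
        return value

    def term():  # term -> factor ('*' factor)*
        value = factor()
        while peek() == "*":
            eat("*")
            value *= factor()
        return value

    def factor():  # factor -> '(' expr ')' | number
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    result = expr()
    if peek() is not None:
        raise SyntaxError("trailing input")
    return result
```

A judge restricted to this grammar could be served forever; a judge free to say anything at all cannot, which is the asymmetry the paragraph above leans on.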
      Of course, all this discussion might be quite moot, because much of what we just asked a computer to do, a human may have just as much trouble doing. Does the phrase “lost in translation” ring a bell? Even when two perfectly healthy humans are speaking the same language, with words both know very well, it is not so uncommon to encounter misunderstandings and barriers to conveying meaning and intent through completely human means of discussion. So, if we are trying to make computers human-like, why even bother with considering linguistic operations past some decently approximate level? Perhaps we have lost sight of our goal: to write an artificial intelligence capable of fooling a human into believing that the artificial intelligence is not artificial after all. Well, sure enough, we can imagine a finite set of human-tailored responses the computer could use to fool a human over a short term, and even something more along the lines of a learning program might be used to good effect if the judge queries only within a particular subset of language. But on the other hand, if we remove the constraints upon the judge's time and number of queries, it is virtually impossible to indefinitely fool the judge into believing the computer's responses and requests might have arisen from a human.
     So, perhaps you were wondering when and if I would ever pick back up that thread about the “appearance” of a computer making mistakes. Well, I suppose I have kept you waiting long enough. It is my opinion that, while trying to satisfy the Turing test has steered computer science in the direction of building better and better artificial intelligence in the truest sense of artificial, it is quite a pointless venture to begin a journey towards creating intelligence by trying to “fool” a human into thinking a computer is a human. In fact, if our goal is to produce intelligence equivalent to what we have as humans, we should be far more concerned with the implementation than anything it actually does. Who cares if this electrical brain can play checkers; is it self-conscious? Does it make decisions in a non-deterministic way? Does it have a will?
     But here we have encountered a bit of circular logic. For here we are asking whether computers can perform in a way we are not actually sure humans do. We need to ask whether thought really is just a higher-level representation of neuron firings, and not something more. Without the notion of free will, all behavior is simply a set of programmed responses to the environment, based upon some unique combination of genes and how our learning algorithms adapted themselves to our environments over time. In such a case the Turing test would actually be quite appropriate, because our own sense of ourselves as free-willed and non-deterministic beings would be utterly delusional. If we accept our own sentience in these terms, we can quite easily answer our original query (whether humans and computers are isomorphic) in the affirmative, but can we reason our way to such a conclusion? Perhaps, in the same way Escher's Dragon cannot actually become three-dimensional, and analogous to the way Gödel proved any formal system sufficiently strong to reason about itself must always be incomplete (“incomplete” meaning there are well-formed propositions the system can make which cannot be decided within the system), we humans cannot find our way to a decision about whether we are sentient using a system of predicate logic. Granted, we cannot even prove that the aforementioned question is undecidable at this juncture, but I could hardly do justice to you, the reader, if I left us wandering through the fog without expressing any kind of solid ground to stand upon. So humor me in accepting the question as currently undecidable.
     Now we are allowed to do one of three things. We could leave the question as undecidable, and appreciate the zen of it all from a distance. We could accept that humans have sentience in the frame of context we have been discussing (having free will, among other concepts intentionally left somewhat nebulous). Lastly, we could accept the contrary notion, that humans are not sentient. I am going to ignore the first option because taking that route would leave us adrift in the same murky waters we were in before accepting the question was undecidable. Given that the immediate consequences of accepting the notion that humans are not sentient are quite depressing (essentially it would mean accepting a completely deterministic world, where any notions of individuality, achievement, right and wrong, etc., are wholly delusional), I think I will hold off on accepting that conclusion. Where does this leave us? Accepting that humans have a brand of intelligence that includes free will. If we accept that, then we have pretty much answered our fundamental query in the negative, because computers and whatever output we get from them are ultimately limited to how we interpret their electrical signals; by definition this puts computers in an entirely different category from humanity.

Wednesday, May 11, 2011

Dev Diary

     Spent most of this last week switching between work on developing a Check() hierarchy for validating XML import, and a minesweeper app for my Android app programming class.  The former project I can't discuss in much detail, except to say that I need to learn how to work with registry design patterns, and am not quite clear where the hierarchy is going to go.  The latter I am finding much more difficult than it seemed at first.

     Apparently, most everyone is handling the actual setup of the boxes manually instead of programmatically, putting the positioning information into the XML by hand.  The way I envision it is more automated (but, in turn, more difficult to figure out how to do), with the number and size of the boxes adapting to user input.
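The programmatic approach I have in mind can be sketched language-agnostically; here is a rough Python sketch (the Android work itself would of course be Java and XML, and the function name and seed parameter are my own invented conveniences) of deriving a board entirely from user-supplied dimensions:

```python
import random

def make_board(rows, cols, mines, seed=None):
    """Build a rows x cols minesweeper board from user-supplied dimensions.
    Cells hold -1 for a mine, otherwise the count of adjacent mines."""
    rng = random.Random(seed)
    board = [[0] * cols for _ in range(rows)]
    # scatter the mines uniformly over all cells
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    for r, c in rng.sample(cells, mines):
        board[r][c] = -1
    # fill in each non-mine cell with its neighboring-mine count
    for r in range(rows):
        for c in range(cols):
            if board[r][c] == -1:
                continue
            board[r][c] = sum(
                board[rr][cc] == -1
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2)))
    return board
```

Once the board is just data like this, the view layer only has to size and draw however many boxes the model says exist, rather than having each box's position baked into a layout file.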

Wednesday, May 4, 2011

Dev Diary May 4th, 2011

      So, I got a job as an intern with a company by the name of Data In Motion as of this last Monday.  Not much to say about it yet, except that there's a ton to the job, and I am currently... whelmed at the tasks ahead.  That is to say, it's certainly daunting, but I think if I take it one piece at a time (like one of my favorite Johnny Cash songs) I'll come to understand it, and be able to make myself useful to my employers.  For the time being, they are doing a lot more for me than I am for them, and for now I can only hope to be a good return on that investment.

     All that having been said, I probably won't be sharing my code from work-related projects here on my blog, unless given explicit permission to do so.  I am also starting a class on developing Android applications, and may share some of the toy programs from that class (though if one looks even marginally profitable I'll be keeping mum).  So that's the situation if you were wondering when my next interesting tidbit of code would appear.

     And now I'm off to spend the rest of the evening getting used to some new equipment.  And to read up on marker, visitor, and singleton design patterns.  On a complete sidenote, I should remind myself to strongly suggest that my professors here at Westminster offer a class focused on design patterns.

Thursday, April 28, 2011

SVM Tester

function [score iter correct falseNeg falsePos unsures] = TestVectorMachine( weight, kernel, labels, bias )

%Test function for vector machine -- takes in a set of training data and
%its labels and tests a given set of alphas and a bias against it
[xn xm] = size(kernel);
[yn ym] = size(labels);
[an am] = size(weight);

%sanity check (disabled)
%if xn ~= am || ym ~= xn
%    display('Sorry, this is an idiot proof function.  Try again!');
%    return;
%end
falseNeg = 0;
falsePos = 0;
unsures = 0;
iter = 0;
for i = 1:xn
    fXi = kernel(i,:) * (weight .* labels) + bias;  %assumes weight and labels are column vectors
    if (fXi * labels(i))  <= 0
        if(fXi > 0)
            falsePos = falsePos + 1;
        end
       
        if(fXi < 0)
            falseNeg = falseNeg + 1;
        end
       
        if(fXi == 0)
            unsures = unsures + 1;
        end
       
        iter = iter + 1;
    end
end

score = (xn - iter) / xn * 100;
correct = xn - iter;

Soft Margin one-norm SVM

function [ weights bias ] = TannSchmidVectorMachineSoftMarginUno( K, H, labels, C)
%One-norm soft-margin SVM: solves the dual QP for the alphas via quadprog

[xm xn] = size(K);
[ym yn] = size(labels);

%scale C down
C = C / xm;

%check to make sure training & labels have same dimension and toggle is
%valid

if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end

%allocate space for different parts
f = zeros(xm, 1);
A = zeros(2 * xm + 4, xm);
b = zeros(2 * xm + 4, 1);

%build constraints matrix
A(1,:) = labels';
A(2,:) = -labels';
A(3,:) = ones(1, xm);
A(4,:) = -ones(1, xm);
for i = 1:xm
    A(i+4, i) = 1;
end
for i = 1:xm
    A(i+4+xm, i) = -1;
end

b = [0; 0; 1; -1; (C - 10^(-7)) * ones(xm,1); zeros(xm, 1)];
          
[weights v] = quadprog(H, f, A, b);

%find the bias
bias = GetSoftBiasUno(weights, K, labels, C);
bias = bias / sqrt(weights' * H * weights);

save('recordedResults0', 'weights', 'bias', 'K');

Soft Margin 2-norm SVM

function [ weights bias ] = TannSchmidVectorMachineSoftMarginDos( K, H, labels, C)
%Two-norm soft-margin SVM: solves the dual QP for the alphas via quadprog

[xm xn] = size(K);
[ym yn] = size(labels);

%scale C down
C = C / xm;
H = (1/2) * (H + (1 / C) * eye(xm, xn));

%check to make sure training & labels have same dimension and toggle is
%valid

if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end

%allocate space for different parts
f = zeros(xm, 1);
A = zeros(xm + 4, xm);
b = zeros(xm + 4, 1);

%build constraints matrix
A(1,:) = labels';
A(2,:) = -labels';
A(3,:) = ones(1, xm);
A(4,:) = -ones(1, xm);
for i = 1:xm
    A(i+4, i) = -1;
end

b = [0; 0; 1; -1; zeros(xm, 1)];
          
[weights v] = quadprog(H, f, A, b);

%find the bias
bias = GetSoftBiasDos(weights, K, labels, C);
bias = bias / sqrt(weights' * H * weights);

save('recordedResults0', 'weights', 'bias', 'K');

Hard Margin one-norm SVM

function [ weights bias ] = TannSchmidVectorMachineHardMarginUno( K, H, labels)
%Hard-margin SVM: solves the dual QP for the alphas via quadprog

[xm xn] = size(K);
[ym yn] = size(labels);

%check to make sure training & labels have same dimension and toggle is
%valid

if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end

%allocate space for different parts
f = -ones(xm, 1);
A = zeros(xm +2, xm);
b = zeros(xm + 2, 1);

%build constraints matrix
A(1,:) = labels';
A(2,:) = -labels';
for i = 1:xm
    A(i+2, i) = -1;
end

[weights v] = quadprog(H, f, A, b);

%find the bias
bias = getHardMarginBias(weights, K, labels);

save('recordedResults0', 'weights', 'bias', 'K');

Soft Margin one-norm Bias Calculator

function [ bias ] = GetSoftBiasUno( weights, kernel, labels, C)
[xm xn] = size(kernel);
counter = 0;
bias = 0;

for i = 1:xm
    if weights(i) > (10^-10) && weights(i) < (C - 10^(-10))
        sgnLastY = labels(i) > 0;
        %calculate first <w xi>
        partialSum = 0;
        for k = 1:xm
            partialSum = partialSum + labels(k) * kernel(i,k) * weights(k);
        end
        wXi = partialSum;

        for j = i:xm
            if weights(j) > (10^-10) && weights(j) < (C - 10^(-10)) && sgnLastY ~= (labels(j) > 0)
                %calculate second <w xj>
                partialSum = 0;
                for k = 1:xm
                    partialSum = partialSum + labels(k) * kernel(j,k) * weights(k);
                end
                wXj = partialSum;
                bias = bias + -(wXi + wXj) / 2;
                counter = counter + 1;
            end
        end
    end
end

bias = bias / counter;

Soft-Margin 2-norm Bias Calculator

function [ bias ] = GetSoftBiasDos( weights, kernel, labels, C)
[xm xn] = size(kernel);
counter = 0;
bias = 0;

for i = 1:xm
    if weights(i) > (10^-10)
        sgnLastY = labels(i) > 0;
        %calculate first <w xi>
        partialSum = 0;
        for k = 1:xm
            partialSum = partialSum + labels(k) * kernel(i,k) * weights(k);
        end
        wXi = partialSum;

        for j = i:xm
            if weights(j) > (10^-10) && sgnLastY ~= (labels(j) > 0)
                %calculate second <w xj>
                partialSum = 0;
                for k = 1:xm
                    partialSum = partialSum + labels(k) * kernel(j,k) * weights(k);
                end
                wXj = partialSum;
                bias = bias + -(wXi + wXj) / 2 - (labels(i) * weights(i) - labels(j) * weights(j)) / (2 * C);
                counter = counter + 1;
            end
        end
    end
end

bias = bias / counter;

Hard Margin Bias Calculator

function [ bias ] = getHardMarginBias(weights, kernel, labels)
%returns the bias
[xm xn] = size(kernel);
counter = 0;
bias = 0;
for i = 1:xm
    if weights(i, 1) > 0.000000000001
        partialSum = 0;
        for j = 1:xm
            partialSum = partialSum + labels(j) * kernel(i,j) * weights(j);
        end
        bias = bias + labels(i) - partialSum;
        counter = counter + 1;
    end
end

bias = bias / counter;

end

Gaussian Kernel

function [ result ] = GaussKernel( x, y, sigma )
result = norm(x - y)^2;
result = result / sigma;
result = exp(-result);

end

Default Kernel

function [ result ] = defaultKernel(x, y, A)
%One of many kernel functions.  Takes vectors x and y, returns kernel
%function as a dot product using a positive definite matrix A
[xm xn] = size(x);
[ym yn] = size(y);
[R p] = chol(A);  %second output p is 0 if and only if A is positive definite
if p ~= 0 || xm ~= ym || xn ~= yn || xn == 1
    disp('sorry, this function is idiot proof.  Please enter in a positive definite matrix A');
    result = -1;
    return;
end

result = x * (A * y');

Kernel Creator

function [ K H ] = KernelKreator( training, labels, scale, toggle)

[xm xn] = size(training);
[ym yn] = size(labels);


if xm ~= ym || toggle < 0
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    K = [];
    H = [];
    return;
end

K = zeros(xm, xm);
H = zeros(xm, xm);

%build kernel based on toggle used
if toggle == 0 %use regular dot product
    for i = 1:xm
        for j = i:xm
            K(i,j) = (defaultKernel(training(i, :), training(j, :), eye(xn))) / scale;
            K(j,i) = K(i,j);
            H(i,j) = (K(i,j) * labels(i) * labels(j));
            H(j,i) = H(i,j);
        end
    end
%put other toggles here for other kernels
elseif toggle == 1
    for i = 1:xm
        for j = i:xm
            K(i,j) = (defaultKernel(training(i, :), training(j, :), eye(xn))) / scale;
            K(i,j) = (K(i,j) + 1)^2;
            K(j, i) = K(i,j);
            H(i,j) = (K(i,j) * labels(i) * labels(j));
            H(j,i) = H(i,j);
        end
    end
elseif toggle == 2
    for i = 1:xm
        for j = i:xm
            K(i,j) = (defaultKernel(training(i, :), training(j, :), eye(xn))) / scale;
            K(i,j) = (K(i,j) + 1)^3;
            K(j, i) = K(i,j);
            H(i,j) = (K(i,j) * labels(i) * labels(j));
            H(j,i) = H(i,j);
        end
    end
elseif toggle == 3
    for i = 1:xm
        for j = i:xm
            K(i,j) = GaussKernel(training(i, :), training(j, :), scale);
            K(j,i) = K(i,j);
            H(i,j) = (K(i,j) * labels(i) * labels(j));
            H(j,i) = H(i,j);
        end
    end
end

Support Vector Machines

For our Applied Topics in Mathematics class we had to code up some basic versions of support vector machines.  One of my classmates and I coded the following three: a hard-margin maximal-margin SVM and two soft-margin maximal-margin SVMs (one-norm & two-norm versions).  The next few posts will be the MATLAB code of those machines.  Feel free to comment on them and offer any suggestions where appropriate.

Monday, January 31, 2011

Math: Discovered or Invented


            According to the dictionary definition function on Google (which in this case accesses a dictionary put together by Princeton University) discovery is either “the act of discovering something” or merely “a productive insight.”  From the same source, invention is either “the creation of something in the mind” or “a new device or process resulting from study and experimentation.”  Which of these definitions best suits Mathematics?  Maybe a better question is whether invention and discovery are even mutually exclusive – that is, could it be the case that Mathematics involves both?  Let us work towards answering this question by first discussing the semantics of “discovery” and “invention” in more depth.  From there it would behoove us to contemplate the historical development of Mathematics, using David M. Burton’s textbook “The History of Mathematics” (which I will admit may be a fruitless effort, given that our knowledge of ancient civilizations is surprisingly limited).  Lastly we ought to discuss the larger philosophical theories and their logical consequences concerning whether Mathematics is discovered or invented – Platonism and Nominalism.  Naturally, I will follow this analysis up with my own concluding remarks.
            So, on to semantics!  If we are to take Princeton’s word for it, discovery can be merely “a productive insight” and invention “a new device or process resulting from study and experimentation.”  But then, where do we get insight if not from experimentation?  And if we decide upon a certain process based on trial and error, is the criterion used to judge its usefulness devoid of any productive insight?  It would seem from this line of reasoning that invention and discovery, while not perfect synonyms, are in many ways difficult to differentiate.  Therefore, let us decide upon some more useful definitions particular to the discovery or invention of Mathematics.  If we say “Mathematics is discovered” let us agree that this entails Mathematical principles and truths existing independent of whether or not any intelligent being thought about them or described them in a particular language.  (This is essentially Mathematical Platonism, but we will get to that later).  Likewise, “Mathematics is invented” would mean that the notion of performing Mathematical processes is entirely the product of human imagination (Nominalism is the primary philosophical theory associated with this belief).  A via media approach would be piecewise, stating that some parts are invented and some are discovered.  With our terminology clarified, let us move on to examining the relevant history.
            The oldest objects of Mathematical note mentioned by Burton are sticks and bones which he postulates were used as a primitive means to count.  A trek further along the timeline brings us to the Egyptians and Babylonians, who had apparently developed more advanced Mathematical concepts such as adding and subtracting more than one at a time, multiplication, doubling, fractions, and even geometry.  Spin the globe and we arrive in the ancient Mayan culture, which had evidently come up with a representation for zero.  A little further south and we would encounter Incans with complicated rope schemes for keeping track of taxes, among other things.  In one light, it seems that multiple cultures came to relatively similar beliefs about certain Mathematical concepts in a fairly independent manner.  This would support the idea that Mathematical truths and principles are totally independent of who is thinking about them and what their socio-cultural belief structure might be.  On the other hand, if one is to assume the stance of some evolutionists (that all humans originated in Africa with the same language) it would not be a far stretch to find early trade of ideas feasible.  This would serve to give some weight to the argument for invention over discovery.  Even the Biblical conjecture of a general confusion and dispersal after the Tower of Babel indicates the possibility that many Mathematical ideas could have been transferred at that time or prior, eliminating the necessity of independent discovery.  Which is it then – discovered or invented?  This line of inquiry, barring further archeological evidence, seems to be bereft of conclusive evidence to justify any particular stance.  Perhaps waxing philosophic will prove more fruitful.
            Mathematical Platonism, as a philosophical theory, consists of the following three theses:  existence, abstractness, and independence.  That is to say, it is the theory that there exist certain abstract Mathematical objects, whose existence is totally independent of whether or not humanity knows them and understands them.  Furthermore, it supposes that Mathematics, as a scientific process, is the observance of certain of these truths as axioms and definitions.  Theorems would then follow, using logic to determine their truth based upon accepted axioms and definitions.  If the chosen axioms and definitions are “true,” then the resulting system should accurately model the real world.  Naturally, Platonists allot some degree of invention to the divining of the proper axioms and definitions, but there is something of an implied assertion that there is a right “answer” to be found – a set of Mathematical laws that are perfectly consistent internally and work in perfect tandem with the natural laws of other branches of science.  It is worth noting that Mathematical Platonism does not come from Plato per se, but since it invests in the idea of abstract ideals existing to be discovered, it is related to Plato’s philosophy about physical forms versus metaphysical ideas.
            By contrast, Nominalism argues that abstract objects do not exist.  Or rather, they only exist in the mind of the particular Mathematician.  It would follow that any principles inferred from a given set of ideas would be limited to the Mathematician himself and would require exposition to others of the same profession (or, at least, would require similar circumstances and research to derive).  An important consequence of this view is that there is no “correct” Mathematical system to construct which would reflect all of reality accurately.  The necessity of experimental scientists to deny certain Mathematical truths, in practice, to attain correct results from their experiments would seem to lend credence to this conjecture.  Though, usually such considerations are taken based upon the limitations of the devices used (many such instances come to mind from Computer Science involving round-off errors that arise when representing infinite items in a discrete form).  Furthermore, under Nominalism, any Mathematics past a certain level of abstraction is in certain senses useless, since it is removed farther away from provability using empiricism.  Even internal consistency is no longer as much an issue since general Mathematical theories are no longer in vogue (only particular cases, since there are no universals).  Now that the two major philosophical theories on the subject have been elucidated, it is time to express where I stand.
            Finding, from the historical perspective, that it seems much more plausible for significant Mathematical ideas to be developed later rather than sooner; that these ideas were apparently developed independently of one another; and that the Mayans’ use of zero predated the use of zero in western civilization by a few millennia, I endorse Platonism.  A more concrete answer is to say that while I do believe there is some creativity involved in coming up with the correct approach to certain axioms and definitions, the idea that multiple cultures in multiple eras, which had a relatively tiny possibility of meeting to share ideas, ended up with essentially the same concepts and processes by invention alone seems somewhat absurd.  If such cultures came up with the same idea without contact between each other, it seems far more likely that they were simply looking at the same problems and coming up with the same math to solve the given problem.  That is, that the principles were there for the discovering, and would necessarily always give the same conceptual answer.  In addition (and while this may be something of a return to the earlier discussion of semantics) it would seem odd to call the Mathematical process from axiom to theorem anything but a “discovery process.”  Lastly, I am of the belief that the existence of abstract Mathematical objects is logically equivalent to the existence of abstract natural laws, such that the denial of the one is the denial of the other.  That is not to say that our current representations of said abstract objects in Mathematics and other sciences are free from all error, but rather that such abstract objects must exist, and all of science is our journey to discover and properly describe them.

Tuesday, January 11, 2011

On Political Jargon: A Somewhat Immodest Proposal

Fellow denizens of the net:

     I move we realign a lot of terminology in our country's political vernacular to be more consistent through history.  Specifically, the following jargon needs to be addressed:

Liberal -- We ought to use this term only in respect to the liberal tradition present in English and American history, typically dated back to Edmund Burke (though really present in society at least as far back as King John was forced to sign the Great Charter).  This consists of a trend towards more freedom of speech and religion.  If any current political movement can be said to express the ideals of this tradition it would be the Libertarian Party.  The term has nothing to do historically with Democratic Socialism, and it would be inadvisable to continue using it in a manner that suggests a relation.

Moonbattery -- This term could potentially refer to a Lunar weapon emplacement, which is far too awesome a concept to waste as a derogatory term for those who believe that criminals follow laws, that government is a wealth creator, or who simply suffer from BDS (Bush Derangement Syndrome).

Reform -- We ought to apply this particular term only when a plan of action demonstrably improves a specific program, either by decreasing costs or by improving efficiency or effectiveness.  Making only cosmetic changes, or making changes which will obviously worsen the situation, should not be considered reform.

The Political Spectrum / The Right-Left Axis -- This really is an overgeneralization, and always has been.  I would recommend dropping it altogether from our vernacular.  If we must continue its use, it should refer to the classic distinction between the party of the court and the party of the country, republicans versus monarchists, or in the US, federalism versus states' rights.  If I were to put this in terms of modern pundits, think Bill O'Reilly as your typical lefty, Rush Limbaugh as probably dead center, and Nick Gillespie as your typical righty.  Such a spectrum would be more accurately in line with our history.

Grassroots/Astroturf -- I'm in favor of throwing both terms out, because while it ought to be seen as pathetic when you have to pay people to agree with you, I think we're intelligent enough to view _any_ idea thrown out in the public venue abstractly, regardless of whom or where it came from.  It does not seem useful to judge opinions based on the occupation of the person offering them.  Bad ideas are bad ideas, and good ideas are good ideas, no matter who came up with them.

Expert -- There are certain subjects which require such specific layers of knowledge and understanding that I think most people implicitly understand that the person involved really does deserve to be considered credible.  Then there's everything else:  history, sociology, political science, economics, literature, art, psychology, business, marketing, etc.  The only expertise involved in this list of topics is what we all learned in high school -- the ability to analyze information and report on it.  I'm not sure even a differentiation between professional and amateur would be in order for any of the aforementioned "skills."

Welfare -- Basically, this ought to return to referring to infrastructure that maintains, promotes, and/or improves public well-being.  It should not refer to any private goods or services given to a specific individual.

Global Warming -- Either make this term refer only to the kind of pseudo-science Al Gore endorses, or use it only as a layman's term for the Greenhouse Effect.  I get tired of having to explain that I think AGW is a hoax, not that I am claiming the aforementioned scientific phenomenon doesn't happen.

And that's it...for now.  I may think of some more later, but readjusting this stuff would make politics a lot clearer and more consistent, not only for us now, but also for posterity, by keeping a consistent historical context.

Monday, January 10, 2011

Some MATLAB functions to create matrices representing steps in Gaussian Elimination

function E = interchangeRows(i, j, n, m)
% Returns the n-by-m elementary matrix that swaps rows i and j.
E = eye(n, m);
E(i, i) = 0;
E(j, j) = 0;
E(i, j) = 1;
E(j, i) = 1;
end

function E = multiplyRowbyScalar(j, alpha, n, m)
% Returns the n-by-m elementary matrix that multiplies row j by alpha.
E = eye(n, m);
E(j, j) = alpha;
end

function E = multiplyRowIbyScalarAddRowJ(i, j, alpha, n, m)
% Returns the n-by-m elementary matrix that adds alpha times row i to row j.
E = eye(n, m);
E(j, i) = alpha;
end
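
The point of these functions is that left-multiplying a matrix by the appropriate elementary matrix performs the corresponding row operation. The following is my own illustration, not part of the original post: a pure-Python translation (0-based indices, square matrices only, no external libraries) sketching one elimination step under those assumptions.

```python
def eye(n):
    """n-by-n identity matrix as a list of lists."""
    return [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]

def interchange_rows(i, j, n):
    """Elementary matrix that swaps rows i and j (0-based)."""
    E = eye(n)
    E[i][i] = E[j][j] = 0.0
    E[i][j] = E[j][i] = 1.0
    return E

def scale_row(j, alpha, n):
    """Elementary matrix that multiplies row j by alpha."""
    E = eye(n)
    E[j][j] = alpha
    return E

def add_scaled_row(i, j, alpha, n):
    """Elementary matrix that adds alpha times row i to row j."""
    E = eye(n)
    E[j][i] = alpha
    return E

def matmul(A, B):
    """Naive matrix product of conformable lists of lists."""
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

# One step of Gaussian Elimination on [[2, 1], [4, 5]]:
# subtract 2 times row 0 from row 1 to zero out the entry below the pivot.
A = [[2.0, 1.0], [4.0, 5.0]]
E1 = add_scaled_row(0, 1, -2.0, 2)
U = matmul(E1, A)  # upper triangular: [[2.0, 1.0], [0.0, 3.0]]
```

Composing such elementary matrices (in order) recovers the full elimination; their product is exactly the transformation that Gaussian Elimination applies to the original matrix.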