Thursday, June 2, 2011

Are Computers Isomorphic to Humans?

      Let me preface this essay by admitting that artificial intelligence is an old problem, and acknowledging that better minds have attacked the question before me (and certainly better minds will come after). Hence I can in no way expect to resolve the query at hand; only to jot down a few thoughts on the matter, seeking to move myself a bit closer to understanding truths which may ultimately be beyond human reach. Perhaps it is just as well that most “objective truth” is, at the end of the day, of this nature – for it may well be that we find meaning only by searching for it. Since I have probably crushed my own credibility enough on the matter at hand, 'tis high time to move on to actually getting lost in the fog, eh?
      No discussion of artificial intelligence would be complete without mentioning the ill-fated Alan Turing. His test, proposed at the very dawn of the computer era, is nonetheless still one of the most respected procedures for deciding whether or not a computer has “intelligence.” If you are not familiar with the Turing Test, I will try to explain it concisely: a judge exchanges typewritten messages with two hidden subjects (one a computer, one a human) and attempts to distinguish which subject is the computer and which the human. Supposedly, if the judge guesses wrong about half the time or more, we can at that point say that computers have intelligence, or are close enough. Now granted, this idea for a test is not particularly novel (when one thinks of proving a thing sentient, what could be more natural than to compare the thing against oneself?), but it has served as a good base from which to build thought experiments. And, hey, it has become something of a tradition to hold such tests as a sideshow at computer conventions, especially those involving artificial intelligence. Anyway, let us move on to discussing some of the theoretical conclusions arrived at in attempting to build machines that could pass the test, as well as some of the problems inherent to the approach.
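      Just to fix the structure in our heads, here is a toy sketch of one test session, written in Python. The judge, human, and machine interfaces are placeholders I invented for illustration, not anything drawn from Turing's paper; the point is only how little machinery the protocol itself needs.

    import random

    def turing_test(judge, human, machine, num_questions=10):
        # `human` and `machine` are callables mapping a question string to a
        # reply string; `judge` has ask() and guess_machine() methods. All of
        # these interfaces are invented for this sketch.
        pair = [("human", human), ("machine", machine)]
        random.shuffle(pair)                      # hide who is behind "A" and "B"
        subjects = dict(zip("AB", pair))

        transcripts = {label: [] for label in subjects}
        for _ in range(num_questions):
            for label, (_, respond) in subjects.items():
                question = judge.ask(label, transcripts[label])
                transcripts[label].append((question, respond(question)))

        guess = judge.guess_machine(transcripts)  # returns "A" or "B"
        return subjects[guess][0] == "machine"    # True if the judge caught the machine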
      First and foremost, the kinds of answers to user input that Turing hypothetically imagined a computer giving are pretty much as relevant today as they were in his time. Certainly, a subject with the capacity to do arithmetic correctly at lightning speed would arouse suspicions that said subject was of more metallic origins. Yet, as Turing pointed out, and as anyone familiar with programming “easier” settings for video games could tell you: it is quite simple to program a computer to occasionally give wrong answers, and to wait any given amount of time before replying. So clearly, making a computer behave “as poorly” at math as your typical human is not particularly difficult – and in learning this we have largely set such behaviors aside as irrelevant to determining how intelligent computers really are. Indeed, we were probably barking up the wrong tree from the start by trying to define intelligence as the ability to make mistakes. Or perhaps I should say “appear to make mistakes.” More on that in a bit.
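      To show just how cheap a trick this is, here is a minimal sketch; the error rate and delay range are numbers I made up for illustration, not anything Turing prescribed.

    import random
    import time

    def humanlike_arithmetic(a, b, error_rate=0.1, min_delay=2.0, max_delay=8.0):
        # Answer "what is a + b?" the way a distracted human might. The error
        # rate and delay range here are invented for illustration.
        time.sleep(random.uniform(min_delay, max_delay))   # pretend to "think"
        answer = a + b
        if random.random() < error_rate:
            # Occasionally slip by a small amount, like a careless human would.
            answer += random.choice([-10, -1, 1, 10])
        return answer

    # humanlike_arithmetic(34957, 70764) pauses for a few seconds, then usually
    # returns 105721, and every so often a near miss instead.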
      For now, let us consider how a computer might answer a question of aesthetic beauty. Suppose you showed the computer a painting and asked if it were beautiful or not. What kind of answer would give the computer away? To paraphrase the author of the main text we have been reading in this class (Gödel, Escher, Bach by Douglas Hofstadter): does the computer have a large enough soul to appreciate Bach? We could simply give the computer a learning algorithm (such as a Support Vector Machine or a Neural Network) and a ton of examples of various types of music and artwork, letting it form a rough aesthetic scale for itself. The computer could then turn around and spit out an answer about where on that scale a given piece of art fell, and that rough judgment would likely pass muster with our Turing Test judge (for a rough estimate is all humans really give about such a nebulous thing as “beauty” anyway). What other foils must a computer surmount to pass a Turing Test?
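      Here is a rough sketch of what that might look like, assuming scikit-learn is on hand for the Support Vector Machine; extract_features is a hypothetical helper that turns a piece of art or music into a numeric vector, which is of course where all the hard work would really hide.

    from sklearn.svm import SVC   # assumes scikit-learn is available

    def train_aesthetic_scale(examples, ratings):
        # `examples` are pieces of art or music; `ratings` are human labels,
        # e.g. 0 = not beautiful, 1 = beautiful. extract_features() is a
        # hypothetical helper that turns each piece into a numeric vector.
        X = [extract_features(item) for item in examples]
        model = SVC()             # any learner with fit/predict would do
        model.fit(X, ratings)
        return model

    def judge_beauty(model, artwork):
        # Place a new piece on the learned scale: a rough estimate, which is
        # all a human really gives about "beauty" anyway.
        return model.predict([extract_features(artwork)])[0]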
      How about language and communication? Surely a computer that gave responses which sounded repetitive or canned would give itself away. Alternatively, a computer that could not piece together the intended meaning of a given phrase could never return any useful commentary on it, never mind a meaningful response to the message. Yet, assuming our judge has an unlimited amount of time to keep asking questions (English providing a framework for forming an infinite number of different meaningful sentences), it stands to reason that a computer drawing on stored, pre-written replies can only ever have a finite number of potential responses. And, while a meaningful sentence-forming algorithm seems possible hypothetically, humanity has yet to produce one that actually works well enough to imagine it fooling anyone for very long. Likewise, we have yet to perfect a parsing algorithm that can be guaranteed to parse any and every meaningful sentence, which is best illustrated by the fact that we still code in “Do What I Say” mode instead of “Do What I Mean” mode – which is to say that programming languages are still not in the same category as natural language.
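      To make the finiteness point concrete, here is a toy illustration: a lookup table of canned replies, with entries invented for the example, that can only ever cover the questions its author thought to include.

    # A finite lookup table of canned replies covers only the questions its
    # author anticipated; a patient judge can always wander outside it.
    CANNED_REPLIES = {
        "how are you?": "Not bad, a little tired today.",
        "do you like poetry?": "I prefer a good novel, honestly.",
        "what is your favourite colour?": "Blue, most days.",
    }

    def respond(question):
        reply = CANNED_REPLIES.get(question.strip().lower())
        if reply is None:
            # Anything outside the table falls back to a dodge, exactly the
            # sort of evasion a patient judge learns to spot.
            return "That's an interesting question; what do you think?"
        return reply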
      Now, one could write really complex code, such as what Mr. Hofstadter demonstrated in his dialogue between the MIT graduate student and his pet AI. But I submit that even code like that would have problems dealing with an infinite variety of queries from a judge. For a more in-depth example, consider compilers. Compilers require their input to fit specific syntactic rules, along with a small number of semantic constraints, and their parsers get by with a fixed, finite amount of “lookahead” at the next few symbols of the input – which works only because the language was designed so that finite lookahead suffices. To guarantee correct interpretation of arbitrary natural-language strings, a parser would effectively need unbounded lookahead and context, and no real implementation has that. At the very least, I cannot say I have seen an algorithm capable of either interpreting or generating an infinite variety of meaningful phrases. Or, more simply, there is no algorithm that can keep it up indefinitely. At some point the computer will produce a sentence unnatural enough, or respond to a misinterpreted question in a way unexpected enough, that it will be found out by our judge.
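      To make the lookahead point concrete, here is a minimal sketch (again in Python, for readability) of the sort of recognizer a compiler front end uses: a recursive-descent parser for a tiny arithmetic grammar, deciding every step by peeking at exactly one upcoming token. It works only because the grammar was designed so that one token suffices; nothing comparable exists for unrestricted English.

    import re

    TOKEN = re.compile(r"\s*(\d+|[()+*])")

    def tokenize(text):
        # Split e.g. "2*(3+4)" into ["2", "*", "(", "3", "+", "4", ")"].
        pos, tokens = 0, []
        while pos < len(text):
            match = TOKEN.match(text, pos)
            if not match:
                raise SyntaxError("unexpected character at position %d" % pos)
            tokens.append(match.group(1))
            pos = match.end()
        return tokens

    # Each parse_* function consumes tokens starting at index i and returns the
    # index just past what it recognized; one token of lookahead decides every branch.

    def parse_expr(tokens, i=0):      # expr := term ('+' term)*
        i = parse_term(tokens, i)
        while i < len(tokens) and tokens[i] == "+":
            i = parse_term(tokens, i + 1)
        return i

    def parse_term(tokens, i):        # term := factor ('*' factor)*
        i = parse_factor(tokens, i)
        while i < len(tokens) and tokens[i] == "*":
            i = parse_factor(tokens, i + 1)
        return i

    def parse_factor(tokens, i):      # factor := NUMBER | '(' expr ')'
        if i < len(tokens) and tokens[i].isdigit():
            return i + 1
        if i < len(tokens) and tokens[i] == "(":
            i = parse_expr(tokens, i + 1)
            if i < len(tokens) and tokens[i] == ")":
                return i + 1
        raise SyntaxError("unexpected token at position %d" % i)

    def is_valid(text):
        tokens = tokenize(text)
        return parse_expr(tokens) == len(tokens)

    # is_valid("2*(3+4)") returns True;
    # is_valid("time flies like an arrow") raises SyntaxError at the first letter.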
      Of course, all this discussion might be quite moot, because much of what we just asked a computer to do, a human may have just as much trouble doing. Does the phrase “lost in translation” ring a bell? Even when two perfectly healthy humans are speaking the same language, using words both know very well, misunderstandings of meaning and intent are not so uncommon, over entirely human channels of discussion. So, if we are trying to make computers human-like, why even bother with linguistic ability past some decently approximate level? Perhaps we have lost sight of our goal: to write an artificial intelligence capable of fooling a human into believing that it is not artificial after all. Well, sure enough, we can imagine a finite set of human-tailored responses the computer could use to fool a human over the short term, and even something more along the lines of a learning program might be used to good effect if the judge queries only within a particular subset of language. But if we remove the constraints on the judge's time and number of queries, it is virtually impossible to fool the judge indefinitely into believing the computer's responses and requests arose from a human.
     So, perhaps you were wondering when, and if, I would ever pick back up that thread about the “appearance” of a computer making mistakes. Well, I suppose I have kept you waiting long enough. It is my opinion that, while trying to satisfy the Turing test has steered computer science toward building better and better artificial intelligence in the truest sense of artificial, it is quite a pointless venture to begin a journey towards creating intelligence by trying to “fool” a human into thinking a computer is a human. In fact, if our goal is to produce intelligence equivalent to what we have as humans, we should be far more concerned with the implementation than with anything it actually does. Who cares if this electrical brain can play checkers; is it self-conscious? Does it make decisions in a non-deterministic way? Does it have a will?
     But here we have encountered a bit of circular logic, for we are asking whether computers can perform in a way we are not actually sure humans do. We need to ask whether thought really is just a higher-level representation of neurons firing, and not something more. Without the notion of free will, all behavior is simply a set of programmed responses to the environment, based upon some unique combination of genes and how our learning algorithms adapted themselves to our environments over time. In such a case the Turing test would actually be quite appropriate, because our own sense of ourselves as free-willed and non-deterministic beings would be utterly delusional. If we accept our own sentience in these terms, we can quite easily answer our original query (whether humans and computers are isomorphic) in the affirmative, but can we reason our way to such a conclusion? Perhaps, in the same way Escher's Dragon cannot actually become three-dimensional, and analogous to the way Gödel proved that any formal system sufficiently strong to reason about itself must be incomplete (“incomplete” meaning there are well-formed propositions the system can express which cannot be decided within the system), we humans cannot find our way to a decision about whether we are sentient using a system of predicate logic. Granted, we cannot even prove that the question is undecidable at this juncture, but I could hardly do justice to you, the reader, if I left us wandering through the fog without any solid ground to stand upon. So humor me in accepting the question as currently undecidable.
     Now we are allowed to do one of three things. We could leave the question as undecidable, and appreciate the zen of it all from a distance. We could accept that humans have sentience within the context we have been discussing (having free will, among other concepts intentionally left somewhat nebulous). Lastly, we could accept the contrary notion, that humans are not sentient. I am going to ignore the first option, because taking that route would leave us adrift in the same murky waters we were in before accepting the question as undecidable. Given that the immediate consequences of accepting the notion that humans are not sentient are quite depressing (essentially it would mean accepting a completely deterministic world, where any notions of individuality, achievement, right and wrong, etc. are wholly delusional), I think I will hold off on accepting that conclusion. Where does this leave us? Accepting that humans have a brand of intelligence that includes free will. If we accept that, then we have pretty much answered our fundamental query in the negative, because computers, and whatever output we get from them, are ultimately limited to how we interpret their electrical signals; by definition this puts computers in an entirely different category from humanity.