I can't help but think about consciousness right now. What does it mean "to think"? How are we "conscious"? One question which has long plagued artificial intelligence researchers is that the very name of their field is almost meaningless, because we cannot define what it means to be intelligent. There's a long-standing thought experiment about intelligence. In 1950, the British mathematician Alan Turing conceived of something we now call the "Turing Test"1.
In this test, a human judge interacts with two subjects. One is another human and the other is a computer. The judge can only communicate with them via text and he or she must be able to determine which is the human and which the computer. If the judge cannot tell, Turing argued that the computer should be considered "intelligent". Though Turing himself later doubted whether this was meaningful, it's still widely cited as an example of what an "intelligent" machine should be able to do.
> Hello, I am Eliza.
* Hello Eliza. I am Curtis.
> Do you believe it is normal to be Curtis?
* I certainly hope not!
> What does that suggest to you?
* I don't want to be normal.
> Why don't you want to be normal?
* Normal is boring.
> That is quite interesting.
Of course, this is all very simplistic, but when this software was created over 30 years ago it was revolutionary. The original author went on to write a book disabusing people of the notion that there was anything even remotely approaching intelligence in Eliza. All Eliza did was analyze what you wrote and produce canned responses using programmed patterns and substitutions. This, however, raises an interesting question: how is this behavior different from what you and I do on a larger scale? Do you ever find yourself "borrowing" a phrase that someone else uses all the time? Does someone ask how you're doing and you say "fine" without thinking about it?
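A pattern-and-substitution responder of this sort takes only a few lines of code. Here is a toy sketch in Python; the patterns and templates are invented for illustration (the real Eliza used a far more elaborate keyword-ranking script), but the mechanism is the same: match, reflect pronouns, fill in a canned template.

```python
import re

# A toy Eliza-style responder: match a pattern, reflect pronouns,
# and fill a canned template. (These rules are invented for
# illustration; the real Eliza script was far more elaborate.)
RULES = [
    (r"\bI am (.*)", "Do you believe it is normal to be {0}?"),
    (r"\bI don't want (.*)", "Why don't you want {0}?"),
    (r"\bI want (.*)", "Why do you want {0}?"),
]

# Simple pronoun reflection so "my" becomes "your", and so on.
REFLECT = {"i": "you", "me": "you", "my": "your", "your": "my", "you": "me"}

def reflect(fragment):
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(line):
    for pattern, template in RULES:
        match = re.search(pattern, line, re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "What does that suggest to you?"   # canned fallback
```

Feed it the opening of the transcript above, `respond("Hello Eliza. I am Curtis.")`, and you get the same sort of canned reply. There is no understanding anywhere in it, only pattern matching.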
"Ah," you reply, "but that's different! I learn these things. The computer can only handle what intelligent people program into it."
Not so fast. I have released an open source software package called AI::NeuralNet::Simple2. This is a very primitive software package for learning how neural networks operate. Neural networks are interesting because they have "neurons" connected by "synapses" which adjust their weights in response to stimulus. Mine is what is technically called a "feed-forward back-propagation network". In this type of network, information is "fed forward" through the network and the network outputs an answer. The correct answer is then given to the network and the error is "back propagated" through it. The network adjusts the weights of its synapses, and you keep training it until it learns to give correct answers. What is fascinating is how the network ultimately becomes able to give correct answers to questions it has never seen before. Further, these networks are very fast and can often be better at "guessing" answers than humans.
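The feed-forward and back-propagate loop described above fits in a few dozen lines. Here is a minimal sketch in Python, not the module's actual implementation: it assumes sigmoid neurons, squared error, and one hidden layer, and the layer sizes, learning rate, and epoch count are arbitrary choices for the demonstration.

```python
import math
import random

random.seed(42)  # fixed seed so the run is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SimpleNet:
    """One hidden layer, sigmoid neurons, squared error."""

    def __init__(self, n_in, n_hidden, n_out):
        rand = lambda: random.uniform(-1.0, 1.0)
        self.w1 = [[rand() for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [rand() for _ in range(n_hidden)]
        self.w2 = [[rand() for _ in range(n_hidden)] for _ in range(n_out)]
        self.b2 = [rand() for _ in range(n_out)]

    def forward(self, x):
        # "Feed forward": each layer weighs its inputs and fires.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(row, self.h)) + b)
                  for row, b in zip(self.w2, self.b2)]
        return self.o

    def train(self, x, target, lr=0.5):
        out = self.forward(x)
        # "Back propagate": push the error back through the synapses.
        d_out = [(t - o) * o * (1 - o) for t, o in zip(target, out)]
        d_hid = [h * (1 - h) * sum(d * self.w2[k][j] for k, d in enumerate(d_out))
                 for j, h in enumerate(self.h)]
        for k, d in enumerate(d_out):
            for j, h in enumerate(self.h):
                self.w2[k][j] += lr * d * h
            self.b2[k] += lr * d
        for j, d in enumerate(d_hid):
            for i, xi in enumerate(x):
                self.w1[j][i] += lr * d * xi
            self.b1[j] += lr * d
        return sum((t - o) ** 2 for t, o in zip(target, out))

# XOR: a classic demonstration, since no single "neuron" can solve it.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = SimpleNet(2, 3, 1)
before = sum(net.train(x, t, lr=0.0) for x, t in data)  # lr=0: measure only
for _ in range(5000):
    for x, t in data:
        net.train(x, t)
after = sum((t[0] - net.forward(x)[0]) ** 2 for x, t in data)
```

After the training loop, `after` should be well below `before`: the synapse weights have settled into a configuration nobody explicitly wired in.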
Reading about neural networks in games can be illuminating. One author described trying to train their game's armies using information from the book "The Art of War". In one battle, the human-controlled armies showed up only to discover that the computer-controlled armies were nowhere to be seen. Then the human army was ambushed by the computer army, which had been hiding. No one had taught the computer to do that; it figured it out on its own. In another example, a computer in Texas, part of an ambitious attempt to give a machine enough knowledge to reason as a small child might, asked the question "Am I alive?"
Though these examples seem unsettling to some, we are still capable of tracing the exact manner in which these computers behave, no matter how "human" they seem. There is no "thought" there, though it sometimes seems as if there is.
So perhaps what distinguishes us from computers is emotion. That's also something AI researchers are thinking about. If we can simulate emotions in computers, perhaps they can function better? Imagine a deep space probe noticing that its power levels have lowered. Perhaps it will "fear" death and start shutting down all non-essential systems in an attempt to stay alive longer. Or perhaps a piece of software will get "angry" when it fails to complete a task but realizes the failure is caused by another piece of software over which it has no control. If it gets angry often enough with another piece of software, will it start seeking alternate means of completing its task? By allowing computers to experience a rich range of emotions, who knows what complexities will emerge?
Still, this seems like we're faking it. These aren't emotions. The computers aren't thinking. And if I'm sitting in a room chatting with a computer via a chat client, the computer isn't intelligent, is it? Well, why not? How complex do I have to make that computer before we find we can no longer trace its behavior? What if we can eventually build a quantum computer with the computational ability of the human brain? Does the thing doing the thinking have to be sitting in a bucket of blood before we respect it? And with the advent of nanotechnology, it's conceivable that we could build up an analog of our brain, molecule by molecule, and eventually "switch it on". Will that be intelligent? Where do we draw the line? How? Why?
The problem with such idle speculation, of course, is what I referred to in the first paragraph. We don't know what it means to be intelligent. So many of us go through life on autopilot, engaging in behaviors automatically and clinging to illogical beliefs in the face of evidence, and yet we're still intelligent. I think.
1. The test was allegedly based on an old party game. In this game, a man and a woman go into a room and guests communicate with them only by written messages. The goal of the game was for the man to convince the partygoers that he was actually the woman.