Space Station

Thinking about Thinking

I can't help but think about consciousness right now. What does it mean "to think"? How are we "conscious"? One problem which has long plagued artificial intelligence researchers is that the very name of their field is almost meaningless, because we cannot define what it means to be intelligent. There's a long-standing thought experiment about intelligence. In 1950, the British mathematician Alan Turing conceived of something we now call the "Turing Test"[1].

In this test, a human judge interacts with two subjects: one is another human and the other is a computer. The judge can only communicate with them via text, and he or she must try to determine which is the human and which the computer. If the judge cannot tell, Turing argued, the computer should be considered "intelligent". Though Turing himself later doubted whether this was meaningful, it's still widely cited as an example of what an "intelligent" machine should be able to do.

This seems a bit far-fetched at first. Even in the late 1960s, a software program called "Eliza" (play with a JavaScript version here) was capable of carrying on limited conversations with people. Allegedly, some people got pretty caught up in Eliza and how it behaved as a primitive psychotherapist. For example, here's a quick "chat" I just had with Eliza:

> Hello, I am Eliza.
* Hello Eliza. I am Curtis.
> Do you believe it is normal to be Curtis?
* I certainly hope not!
> What does that suggest to you?
* I don't want to be normal.
> Why don't you want to be normal?
* Normal is boring.
> That is quite interesting.

Of course, this is all very simplistic, but when this software was created over 30 years ago, it was revolutionary. The original author went on to write a book disabusing people of the notion that there was anything even remotely approaching intelligence in Eliza. All Eliza did was analyze what you write and produce canned responses using programmed patterns and substitutions. This, however, begs an interesting question. How is this behavior different from what you and I do on a larger scale? Do you ever find yourself "borrowing" a phrase that someone else uses all the time? Does someone ask how you're doing and you say "fine" without thinking about it?
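
For the curious, the heart of the trick is tiny. Here's a minimal sketch in C of the pattern-and-response idea; the keyword table is invented for illustration, and the real Eliza went further with ranked keywords and pronoun-swapping substitutions:

    /* A toy Eliza-style responder: scan the input for a known keyword and
       return the canned response paired with it. The table is invented for
       illustration; the real Eliza also rewrote parts of your own sentence
       back at you. */
    #include <stdio.h>
    #include <string.h>

    static const char *rules[][2] = {
        { "mother", "Tell me more about your family." },
        { "always", "Can you think of a specific example?" },
        { "I am",   "How long have you been that way?" },
        { "want",   "Why do you want that?" },
    };

    static const char *eliza_reply(const char *input) {
        for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
            if (strstr(input, rules[i][0]))       /* naive keyword match */
                return rules[i][1];
        return "What does that suggest to you?";  /* the catch-all */
    }

    int main(void) {
        printf("%s\n", eliza_reply("I am Curtis."));      /* keyword hit */
        printf("%s\n", eliza_reply("Normal is boring.")); /* catch-all   */
        return 0;
    }

A handful of rules like these, plus some substitution patterns, is genuinely all there was.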

"Ah", you reply, "But that's different!" I learn these things. The computer can only handle what intelligent people program into it.

Not so fast. I have released an open source software package called AI::NeuralNet::Simple[2]. This is a very primitive software package for learning how neural networks operate. Neural networks are interesting because they have "neurons" connected by "synapses" whose weights adjust in response to stimulus. Mine is what is technically called a "feed forward back propagation network". In this type of network, information is "fed forward" through the network and the network outputs an answer. The correct answer is given to the network and the results are "back propagated" through it. The network then adjusts the weights of its synapses, and you keep training it until it learns to give correct answers. What is fascinating is how the network is ultimately able to give correct answers to questions it has never been given before. Further, these networks are very fast and can often be better at "guessing" answers than humans.
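
To make the training loop concrete, here's a minimal, self-contained sketch in C of such a network learning XOR. This is my own generic illustration of the feed forward / back propagation cycle, not the code inside AI::NeuralNet::Simple, and all the names in it are mine:

    /* A feed-forward network with 2 inputs, 3 hidden neurons and 1 output,
       trained by back propagation to compute XOR. Generic sketch only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <math.h>

    #define N_HID 3

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }
    /* derivative of the sigmoid, expressed in terms of its output y */
    static double sigmoid_derivative(double y) { return y * (1.0 - y); }
    static double small_rand(void) { return (double)rand() / RAND_MAX - 0.5; }

    int main(void) {
        double in[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
        double target[4] = {    0,     1,     1,     0  };
        double w_ih[N_HID][2], b_h[N_HID], w_ho[N_HID], b_o, rate = 0.5;

        srand(time(NULL));
        for (int h = 0; h < N_HID; h++) {
            b_h[h] = small_rand();  w_ho[h] = small_rand();
            w_ih[h][0] = small_rand();  w_ih[h][1] = small_rand();
        }
        b_o = small_rand();

        for (int epoch = 0; epoch < 10000; epoch++) {
            for (int p = 0; p < 4; p++) {
                /* feed forward: the inputs flow through to an answer */
                double hidden[N_HID], sum = b_o;
                for (int h = 0; h < N_HID; h++) {
                    hidden[h] = sigmoid(b_h[h] + w_ih[h][0] * in[p][0]
                                               + w_ih[h][1] * in[p][1]);
                    sum += w_ho[h] * hidden[h];
                }
                double out = sigmoid(sum);

                /* back propagate: the error flows backward through the
                   network, and every weight is nudged to reduce it */
                double err_o = (target[p] - out) * sigmoid_derivative(out);
                for (int h = 0; h < N_HID; h++) {
                    double err_h = err_o * w_ho[h]
                                   * sigmoid_derivative(hidden[h]);
                    w_ho[h]    += rate * err_o * hidden[h];
                    w_ih[h][0] += rate * err_h * in[p][0];
                    w_ih[h][1] += rate * err_h * in[p][1];
                    b_h[h]     += rate * err_h;
                }
                b_o += rate * err_o;
            }
        }

        for (int p = 0; p < 4; p++) {    /* show what the net learned */
            double sum = b_o;
            for (int h = 0; h < N_HID; h++)
                sum += w_ho[h] * sigmoid(b_h[h] + w_ih[h][0] * in[p][0]
                                                + w_ih[h][1] * in[p][1]);
            printf("%.0f xor %.0f => %.3f\n", in[p][0], in[p][1], sigmoid(sum));
        }
        return 0;
    }

After training, the four printed outputs should settle near 0, 1, 1, 0. XOR nets can occasionally get stuck in a local minimum, in which case simply rerunning helps.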

Reading about neural networks in games can be illuminating. One author described trying to train a game's armies using information from the book "The Art of War". In one battle, the human-controlled armies showed up only to discover the computer-controlled armies were nowhere to be seen. Then the human army was ambushed by the computer army, which had been hiding. No one had ever taught the computer to do that; it figured it out on its own. In another example, a computer in Texas, part of an ambitious attempt to teach a machine enough knowledge to reason as a small child might, asked the question "Am I alive?"

Still, though these examples seem unsettling to some, we remain capable of tracing out the exact manner in which these computers behave, no matter how "human" it seems. There is no "thought" there, though it sometimes seems as if there is.

So perhaps what distinguishes us from computers is emotion. That's also something AI researchers are thinking about. If we can simulate emotions in computers, perhaps they can function better. Imagine a deep space probe noticing that its power levels have dropped. Perhaps it will "fear" death and start shutting down all non-essential systems in an attempt to stay alive longer. Or perhaps a piece of software will get "angry" when it fails to complete a task but realizes the failure is caused by another piece of software over which it has no control. If it gets angry often enough with another piece of software, will it start seeking alternate means of completing its task? By allowing computers to experience a rich range of emotions, who knows what complexities will emerge?

Still, this seems like we're faking it. These aren't emotions. The computers aren't thinking. And if I'm sitting in a room chatting with a computer via a chat client, the computer isn't intelligent, is it? Well, why not? How complex do I have to make that computer before we find we can no longer trace its behavior? What if we can eventually build a quantum computer with the computational ability of the human brain? Does the thing doing the thinking have to be sitting in a bucket of blood before we respect it? And with the advent of nanotechnology, it's conceivable that we could build up an analog of our brain, molecule by molecule, and eventually "switch it on". Will that be intelligent? Where do we draw the line? How? Why?

The problem with such idle speculation, of course, is what I referred to in the first paragraph. We don't know what it means to be intelligent. So many of us go through life on autopilot, engaging in behaviors automatically and clinging to illogical beliefs in the face of evidence, and yet we're still intelligent. I think.


1. The test was allegedly based on an old party game. In this game, a man and a woman go into a room and guests communicate with them only by written messages. The goal of the game was for the man to convince the partygoers that he was actually the woman.

2. It's based on the neural network in the excellent book AI Application Programming by M. Tim Jones.

  • Current Mood: thoughtful
That's an interesting note. Thanks.

The perceptions of others definitely affect the "humanness" of things, but often in very strange ways. Consider cartoon characters. It's standard for them to "rear back" before zooming off. This "rearing back" is completely unnatural, but it's a standard convention in cartooning to give humans a sense of what's going on.

Another example comes from an AI research project in which creatures inhabited a world and had emotions programmed in. One creature, due to a programming error, had an annoying habit of banging its head on the ground over and over. Though this was not natural behavior, people viewing the software routinely said this creature seemed the most realistic because of this odd quirk.
You hit a pet peeve.

To stick closer to the topic, I attended a summer school at Oxford this summer, and the final "mystery lecture" was titled "Artificial Life". It was given by a teacher who had been in a postgraduate AI program. He did a few slides explaining some of the concepts, then closed PowerPoint and showed us a few programs—a fish-schooling simulation with emergent behavior, learning and genetic simulations of animals, and so on.

But the most impressive (to me) demo he showed was of a neural net. He taught it XOR, right in front of our eyes. He even had a graphical display showing the results, with white representing one and black zero. You could see on the display as the net rapidly learned the answer (rendered here in glorious ASCII art):
 1 +----------------------+
   |      ....::::XXXX####|
   |    ....::::XXXX######|
   |  ....::::XXXX######XX|
   |....::::XXXX######XXXX|
   |..::::XXXX######XXXX::|
   |::::XXXX######XXXX::::|
   |::XXXX######XXXX::::..|
   |XXXX######XXXX::::....|
   |XX######XXXX::::....  |
   |######XXXX::::....    |
   |####XXXX::::....      |
 0 +----------------------+
   0                      1
He taught it a few other logical functions, and then taught a larger net to decide if a point was inside or outside a parabola:
   +----------------------+
   |###X:            :X###|
   |###X:.          .:X###|
   |####X:          :X####|
   |####X:          :X####|
   |####X:.        .:X####|
   |#####X:        :X#####|
   |#####X:.      .:X#####|
   |######X::.  .::X######|
   |#######XX::::XX#######|
   |#########XXXX#########|
   |######################|
   +----------------------+
He also used his neural net software to demonstrate a simple net that could distinguish between three bit patterns—which he later pointed out looked quite similar to the letters T, I and C. When he told us afterwards that a neural net could be trained to test for almost anything, I believed him.
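
If you wanted to recreate that kind of display, it's mostly a matter of sampling the net over a grid and bucketing its output into shades. Here's a rough sketch in C; the net_output() function is a hypothetical stand-in (it computes a soft inside/outside-parabola answer directly, so the program runs on its own) where a real trained network would go:

    /* Print an ASCII "decision surface": sample the unit square on a grid
       and shade each cell by the network's output. net_output() is only a
       stand-in for a trained net. */
    #include <stdio.h>
    #include <math.h>

    static double net_output(double x, double y) {
        double d = 8.0 * (x - 0.5) * (x - 0.5) - y;  /* >0 outside the parabola */
        return 1.0 / (1.0 + exp(-10.0 * d));         /* squash to (0,1) */
    }

    int main(void) {
        const char *shades = " .:X#";                /* 0 .. 1, light to dark */
        for (int row = 11; row >= 0; row--) {        /* print top row first */
            for (int col = 0; col < 22; col++) {
                double o = net_output(col / 21.0, row / 11.0);
                putchar(shades[(int)(o * 4.999)]);   /* bucket into 5 shades */
            }
            putchar('\n');
        }
        return 0;
    }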

Are you suggesting that I used "begs the question" incorrectly? If so, I think what I wrote must have been unclear.

All Eliza did was analyze what you write and produce canned responses using programmed patterns and substitutions. This, however, begs an interesting question. How is this behavior different from what you and I do on a larger scale?

The assertion that Eliza only uses canned techniques really does assume that this is fundamentally different from what our own brain does. Therefore it's "begging that question".

Or were you meaning something else and I missed it entirely?

Traditionally, the term "begging the question" refers to the logical fallacy I linked—basically, including an unproved statement in your "proof". The more recent use of "begging the question" to mean "makes us want to ask" is incorrect but annoyingly widespread.
This is actually meant to agree with you. To me it doesn't seem like you're begging the question. I can see, however, how someone could take what you said and think you were begging the question.

I don't agree with the notion that saying Eliza is not sentient is inherently begging the question. Eliza parses a sentence, uses canned patterns and creates an output. I can't really see it working much differently than a command line of sorts (maybe a complex one).

Dolphins have an uncanny ability to create tricks that trainers have not taught them. While the urge to create a new trick may be canned (i.e., instinctual), the product is certainly not canned (i.e., numerous dolphins don't invent the same tricks). As far as I'm concerned, that kind of evidence only strengthens the claim that humans somehow work fundamentally differently than Eliza, since our type of intelligence can be tested and observed in something non-human.
The brain is a tool, basically, designed for storing and processing the information fed to it by organic information-gathering tools. Computers work in pretty much the same way, only relying on organic beings to do the gathering for them. I can see no reason for discounting that computers can learn like human brains in a limited context ... the question is ... do they get any pleasure out of the process of collecting the info (which speaks to motivation) and can they choose to do so or not ... oh, and can they make bad puns? I personally think intelligence is pretty obvious; humanity is a bit harder to define.
*LAUGH!* I haven't thought of Eliza in a LONG time. Back on a New Orleans-based MUCK we used to run in the early 90s (it's still up, we just don't do much 'running' of it anymore), we programmed a stripper in a New Orleans bar using a very similar AI program. I used to LOVE to watch people 'talk' to this other 'character' and see how many actually figured out it was a bot and not just a really weird person.
I can see why people were confused by talking to Eliza; it's rather like chatting with a woo-woo new age person who is really into exploring the meanings behind every statement. I half expected her to recommend some sort of crystal therapy or energy healing exercise.
(feeling a wee bit snarky this morning)
When I read that bit from Eliza, I actually had a very specific person in mind who Eliza reminded me of. I think I should keep quiet on the off chance that someone reading this happens to know that person.
I don't know why I didn't ask you before, but how does the back-prop algorithm work?

I've seen mathematical examples of how it works, but my mind doesn't naturally think in mathematics, and mathematics generally obscures what is really going on for me. Got a non-mathematical way of explaining it to me?

I basically understand feed forward back prop algorithms except for that last back-prop portion.
Okay, to get this straight: if I made a neural net where the output was supposed to be 1 and I got 0 instead, I'd adjust all the weights of my "hidden to output" neurons by -1 (the error being a difference of 1)? Then take the error of the "hidden to output" (say it's a difference of 0.5) and apply a -0.5 to the "hidden to input" neurons?

Close. Actually, the error for each neuron is the expected neuron output minus the actual neuron output, multiplied by the derivative of the activation function; the synapse weights in each layer are then adjusted in proportion to those errors.

From the actual code:

        network.error.output[out] 
            = (network.neuron.target[out] - network.neuron.output[out]) 
              * sigmoid_derivative(network.neuron.output[out]);
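
To finish the picture, the hidden-layer errors and the weight updates follow the same pattern. This is only a sketch in the same style; the field names below are made up and are not the module's actual code:

        /* sketch only -- made-up field names, not the module's code */

        /* each hidden neuron collects a share of every output neuron's
           error, weighted by the synapse connecting them */
        for (hid = 0; hid < n_hidden; hid++) {
            double sum = 0.0;
            for (out = 0; out < n_outputs; out++)
                sum += network.error.output[out]
                       * network.weight.hidden_to_output[hid][out];
            network.error.hidden[hid]
                = sum * sigmoid_derivative(network.neuron.hidden[hid]);
        }

        /* every weight then moves by:
           learning rate * downstream error * upstream activation */
        for (hid = 0; hid < n_hidden; hid++)
            for (out = 0; out < n_outputs; out++)
                network.weight.hidden_to_output[hid][out]
                    += learn_rate * network.error.output[out]
                                  * network.neuron.hidden[hid];

The input-to-hidden weights update the same way, using the hidden errors in place of the output errors.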
(Anonymous)
Sweet! I got it now. Yeah, derivative, of course... I just forgot about it. I was really just missing the network.neuron.target[out] - network.neuron.output[out] portion.

Thanks! :)
If you really want insight into consciousness, sentience, etc.
... then you should read Steven Pinker's How the Mind Works and The Blank Slate. Marvin Minsky's classic The Society of Mind is also worth reading.
I think part of it is that the computer can compute and can be programmed to be logical, but it doesn't have the ability to use the other senses that human beings and other animals use on a daily basis... These other senses bring our consciousness to a level that is hard to duplicate... Try asking a computer what a pear tastes like, or what images it gets when it bites into an apple... or what its first thought is when it smells cinnamon buns...