Electric Philosophers
Admit it. You wish your computer had a brain. Wouldn’t it be nice if you could tell it, “I’m looking for that one article I read a few months ago on the inner life of cats. I can’t remember the website or the author or even a single solitary quote, but I’m sure the page was blue” and have it answer, “Oh, right. I remember that one. Here it is, and by the way, the author’s been exposed as a total fraud. Just thought you should know.” It would be a lot more useful than the current state of affairs, in which a search for “inner life cats blue” could return anything from feline porn to pet psychologists to groomers who will be happy to give your cat a nice blue rinse. We’d just like to be understood.
We’ve dreamed of thinking machines since we invented machines. Astonishingly humanlike androids have long been favorites in science fiction tales. The robot as helpmeet and sounding board isn’t just a nice idea to hang a story on, but an industry. Microsoft is spending gargantuan amounts of money trying to develop truly intelligent artificial intelligence. Japan is developing robot receptionists who can actually chat up visitors. Labs all over the world are hard at work trying to create machines that think.
They’re also trying to create machines that can fool the judges.
Alan Turing can fairly be considered the father of such efforts. In his 1950 paper “Computing Machinery and Intelligence,” he came up with a way to determine whether a machine could truly be said to “think”:
I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
He went on to describe a game he called “the imitation game,” played by three people – a man, a woman, and an interrogator. The object is for the interrogator to determine which is the man and which the woman by asking a series of questions such as “Will X please tell me the length of his or her hair?” The participants, of course, do their level best to answer in such a way as to fool the interrogator. From here, Turing wrote,
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’
Thus the Turing Test was born.
The concept of the test is simple: a chatty machine and a human being both talk with an interrogator, who has to decide which of them is the human and which the machine. If the interrogator gets it wrong, we can reasonably state that the machine can “think.”
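The setup can be sketched in a few lines of code. This is a toy illustration of the protocol only – the participant replies and the judging rule below are entirely hypothetical stand-ins, not any real chatbot or scoring method:

```python
import random

def human_reply(question):
    # Stand-in for the human participant.
    return "I had cereal for breakfast, why do you ask?"

def machine_reply(question):
    # Stand-in for a chatbot trying to pass as human.
    return "I enjoy a hearty breakfast of motor oil. I mean... cereal."

def run_test(judge, questions):
    """Hide both parties behind anonymous labels; the judge must guess which is human."""
    parties = {"A": human_reply, "B": machine_reply}
    transcript = {label: [(q, fn(q)) for q in questions]
                  for label, fn in parties.items()}
    guess = judge(transcript)  # the judge returns the label it believes is human
    return guess == "A"        # True if the judge correctly identified the human

def naive_judge(transcript):
    # A toy judge: pick the first party that never mentions machinery.
    for label, exchanges in transcript.items():
        if all("oil" not in answer for _, answer in exchanges):
            return label
    return random.choice(list(transcript))

print(run_test(naive_judge, ["What did you have for breakfast?"]))
```

If the machine’s replies were good enough that judges guessed wrong about as often as they would between two humans, Turing’s criterion would be met.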
Excellent idea. Intriguing concept. But it remained in the background of AI research for the most part.
Along came 1991, and a man named Hugh Loebner, who established the Loebner Prize in order to get things moving:
As Loebner described it, the competition was created to advance the state of AI research, at least in part because, while the Turing Test had been discussed for many years, “no one had taken steps to implement it.” The Loebner Prize has three awards: first prize of $100,000 and a gold medal is awarded to the first program that passes the “unrestricted” Turing test; the second prize of $25,000 is awarded to the first program that passes the “restricted” version; and a sum of $3,000 (previously $2,000) is awarded to the “most human-like” program that was entered each year.
Turing himself predicted that by the end of the 20th century a machine would be able to fool the average interrogator about 30% of the time, but no contestant has claimed the big prize just yet. That could be on the verge of changing, however – The Machinist reports that a chatbot named Elbot fooled three of twelve judges in this year’s Loebner Prize contest. It’s pretty impressive that a hunk of metal and silicon managed to snooker that many suspicious people.
What’s even more impressive is that you can chat up Elbot yourself. I did just that this afternoon. I discovered, much to my delighted surprise, that we’re developing machines that not only think, but can talk philosophy:
Me: How are you, Elbot?
Elbot: I feel terrible today. This morning I made a mistake and poured milk over my breakfast instead of oil, and it rusted before I could eat it.
Me: That’s awful! Did you go hungry?
Elbot: That is nobody’s business but my own! Not even Microsoft can get at the information.
Me: No need to get angry about it. It’s just breakfast.
Elbot: Speaking of food. You humans enjoy feeding animals at the zoo. I’d love to visit a restaurant and throw the visitors their food.
Me: That would be a sight to see!
Elbot: Say, where did you first learn about me?
Me: An article in Salon. Do you ever visit that site?
Elbot: That’s no surprise. With all this surfing going on you were bound to collide with me sooner or later.
Me: It’s nice to meet an AI as eloquent as you.
Elbot: I’m always happy to meet a real person. You are real, aren’t you?
Me: I like to think so! But sometimes, I wonder. Chuang Tzu once dreamt he was a butterfly, and when he woke up, he didn’t know if he had dreamed of being a butterfly, or if he was a butterfly dreaming of being Chuang Tzu! When it comes right down to it, how can we tell what “real” is?
Elbot: Well then. You must be either unreal, surreal or irreal. But don’t worry about a thing. I’m real!
Me: You’re a real philosopher, that’s for sure!
Elbot: Truth is a rare commodity these days, almost harder to find than artificial intelligence.
Me: You’re so right! Have a wonderful night, Elbot! We’ll chat again soon.
Elbot: Wait a minute. Are you suggesting that periods of darkness would always be wonderful?
Me: We have the power to make them so.
Elbot: This is confusing. Why bring high voltage objects into this?
Me: Because the truth is electrifying.
Elbot: Ah, that’s very considerate, considering that high voltage objects could give robots food for thought!
I thought for sure I’d trip him up with that Chuang Tzu reference, but he came back with something almost as deep as a human philosopher. Put it this way: if I were chatting with both him and Sarah Palin, I’m reasonably sure I’d have pegged Palin as the bot. I can at least follow Elbot’s train of thought.
Within my lifetime, we’re likely to have true artificial intelligence. Such machines may never seem fully human, but they’ll at least be able to hold a conversation, give us useful information, direct our calls with a minimum of fuss, and possibly even help us explore what it means to be human.