How a computer program didn't QUITE convince people it was a human being

You may have heard over the weekend that a computer bested the Turing test for the first time ever. Here's why that's not entirely accurate.

The Turing test is one of those notions that sits firmly in the "OMG computers are going to take over humanity" part of the collective human consciousness. But when you get right down to it, what Turing wanted to test wasn't "Can machines think?" -- it was "Can machines imitate human conversation well enough to trick someone into thinking a machine can think?"

The science community has been buzzing this weekend because, according to some, a computer program has managed to convince more than 30 percent of the judges who questioned it that it was actually a 13-year-old boy from Ukraine. That 30 percent figure comes from Turing's own prediction that, by the year 2000, machines would be able to fool 30 percent of human judges after five minutes of questioning -- and it's commonly treated as the bar for passing.

Let's be clear -- that claim is not in dispute. We accept that the test in question was performed and that those subjects bought the idea that a chatbot was a real, living human being.

But, digging into the details, we remain dubious that:

  • This is the first example of the test being "beaten"
  • This counts as passing the Turing test at all

First things first: this is hardly the first time someone has claimed a program passed the Turing test. Just shy of three years ago, for example, a test was run in India using a more sophisticated version of Cleverbot, with some 1,334 votes cast on how human the program's responses seemed compared with those of an actual human. Cleverbot's responses were judged to be 59.3 percent human, while the humans themselves scored an only marginally better 63.3 percent.

As studies go, that sounds pretty conclusive, right? However, if you've ever tried chatting with the Cleverbot available online, it's almost immediately evident that you're talking to a machine. And even with the version used in the study, there were questions at the time about the participants and how familiar they were with chatbots in general.

Now, let's compare that with the new program people are saying has passed the Turing test. A team based in Russia created its own chatbot and presented it to the test subjects as Eugene Goostman, a 13-year-old boy from Ukraine. The bot fooled 33 percent of the judges at an event held at the Royal Society in London.

Says co-creator Vladimir Veselov, "Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality."

That might sound clever, but this detail lies at the heart of our doubts that this take on the Turing test really qualifies.

Think about it -- tell a group of English-speaking judges that the "person" they're chatting with comes from a country where English isn't the primary language, and they will significantly lower their expectations for grammar and fluency. For example, when "Eugene" passed the Turing test, "his" response was "I feel about beating the turing test in quite convenient way. Nothing original." Take away the "English as a second language" excuse, and it's almost painfully obvious that we're talking to a bot.
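To see why the persona does so much of the work, here's a minimal sketch of an ELIZA-style pattern-matching bot -- our own illustrative toy, not the Goostman team's actual code, which was never published. Every pattern and response in it is hypothetical. Notice that when nothing matches, it falls back on canned deflections that only sound plausible coming from a distracted 13-year-old non-native speaker:

```python
# Illustrative sketch only -- NOT Eugene Goostman's real implementation.
# It shows how canned patterns plus persona-friendly deflections can
# carry a short conversation.
import random
import re

# Hypothetical pattern -> response table; a real bot's would be far larger.
RULES = [
    (r"\bhow old\b", ["I am 13 years old."]),
    (r"\bwhere\b.*\bfrom\b", ["I am from Odessa. It is big city in Ukraine."]),
    (r"\bturing test\b", ["I feel about beating the turing test in quite "
                          "convenient way. Nothing original."]),
]

# Fallbacks for anything the rules don't cover. The "13-year-old ESL"
# persona makes these read as teenage evasion rather than a failed lookup.
DEFLECTIONS = [
    "I don't know such things, I am only 13 years old.",
    "Could you repeat? My English is not so good.",
    "By the way, do you like computer games?",
]

def reply(message: str) -> str:
    """Return the first matching canned response, else a random deflection."""
    for pattern, responses in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("Where are you from?"))            # hits a canned rule
    print(reply("Explain quantum entanglement."))  # falls back to deflection
```

Strip away the persona, and those fallback lines give the game away: a judge expecting a fluent adult flags the broken grammar and non-answers within a question or two.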

What we're saying is, if you have to begin your test with a qualifier that major, you're gaming the test to a degree that makes the conclusions almost entirely irrelevant.

Is it interesting? Sure. Did a supercomputer just thoroughly convince most humans that it was a regular dude? Not even close.

What do you think? Is this a major leap forward for technology, or a test with enough flaws in its premise to take the shine off the announcement?
