Saturday November 01, 2014




Passing ‘Turing Test’ may be a moot point


A couple of weeks ago, headlines in science journals and mainstream media alike touted a remarkable breakthrough in artificial intelligence (AI). For the first time ever, a computer had passed the “Turing Test.”

Of course, these kinds of claims are always met with skepticism and controversy. In this case, not necessarily because it didn’t happen (although there has been plenty of criticism about that as well), but over whether it actually means anything significant.

Alan Turing was a British mathematician and a pioneering computer scientist perhaps best known for cracking the Nazis’ World War II “Enigma” code.

He was rewarded for that groundbreaking wartime achievement with a conviction for homosexuality in 1952, which ultimately contributed to his suicide, but that is an aside.

In the early 1950s, Turing proposed his “Imitation Game,” in which a human interrogator talks with two entities in two separate locked rooms, one a computer and one a human. The interrogator tries to determine which is which. Turing believed researchers’ goal should be to develop computers that are linguistically indistinguishable from people.

He predicted that by the year 2000, computers would be able to fool a human judge 30 per cent of the time after about five minutes of questioning.

During the first week of June, organizers of an event in London announced that a chatbot dubbed Eugene Goostman, posing as a 13-year-old boy from Odessa, Ukraine, had managed to dupe one out of three judges into thinking it was human.

Some disputed the result, arguing the test was not fair because Goostman was posing as a teenager whose first language was not English, so poor or misinterpreted answers to questions were understandable.

Others questioned the validity based on the low number of judges.

The real problem here, though, is in reporting this as some kind of advance in teaching computers to think. It is still a brute-force attack on the problem. Goostman was not applying knowledge and consciousness to its answers, but merely searching a vast database and applying algorithms to mimic a suitable response.
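To see why this kind of pattern matching can look convincing without any thinking being involved, consider a minimal ELIZA-style sketch in Python. This is purely illustrative and is not how the Goostman program was actually built; the rules, templates and replies below are invented for the example.

import re
import random

# Illustrative only: the "bot" matches the input against canned patterns
# and fills in a template. No understanding is involved, just string matching.
RULES = [
    (r"\bmy name is (\w+)", ["Nice to meet you, {0}!", "Hello, {0}."]),
    (r"\bI like (.+)", ["Why do you like {0}?", "{0}? Interesting."]),
    (r"\?$", ["Good question. What do you think?", "Why do you ask?"]),
]
FALLBACKS = ["Tell me more.", "I see. Go on."]

def reply(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("My name is Eugene"))   # e.g. "Nice to meet you, Eugene!"
    print(reply("Do you like music?"))  # a deflecting canned answer

A handful of rules like these can keep a casual conversation going, especially if the persona (a 13-year-old writing in a second language) gives the judge a reason to excuse odd answers, which is exactly the criticism levelled at the Goostman test.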

Even the developer of the Goostman program admits it was not a leap forward in artificial intelligence.

“The program simply imitates a human, it doesn’t answer trick questions,” said Vladimir Veselov, as reported in the Wall Street Journal’s Digits blog.

The same article quoted Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, as saying the claim that this bot passed the Turing Test is “a major overstatement which does grave disservice” to artificial intelligence.

In proposing his test, Turing was kind of dodging the question of whether computers would be able to “think.”

In that sense, Goostman could be said to have passed, but it raises the fundamental question of whether there is truly any value in pursuing such tests.

