Alan Turing in slate at Bletchley Park. Courtesy of Jon Callas

Alan Turing led a team of codebreakers at Bletchley Park that cracked the German Enigma machine cipher during WWII, but that is far from his only legacy.

In the year of the 100th anniversary of his birth, researchers published a series of ‘Turing tests’ in the Journal of Experimental & Theoretical Artificial Intelligence; these entailed a series of five-minute conversations between human and machine, or human and human. Judges were tasked with identifying whether they were talking to a human or a computer. Can machines succeed in ‘being human’ in real conversations? The transcripts presented in the paper reveal fascinating insights into human interaction and our understanding of artificial intelligence.

In 12 out of 13 tests the judge wrongly identified the interlocutor as a machine when in fact they were human. Turing tests were designed to study machine ‘thinking’ through language and ultimately to establish whether a machine could fool an interrogator into believing it was genuinely human. So why, in this case, did so many believe the reverse?

The cursory conversations were quite one-dimensional, for example:

Judge: Do you like cooking?
Entity: no you?
Judge: Yes. Do you like eating?
Entity: yes!
Judge: What is you fav meal of all time?
Entity: i dont know there are so many?
Judge: Give me one then
Entity: pizza you?

Did such mundane talk give the impression of being machine-generated? Other transcripts revealed humor, geographical and historical knowledge, gaps in general knowledge, evasion, misunderstanding, dominance, and use of slang. All of these are traits associated with humanity but, in these instances, they seemed to skew the decision-making process, leading the judges to the wrong choice. This novel research in Turing tests shows that humans are not always able to recognize what is very typically human, let alone artificial intelligence.

In 1950 Turing asked, “Can machines think?” The authors write: “to ‘think’ merely means ‘to be of the opinion’ or to ‘judge’, which indeed the judges were... As a result, we can conclude that thinking does not require understanding or specific knowledge, although in the human case both facilities are likely to help”.

Citation: Kevin Warwick & Huma Shah, ‘Human misidentification in Turing tests’, Journal of Experimental & Theoretical Artificial Intelligence (Taylor & Francis). The full article is available online.