How To Tell If A Machine Is Intelligent
The Turing Test is fine, with a few tweaks
In 1950, Alan Turing proposed an “imitation game” to assess whether a machine is intelligent. In this “Turing Test,” an interrogator asks the machine a series of questions. If the machine’s replies closely resemble (imitate) those of a human, the machine is deemed (by the interrogator) to be “intelligent.”
One problem with the Turing Test is that most interrogators are easily fooled. They give the machine the benefit of the doubt, because we humans have a strong tendency to anthropomorphize and assign human agency to anything that speaks or acts non-randomly. So if a naïve interrogator asks a bland question like “How are you today?” and the machine replies “A bit under the weather, but thanks for asking,” the interrogator might easily mistake this exchange for actual intelligence when it’s really just a boilerplate reply.
Also, most academics mistakenly assume that intelligence is equivalent to solving logical puzzles or winning games like chess. Worse, they engage in meaningless debates about consciousness and “qualia” and “Chinese rooms” instead of trying to solve intelligence itself.
In the real world, most intelligent human speech is used to:
- convey information having complex dependencies, such as gossip
- convince others to support our goals (passions, agendas, drives, motivations, etc.)
To fix the Turing Test, we need to train the interrogator to probe these capabilities effectively. The machine’s job should be to convince the interrogator — i.e., provide credible evidence — that it’s intelligent, not the other way around.
To recognize whether the machine can understand complex dependencies, the interrogator should ask questions like:
I met a guy, and he seemed really nice. He wasn’t wearing a ring, but he had a tan line on his finger, and he kept touching it as if a ring was there. I don’t want to date a married guy. What do you think I should do?
Any reply from the machine that tries to distract, change the subject, feign ignorance, or otherwise avoid answering the question should simply be tossed out by the interrogator.
To determine whether the machine has complex goals, the interrogator need do nothing. Again, it’s up to the machine to make its own case. At the very least, the machine should have as its primary goal to convince the interrogator that it’s intelligent! A secondary goal should be to strive to be understood — and even liked! — by the interrogator.
That will require the machine to have a “mental model” (or theory of mind) to represent the gap between what (it believes) the interrogator already knows and doesn’t know, as well as what arguments were tried and succeeded or failed.
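Such a mental model can be pictured as a small bookkeeping structure. Here is a minimal sketch in Python; all names (`InterrogatorModel`, `record_argument`, and so on) are hypothetical illustrations of the idea, not part of any real system:

```python
from dataclasses import dataclass, field

@dataclass
class InterrogatorModel:
    """Hypothetical sketch of the machine's model of the interrogator:
    what it believes they know, and which arguments have landed."""
    believes_known: set[str] = field(default_factory=set)       # facts the interrogator (probably) knows
    believes_unknown: set[str] = field(default_factory=set)     # gaps the machine should try to fill
    argument_outcomes: dict[str, bool] = field(default_factory=dict)  # argument -> succeeded?

    def record_argument(self, argument: str, succeeded: bool) -> None:
        """Remember whether an argument worked, so it isn't blindly repeated."""
        self.argument_outcomes[argument] = succeeded
        if succeeded:
            # A successful argument implies the interrogator now knows its content.
            self.believes_known.add(argument)

    def worth_trying(self, candidates: list[str]) -> list[str]:
        """Arguments that were never tried, or were tried but not yet accepted."""
        return [a for a in candidates if not self.argument_outcomes.get(a, False)]

model = InterrogatorModel()
model.record_argument("I can reason about social cues like tan lines", True)
model.record_argument("I can recite digits of pi", False)
print(model.worth_trying(["I can recite digits of pi", "I have my own goals"]))
# → ['I can recite digits of pi', 'I have my own goals']
```

The point of the sketch is only that the machine must persist this state across turns: without a record of what succeeded and what failed, it cannot adapt its persuasion strategy to this particular interrogator.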
As Ava — the robot in the movie Ex Machina — demonstrates, the key to intelligence is being able to convince others to believe, to win them to your side, and to convert their skepticism into acceptance.