Robot Displays Self-Awareness in Logic Puzzle
Kagan Pittman posted on July 27, 2015

Some people thought it could never be done, but it’s official. Robots have achieved a form of basic self-awareness.

Researchers at the Rensselaer Polytechnic Institute AI and Reasoning Lab lined up a trio of Nao robots, programmed with a proprietary algorithm, to play the King’s Wise Men inductive logic puzzle.

Here is the puzzle’s scenario: A King invites three wise men to his court and has a hat placed on each of them. Each wise man can see the color of the other men’s hats, but not the color of his own. The hats are either blue or white, and the King assures the wise men that at least one of them is wearing a blue hat. The King announces that whichever man stands up and correctly declares the color of his own hat first will become his new advisor.
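The classic solution can be checked by brute force. Here is a minimal sketch (illustrative only, not the researchers’ code): enumerate every hat assignment the King’s guarantee allows, and model each round of silence as eliminating the worlds in which someone would already have spoken.

```python
from itertools import product

# Hypothetical brute-force check of the King's Wise Men puzzle
# (illustrative only, not the researchers' code).
# Worlds: hat colors for the three men; the King guarantees >= 1 blue.
worlds = [w for w in product(["blue", "white"], repeat=3) if "blue" in w]

def who_can_deduce(actual, possible):
    """Indices of men who can name their own hat color: each man keeps
    only the worlds matching the two hats he sees, and checks whether
    his own color is then forced."""
    winners = []
    for i in range(3):
        seen = [actual[j] for j in range(3) if j != i]
        candidates = {w[i] for w in possible
                      if [w[j] for j in range(3) if j != i] == seen}
        if len(candidates) == 1:
            winners.append(i)
    return winners

# Hardest case: all three hats are blue. Each round of silence
# eliminates the worlds in which someone would have spoken.
actual = ("blue", "blue", "blue")
possible = list(worlds)
round_no = 0
winners = []
while not winners:
    round_no += 1
    winners = who_can_deduce(actual, possible)
    if not winners:
        possible = [w for w in possible if not who_can_deduce(w, possible)]

print(round_no, winners)  # → 3 [0, 1, 2]
```

In the all-blue case, no one can answer from sight alone; each man must also reason from the others’ silence, and all three can finally declare “blue” on the third round.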

In the robots’ version, the scenario was a bit different. Instead of hats, the robots were told that two of the three had been given a “dumbing pill” that disabled their voices, while the third had received a placebo. In reality, the researchers had simply pressed a button on two of the robots’ heads to silence them.

When the robots were asked to determine whether they had been given the dumbing pill or the placebo, things got interesting.

After a few moments of silence, one robot stood up and declared aloud, “I don’t know.” It then corrected itself, saying, “Sorry, I know now. I was able to prove that I was not given the dumbing pill.”

In solving the puzzle, the robot demonstrated an awareness of the puzzle’s rules, the ability to recognize its own voice, and the capacity to distinguish itself as an entity separate from the other robots.
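The inference the robot carries out can be paraphrased in a few lines. This is an illustrative sketch under assumed names, not the actual DCEC implementation: the robot knows the pill silences the voice, hears an utterance it recognizes as its own, and concludes by contraposition that it was not dumbed.

```python
# Illustrative paraphrase of the robot's inference -- names and rules
# are assumptions, not the RAIR Lab's DCEC code.
#
# Premise 1: the dumbing pill disables a robot's voice.
# Premise 2: I just said "I don't know" and recognized the voice as mine.
# Conclusion (by contraposition): I was not given the dumbing pill.

def resolve(spoke: bool, recognized_own_voice: bool) -> str:
    if spoke and recognized_own_voice:
        return "I was not given the dumbing pill."
    return "I don't know."

# Before speaking, the robot has no evidence; after hearing itself, it does.
print(resolve(False, False))  # → I don't know.
print(resolve(True, True))    # → I was not given the dumbing pill.
```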

This was all made possible by a proprietary algorithm called the Deontic Cognitive Event Calculus (DCEC). This multi-sorted quantified modal logic allows the robots to carry out a form of reasoning by building on the first-order Event Calculus (EC).

“EC has been used quite successfully in modelling a wide range of phenomena, from those that are purely physical to narratives expressed in natural-language stories,” reads the Rensselaer AI and Reasoning Lab (RAIR) website concerning the program. “The EC is also a natural platform to capture natural-language semantics, especially that of tense.”

EC’s weakness, however, is that it cannot emulate concepts such as knowledge and belief without producing inconsistencies.
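To give a flavor of the Event Calculus layer that DCEC builds on, here is a toy sketch in which events initiate fluents and a `HoldsAt` query is derived from the event history. Every name here is an illustrative assumption; the RAIR Lab’s formalism is far richer.

```python
# Toy Event-Calculus flavor (illustrative; not the RAIR Lab's system).
# Events initiate fluents; holds_at(fluent, subject, t) is derived
# from the recorded event history.
events = [
    (1, "give_pill", "robot1"),     # pill silences robot1
    (1, "give_pill", "robot2"),     # pill silences robot2
    (1, "give_placebo", "robot3"),  # placebo leaves robot3's voice on
]

# Initiates(event, fluent): which fluent each event type switches on.
initiates = {"give_pill": "mute", "give_placebo": None}

def holds_at(fluent, subject, t):
    """HoldsAt: the fluent was initiated for the subject at or before t
    (no terminating events in this toy history)."""
    return any(te <= t and initiates[name] == fluent and who == subject
               for te, name, who in events)

print(holds_at("mute", "robot1", 5))  # → True
print(holds_at("mute", "robot3", 5))  # → False
```

A real EC axiomatization also handles termination of fluents and the frame problem; the point here is only the event-to-fluent derivation described above.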

A video of the demonstration was shot by Selmer Bringsjord, chair of the Department of Cognitive Science at the Rensselaer Polytechnic Institute.

Bringsjord will present his findings at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2015), held in Kobe, Japan from Aug. 31 to Sept. 4.

To learn more about the AI and Reasoning Lab and their work, visit the RAIR Lab website.
