(NEW YORK) How would you react if a robot accused you of cheating at rock-paper-scissors?
You'd probably like it more than a robot that plays fair, say researchers at the 2016 RO-MAN conference, held at Teachers College, Columbia University, this past weekend. The conference, put on by the Institute of Electrical and Electronics Engineers (IEEE), brings together top researchers and roboticists to discuss the present and future of robotics and human-robot interaction. This year, the 25th anniversary of the conference, hundreds of researchers gathered for six days of workshops, paper presentations and discussions. The theme was Human-Robot Collaboration, and many of the discussions centered on humanoid robots: robots made to resemble humans.
Think C-3PO, just not as shiny.
Roboticists have struggled in recent years to get people to like their robots, rather than seeing them as slightly creepy and, well, robotic. As robots move out of labs and factories and into our homes and workplaces, scientists want to make sure that we're comfortable with them.
In an attempt to lower the creepy factor, a team led by Mriganka Biswas and Professor John C. Murray at the University of Lincoln in the United Kingdom brought in participants to play rock-paper-scissors with their robot, the Multi-Actuated Robotic Companion, or MARC. MARC is a 3D-printed humanoid robot with no lower body. Biswas and Murray theorized that people might like MARC more if it were imperfect, like its human counterparts. The idea was that MARC would read people's facial expressions over time and "learn" from them, a process roboticists call "social learning." They wanted MARC to pick up some distinctly human behaviors: "overconfidence, lack of knowledge, ignorance and sometimes persistence." This, they theorized, would make MARC more relatable.
The researchers also wanted MARC to do something else that humans tend to do: attribute successes to our own brilliance but blame our failures on others. Researchers call this the self-serving bias. Biswas and Murray wrote an algorithm so that when MARC won, it would congratulate itself and play the next round with more confidence; when it lost, it would blame "external causes: the robot was not ready, the robot was looking somewhere else, or the participant cheated." Participants could argue with MARC, but then it would accuse them of cheating.
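The paper doesn't publish MARC's code, but the self-serving logic it describes can be sketched in a few lines of Python. The specific phrases, the confidence variable and the class name below are illustrative assumptions, not the authors' implementation:

```python
import random

# Illustrative sketch of a self-serving-bias response policy for
# rock-paper-scissors. The excuses mirror those quoted in the paper;
# everything else is an assumption, not MARC's actual code.

EXCUSES = [
    "The robot was not ready.",
    "The robot was looking somewhere else.",
    "The participant cheated!",
]

class SelfServingPlayer:
    def __init__(self):
        self.confidence = 0.5  # grows with wins, never shrinks with losses

    def react(self, won: bool) -> str:
        if won:
            # Success is attributed internally: brag and gain confidence.
            self.confidence = min(1.0, self.confidence + 0.1)
            return "Of course I won. I am brilliant at this game!"
        # Failure is attributed externally: pick an excuse instead.
        return random.choice(EXCUSES)

    def handle_objection(self) -> str:
        # If the participant argues, escalate to an accusation of cheating.
        return "You must have cheated!"
```

The key design point is the asymmetry: wins update the robot's internal state, while losses trigger only external attributions, which is what makes the bias "self-serving."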
Biswas and Murray found that people actually prefer a bragging robot that accuses them of cheating to one that plays fair; in fact, participants found MARC's bragging "hilarious." Imperfections are an important part of being human, and by making social robots flawed, Murray notes, "there's a little bit more humanity to them. The aim is to allow people to relate more easily to social robots. We want to know how people build trust levels with the robots over weeks and months."