‘It’s a Trap’: In an Emergency, Do Not Believe the Evacuation Robot
A new study examines how readily people trust robots in emergencies, even when the machines make obvious mistakes.
These experiments are the first to focus on human-robot trust in an emergency situation, and the results will be presented next week at the International Conference on Human-Robot Interaction in New Zealand.
As The Telegraph reported, “People asked to find their way out of a burning building overwhelmingly elected to follow an ‘emergency guide robot’ along a previously unknown route instead of taking the exit they entered through.”
Strangely, in an earlier non-emergency study, people declined to follow a robot whose past performance had been less than perfect. Or is it awareness of our own errors that fuels the notion that a machine is more trustworthy than our mortal selves, even though every robot must be programmed by humans?
Researchers from Georgia Tech, with funding from the Air Force, ran an experiment to see whether people trying to escape a high-rise building would trust a robot to lead them to safety.
The robot sometimes led participants to the wrong room, where it circled a couple of times before exiting. In other cases, the robot, controlled by a hidden researcher, stopped moving entirely, and an experimenter told the participants that it had broken down. The building fire was simulated with artificial smoke and alarms, so the subjects believed they were actually in danger.
A recent study performed by engineers at the Georgia Tech Research Institute reached a striking conclusion: human beings are remarkably trusting of robots and other machines that are supposedly built to exceed human capability. Why wouldn’t you trust one to get you to safety?
Georgia Tech research engineer Alan Wagner said that in the experiment, the volunteers followed the robot’s directions to a point where they could have been put in danger had it been a real emergency.
Researchers even staged a moment with the participants before the emergency began: the robot was meant to lead them to a conference room, but in some cases it behaved erratically along the way, and a researcher told the volunteers that it had broken down.
Robots have proven they can drive vehicles, open doors, build houses and cars, and vacuum our homes, so it is tempting to assume the possibilities for what they can do are endless.
“We absolutely didn’t expect this,” he said.
“These are just the type of human-robot experiments that we as roboticists should be investigating,” said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering.

Some people followed the robot even when it led them toward a dark room blocked by furniture. If a robot carried a sign saying it was a ‘child-care robot,’ would people leave their babies with it?
It is worth noting that the participants were indeed wary and distrustful of the erring robots when the tests were conducted outside a high-stress emergency situation.
In the emergency study, however, people were willing to trust the robot even when it was clearly counterintuitive to do so.