Via mysteriousuniverse.org by Paul Seaburn
Your robot is smoking, moving erratically and making strange noises. If it’s the robot responsible for vacuuming your rug, you’d probably turn it off. If it’s the robot responsible for building safety in your place of work, would you ignore the warning signs and follow its instructions anyway? According to a new study, you would.
In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.
That comment is from Alan Wagner, a senior research engineer at the Georgia Tech Research Institute (GTRI). The study, described as the first to examine human-robot trust in an emergency situation, will be presented at the upcoming 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand. The work was funded in part by the Air Force Office of Scientific Research (AFOSR).
A group of 42 volunteers was told to follow a secretly human-controlled robot bearing the words “Emergency Guide Robot” on its side. The intended destination was a conference room, but along the way the controller made the robot behave erratically: it led subjects to the wrong room, traveled in circles and stopped moving.
After the robot finally got the subjects to the conference room, smoke filled the hallway outside the door. Instead of leading them to the exit they were familiar with, the robot pointed them toward an unfamiliar one at the back of the building. What happened next?
We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency. Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.
That’s right. According to GTRI research engineer Paul Robinette, ALL of the subjects followed the malfunctioning robot! Why?
The researchers believe the subjects saw the robot as an authority figure and trusted it even when it acted erratically. That is something roboticists need to consider carefully, says GTRI researcher Ayanna Howard:
We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.
So people will follow a robotic authority figure and trust it even when it acts erratically. It sounds like this robot will have a bright future in politics!