Phil and Stephen discuss new research that shows how people respond differently to robots based on what the robots say to them. Is this proof that humans are easily manipulated by robots or does it offer hope that, when the time comes, people will treat sentient robots right?
In roughly half of the experiments, the robot protested, telling participants it was afraid of the dark and even begging: “No! Please do not switch me off!” When this happened, the human volunteers were more likely to refuse to turn the bot off. Of the 43 volunteers who heard Nao’s pleas, 13 refused. And the remaining 30 took, on average, twice as long to comply as those who did not hear the desperate cries at all. (Just imagine that scene from The Good Place for reference.)
What are the potential downsides to this finding?
People may offer empathy where it is not needed. And can robots be used to manipulate people into doing things they otherwise wouldn’t do?
What potential upsides do we see?
When in doubt, empathy may not be a bad default response. Better to extend it to things that don’t need it than to withhold it from those that do?
Also, is this experiment something like the reverse of the famous Milgram experiment? In this case, the test subjects refused to obey orders that violated their consciences!
Eternity Kevin MacLeod (incompetech.com) | Licensed under Creative Commons: By Attribution 3.0 License | http://creativecommons.org/licenses/by/3.0
Videos and Images from Pixabay.com and other sources.