A recent study from the Netherlands suggests that the more similar robots are to humans, the more threatening we may perceive them to be.
The robot characters of Star Wars, like the humanoid C-3PO and the overturned-trash-can-with-legs-shaped R2-D2, could not be mistaken for humans, yet we likely attribute some human traits to them. Immersed in the film, we align our allegiance with these mechanical helpers, wincing when R2-D2 is hit by an enemy blaster or feeling for a distressed C-3PO trapped in a Jawa sandcrawler. But we don’t feel uneasy about them like we might with a humanoid robot that is almost indistinguishable from a person – a phenomenon dubbed the uncanny valley.
Research into social robotics suggests that the more human we try to make robots, the more likely they are to evoke negative feelings of eeriness, danger, and threat. As the distinction between human and machine narrows, we get nervous! Not a good impression for robots designed to help us in close proximity.
Barbara Müller and colleagues from the Netherlands wanted to explore some of the factors that elicit negative feelings about human-like machines, and whether this was indeed a significant phenomenon. At the core of our unease is our natural inclination to attribute human traits to non-human things, as if those things think and feel like we do – a tendency called anthropomorphising. We often do this with our pets, even projecting our own psychopathologies onto them – “I know my cat is a little narcissistic and on the autism spectrum,” for example.
Our tendency to anthropomorphise is likely the way our brains assess things that look, or act, like a human so that we can respond in kind. When we anthropomorphise machines, we interact with them in a more interpersonal way, extend them more moral care, and attribute human stereotypes to them.
When a machine starts to look very much like a human, we can get ‘creeped out’ by its human-like presentation, movements, and responses. Our brains anticipate a human, but when the robot responds in slightly mechanical ways, the violation of that anticipation causes a prediction error, leading to unease or fear. This is our descent into the uncanny valley.
Researchers have an alternative explanation for the uncanny valley phenomenon that revolves around the feeling that our uniqueness as humans is being violated by machines that closely mimic us. This violation of our need for distinctiveness became one of the key assumptions underpinning the study by Müller et al. A key feature we use to distinguish humans from non-humans is mind attribution: in other words, we attribute the capacity to feel emotions (experience) and the capacity to act autonomously (agency) when assessing whether something is human.
The researchers showed participants images of robots with differing levels of human-like features to gauge human–machine distinctiveness based on anthropomorphic appearance and perceived mind attribution. Their study supported the notion that the more human-like a robot, and the more agency we perceive in the machine, the greater the perceived damage to our human identity. When the boundaries between humans and machines get blurry, we get nervous.
This is, admittedly, very basic research using only images and university students, most of whom were female. Similar studies with real robots and a more diverse mix of participants would yield much more telling results. If you have ever been in the presence of a humanoid robot responding to you in a somewhat human way, you’ll know it’s a very different experience from just seeing a photo. In this respect Sophia comes to mind: an ultra-realistic humanoid robot created by Hanson Robotics. Sophia gives me a little of that uncanny valley feeling, initially at least.
I guess as these types of human-like machines become even more realistic, and our brains attribute much greater experience and agency to them, maybe we climb out of the uncanny valley on the other side. Maybe into a relationship with machines that is as natural as relating to other humans.
However, there is something about the “human–machine distinctiveness, and perceived damage for humans and their identity” theory that may not go away at all, no matter how like us the machines become. I guess that’s where we make reference to that other great science fiction film, Blade Runner – there will always be “us” and “them”. Those outside the clan, according to some deeply primitive intuition inside us, are potentially dangerous. Maybe Star Wars, with its not-very-human humanoid robots, got it right? Maybe we should stop trying to replicate ourselves and let machines be obviously machines.