Taking robots at face value
Assoc Prof Xu Hong's research seeks to understand humans’ emotional and facial responses to robots.
By Xu Hong and her team
As more smart city solutions are developed, our social interactions have expanded beyond people to include machines: robots in workplaces such as the hospitality sector, AI-powered virtual assistants with three-dimensional (3D) animated faces, and automated driver-assistance systems in vehicles.
These encounters raise questions about the perceptual mechanisms at play: How do we perceive and react to different types of faces? What factors influence our sense of trust or unease when we see an artificial face? How do our facial expressions reflect our distrust of automated systems?
To explore these questions, our team conducted studies that delve into the intricacies of visual processing and trust in faces and automation. Our findings offer insights into designing automated systems, as well as the faces of robots and digital characters, that inspire trust and are perceived positively.
THAT EERIE FEELING
With the rise of AI in modern society, some may wonder if humanoid robots will eventually live among us. How ready are we, as humans, to accept such robots?
One theory argues that our affinity for entities with human traits, such as androids, increases as they become more human-like in appearance. However, we feel revulsion when these entities appear almost, but not quite fully, human. This revulsion, known as the uncanny valley effect, explains why some people find realistic-looking animatronics, cyborgs and mannequins eerie.
Our team explored the mechanisms behind the uncanny valley effect, a phenomenon first described by robotics expert Masahiro Mori in 1970 and long debated by roboticists and scientists alike, by studying the level of visual processing required for a person to experience it.
In a series of experiments, 111 participants were shown 91 faces that ranged from completely robotic to actual human for either 50 milliseconds (brief exposure) or three seconds (longer exposure). We asked the participants to rate each face based on how eerie they perceived it to be.
[Figure: The 91 faces shown to participants spanned the robot-human spectrum, from completely robotic (left) to almost human to actual human (right); each was displayed for either a brief moment or a longer period before being rated for eeriness. Credit: Pexels; Telenoid – Osaka University and ATR Hiroshi Laboratories; ITU/R.Farrell on Flickr/AI for GOOD Global Summit/CC BY 2.0.]
We found that participants experienced the uncanny valley effect similarly in both the brief and longer exposure conditions. This suggests that people register artificial faces as eerie early in visual processing, even when the faces are glimpsed only briefly. The feeling of unease seems to arise almost instinctively and instantaneously.
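To illustrate, below is a minimal Python sketch (not our actual analysis code) of how mean eeriness ratings might be summarised per exposure condition to locate the most eerie face along the robot-human spectrum. The rating scale, data and variable names are placeholders.

```python
# A minimal sketch of summarising eeriness ratings per exposure condition.
# The random scores below are placeholders, not real experimental data.
import numpy as np

N_PARTICIPANTS = 111
N_FACES = 91  # faces ordered from fully robotic (index 0) to fully human (90)

# ratings[condition] is a (participants x faces) array of eeriness scores,
# here assumed to lie on a 1-7 scale
ratings = {
    "brief_50ms": np.random.default_rng(0).uniform(1, 7, (N_PARTICIPANTS, N_FACES)),
    "long_3s": np.random.default_rng(1).uniform(1, 7, (N_PARTICIPANTS, N_FACES)),
}

for condition, scores in ratings.items():
    mean_per_face = scores.mean(axis=0)        # average eeriness per face
    valley_face = int(mean_per_face.argmax())  # the valley: peak eeriness
    print(f"{condition}: peak eeriness at face {valley_face} "
          f"(mean rating {mean_per_face[valley_face]:.2f})")
```

In real data, a similar peak in eeriness at an intermediate, almost-human position in both conditions would be the signature of the uncanny valley.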
These findings shed light on how robots, androids, virtual characters and 3D AI assistants can be designed to appear less unsettling. An artificial face with either a naturally human-like appearance or a clearly robotic one is more likely to be perceived positively, steering clear of the uncanny valley.
THE FACE OF TRUST
Another factor to address in human-robot interactions is users’ trust in automation technologies, such as collision detection systems in vehicles. People need to trust machines and systems in order to effectively use them.
One way to gauge users’ trust in automation is by studying their facial expressions, which are made up of different movements in the muscles of the face. For example, to form a smile, we lift our cheek muscles – which creates small wrinkles around our eyes – and pull up the muscles at the corners of our lips. Together, these facial movements express happiness.
Our team sought to determine if specific facial expressions and movements could reliably indicate distrust. We examined various facial expressions of distrust and identified several combinations of facial muscle movements, which we compared to the muscle movements involved in expressing the six basic emotions: anger, disgust, fear, happiness, sadness and surprise.
We found two main expressions of distrust, indicated largely by movements of the eyebrows and eyes. Their unique combinations of facial muscle movements set these two expressions apart from those of the six basic emotions.
This discovery suggests the potential of analysing facial muscle movements to detect distrust in real time, which could be useful in developing automated systems that depend on a high level of user trust.
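As a rough illustration of what such real-time detection could look like, the Python sketch below flags distrust when a video frame's estimated facial action unit (AU) intensities match a signature combination. The two AU combinations and the threshold are hypothetical stand-ins, not the specific eyebrow and eye movement patterns identified in our study.

```python
# A minimal sketch of rule-based distrust detection from facial action
# units (AUs). The signatures below are illustrative assumptions only.
from typing import Dict

# Hypothetical distrust signatures: each is a set of FACS action units that
# must all be active. AU1 = inner brow raiser, AU4 = brow lowerer,
# AU5 = upper lid raiser, AU7 = lid tightener.
DISTRUST_SIGNATURES = [
    {"AU4", "AU7"},  # lowered brows with tightened eyelids
    {"AU1", "AU5"},  # raised inner brows with widened eyes
]

ACTIVATION_THRESHOLD = 0.5  # assumed intensity cut-off on a 0-1 scale

def detect_distrust(au_intensities: Dict[str, float]) -> bool:
    """Return True if any distrust signature is fully active in this frame."""
    active = {au for au, level in au_intensities.items()
              if level >= ACTIVATION_THRESHOLD}
    return any(signature <= active for signature in DISTRUST_SIGNATURES)

# Example frame from a hypothetical AU-estimation pipeline
frame = {"AU1": 0.1, "AU4": 0.8, "AU5": 0.0, "AU7": 0.6}
print(detect_distrust(frame))  # True: matches the first signature
```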
For instance, users of an autonomous driving system have varying levels of trust in it. If the system's driving behaviour, such as its lane changing, is too aggressive, some drivers might become uncomfortable with the system and even distrust it. This can significantly impact the user experience and the driver's ability to operate the vehicle safely.
However, if the system is able to detect the user’s trust level from facial expressions, this data could be used to improve and adjust the system’s driving style to boost the driver’s trust and acceptance.
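To make this concrete, here is a toy Python sketch of such a feedback loop: when distrust is detected, the system demands a larger time gap before changing lanes, then eases back towards its default once trust recovers. All thresholds and parameters are illustrative assumptions, not values from any deployed system.

```python
# A toy sketch of trust-adaptive driving behaviour. Parameters are
# hypothetical, chosen for illustration only.
DEFAULT_GAP_S = 2.0  # assumed comfortable minimum gap for a lane change
MAX_GAP_S = 4.0      # most cautious setting

def adjust_lane_change_gap(current_gap_s: float, distrust_detected: bool) -> float:
    """Return the minimum gap (seconds) to accept before changing lanes."""
    if distrust_detected:
        # Back off: require a larger gap so lane changes feel gentler
        return min(current_gap_s + 0.5, MAX_GAP_S)
    # Relax gradually back towards the default driving style
    return max(current_gap_s - 0.1, DEFAULT_GAP_S)

gap = DEFAULT_GAP_S
gap = adjust_lane_change_gap(gap, distrust_detected=True)   # 2.5: more cautious
gap = adjust_lane_change_gap(gap, distrust_detected=False)  # 2.4: easing back
```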
BETTER HUMAN-ROBOT INTERACTIONS
These two studies highlight the complexity and nuances of how people perceive artificial faces and automated systems. They underscore the importance of considering people’s visual and emotional responses when designing artificial entities and systems.
As we continue exploring how people understand faces, we aim to contribute towards creating more effective and empathetic human-robot interactions to improve users’ experiences and how advanced technologies are perceived and accepted in society.
---
Assoc Prof Xu Hong from NTU’s School of Social Sciences is a psychology researcher who studies the neural mechanisms of how people perceive what they see, as well as their applications in real life and human-centric artificial intelligence (AI) systems.
Details of this research can be found in Heliyon (2024), DOI: 10.1016/j.heliyon.2024.e27977; and Human Interaction, Emerging Technologies and Future Systems V (2022), DOI: 10.1007/978-3-030-85540-6_34.
This article first appeared in NTU's research and innovation magazine Pushing Frontiers (issue #24, October 2024).