FOCUS | HUMAN-ROBOT INTERACTION

Understanding robots


Science Robotics  30 Sep 2020:
Vol. 5, Issue 46, eabe2987
DOI: 10.1126/scirobotics.abe2987

Abstract

Elucidating the neural and psychological mechanisms underlying people’s interpretation of robot behavior can inform the design of interactive autonomous systems, such as social robots and automated vehicles.

The cognitive sciences, and social neuroscience in particular, have made substantial progress in elucidating the neural and psychological mechanisms underlying human social interaction, especially the interpretation of others’ actions and intentions. Relatively little, however, is known about how people interpret the behavior of interactive autonomous systems, such as social robots and automated vehicles. Arguably, a crucial prerequisite for individual and social trust in such technologies is that people can reliably interpret and anticipate their behavior in order to interact with them safely. Hence, the design of interactive autonomous systems needs to be informed by a thorough understanding of the mechanisms underlying human interaction with such systems. In this issue of Science Robotics, Bossi et al. (1) take a step in that direction, reporting experimental results from a neuroscientific study of people’s interpretations of humanoid robot behavior. Their findings indicate that people’s individual biases toward treating robots as either intentional agents or mechanistic artifacts can be detected and predicted at the neural level. These findings complement related studies in which humans attributed intentionality to both humanoid robots (2) and driverless vehicles (3) and rated the degree of intentionality as similar across the human and nonhuman cases.

The different attitudes toward robots can be illustrated with the example shown in Fig. 1. As a pedestrian encountering a driverless car at a crosswalk, you might be asking yourself: Has that car seen me? Does it understand I want to cross the road? Does it intend to stop for me? This would be an example of the intentional stance (4), i.e., the interpretation of behavior based on the attribution of intentional (directed) mental states, such as beliefs and intentions. Alternatively, you could take a design stance and predict the car’s behavior based on the general assumption that such vehicles are designed to detect people and not harm them. Although that second strategy might seem more straightforward, note that it would still require you to make additional, more situation-specific assumptions about whether or not the car has actually detected you.

Fig. 1 The intentional stance in action.

Pedestrians interacting with driverless vehicles might ask themselves, “Has that car seen me?”, “Does it understand I want to cross the road?”, or “Is it planning to stop for me?” This involves the attribution of intentional (directed) mental states, such as beliefs (e.g., there is a person on the crosswalk), desires (e.g., not to collide with people), and intentions (e.g., to slow down and let the person cross the road).

CREDIT: A. KITTERMAN/SCIENCE ROBOTICS
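The contrast between the two stances can be made concrete with a minimal, purely illustrative sketch of the crosswalk example (not from the article; the scene representation, function names, and decision rules are hypothetical). The intentional stance predicts from attributed beliefs and intentions; the design stance relies on an assumed design norm, which still hinges on the situation-specific assumption that detection has actually succeeded.

```python
# Illustrative sketch only: two ways a pedestrian might predict a
# driverless car's behavior at a crosswalk. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class CrosswalkScene:
    pedestrian_visible: bool   # is the pedestrian plausibly in the car's view?
    car_slowing_down: bool     # observable behavioral cue from the car


def intentional_stance(scene: CrosswalkScene) -> str:
    """Attribute mental states: 'it has seen me and intends to stop for me'."""
    believes_person_present = scene.pedestrian_visible
    intends_to_yield = believes_person_present and scene.car_slowing_down
    return "cross" if intends_to_yield else "wait"


def design_stance(scene: CrosswalkScene) -> str:
    """Rely on an assumed design norm: 'such cars are built to detect and
    yield to pedestrians' -- which still presupposes detection actually worked."""
    assumed_detection_works = scene.pedestrian_visible  # situation-specific assumption
    return "cross" if assumed_detection_works else "wait"


scene = CrosswalkScene(pedestrian_visible=True, car_slowing_down=False)
print(intentional_stance(scene))  # 'wait'  -- no cue that the car intends to stop
print(design_stance(scene))       # 'cross' -- trusts the design assumption
```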

To put the work by Bossi and colleagues into a broader perspective, it is worth noting that the lack of intentionality or intentional directedness has played a central role in discussions of the limitations of artificial intelligence (AI) for about 50 years now (5–7). The detailed philosophical discussion is complex; however, the key points are reflected to some degree in the expectations that people have of AI and related technologies. When, for example, on a hairdresser’s website, a customer’s comment gets mistranslated from Swedish to English by Google Translate as stating that her “son was so pleased with the mowing,” this might be amusing, but probably does not come as a surprise to anybody. Mistranslations like this are common—in this case because the Swedish word for the cutting of hair and grass is the same—and people understand by now that Google Translate does not, in any deep sense, understand the texts it translates. In philosophical terms, the intentionality (aboutness) of the text is derivative or observer relative; i.e., its meaning resides in the human observer’s head.

Robotic systems, on the other hand, usually sense and move in the same physical environments as people. Both robot lawnmowers and driverless cars avoid obstacles more or less reliably and therefore might be argued to have some form of intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. This is obvious to anyone whose robot lawnmower has run over hedgehogs or small toys left on the lawn because it is not equipped to detect them or does not attach the same meaning to them. For more complex systems, the limitations are less obvious. When, for example, the U.S. National Transportation Safety Board released its report on the 2018 accident involving an autonomous car in Tempe, AZ, some people expressed surprise that “Uber’s self-driving car didn’t know pedestrians could jaywalk” (8). This seems to indicate an expectation—probably shared by many people—that driverless cars should have a human-like common sense understanding of human behavior. That might be expecting too much, though.
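The lawnmower example can be caricatured as a “closed world” perception module: anything outside the classes the system was designed to recognize is simply not represented and so carries no meaning for the robot. The following is a hypothetical sketch for illustration only; the class list and function are invented here, not taken from any real product or from the article.

```python
# Illustrative sketch: a closed-world perception module. Objects outside the
# designed-for classes exist in the shared physical environment but not in
# the robot's perceptual world. All names are hypothetical.

KNOWN_OBSTACLE_CLASSES = {"boundary_wire", "large_rock", "tree"}


def perceive(detected_objects: list[str]) -> list[str]:
    """Return only the obstacles the system was designed to recognize."""
    return [obj for obj in detected_objects if obj in KNOWN_OBSTACLE_CLASSES]


# A hedgehog and a small toy are present on the lawn, but invisible to the planner:
scene = ["tree", "hedgehog", "toy_car"]
print(perceive(scene))  # ['tree']
```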

This brings us to the issues of anthropomorphism and expectation management. Autonomous technologies, such as social robots and automated vehicles, are in many cases easy to interpret in terms of human-like intentionality and mental states, but there is clearly a risk of overly anthropomorphic attributions. The role of anthropomorphism in human-robot interaction is not yet well understood (9), but its role in human interpretations of animal behavior has been studied for a longer time. Urquiza-Haas and Kotrschal (10), for example, presented a model of human interpretation of animal behavior, according to which mechanisms of embodied and social cognition interact depending on phylogenetic (evolutionary) distance and shared morphological and behavioral features. This might also be a useful starting point for understanding the neural and psychological mechanisms underlying human interpretation of interactive autonomous systems. The findings of Bossi et al. (1), if they can be corroborated, could be an important contribution to this endeavor because they constitute a notable step toward understanding when, why, and how people take the intentional stance toward robots. In the long run, this could contribute to designs that better manage user expectations. That means, once we know more about the underlying neural and psychological mechanisms, we might be able to better guide users of interactive autonomous systems by encouraging appropriate attributions of intentionality and mental states and discouraging inappropriate ones, thereby reducing unrealistic expectations of such systems.

REFERENCES

