Roboticists should never look at their creations in the same way again


Science Robotics  26 Aug 2020:
Vol. 5, Issue 45, eabd2616
DOI: 10.1126/scirobotics.abd2616


Little Eyes by Samanta Schweblin offers a thought experiment in human-robot interaction.

Little Eyes by Samanta Schweblin (1) is a new book that offers an all-too-realistic description of a robot toy and its impact on various owners. Entertaining, horrifying, and always provocative, the book serves as a thought experiment on how the use, as well as abuse, of social media might extend to robotics. It should be required reading for every roboticist and government regulator.

The book centers on a little robot toy that is rapidly becoming the latest worldwide fad. Called a kentuki, the inexpensive doll-like robot comes in one of several animal shapes mounted on a mobile base. When a person, called a keeper, buys a kentuki, the toy is randomly paired over the Internet with an anonymous person, called a dweller, who has purchased an encrypted tablet controller. The dweller teleoperates the robot using the view from the kentuki’s camera, hence the title Little Eyes. The tablet supplies a natural language translation of the keeper’s voice, but the dweller can use only nonverbal, physical forms of expression, much like a dog, a cat, or R2-D2 without the beeps. The entertainment value of the kentuki lies in each pair of people trying to create a social relationship under the constraints imposed by the robot: A keeper wonders, “Why is my dragon doing that?”, while the dweller is thinking, “I’ve never seen snow before; can I move closer to the window for a better look?”

Little Eyes uses the kentuki to explore how people literally navigate their relationships with others, but along the way, it illustrates known hurdles to good human-robot interaction (HRI) (2). One barrier is that teleoperation induces perceptual and cognitive deficits. Dwellers are embodied in a robot without knowing the shape of the body or having proprioceptive feedback. The dweller can see only through the keyhole view of the camera, and the small body moving low on the floor distorts the normal human eye-level perspective. It is no surprise that dwellers continually crash the robot, get it stuck on furniture, or become angry at the difficulties of moving about in the world, just as health professionals using teleoperated robots to treat patients with COVID-19 are experiencing (3). A second impediment is that there is no guarantee that the mental models the keepers and dwellers form of each other will be correct. Keepers are constantly misdirected by the robot’s toy shape, ambiguous movements, and lack of verbal communication into thinking of the kentuki as a social agent, an autonomous entity with limited intelligence. However, the kentuki is not an autonomous agent; it is a social medium for the dweller (4). Forgetting the difference leads more than one keeper to voice secrets better left unsaid. The dweller is also at a disadvantage in understanding the keeper, because the kentuki sees the keeper for only a few hours each day and cannot follow the dialog with the keeper’s family and friends. It is like trying to guess the plot of a foreign-language movie with portions missing and captioning only for the main character; mistakes are inevitable.

In theory, the anonymous teleoperation and non-reciprocal communication should prevent violations of consumer privacy, but Little Eyes revels in how easily advertised privacy protections can be circumvented. Despite the random pairing, there is nothing to stop someone from activating a tablet, assessing the keeper and surroundings, and then reselling the connection. Do you want to live in a rich person’s house? Would you like to experience life in an impoverished neighborhood? Practice your French? How much would you pay to be kept by a 10-year-old boy so that you can make videos as he undresses and takes a bath? And how would that boy know whether the kentuki was showing affection or perversion? Keepers and dwellers alike muse that regulations are needed; yet, following the book’s astute reflection on human nature, they continue to play with their kentukis, assuming that bad things will not happen to them personally. In a clever twist, Little Eyes challenges the initial reaction that more privacy should be required; in one subplot, a dweller who witnesses a keeper being kidnapped cannot call the police because the name and location are private.

Little Eyes sidesteps any discussion of the culpability of the unnamed robot designers for misuse and misery. After all, the designers have made a product that does exactly what it says it will do. It is easy to imagine a backstory in which corporate lawyers advised the company that inevitable charges of negligence and product liability would be settled out of court, with enough delays for the fad to have run its course. The backstory would end with the engineers pocketing the money, shrugging, and saying, “we didn’t know.” Of course, we all know that bad things are possible with the kentuki. The drama and suspense in the book are predicated on an ordinary, non-technical reader actively second-guessing where the problems will arise and whether the characters can reach a happy ending. That drama and suspense should be even more disturbing to us as roboticists, because we have access to 20 years of HRI research capturing how people misunderstand and misuse robots.

Little Eyes is not the first work of fiction to raise alarms about the potential consequences of robotics, but it may be the most pragmatic, given that it describes the impact on individuals, not society as a whole. Ethics in robotics tends to focus on lethal autonomous weapons, trustworthy algorithms, and ethical thought experiments such as the Trolley Problem; perhaps the discussion should expand to include the following: Is it acceptable to create robots that ignore or deliberately manipulate human vulnerabilities? Is it ethical to exploit the lag in regulations to take shortcuts in gaining market share? And will we as a community act to incorporate the lessons of HRI into professional standards, coursework, and government policy? Little Eyes see us, and we roboticists should never look at our creations in the same way again.

Little Eyes

Samanta Schweblin. Oneworld Publications, 2020. 256 pp.

