Robot learning in science fiction


Science Robotics  30 Jan 2019:
Vol. 4, Issue 26, eaaw5283
DOI: 10.1126/scirobotics.aaw5283


We should discard the expectations from science fiction that a robot will become a virtuoso at a new task overnight and that learning leads to sentience.

Much of science fiction has implicitly assumed that robots would learn. In R.U.R., the 1920 play that coined the term “robot,” robots were bootstrapped with all of the technical knowledge and skills they needed for work but still needed to learn about love. By 1953, robots in science fiction stories were learning practical skills and were learning them faster and better than humans. These skills ranged from learning to play the piano—literally overnight in Herbert Goldstone’s short story “Virtuoso”—to learning to physically transform into more efficient killing machines in Philip K. Dick’s “Second Variety.” Movies took a bit longer to catch up with robot learning. It was not until 1972, with Silent Running, directed by Douglas Trumbull (Stanley Kubrick’s head of special effects on 2001: A Space Odyssey), that a movie relied on robot learning as a major plot point. The protagonist expands a maintenance robot’s skill set by teaching it, through repeated demonstration, how to take care of the last forests, now relegated to botanical gardens on space habitats. More recent movies presume robot learning is the key to sentience, whether in the friendly Chitti (Enthiran, 2010) or the darkly manipulative Ava (Ex Machina, 2014).

Along the way, science fiction created two expectations for robot learning. One is that learning will be as ubiquitous and easy for a robot as it is for humans. The second expectation is that learning leads to robot sentience. In Short Circuit (1986), the audience knew Johnny 5 was alive because it learned, comically, to cook and to drive a car. Unfortunately, neither expectation about robot learning has been met. There is no ubiquitous learning in robotics or even a consensus as to where learning fits into robot software architectures. Instead, different forms of learning have been applied to pattern recognition, skills, and intent, and the scope of these applications is too narrow to extend into sentience.

Learning to recognize objects and spoken language is essential for intelligent robots. However, although WABOT-2 was programmed to play a piano in 1985 (1), it was not until this decade that a robot could reliably recognize a piano. Recent advances in deep learning have provided exciting breakthroughs in recognition, but there are signs that progress may be reaching a plateau (2).

Learning a skill has also proven to be difficult. Robot skills, such as playing a piano, are often expressed as a policy that coordinates sensing and acting for a sequence of steps. A policy is typically learned through reinforcement learning, which involves hundreds or thousands of trials, similar to the years of practice it takes children to learn to walk and to grasp objects. Reinforcement learning is a time-consuming and risky process for an expensive robot that cannot be repaired with bandages and a hug. One solution is to use computer-based simulation, but simulation still takes time (and computational resources) and the results may have to be manually tweaked to work in the real world.
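The trial-heavy character of reinforcement learning can be seen in a minimal sketch, not taken from the article: a tabular Q-learning agent that needs hundreds of episodes of trial and error to master even a hypothetical five-cell corridor. All names, parameters, and the environment itself are illustrative.

```python
import random

def greedy(q, s):
    """Pick the highest-valued action in state s, breaking ties at random."""
    vals = {a: q[(s, a)] for a in (-1, +1)}
    best = max(vals.values())
    return random.choice([a for a, v in vals.items() if v == best])

def train_corridor_agent(length=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor: start in cell 0, reward in the last cell."""
    q = {(s, a): 0.0 for s in range(length) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        for _ in range(4 * length):                  # step budget per episode
            a = random.choice((-1, +1)) if random.random() < eps else greedy(q, s)
            s2 = min(max(s + a, 0), length - 1)      # walls at both ends
            r = 1.0 if s2 == length - 1 else 0.0
            target = r + gamma * max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if r:                                    # goal reached; end the episode
                break
    return q

random.seed(0)
q = train_corridor_agent()
# After hundreds of trials, the greedy policy in every interior cell is "move right".
policy = [max((-1, +1), key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)
```

Even this toy task takes hundreds of episodes to settle; a physical robot paying that price in wear, breakage, and wall-clock time is exactly the problem the paragraph describes.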

Another application is for a robot to learn the intent of a task, thereby eliminating the need to explicitly specify common-sense implications, such as the fact that moving down a hall also means moving out of the way of people. Understanding intent is a tough research problem in real life, but science fiction routinely assumes that robots will be able to understand our intent better than we do. For example, in Isaac Asimov’s fiction, humans created the Three Laws of Robotics, but it was a robot who abstracted the implied intent and created the Zeroth Law: A robot may not harm humanity or, through inaction, allow humanity to come to harm. There are exceptions to the benefits of robots attempting to understand intent, amusingly illustrated by John Carpenter’s 1974 movie Dark Star. In that movie, a robot “smart bomb” refuses to be launched because it will explode and, thus, die. Eventually, the crew convinces the robot bomb that its ultimate intent is to explode. Once convinced, the bomb does not wait to be launched; it explodes immediately, killing the crew.

Robot learning in real life is not only far from ubiquitous and easy; it also does not lead to sentience. Robots have been learning by demonstration since the 1970s, when companies discovered that manually programming industrial manipulators for a new task was difficult and expensive, yet factory robots are not sentient. In learning by demonstration, a robot observes a human perform a task and then performs the steps itself. Although the robot is learning to imitate a person, it is hard to imagine this type of learning extending into the autonomy and self-awareness of a Johnny 5 or an Ava.
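The industrial record-and-replay form of learning by demonstration can be sketched in a few lines, assuming a hypothetical two-joint arm (all names and poses here are illustrative): the robot stores the poses a human guides it through, then interpolates between them on playback. It imitates without understanding, which is why this kind of learning offers no path to sentience.

```python
def replay_trajectory(waypoints, steps_between=10):
    """Replay a demonstrated sequence of joint poses with linear interpolation.

    A human 'teaches' by moving the arm through waypoints (tuples of joint
    angles); the robot then regenerates a smooth path through the same poses.
    Nothing here generalizes beyond the exact motion that was demonstrated.
    """
    trajectory = []
    for a, b in zip(waypoints, waypoints[1:]):
        for i in range(steps_between):
            t = i / steps_between
            trajectory.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    trajectory.append(waypoints[-1])                 # end exactly on the last pose
    return trajectory

# Hypothetical demonstration: three taught poses for a two-joint arm.
taught = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.5)]
path = replay_trajectory(taught, steps_between=5)
print(len(path), path[0], path[-1])
```

The playback reproduces the taught motion pose for pose; change the workpiece or the goal and the demonstration must be redone from scratch.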

However, science fiction does get one important aspect of learning correct: The real challenge may be getting the robot to learn the right thing. This challenge was augured in possibly the earliest science fiction story about robot reinforcement learning, “Callahan and the Wheelies,” by Stephen Barr in 1960. In that story, the little wheelie robots learn to move and navigate. They also learn to run away from, and eventually attack, their designer because he is associated with turning them off at night. That plot twist seems far-fetched, but a paper published in the 2018 Artificial Life Conference proceedings called “The surprising creativity of digital evolution” has a list of equally unexpected things that real systems have learned when given incomplete goals and bounds (3). Examples include robots that learn to walk over walls instead of going around them, legged robots that walk instead on their elbows, robots that deliberately lie when they find food to deceive competitors, and robots that spin rather than move in a straight line.

What can we take away from nearly 70 years of science fiction and scientific research about robots? That learning may seem easy for humans but is really hard for robots. Progress in robot learning is accelerating in the areas of recognition, skills, and intent, but for the time being, we should discard the notions that a robot will become a virtuoso at a new task overnight and that learning means sentience.

