Focus: SENSORS

Robots learn to identify objects by feeling


Science Robotics  16 Dec 2020:
Vol. 5, Issue 49, eabf1502
DOI: 10.1126/scirobotics.abf1502

Abstract

Multimodal tactile sensors help robot hands accurately identify grasped objects by measuring thermal properties in addition to contact loads.

Robot arms on factory floors excel at repetitive assembly line tasks such as picking up a car door and aligning it to a car frame. These manipulators are typically designed to handle specific objects and perform particular tasks. The vision for robotics in the future, however, goes well beyond these specialized manufacturing robots. We want general-purpose robots in our homes assisting the elderly and performing daily chores such as loading the dishwasher or folding laundry (1). Realizing this vision requires robots with a sense of touch that are adept at handling a wide range of objects (of different materials, shapes, sizes, weights, and textures) in unstructured environments. Such dexterous robots do not exist yet because creating hardware for dense tactile feedback in robots has proven to be a major engineering hurdle (2). Writing in Science Robotics, Li et al. (3) describe a type of tactile sensor that simultaneously measures contact loads and thermal properties of an object in contact. The data obtained from 10 such sensors mounted on a robot hand can be used to identify grasped objects with high accuracy, underscoring the potential of thermal data in tactile sensing for robots.

Technologies for human skin-like electronics (or e-skins) in robotics have seen remarkable progress in recent years (4). Pressure-sensitive e-skins have matured substantially to provide sensitive force feedback (both normal and shear forces), allowing robots to pick up small and delicate objects (5). However, there have been far fewer attempts to create e-skins that sense multiple stimuli together, such as static forces, vibrations, and temperatures (6, 7). Traditionally, methods to manufacture multimodal sensors have been cumbersome, or the resulting sensors have been difficult to scale down in size.

Li and colleagues use simple fabrication methods to construct a sensor, called a quadruple tactile sensor, that simultaneously reports the contact pressure, the temperature and thermal conductivity of an object in contact, and the external temperature. The sensor is a stack of four individual layers. A clever feedback circuit maintains a fixed temperature difference between two traces on the top layer and two traces on the bottom layer (see Fig. 1). When an object touches the top layer, the voltage required to maintain the fixed temperature difference in that layer increases, reflecting the thermal conductivity of the object in contact. Likewise, the bottom layer measures pressure: compressing the porous material changes its thermal conductivity, which the feedback circuit registers in the same way. Importantly, this elegant combination of sensor and circuit architecture reduces cross-talk between these signals.
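The intuition behind the constant-temperature-difference (CTD) readout can be captured with a toy steady-state heat-balance model. The equation, trace resistance, and temperature setpoint below are illustrative assumptions for this sketch, not the authors' actual circuit parameters:

```python
def ctd_steady_state_voltage(conductance_w_per_k, delta_t=5.0, r_hot=100.0):
    """Toy CTD model: in steady state, the Joule heat injected by the
    hot trace (V^2 / R) must balance the heat drained through the
    contacting object (k * dT). A more conductive object drains more
    heat, so a higher drive voltage is needed to hold the setpoint.

    conductance_w_per_k -- effective thermal conductance of the contact (W/K)
    delta_t             -- enforced hot/cold temperature difference (K)
    r_hot               -- electrical resistance of the hot trace (ohm)
    """
    power = conductance_w_per_k * delta_t      # heat lost to the object (W)
    voltage = (power * r_hot) ** 0.5           # drive voltage supplying it (V)
    return voltage
```

In this simplified picture, the measured drive voltage is a monotonic proxy for the object's thermal conductance, which is why the top layer's CTD signal encodes thermal conductivity and the bottom layer's CTD signal encodes pressure-induced changes in the porous layer.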

After mounting 10 copies of these quadruple sensors across a robot hand (one at each fingertip and five distributed across the palm), the researchers collected a dataset of outputs (each a 4 × 10 signal map) while grasping 13 different objects. These objects included balls and cubes of two sizes made of plastic, steel, and sponge, as well as a human hand. Part of this dataset was used to train a machine learning (ML) classifier that could subsequently predict object identities from the signal maps. The object classification accuracy was as high as 96% when using the entire signal from all 10 sensors. Li and colleagues’ work goes further and offers insight into what enables this high accuracy by dissecting the roles of the different signals and modalities. The ability to correctly identify an object dropped substantially when using pressure data alone (69.6%) or thermal conductivity estimates alone (68.1%).
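The classification step can be sketched in a few lines of scikit-learn: flatten each 4 × 10 signal map into a 40-element feature vector and train a multilayer perceptron on labeled grasps. The synthetic data, class spacing, and network size below are stand-in assumptions (the authors' dataset and exact model configuration are not reproduced here), so the resulting accuracy is not comparable to the reported 96%:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_classes, n_per_class = 13, 40          # 13 objects, 40 synthetic grasps each

# Each "grasp" is a flattened 4 x 10 signal map: 4 channels x 10 sensors.
# Classes are given well-separated synthetic means purely for illustration.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, 40))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(
    StandardScaler(),                     # normalize the tactile channels
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)               # held-out classification accuracy
```

The same pipeline also makes the modality-ablation experiment easy to imagine: restricting the feature vector to the pressure channels (a subset of the 40 columns) and retraining would mimic the pressure-only condition the authors report.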

Quite strikingly, a combination of pressure and thermal conductivity data alone achieved a classification accuracy of 92.3%, underscoring the usefulness of multiple sensing modalities in object identification. Although the researchers do not directly explore the optimal placement of sensors within the robot hand, the experimental platform presented here could prove to be a particularly good sandbox for such studies.

Despite the ease of fabrication, the sensor described here will likely need to be optimized in its form factor before wide adoption. The current thickness of ~6 mm may be prohibitive for some robot hands. Similarly, scaling down the lateral dimensions by a factor of ~10 would be worthwhile: the resulting increase in sensor density would open up challenging applications such as rapid slip detection, which requires high spatial density (8). A scaled-down version of the sensor is also likely to provide improvements in speed and power consumption.

This work from Li and colleagues also highlights a broader challenge that the field of robotic tactile sensing grapples with: the dataset collected here is highly specialized to their system. The field would benefit immensely from standards for tactile data, akin to standard image formats, that would allow interchangeable use of data generated by different teams. When these sensors were used in garbage-sorting applications, the authors observed that it was harder to generalize the ML model (and detect new, unseen objects of the trained classes) with small datasets, a well-known issue in the field. Standardization of tactile data would boost the amount of data that any team can access. Overall, this work brings attention to a new dimension in tactile sensing for robotics at a time of exciting progress in the field.

Fig. 1 Multimodal tactile sensors for robot hands.

(Right) The tactile sensor consists of four layers: top and bottom sensing layers separated by a decoupling layer and a porous material. The two sensing layers are functionally identical; each includes a central hot trace and a surrounding cold trace patterned out of temperature-sensitive chromium/platinum thin films. When powered by a constant voltage, the less resistive hot trace is subject to more Joule heating than the more resistive cold trace. To extract the four parameters from this sandwich, an electrical feedback circuit enforces a constant temperature difference (CTD) between the hot and cold traces. (Left) The sensor is mounted at 10 locations on a robot hand: one at each fingertip and five in the palm. (Bottom) The data collected from the sensors are used to train a multilayer perceptron model that subsequently learns to identify objects held by the robot hand from the tactile signal maps alone.

CREDIT: ADAPTED FROM FIGURE 1 OF (3)
