Research Article | Artificial Intelligence

Soft robot perception using embedded soft sensors and recurrent neural networks


Science Robotics  30 Jan 2019:
Vol. 4, Issue 26, eaav1488
DOI: 10.1126/scirobotics.aav1488
  • Fig. 1 Soft actuator design.

    (A) Side view of the computer-aided design (CAD) of the soft actuator with infrared-reflective balls for tracking the motion of the tip. Embedded sensors are used to estimate the coordinates of the tip and the forces applied by the actuator when in contact. The plots we present in this paper describe the position of the marker at the tip relative to the marker at the base. (B) Physical actuator with embedded soft sensors.

  • Fig. 3 Predicted motions of the fingertips.

    (A) With the cPDMS sensors. The case of applying contact around the center of the finger is shown. The tip was still free to move after the constraint was applied, but the kinematics changed. (B) With the cPDMS sensors. The case of applying contact at the tip of the finger is shown. (C) With the flex sensor. Both cases of contact (one at the tip and the other near the center of the finger) are shown. The first constraint was at the tip, and the second was near the center of the finger.

  • Fig. 4 Error plots for tracking.

    (A) With the soft cPDMS sensor. (B) With the commercial flex sensor.

  • Fig. 5 Force prediction at the fingertip.

    The raw load cell readings are filtered with a simple moving average filter with a 1-s window. External hand contact without the load cell is also shown.

  • Fig. 6 Test accuracy with virtual sensor removal.

    Performance is only slightly affected when information from one sensor is lost in the “no contact” case. In the “with contact” case, accuracy is considerably affected even when a single sensor is removed.

  • Fig. 7 Divisions of labor among the sensors.

    (A) For the case without contact, all the sensors contribute equally to the underlying model; hence, removing any one of them degrades the prediction slightly, and uniformly across the workspace. Removing the pressure information, however, drastically reduces the accuracy, showing that motor action information is also important for accurate proprioception. (B) Division of labor among the sensors once in contact. Here a clear division of labor emerges because there are no redundant sensors: each sensor is “specialized” to a particular kinematic case, as can be seen from the error distribution in the workspace.
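The Fig. 5 caption mentions smoothing the raw load cell readings with a simple moving average over a 1-s window. A minimal sketch of such a filter follows; the 100 Hz sampling rate (making a 1-s window 100 samples) is an assumption for illustration, as the paper's actual rate is not stated here.

```python
import numpy as np

def moving_average(signal, window):
    """Simple moving average: each output sample is the mean of the
    current sample and the preceding (window - 1) samples; the window
    is truncated at the start of the signal."""
    out = np.empty(len(signal))
    for i in range(len(signal)):
        start = max(0, i - window + 1)
        out[i] = signal[start:i + 1].mean()
    return out

# Assumed 100 Hz load cell sampling: a 1-s window is 100 samples.
# smoothed = moving_average(raw_load_cell, window=100)
```

This causal formulation lags the signal by about half the window, which is acceptable for suppressing load cell noise in slowly varying force readings.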
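Fig. 6 describes "virtual sensor removal": re-evaluating the already-trained model with one sensor's input blanked out, rather than retraining without that sensor. A minimal sketch of the masking step, assuming a (time steps x sensor channels) input array and a zero fill value — the paper's actual input encoding is not shown here:

```python
import numpy as np

def mask_sensor(X, sensor_idx, fill=0.0):
    """Virtual sensor removal: return a copy of the test inputs
    (shape: time steps x sensor channels) with one channel replaced
    by a constant, so the trained model can be re-evaluated without
    that sensor's information."""
    X_masked = X.copy()
    X_masked[:, sensor_idx] = fill
    return X_masked
```

The accuracy drop between predictions on the intact inputs and on `mask_sensor(X, i)` then quantifies how much the model relies on sensor `i`, and mapping that drop over the workspace yields error plots like those in Figs. 6 and 7.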

Supplementary Materials

  • robotics.sciencemag.org/cgi/content/full/4/26/eaav1488/DC1

    Table S1. Training performance of the kinematic model.

    Fig. S1. Overview of the modeling architecture and its parallel to the human perceptive system.

    Fig. S2. Differences between a commercial flex sensor and the cPDMS sensor.

    Fig. S3. Sensor response to tip contact.

    Fig. S4. Intersensor dependencies.

    Fig. S5. Contribution of pressure information for drift compensation.

    Fig. S6. Schematic of sensor fabrication process.

    Fig. S7. Sensor topology.

    Fig. S8. Schematic of the motion of the 2-DoF actuators.

    Fig. S9. Schematic of the experimental setup.

    Fig. S10. Diagram showing how contact along the continuum of the actuator results in a deformation that propagates throughout the system.

    Fig. S11. Diagram of how we obtain the force measurement at the tip of the actuator using a load cell.

    Movie S1. Kinematic prediction with the cPDMS sensors—without contact.

    Movie S2. Kinematic prediction with the cPDMS sensors—contact at tip.

    Movie S3. Kinematic prediction with the cPDMS sensors—contact along the finger.

    Movie S4. Kinematic prediction with the commercial flex sensors—without contact.

    Movie S5. Kinematic prediction with the commercial flex sensors—contact at tip.

    Movie S6. Kinematic prediction with the commercial flex sensors—contact along the finger.

    Movie S7. Force sensing experiment with the cPDMS sensors.

    Movie S8. Experiment with the cPDMS sensors showing that the same learned model is sensitive to contact anywhere along the arm.

