Focus: HUMANOIDS

A whole-body support pose taxonomy for multi-contact humanoid robot motions


Science Robotics  20 Dec 2017:
Vol. 2, Issue 13, eaaq0560
DOI: 10.1126/scirobotics.aaq0560

Abstract

A taxonomy of whole-body support poses promotes representation, recognition, and generation of multi-contact humanoid robot motions.

Humanoid robot locomotion and balancing performance is still far from reliable, robust, and versatile. It could be greatly improved by using additional hand supports, but the generation of multi-contact motions remains algorithmically challenging. Humans, by contrast, have extraordinary locomotion and balancing capabilities, which they achieve by using their whole body and different contacts with the environment. Learning from human observation gives us insight into how humans use support contacts and lets us apply machine learning techniques to multi-contact locomotion, similar to (1, 2), where contact sequencing is used to classify, understand, recognize, and reproduce manipulation actions.

Although human motion has been widely studied, motions with multiple contacts have received much less attention. Few works have addressed how humans use their hands for support when performing locomotion and manipulation tasks (3). The space of body configurations that use the environment for support has very high dimensionality. Whole-body dimensionality reduction has been used extensively (4–6). However, it has not been applied to simplify and structure the space of whole-body support poses, as was done in the area of grasping to derive grasp taxonomies (7, 8). Building on previous work (9) and inspired by the grasping taxonomies, we propose a taxonomy of human body poses that use contact with the environment for support (Fig. 1). The taxonomy results from a combinatorial investigation of all possible combinations of body parts in contact with the environment and possible contact types. We also present 388 motion recordings of humans performing multi-contact tasks and analyze the support contacts they use. The analysis allows us to partially validate the taxonomy in a data-driven way and to study human contact sequencing to understand, represent, and generate multi-contact motions.

Fig. 1 The whole-body support pose taxonomy.

Each sketch represents all the poses with the same number and type of contacts; each class includes the right/left symmetric cases when applicable. The lines represent possible transitions between poses. At the bottom, two examples of detected transitions between support poses in two motion recordings: the left plot shows the translation achieved in each pose, and the right plot shows the duration in seconds spent in each pose.

Credit: A. Kitterman/Science Robotics

Conceptually, the problem of a human or a humanoid robot using multiple contacts with the environment for support and balance is similar to that of a hand holding an object with a grasp, where the body of the human or robot plays the role of both the hand and the manipulated object. Humans establish contact with the environment and exploit contact-based interactions for three reasons: to manipulate an object, by accident, or to provide balance support. Our proposed taxonomy deals only with support contacts.

Each row in the taxonomy corresponds to a fixed number of contacts and supports, ranging from 1 to 4. Poses with more contacts are considered resting positions, listed in the last column, and correspond to poses with torso contact, similar to the palm contact in power grasps. The contact area increases from left to right depending on the contact type, which includes hand, foot, arm, and knee contacts (labeled H, F, A, and K, respectively) and a hand grasping a handle (labeled G). For the poses F2A1 and F2A2, we considered two types of arm contact: contact with the forearm, labeled A1, and contact with the arm, labeled A2.
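As a reading aid, the pose labels in Fig. 1 can be decoded mechanically. The sketch below is a minimal Python illustration of ours, not code from the paper: it parses labels such as F2H1 into contact counts, and it deliberately leaves arm poses opaque because the digits in F2A1/F2A2 denote forearm versus arm contact variants rather than counts.

```python
import re
from collections import Counter

# Illustrative decoding of the taxonomy's pose labels (our sketch, not code
# from the paper). For H, G, F, and K the digit is read as a contact count,
# e.g. "F2H1" = two foot contacts plus one hand contact. Arm poses are left
# opaque because in F2A1/F2A2 the digit marks forearm vs. arm contact, not a count.
LABEL_PATTERN = re.compile(r"([HGFK])(\d)")

def parse_pose_label(label: str) -> Counter:
    """Return contact counts, e.g. 'F2H1' -> Counter({'F': 2, 'H': 1})."""
    contacts = Counter()
    for contact_type, count in LABEL_PATTERN.findall(label):
        contacts[contact_type] += int(count)
    return contacts

def number_of_supports(label: str) -> int:
    """Total simultaneous contacts, i.e. the taxonomy row the pose belongs to."""
    return sum(parse_pose_label(label).values())

assert parse_pose_label("F2G1") == Counter({"F": 2, "G": 1})
assert number_of_supports("F2H2") == 4
```

Keeping the decoder this small makes the taxonomy row of any non-arm pose available directly from its label.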

The complexity of the body allows many other types of contact, ranging from fingertip light touch to full-palm planar contact, and similarly for the feet. Consequently, the number of possible support poses grows rapidly, increasing the complexity of the problem. In section 1 of the Supplementary Materials, we provide the combinatorial formula for the number of poses given a set of possible contact types, and we list the criteria used to discard some of them and reach the final 36 + 10 poses shown in the taxonomy.
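To make the combinatorics concrete, the brief sketch below enumerates candidate contact-type multisets for one to four simultaneous contacts. It is an assumption-laden illustration only: the per-type limits are ours, and it implements neither the authors' formula nor the pruning criteria of section 1 of the Supplementary Materials that lead to the final 36 + 10 poses.

```python
from itertools import combinations_with_replacement

# Contact labels as in Fig. 1: H = hand, G = hand grasping a handle,
# F = foot, A = arm, K = knee.
CONTACT_TYPES = ["H", "G", "F", "A", "K"]

# Assumed per-type limits: two hands, two feet, two arms, two knees, and each
# hand contributes either a palm contact (H) or a grasp (G), never both.
MAX_PER_TYPE = {"H": 2, "G": 2, "F": 2, "A": 2, "K": 2}

def candidate_poses(max_contacts: int = 4):
    """Enumerate contact-type multisets with 1 to max_contacts contacts."""
    poses = []
    for n in range(1, max_contacts + 1):
        for combo in combinations_with_replacement(CONTACT_TYPES, n):
            counts = {t: combo.count(t) for t in set(combo)}
            if any(counts[t] > MAX_PER_TYPE[t] for t in counts):
                continue  # more contacts of a type than available body parts
            if counts.get("H", 0) + counts.get("G", 0) > 2:
                continue  # only two hands, whether touching or grasping
            poses.append("".join(f"{t}{counts[t]}" for t in sorted(counts)))
    return poses

print(len(candidate_poses()), "candidate poses before any pruning criteria")
```

Running it prints the number of candidates under these assumed limits; the taxonomy's own anatomical and practical criteria prune such a candidate set much further.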

To validate the taxonomy, we recorded participants performing multi-contact locomotion with a VICON marker-based passive-optical motion capture system running at 100 Hz. We considered 17 different types of motions grouped into four tasks: walking with supports, going up or down a set of stairs with a handrail, stand-to-kneel/kneel-to-stand motions, and crawling. In total, we analyzed 388 motion recordings, mostly performed by four healthy participants with a mean height of 1.80 m and a mean weight of 70.7 kg. Details of the experiment can be found in section 5 of the Supplementary Materials.

The results of our analysis for two example motions are shown at the bottom of Fig. 1. Using the method proposed in (9), we detected support contacts and segmented the motions according to the visited support poses. We recorded both the spatial displacement achieved in each pose transition and the time spent in each pose, among other data such as body configuration during the transition, velocity, and center of mass location. We could not detect grasp contacts because hand motions were not captured; therefore, only the poses without grasp contacts are validated. Figure S2 shows the pose transitions obtained from all motion recordings used in our analysis.
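For readers who want to reproduce this kind of bookkeeping, the sketch below shows one way to collapse a per-frame support-pose sequence into segments with durations and center-of-mass displacements. It is a minimal sketch under our own assumptions: the per-frame poses are taken as already detected (e.g., with the method of (9)), the 100-Hz rate matches the capture setup described above, and all names and fields are illustrative rather than the authors'.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

FRAME_RATE_HZ = 100.0  # matches the 100-Hz VICON capture rate reported above

@dataclass
class PoseSegment:
    pose: str              # support pose label, e.g. "F2" or "F2H1"
    start_frame: int
    end_frame: int         # inclusive
    displacement_m: float  # CoM translation achieved while in this pose

    @property
    def duration_s(self) -> float:
        return (self.end_frame - self.start_frame + 1) / FRAME_RATE_HZ

def segment_by_pose(pose_per_frame: Sequence[str],
                    com_positions: Sequence[Tuple[float, float, float]]) -> List[PoseSegment]:
    """Collapse consecutive frames with the same support pose into segments."""
    segments: List[PoseSegment] = []
    start = 0
    for i in range(1, len(pose_per_frame) + 1):
        if i == len(pose_per_frame) or pose_per_frame[i] != pose_per_frame[start]:
            # Straight-line CoM displacement between the segment's first and last frame.
            deltas = [a - b for a, b in zip(com_positions[i - 1], com_positions[start])]
            displacement = sum(d * d for d in deltas) ** 0.5
            segments.append(PoseSegment(pose_per_frame[start], start, i - 1, displacement))
            start = i
    return segments
```

Grouping such segments by pose label yields pose-wise displacement and duration summaries of the kind plotted at the bottom of Fig. 1.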

The main results of our analysis are the identification of different contact sequencing strategies when single hand supports are used during walking and the observation of significant differences in body configuration for the same support poses in kneeling motions. Details can be found in sections 3 to 5 of the Supplementary Materials. Our analysis contributes to the debate in the locomotion literature about whether the sequence of supports must be determined a priori or whether it is instead the result of an optimization problem (10). Our work reveals that, although the contact sequences for the same motion may appear completely different a priori, deterministic strategies for using supports exist and can be identified, learned, and reproduced by robots. Therefore, the community can greatly benefit from learning techniques that simplify the challenging problem of multi-contact motion planning. Future work includes recording new multi-contact motions with hand data to validate the full taxonomy and learning motion primitives associated with pose transitions for the generation of multi-contact motions.

In conclusion, the proposed whole-body support pose taxonomy and our data-driven analysis provide a benchmark for structuring the space of support poses and a tool for studying, representing, and understanding the use of support contacts in whole-body humanoid motion generation.

SUPPLEMENTARY MATERIALS

robotics.sciencemag.org/cgi/content/full/2/13/eaaq0560/DC1

Supplementary Text

Fig. S1. Types of possible support contacts of the body with the environment.

Fig. S2. Transition graph of whole-body pose transitions automatically generated from the analyzed motions.

Fig. S3. Bar charts showing the displacement of the participant center of mass associated with every detected support pose.

Fig. S4. Bar charts showing the CoM displacements associated with all detected support poses during all the motions using stairs with a handrail.

Fig. S5. Confusion matrices that show the similarity between support pose sequences of all the motions, using the word error rate in eq. S2.

Fig. S6. Confusion matrices that show the similarity between support pose sequences where the pairs of support poses listed in the two rows of table S1 are considered the same.

Fig. S7. Timeline of pose transitions for a kneeling down motion with left hand support.

Fig. S8. Just before knee contact, the configuration of the support pose changes significantly.

Fig. S9. Setup used for motion capture, corresponding to a motion walking on a beam with support from both hands.

Table S1. Walking with one support transition clusters.

Table S2. Description of the analyzed motions.

Table S3. Description of subjects.

References (11–23)


Acknowledgments: The research leading to these results has received funding from the European Union Seventh Framework Programme under grant agreement no. 611832 (WALK-MAN).
