Research Article | PROSTHETICS

BMI control of a third arm for multitasking


Science Robotics  25 Jul 2018:
Vol. 3, Issue 20, eaat1228
DOI: 10.1126/scirobotics.aat1228

Abstract

Brain-machine interface (BMI) systems have been widely studied to allow people with motor paralysis to control assistive robotic devices that replace or restore lost function, but not to extend the capabilities of healthy users. We report an experiment in which healthy participants were able to extend their capabilities by using a noninvasive BMI to control a human-like robotic arm and achieve multitasking. Experimental results demonstrate that participants were able to reliably control the robotic arm with the BMI to perform a goal-oriented task while simultaneously using their own arms to do a different task. This outcome opens possibilities to explore future human body augmentation applications for healthy people that not only enhance their capability to perform a particular task but also extend their physical capabilities to perform multiple tasks simultaneously.

INTRODUCTION

Humans have always tried to augment their physical and cognitive capabilities. The question of whether humans will be able to control body augmentation devices with their brains is an active topic of discussion in the scientific community. Recent advancements in robotics and neuroscience are enabling the development of key technologies that may allow humans to augment their capabilities by adapting to new external interfaces.

The field of human body augmentation aims to explore the use of artificial external devices to increase the physical capability of able-bodied individuals (1). The concept of human augmentation is not new; in fact, the goal of bionics research, as stated by von Gierke et al. half a century ago, is to extend human physical and intellectual capabilities by prosthetic devices in the most general sense (2). Exoskeletons (3), for instance, are well-known examples of the integration of humans and machines to enhance human physical abilities such as strength or endurance. More recent studies have proposed augmentation devices in the form of supernumerary robotic limbs (SRLs), such as wearable robotic arms (4) or fingers (5) that are able to support heavy objects (6), grasp multiple objects (5), or play instruments (7). The methods used to control these devices range from manual operation with a joystick to the use of electromyographic signals recorded from other limbs (5). In these cases, the augmentation system receives a control command by decoding the user’s intention from the movement of a body part.

Currently, SRLs are used for collaborative tasks. For instance, Bretan et al. presented a study in which a robotic prosthetic limb with an attached drumstick end effector [originally intended for an amputee drummer (7)] was mounted on a drummer’s shoulder and activated by foot to play music simultaneously with the drummer’s real hands. Parietti and Asada developed wearable robot arms attached to the wearer’s body that can work closely with the wearer by holding an object, positioning a workpiece, and operating a powered tool (4). Another wearable robot attached to the human waist, developed by the same team, efficiently supports the body when the human adopts fatiguing postures, for example, hunching over, squatting, or reaching toward the ceiling (6). Wu and Asada presented wrist-mounted robot fingers that assisted the human hand in performing a variety of tasks such as grasping and holding objects (5). Llorens-Bonilla et al. presented two additional robotic arms worn through a backpack-like harness that tracked the user’s hands to activate and assist the user by holding objects, lifting weights, and streamlining the execution of a task (8). In all these cases, the system is commanded by decoding the user’s intention from the movement of another body part.

Multitasking, on the other hand, involves performing two independent tasks simultaneously. Other than using artificial limbs that hold objects so operators could free their hands to engage in another task (4, 5, 8), we know of no studies that have investigated participants performing two completely different tasks simultaneously by using an SRL in parallel with their own limbs. The use of SRLs for multitasking may not only enhance the capability to achieve a particular task but also extend the number of tasks a human can perform simultaneously. In contrast to SRLs for collaborative tasks that depend on control signals decoded through the movement of a certain body part, SRLs for multitasking need to decode the user’s intention without considering the movement of other body parts, because these body parts are engaged in another task. This may be possible if the intention of the user is decoded directly from the signals of the brain. Controlling body augmentation devices with the brain and multitasking are two of the main goals in human body augmentation.

Recent advances in invasive brain-machine interface (BMI) have allowed the monitoring and decoding of neural activity from brain areas, such as sensorimotor cortex, through brain implants, which bypass the activity of the muscles and deliver control commands to an external assistive device such as a robotic arm (9). Moreover, extensive research with noninvasive BMI systems has shown that users do not need to undergo brain implant surgery to be able to perform a single task by controlling devices such as a virtual keyboard (10), a wheelchair (11), or a robotic arm (12). These systems usually require high levels of concentration for the user to control the robotic device that substitutes for the user’s ability to speak or move. Until now, BMI systems have been used for recovery or replacement of a lost ability, but not to enhance or extend the abilities of the person. To our knowledge, no BMI studies have explored the control of an SRL to achieve multitasking. The use of a BMI system to control body augmentation devices to do multitasking not only encounters great challenges but also opens possibilities for healthy participants to extend their physical capabilities.

In this study, we present experimental evidence that healthy participants were able to control a human-like robotic arm by using a noninvasive BMI to achieve multitasking. Participants controlled the robotic arm to perform two tasks simultaneously: one task with the robot arm and a different task with their own arms. By imagining a goal-oriented action that activated the robot to grasp an object, participants simultaneously had to balance a ball placed on a board held with their own hands.

RESULTS

Fifteen participants took part in the experiment, which consisted of grasping a bottle with a human-like robot arm by imagining the grasping action. The robotic arm was placed next to the participants to create the illusion that it was coming out of their own bodies, as shown in Fig. 1A. The experiment consisted of a baseline session for a parallel task (ball balancing), exploratory sessions for participants to become familiar with the tasks, calibration and evaluation sessions for the single and multiple tasks, a final balancing session, and a post-experimental survey, as explained in the Materials and Methods section.

Fig. 1 Experimental setup.

(A) Chair with a human-like robotic arm on its side. (B) Ball-balancing board containing color-shape markers. (C) For the single task, participants had to imagine the goal-oriented action of grasping or releasing a bottle with the robotic arm. (D) For the multitask, participants had to imagine the goal-oriented action of grasping the bottle while simultaneously balancing a ball on a board held with their own hands. Participants were asked to wear black gloves with long sleeves to avoid false-positive detections of the color markers by the camera during the ball-balancing sessions.

During the single-task condition, participants were asked to imagine the action of grasping or releasing a bottle with the robotic arm. The experimenter positioned a bottle close to the robotic arm so that it could be grasped and took the bottle away when it was released, as shown in Fig. 1C. The robot arm had a preprogrammed movement trajectory from a resting position to a grasping position that was controlled by the power spectral density (PSD) of an electrode automatically selected during the calibration session. Each session consisted of 10 trials of 20 s each; each trial comprised a 10-s grasping period and a 10-s releasing period.

During the multitask condition, participants had to control the robot arm in the same way as in the single-task condition while simultaneously balancing a ball placed on a board, as shown in Fig. 1D. Twenty trials were collected, and the performance of each trial was evaluated using the precalibrated threshold, as explained below. The overall performance was 67.5% (median) for the single task and 72.5% (median) for the multitask, with no statistically significant difference. The histograms in Fig. 2 show the distribution of participants with respect to their performances for the single task and the multitask. On the basis of the visual appearance of the histograms, we hypothesized that the single-task distribution is unimodal and that the multitask distribution is bimodal, with two main groups of people: those who achieved good performance and those who achieved bad performance. Hartigan’s dip test (13) confirmed this hypothesis: only the multitask distribution was significantly multimodal (P = 0.0001). A Gaussian mixture model (GMM) fitted with the expectation-maximization algorithm (GMM-EM) (14) was then used to compute the probability function that best fits the modalities found. Figure 2B shows a visual representation of the probability function resulting from the GMM-EM algorithm. The performance score (68.8%) corresponding to the boundary between the two modalities was used to separate the two groups: good performers (above the boundary) and bad performers (below the boundary). The graph in Fig. 3 shows the median performance score and the number of good performers (eight people, median = 85) and bad performers (seven people, median = 52.5) for the multitask condition.
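As an illustration of this analysis step, the following minimal sketch (not the authors’ code) fits a two-component Gaussian mixture with EM, as in (14), and locates the score at which the most likely component changes, which serves as the boundary between the two performance modes. The score values and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative multitask scores (percent correct); NOT the study data.
scores = np.array([35, 40, 45, 50, 52.5, 55, 60, 75, 80, 85, 85, 90, 90, 95, 100],
                  dtype=float).reshape(-1, 1)

# Fit a two-component Gaussian mixture with expectation maximization.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)

# Scan a fine grid and take the score at which the most likely component flips;
# this crossing point plays the role of the boundary between the two modes.
grid = np.linspace(scores.min(), scores.max(), 1000).reshape(-1, 1)
labels = gmm.predict(grid)
boundary = grid[np.argmax(labels != labels[0]), 0]
print(f"Estimated boundary between performance modes: {boundary:.1f}%")
```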

Fig. 2 Histograms showing the distribution of people with respect to their performances for the single task and multitask conditions.

(A) Single task. (B) Multitask. Panel (B) also shows a visual representation of the probability function resulting from the GMM-EM algorithm. The performance score (68.8) corresponding to the boundary between the two modalities was used to separate the two groups: good performers (above the boundary) and bad performers (below the boundary).

Fig. 3 The two modalities for the multitask condition.

Performance median score and the number of good performers (eight people, median = 85) and bad performers (seven people, median = 52.5) for multitask condition. Error bars indicate min/max values.

As previously mentioned, the ball-balancing task was performed and evaluated at the beginning of the experiment (baseline), during the multitask session, and at the end of the experiment. The evaluation metric was computed by a color-shape detection algorithm that kept track of the number of times a yellow ball passed over the center of the evenly distributed colored markers placed on the balancing board (Fig. 1B) during each trial.
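As a rough illustration, the sketch below implements one plausible reading of this scoring rule, assuming the ball and marker centers have already been extracted from the camera frames by the color-shape detector: a point is counted whenever the ball newly passes over a marker center, and the final score rewards an even spread of points across the four markers (100% when each marker receives 25% of the points, as specified in Materials and Methods). The tolerance radius and the evenness penalty are assumptions, because the exact formula is not given.

```python
import numpy as np

def balancing_score(ball_positions, marker_centers, tol_px=10):
    """Score one balancing trial from the tracked ball trajectory.

    ball_positions: list of (x, y) ball-center coordinates, one per video frame.
    marker_centers: list of four (x, y) marker-center coordinates.
    """
    centers = np.asarray(marker_centers, dtype=float)
    counts = np.zeros(len(centers))
    prev = None  # marker the ball was over in the previous frame, if any
    for x, y in ball_positions:
        d = np.hypot(centers[:, 0] - x, centers[:, 1] - y)
        over = int(d.argmin()) if d.min() <= tol_px else None
        if over is not None and over != prev:
            counts[over] += 1  # count a new pass over this marker center
        prev = over
    if counts.sum() == 0:
        return 0.0
    share = counts / counts.sum()                              # fraction of points per marker
    deviation = np.abs(share - 1.0 / len(centers)).sum() / 2   # 0 (uniform) .. 0.75 (all on one)
    return 100.0 * (1.0 - deviation)                           # 100% for a perfectly even spread
```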

DISCUSSION

Several points can be discussed from these outcomes. First, more than half of the participants (8 of 15) were able to multitask by controlling the robot arm with the BMI while simultaneously performing another task with their own arms. If we compare only the overall performance during the single task (median = 67.5%) and the multitask (median = 72.5%), there is no significant difference; however, the histograms show that, for multitasking, people can be classified as good or bad performers. For single-tasking, in contrast, most participants achieved performance scores similar to those reported in the traditional motor imagery–based BMI literature (15).

Although the reason why only some participants were able to multitask is not obvious, it may be that, for these participants, the brain activations while performing two tasks were easier to distinguish than the brain activation during the single task, resulting in better SRL control. Conversely, the bad performers may have failed to operate the third arm because of the complexity of performing two tasks simultaneously, which increased the load of switching attention between them. Dividing attention between two different tasks might not have allowed these participants to fully concentrate on both tasks at the same time, so they achieved a higher performance score in one task or the other but not in both. This is particularly evident in the bad performers, who had higher balancing performance scores during the multitask evaluation session than the good performers did (fig. S6).

As previously mentioned, during the multitask condition, attention was drawn away from one task to the other, which means that cognitive resources also shifted from the primary task to the parallel task, interfering with successful performance of the primary task. The effects of this interference are not completely understood, but it may be that both tasks require the same brain regions to process different kinds of information, or that a brain region activated by the parallel task (i.e., ball balancing) affects the activity of another region that becomes active during the primary task.

Figure S5 shows a slight decrease in balancing performance during the multitask session compared with the initial balancing session used as the baseline. This outcome seems reasonable given the complexity of multitasking. However, balancing performance was expected to recover in the final balancing session, because it was conducted under the same conditions as the baseline session, but this did not happen. It is possible that, during the final ball-balancing session (after participants had already performed the same balancing task repeatedly), they felt more confident in their ball-balancing skill but performed the task less accurately. Because the vision system was programmed to assign higher scores when the yellow ball passed exactly over the colored markers of the board, participants who balanced the ball but did not meet this evaluation criterion received lower scores.

Other factors that may have affected the arm control performance are (i) the visual appearance of the robot, (ii) the degree of the illusion of ownership of the third arm that participants may have experienced during the experiment, and (iii) the type of task performed during the experiment. For the multitask condition, besides these factors, the participant’s prior coordination and attention skills also might have played an important role. Regarding visual appearance, we know from previous studies that the human likeness of the robot may elicit the illusion of body ownership transfer (BOT) in operators (16–19). In our previous work, this sensation of owning the robot’s body was confirmed when operators controlled the robot either by performing the desired motion with their own body or by using a BMI that translated motor imagery commands into robot movement (20, 21). In this experiment, we again used a human-like robotic arm to elicit the illusion of ownership in participants, but this outcome was not fully achieved. According to the survey results (fig. S8), the scores directly related to body ownership (Q1, “I felt I was looking at my own arm,” and Q2, “I felt the robot arm was part of my body”) were not very high. The reasons for this outcome may be (i) the lack of induction of the illusion (visual and touch feedback were not synchronized as in the rubber-hand illusion), (ii) a discrepancy between the robot arm’s movement and participants’ expectations, and/or (iii) the short amount of time that participants spent with the arm, which was insufficient for them to get used to it. Although the illusion of BOT was not fully achieved, the human likeness of the arm used in the experiment may have contributed to the good control performance.

Another factor that may have contributed to the control performance was the sense of agency, which refers to the sensation of being the agent or owner of one’s own actions. This sensation was expressed by most participants, who believed that they were causing the robot arm to move (Q6), as shown by the high survey score (fig. S8). Regarding the type of experimental task, unlike most motor imagery–based BMI experiments, which use a non–goal-oriented task (participants are asked to imagine moving their hand), in the proposed experiment participants were explicitly asked to perform a goal-oriented task (imagine grasping the bottle), which may have contributed to successful operation of the robot arm using a BMI. Although the current experiment does not directly compare a motor imagery task (move hand) with a goal-oriented task (grasp bottle), the evidence presented by Yong and Menon (22) indicates that goal-oriented tasks lead to better classification performance than simple motor imagery. In fact, Pichiorri et al. demonstrated that only those who adopted a goal-oriented hand-grasping imagination strategy showed significant training-induced changes in the transcranial magnetic stimulation functional map of the hand muscles and in the brain network organization derived from electroencephalogram (EEG) signals (23).

Analysis of fig. S7 shows that the gamma band was the frequency band most commonly selected by the algorithm across participants. Numerous studies have experimentally linked the gamma band to a wide range of cognitive processes, including learning, attention, memory, and visual-auditory perception (24). This band seems to be particularly relevant for the enhanced processing of attended stimuli (25, 26). Moreover, other studies have shown that the gamma-band response is also sensitive to various stimulus characteristics related to object perception, including object familiarity (27), the category of the object presented (28), and successful memory encoding and retrieval of objects (29). Taking this previous evidence into consideration, it is likely that attention and object perception are closely linked to the finding that the gamma band was automatically selected for both experimental conditions, because both involved a goal-oriented task.

Unlike common motor imagery, which is characterized by modulation of premotor brain areas (recorded mostly at C3 and C4) during hand movement or the imagination of hand movement (23, 30), our experimental results show that participants were able to modulate other brain areas, such as the left and right frontal cortex (F3 and F4) and the left parietal lobe (P3). This finding is consistent with numerous EEG and functional magnetic resonance imaging studies indicating that large-scale networks spanning parietal and frontal cortex mediate selective attention (31). However, it could also be that the type of experimental task played an important role in activating these areas, because there is evidence that parietal and frontal cortical areas are important in the control of goal-oriented behaviors (32) such as those observed during both experimental conditions.

As a final remark, the evidence presented in this manuscript reveals the ability of the human brain not only to control external devices but also to cope with and adapt its modulation to demanding situations such as multitasking. This opens possibilities to explore other future applications that involve collaborative and parallel tasking using different types of SRLs. It is also important to develop future intelligent brain-controlled SRLs with context-aware capabilities that complement the brain-based command. Because an SRL can perform an action in different ways (e.g., different grasping configurations) depending on the context (e.g., the type of object), future SRLs could incorporate vision capabilities to recognize the context and optimize their behavior to match the user’s intention. In this way, an intelligent SRL could increase the number of actions it can perform with the same BMI-based command.

MATERIALS AND METHODS

Participants

Fifteen participants (11 males, 4 females; 14 right-handed, 1 left-handed) in the age range of 19 to 31 (mean = 24.53, SD = 6.89) were recruited for the experiment, most of whom were university students. All participants were naive to the research topic and had never used a BMI before. Participants received an explanation of the experiment and signed a consent form approved by the ethical committee of the Advanced Telecommunications Research Institute International, Kyoto, Japan. At the end of the experiment, participants answered a brief survey and were paid for their participation.

Experimental flow

The experiment consisted of the following activities:

1) Preparation. Participants sat in a comfortable chair and wore a 16-channel g.Nautilus EEG cap (g.tec, Austria). A reference electrode was mounted on the right ear and a ground electrode on the forehead. A human-like robotic arm was strategically placed next to the participants on their left side to create the illusion that it was coming out of their own bodies, as shown in Fig. 1. The five-degree-of-freedom robot arm was driven by pneumatic actuators and covered with a human-like silicone skin.

2) Ball-balancing baseline session. Participants were asked to hold a ball-balancing board (60 cm wide, 45 cm high) that contained four markers of different shapes and colors, placed 10 cm from each corner of the board. A camera placed above the participant (facing downward toward the ball-balancing board) monitored the activity on the board. A color-shape detection algorithm kept track of the positions of the markers and the trajectory of a yellow ball. Participants were asked to continuously balance the ball for 4 min by making it pass over each of the four markers. The algorithm generated a ball-balancing score according to the following criteria: (i) the yellow ball had to “touch” the exact center of a marker to generate a point, and (ii) the final score was computed according to the distribution of the total points among all markers (a maximum score of 100% would mean that each marker received 25% of the points).

3) Exploratory session. This session served as a “training session” for participants to become familiar with the activities to be performed during the calibration and evaluation sessions of the two experimental conditions: single task and multitask. During the single-task condition, participants were asked to imagine “grasping a bottle with the robot arm” when an auditory cue (bell sound) was played twice and the experimenter placed the bottle close to the robot arm. After 10 s, the bell sound was played once and the experimenter took the bottle away, at which time participants were asked to imagine “releasing the bottle and relaxing the arm” for another 10 s. At the end of this 10-s period, another bell sound was played to notify the participant of the end of the trial, as described in fig. S1 (a minimal sketch of this cue timing is given after this list). During the multitask condition, participants were asked to imagine the grasping and releasing actions when the bell sound was played (in the same way as during the single-task condition), but this time they were asked to perform the ball-balancing task in parallel with the grasp-release imagination. The exploratory session consisted of a total of 10 trials (5 trials for the single-task condition followed by 5 trials for the multitask condition), with a 2-s rest period between trials.

4) Calibration and evaluation sessions. During calibration, participants performed the single-task session for 10 continuous trials while the EEG data were processed online to compute and collect the PSD. The PSD data were used by an electrode-frequency band selection algorithm, as described in detail in the Data processing section. During the evaluation session, the PSD data from the selected electrode and frequency band were mapped to the movement trajectories of the robotic arm. The evaluation session consisted of 20 trials, with a 2-min rest after the first 10 trials. Calibration and evaluation of the multitask condition were performed in the same way as described above but with the ball-balancing task added in parallel to the “grasp-release” activity. The ball-balancing score was collected for each trial and averaged over all trials to compute the overall session score.

5) Final ball-balancing session. In the same way as in the baseline session, participants performed the nonstop ball-balancing task for 4 min, and overall scores were computed.

6) Post-experimental survey. Participants answered a post-experimental survey using a seven-point Likert scale from strong disagreement (one point) to strong agreement (seven points). The questions were designed to find out participants’ perceptions as described in the additional results section.
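For concreteness, the following is a minimal sketch (in Python, not part of the original Simulink/MATLAB setup) of the cue timing used in step 3: two bells start the 10-s grasping-imagination period, one bell starts the 10-s releasing period, one bell marks the end of the 20-s trial, and trials are separated by a 2-s rest. The play_bell helper is a hypothetical placeholder for the auditory cue; in the experiment, the bottle was placed and removed manually by the experimenter.

```python
import time

GRASP_S, RELEASE_S, REST_S = 10, 10, 2  # trial timing in seconds (20-s trials)

def play_bell(times=1):
    # Hypothetical placeholder for the auditory cue used in the experiment.
    print("bell " * times)

def run_trial():
    play_bell(times=2)      # two bells: imagine grasping the bottle
    time.sleep(GRASP_S)
    play_bell(times=1)      # one bell: imagine releasing the bottle and relaxing
    time.sleep(RELEASE_S)
    play_bell(times=1)      # one bell: end of the trial
    time.sleep(REST_S)      # rest before the next trial

def run_exploratory_session(n_single=5, n_multi=5):
    # 5 single-task trials followed by 5 multitask trials, run in real time.
    for _ in range(n_single + n_multi):
        run_trial()
```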

Data processing

The acquired data were processed online using Simulink/MATLAB (MathWorks). Data from nine selected electrodes (F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4, according to the international 10-20 electrode placement system) were used for the calibration and evaluation sessions. Although the EEG cap contained 16 electrodes, data from electrodes over the prefrontal area were not used, to avoid common artifacts caused by eye movements. Electrodes over temporal and occipital areas were not used because of their well-known sensitivity to auditory and visual stimuli, respectively.

Data processing included sampling at 250 Hz, removing power-line artifacts with a notch filter at 60 Hz, band-pass filtering between 0.5 and 60 Hz, and applying the short-time Fourier transform (STFT) to compute the PSD of five frequency bands: δ (1 to 4 Hz), θ (4 to 8 Hz), α (8 to 12 Hz), β (12 to 30 Hz), and γ (30 to 60 Hz). The STFT was applied within a time window of 50 samples that moved along the time series to characterize changes in the power of the EEG signals over time for all nine electrodes. A spatial normalization was then applied to the PSD of all electrodes, transforming the highest PSD value to 1 and the lowest PSD value to 0.
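A minimal offline sketch of this processing chain is given below (the study ran it online in Simulink/MATLAB). The zero-padded FFT length, the non-overlapping window step, and the per-band spatial normalization across electrodes are implementation assumptions, because those details are not specified in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250          # sampling rate (Hz)
WIN = 50          # STFT window length in samples (0.2 s)
NFFT = 256        # zero-padded FFT length so the narrow low-frequency bands contain bins
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 60)}

def preprocess(eeg):
    """Notch out 60-Hz line noise and band-pass 0.5-60 Hz. eeg: (n_channels, n_samples)."""
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=FS)
    b_bp, a_bp = butter(4, [0.5, 60.0], btype="bandpass", fs=FS)
    return filtfilt(b_bp, a_bp, filtfilt(b_n, a_n, eeg, axis=1), axis=1)

def band_psd(window):
    """PSD of one 50-sample window -> array (n_channels, n_bands)."""
    freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)
    spec = np.abs(np.fft.rfft(window * np.hanning(WIN), n=NFFT, axis=1)) ** 2
    return np.stack([spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                     for lo, hi in BANDS.values()], axis=1)

def spatial_normalize(psd):
    """Min-max normalization across electrodes, per band (one plausible reading of the
    spatial normalization): highest PSD -> 1, lowest PSD -> 0."""
    lo, hi = psd.min(axis=0, keepdims=True), psd.max(axis=0, keepdims=True)
    return (psd - lo) / (hi - lo + 1e-12)

def sliding_psd(eeg):
    """Apply the 50-sample moving window (non-overlapping here) and normalize."""
    x = preprocess(eeg)
    return np.stack([spatial_normalize(band_psd(x[:, t:t + WIN]))
                     for t in range(0, x.shape[1] - WIN + 1, WIN)])
```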

System calibration

The calibration consisted of selecting the optimal electrode and frequency band for the experimental condition (single task or multitask), as well as configuring the parameters to control the robotic arm. An automatic selection algorithm analyzed the normalized PSD values of all electrodes and frequency bands throughout the 10 trials of the calibration session. For each trial, the average of the normalized PSD values over the 10-s bottle-grasping period (P̄g) and the average over the 10-s bottle-releasing period (P̄r) were computed. At the end of the session, the averages of P̄g and P̄r over all trials were computed, yielding two thresholds, τg and τr, plus an additional threshold, τ*, defined as the midpoint between τg and τr, as shown in fig. S2. The electrode and frequency band with the largest distance between τg and τr were automatically selected for the evaluation session.
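The selection step can be summarized by the following sketch, in which the array shapes and the use of the absolute difference between τg and τr as the separation measure are assumptions consistent with the description above.

```python
import numpy as np

def calibrate(grasp_psd, release_psd):
    """Select the electrode-band pair with the largest grasp/release separation.

    grasp_psd, release_psd: normalized PSD values collected during the 10-s grasping
    and releasing periods, shaped (n_trials, n_windows, n_channels, n_bands).
    Returns the selected (channel, band) indices and (tau_g, tau_r, tau_mid).
    """
    tau_g = grasp_psd.mean(axis=(0, 1))      # average over trials and time
    tau_r = release_psd.mean(axis=(0, 1))    # -> (n_channels, n_bands)
    ch, band = np.unravel_index(np.abs(tau_g - tau_r).argmax(), tau_g.shape)
    tau_mid = (tau_g[ch, band] + tau_r[ch, band]) / 2.0
    return (ch, band), (tau_g[ch, band], tau_r[ch, band], tau_mid)
```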

Regarding the parameter configuration to control the robot arm, when brain activity during the bottle-grasping period showed an increase in power (+) relative to the other electrodes and a decrease in power (−) during the bottle-releasing period, we mapped the values of τr and τg onto the scale [0, 1], in which 1 activated the preprogrammed arm-raising and hand-grasping movements and 0 activated the arm-lowering and hand-opening movements (fig. S3). When a decrease in power (−) was detected during the bottle-grasping period and an increase in power (+) during the bottle-releasing period, we inversely mapped the values of τr and τg onto the scale [0, 1], and the corresponding robot movements were inverted.
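A minimal sketch of this mapping, assuming a simple linear scaling between the two thresholds with clipping to [0, 1], is shown below; the paper specifies only the endpoints of the mapping and the inversion rule, not its exact functional form.

```python
import numpy as np

def psd_to_command(p, tau_g, tau_r):
    """Map one normalized PSD sample to a robot command in [0, 1].

    1 drives the preprogrammed arm-raising/hand-grasping trajectory and 0 the
    arm-lowering/hand-opening trajectory.  If grasping corresponded to a power
    decrease (tau_g < tau_r), the mapping is inverted so that 1 still means grasp.
    """
    lo, hi = min(tau_g, tau_r), max(tau_g, tau_r)
    cmd = (p - lo) / (hi - lo + 1e-12)       # linear mapping of [lo, hi] onto [0, 1]
    if tau_g < tau_r:
        cmd = 1.0 - cmd                      # inverted configuration
    return float(np.clip(cmd, 0.0, 1.0))
```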

Evaluation

During the evaluation sessions, the normalized PSD values from the selected electrode-frequency band were mapped in real time to the corresponding trajectory of the robot arm (within the scale [0, 1]), activating its continuous movement. After each trial, the averages of the normalized PSD values over the bottle-grasping period (P̄g) and the bottle-releasing period (P̄r) were computed and compared with the middle threshold τ* obtained during the calibration session. If P̄g and P̄r fell correctly below or above τ* according to the preconfigured calibration parameters, the trial was counted as correct. The final performance score was the percentage of correct trials in the entire session; it therefore considered not only the number of correct bottle-grasping actions but also the number of correct bottle-releasing actions.
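The trial-scoring rule can be sketched as follows, assuming per-trial averages from the selected electrode-band and the calibration threshold τ* (tau_mid) computed as above.

```python
import numpy as np

def score_session(grasp_means, release_means, tau_mid, grasp_is_high=True):
    """Percentage of correct trials in one evaluation session.

    grasp_means, release_means: per-trial averages of the normalized PSD from the
    selected electrode-band during the grasping and releasing periods.  A trial is
    correct when both averages fall on the expected side of tau_mid: above/below for
    grasping/releasing when grasping corresponds to a power increase
    (grasp_is_high=True), and the other way around otherwise.
    """
    g, r = np.asarray(grasp_means), np.asarray(release_means)
    if grasp_is_high:
        correct = (g > tau_mid) & (r < tau_mid)
    else:
        correct = (g < tau_mid) & (r > tau_mid)
    return 100.0 * correct.mean()
```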

SUPPLEMENTARY MATERIALS

robotics.sciencemag.org/cgi/content/full/3/20/eaat1228/DC1

Supplementary Text

Fig. S1. Trial description.

Fig. S2. System calibration.

Fig. S3. Robot arm configuration.

Fig. S4. Overall performance of all participants.

Fig. S5. Overall balancing performance.

Fig. S6. Balancing performance for good and bad performers.

Fig. S7. Frequency bands and channel locations.

Fig. S8. Post-experimental subjective evaluation.

REFERENCES AND NOTES

Acknowledgments: We acknowledge substantial contributions to this work by B. Senzio-Savino Barzellato, who programmed the robot arm movement and assisted with conducting the experiment. However, B. Senzio-Savino Barzellato did not meet Science Robotics criteria (such as participating in writing, editing, or approving the manuscript) for authorship. Funding: Part of this work was funded by ImPACT Program of Council for Science, Technology and Innovation (Cabinet Office, Government of Japan). Author contributions: C.I.P. and S.N. designed the experiment, created the procedures, discussed the results and organized the manuscript, and wrote manuscript drafts. C.I.P. analyzed the data and obtained the results. Competing interests: C.I.P. and S.N. are inventors on patent application (Japanese patent application no. 2018-032967) submitted by Advanced Telecommunications Research Institute International that covers the BMI control algorithm described in the paper. Data and materials availability: All data needed to evaluate the conclusions are present in the paper or Supplementary Materials. Contact C.I.P. for materials.
