Research Article | PROSTHETICS

A myoelectric prosthetic hand with muscle synergy–based motion determination and impedance model–based biomimetic control


Science Robotics  26 Jun 2019:
Vol. 4, Issue 31, eaaw6339
DOI: 10.1126/scirobotics.aaw6339

Abstract

Prosthetic hands are prescribed to patients who have lost an upper limb through accident or disease, allowing them to regain the functionality of the lost hand. Myoelectric prosthetic hands, which offer the possibility of intuitive control based on the operator’s electromyogram (EMG) signals, have been extensively studied and developed. In recent years, the development costs and maintainability of prosthetic hands have been improved through three-dimensional (3D) printing technology. However, no previous study has combined the advantages of EMG-based classification of multiple finger movements with advanced control mechanisms based on human motion. This paper proposes a 3D-printed myoelectric prosthetic hand and an accompanying control system. The proposed system introduces a muscle synergy–based motion-determination method and biomimetic impedance control, enabling the classification of unlearned combined motions and smooth, intuitive finger movements of the prosthetic hand. We evaluated the proposed system through operational experiments performed on six healthy participants and an upper-limb amputee participant. The experimental results demonstrate that our prosthetic hand system can classify both learned single motions and unlearned combined motions from EMG signals with a high degree of accuracy. Furthermore, applications to real-world use of prosthetic hands are demonstrated through control tasks conducted by the amputee participant.

INTRODUCTION

According to the latest statistics, there are approximately 540,000 and 82,000 upper-limb amputees in the United States (1) and Japan (2), respectively. Some reports have suggested that at least 50 to 60% of upper-limb amputees use prosthetic hands on a daily basis (3–5). There are three major categories of upper-limb prostheses: cosmetic, body powered, and externally powered (6–8). Myoelectric prosthetic hands are a type of externally powered prosthesis and use electromyogram (EMG) signals, which are generated by muscle contractions and reflect a human’s internal state and motion intentions. Myoelectric prosthetic hands therefore create the possibility for amputees to control their prostheses like biological hands by extracting motion intent from EMG signals.

Myoelectric prosthetic hands have been extensively studied and developed in both commercial and research applications. For example, MyoBock (9), developed by Ottobock, is the most popular myoelectric prosthetic hand in the world and enables control of two hand movements (grasp and open). More recently, advanced prosthetic hands that can drive each finger have been developed. Commercially available examples include i-limb quantum (10), Vincent evolution 3 (11), and Michelangelo (9). Within the field of research, prosthetic hands with five independently driven fingers have been developed (12–16). However, these myoelectric prosthetic hands are very expensive [from $25,000 to $75,000 for commercial prosthetic hands (17)], and their costs of maintenance and replacement make access difficult for large segments of the population.

In recent years, to reduce the development costs and improve the maintainability of prosthetic hands, three-dimensional (3D) printing techniques have been used to produce their hardware components (18–21). Several open-source projects provide 3D-printed prosthesis designs online, aiming to make prostheses more accessible to amputees (22–24). Although 3D-printed prosthetic hands have markedly reduced production and maintenance costs, there are currently no examples that combine EMG-based motion classification with advanced control mechanisms based on human motion characteristics.

Motion classification for myoelectric prosthetic hands is generally achieved by extracting the operator’s intention from recorded multichannel EMG signals using machine learning techniques, such as neural networks and support vector machines. Previous studies have proposed EMG classification methods that enable accurate classification of finger and hand movements (25–28). Most of these methods, however, require training datasets that grow with the number of target motions, resulting in an increased burden on users when many hand movements must be classified. It is therefore difficult to measure EMG signals corresponding to all possible motions and to solve complicated control problems using EMG signals. We believe that a practical prosthetic hand needs a mechanism that classifies many motions from a smaller dataset of learned motions.

This paper proposes a 3D-printed myoelectric prosthetic hand with five independently driven fingers, along with a control system. In the proposed system, the operator’s motions are determined on the basis of muscle synergy theory (29, 30). The theory suggests that the human motor system directly initiates movement through flexible combinations of muscle synergies. The term “muscle synergy” refers to a constitutional unit of operation that adjusts the activation of multiple muscles. There have been several attempts to extract muscle synergies from EMG signals and use them as inputs for motion or task classification (31, 32). In this paper, we focus on the transition and combination of extracted muscle synergies. Here, fundamental finger motions are regarded as muscle synergies, and various finger motions are expressed by combinations of them. Furthermore, the generation process of target movements was modeled using an event-driven model, thereby predicting the operator’s motions from the preceding muscle synergy generation. In this way, the proposed system supported the accurate prediction of unlearned combined motions using only learned single motions. Biomimetic control based on an impedance model was used to determine the prosthetic hand commands, enabling smooth prosthetic movements similar to those of the human hand as captured by EMG signals. We experimentally evaluated the validity of the developed prosthetic hand system in six healthy participants and an amputee participant.

RESULTS

Figure 1 shows an overview of the proposed prosthetic hand system, which is composed of four parts: EMG measurement, EMG signal processing, prosthetic hand control, and a myoelectric prosthetic hand. On the basis of measured EMG signals, muscle exertion was estimated with EMG signal processing. The motion of the operator was estimated based on muscle synergy extraction using a recurrent neural network and a motion-generation model, thereby allowing the expression of the unlearned combined motions using only the single motions learned in advance. The actuator commands were then determined using the biomimetic control based on the impedance model. Subsequently, the finger flexion of the prosthetic hand could be performed using a proportional-integral-derivative (PID) controller. The hardware of the proposed system consisted of 3D-printed parts and a microcomputer such that the proposed system could have high maintainability and portability (Fig. 2). EMG measurement, EMG signal processing, and prosthetic hand control were implemented in the electrical apparatus, which was composed of electrodes, a microcomputer, and a motor driver. The prosthetic hand measured 200 mm in length and weighed 430 g. The size of the developed circuit was 90 mm by 80 mm.

Fig. 1 Overview of the proposed prosthetic hand control system.

The control system is composed of four parts: EMG measurement, EMG signal processing, prosthetic hand control, and a myoelectric prosthetic hand.

Fig. 2 Hardware structure of the proposed prosthetic hand control system.

EMG measurement, EMG signal processing, and prosthetic hand control were implemented in the electrical apparatus, which is composed of electrodes, a microcomputer, and a motor driver. The exterior and many other parts of the prosthetic hand were printed using a 3D printer. To print the parts, we modified open-source 3D data (24).

We conducted operational experiments for the developed prosthetic hand and its control system. The validity of the proposed system for controlling the five fingers was verified through operational experiments conducted on six intact participants and an upper-limb amputee participant.

Experiment 1

This experiment was performed on six healthy participants and designed to confirm the effectiveness of the proposed system and the performance of its classification mechanism. In this experiment, we used five electrodes to measure EMG signals. The participants were asked to perform 10 motions (M1 to M10): M1 to M5 were single motions, and M6 to M10 were combined motions. First, the participants performed each single motion, which was recorded as training data. Next, they performed all 10 motions, including the unlearned combined motions, and classification accuracy was calculated. During the experiment, the prosthetic hand was operated only for participants 4 to 6; that is, feedback of the classification results was given to them in real time. In contrast, participants 1 to 3 were not informed of the classification results. Example recordings in which control of the prosthetic hand was tested are shown in movies S1 and S2.

With respect to the experiments examining the control of five fingers, Fig. 3A shows the classified motion and normalized EMG signals for each channel, the force information F_EMG, and the muscle contraction level α. Figure 3B shows the classified motion and each motor output angle, with the equilibrium position of each motor angle defined as 0 rad and the direction of finger flexion as positive. The results in Fig. 3 (A and B) were measured simultaneously. The shaded areas represent the periods of motion occurrence.

Fig. 3 Experimental results of the control of five fingers.

(A) Classified motion and corresponding normalized EMG signals for each channel, force information, and muscle contraction level. (B) Classified motion and corresponding motor output angle for each finger. In this experiment, the motions conducted by the participant were no motion (NoM), thumb flexion (M1), index finger flexion (M2), middle finger flexion (M3), ring and little finger flexion (M4), and grasp (M5).

Figure 4 shows examples of muscle synergies decoded from measured EMG signals using the proposed system. For the single motions (M1 to M5), the waveforms during execution of each motion are shown (Fig. 4A). For the combined motions (M6 to M10), participants performed each combined motion by superimposing single motions along the time direction; the transitions of the normalized EMG signals and the corresponding muscle synergies until the target motion was exerted are shown (Fig. 4B).

Fig. 4 Examples of normalized EMG signals and corresponding muscle synergies estimated using the proposed system for each motion.

(A) Results for the single motions (M1 to M5). (B) Results for the combined motions (M6 to M10). These time-series results are arbitrary 2-s segments of data from one participant.

Figure 5A shows the confusion matrices for the classification of motions of all participants. The rows and columns correspond to the actual motions conducted and the classification results, respectively. Every participant provided a classification accuracy of >90% on average over all motions. In particular, participants 5 and 6 achieved an almost perfect classification accuracy of >98% in all motions. The average classification accuracies of the participants differed between the conditions without feedback and with feedback (Fig. 5B). In all motions, the average accuracy was 91.7% without feedback and 97.3% with feedback. Figure 5C shows recorded scenes during prosthetic hand control for all target motions.

Fig. 5 Experimental results for healthy participants.

(A) Confusion matrix for the classification of motions of participants 1 to 6. The classified motions are thumb flexion (M1), index finger flexion (M2), middle finger flexion (M3), ring and little finger flexion (M4), grasp (M5), pinch with two fingers (M6), peace sign (M7), pinch with three fingers (M8), index finger pointing (M9), and thumbs-up (M10). M1 to M5 are the single motions, and M6 to M10 are the combined motions. The color scale represents the accuracy in classification between pairs of classes in the confusion matrix. Participants 1 to 3 and 4 to 6 conducted the experiments under conditions without or with the feedback of their classification results, respectively. (B) Average classification accuracies over participants for the conditions without and with the feedback of classification results. Blue and red bars represent results without and with feedback, respectively. Error bars represent SE. (C) Scenes during prosthetic hand control.

Experiment 2

The operational experiment of the prosthetic hand was conducted with the amputee participant. For this experiment, we constructed a new myoelectric prosthetic hand system for the amputee participant. Figure 6A shows the configuration of the experimental system. The developed prosthetic hand was attached to the tip of a forearm socket that was specially designed for the amputee participant. The control circuit and battery for prosthesis control were installed on the outside of the socket. Figure 6 (B to D) shows photographs of the myoelectric prosthetic hand system used in this experiment. Three electrodes for EMG measurement were built into the inside of the socket (Fig. 6B).

Fig. 6 Prosthetic hand system for experiment 2.

(A) The hardware composition of the prosthetic hand system used in experiment 2. The socket is specially designed for the amputee participant. (B) Photograph of the prosthetic hand system. The EMG electrodes are built into the inside of the socket. (C) The control circuit is housed in a 3D-printed box attached to the outside of the socket. The battery is mounted on the outside of the socket. (D) Photograph of the prosthetic hand worn by the amputee participant.

The amputee participant was asked to perform five motions (M1 to M5). In this experiment, motions M1 to M4 were single motions, whereas motion M5 was a combined motion. For the single motions, we selected motions that are not independent five-finger motions but rather movements considered to be used frequently in everyday life. First, the single motions were performed by the amputee participant and then recorded as training data. Next, a classification experiment for each motion was conducted to evaluate classification accuracy of the motions. Last, the amputee participant conducted control tasks assuming actual scenes of using a prosthetic hand.

Examples of the extracted muscle synergies for the single motions and the combined motions are shown in Fig. 7 (A and B). Figure 7C shows a confusion matrix for the classification results of the motions. The average classification accuracy for all trials was 89.9% in learned single motions, 100.0% in unlearned combined motions, and 91.9% in all motions (Fig. 7D). Figure 7E shows the scenes during prosthetic hand control for all target motions.

Fig. 7 Experimental results for the amputee participant.

(A and B) Examples of normalized EMG signals and corresponding muscle synergies estimated using the proposed system for each motion. (A) Results for the single motions (M1 to M4). (B) Results for the combined motion (M5). (C) Confusion matrix for the classification of motions. The classified motions are pinch with three fingers (M1), index finger pointing with thumb up (M2), thumb flexion (M3), grasp (M4), and index finger pointing (M5). M1 to M4 are the single motions, and M5 is the combined motion (M2 + M3). The color scale represents the accuracy in classification between pairs of classes in the confusion matrix. (D) Average classification accuracies for all trials. Error bars represent SE. (E) Scenes during prosthetic hand control.

Figure 8 shows recorded examples of the amputee participant undertaking the control tasks. The amputee participant controlled the myoelectric prosthetic hand, picked up a block (Fig. 8A and movie S3) and a plastic bottle (Fig. 8B and movie S4), and held a notebook (Fig. 8C and movie S5). During the block and plastic bottle tasks, he controlled the prosthetic hand while freely switching between the grasp motion and the three-finger pinch motion. During the notebook task, he held the notebook by grabbing its left side with his intact hand and pinching its right side with three of the prosthetic hand’s fingers. Examples of the time-series changes of the classified motions, the force information, and corresponding photographs during the plastic bottle task are shown in Fig. 8D. The amputee participant picked up the plastic bottle from the table with a three-finger pinch motion (2.3 s), put it down on the table, and then picked it up again (6.6 s). He also picked it up with a grasp motion (8.6 and 16.7 s).

Fig. 8 Scenes of the control tasks.

(A) Photograph of the block-picking task. (B) Photograph of the plastic bottle–picking task. (C) Photograph of the notebook-holding task. (D) Examples of the classified motions, force information, and corresponding photographs during the plastic bottle–picking task. Here, the amputee participant controlled the prosthetic hand and switched its motions among the open [no motion (NoM)], three-finger pinch (M1), and grasp (M4).

DISCUSSION

In experiment 1, the operator’s single motion was accurately classified by the proposed system based on the EMG signals as shown in Fig. 3. Each motor rotation angle slowly approached the target angle when α increased rapidly and then decreased slowly to the equilibrium position when α quickly dropped to zero (Fig. 3B). This result suggests that the motor rotation angles could be determined smoothly when considering the human’s impedance property based on biomimetic controls. In addition, although there were some instantaneous misclassified points (end points of M2 and M3), the motor output angles rose gently to about 1 to 2 rad. Therefore, the biomimetic control embedded in the developed system could restrain the influence of unexpected incorrect motion when a misclassification of motion occurred instantaneously.
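The smooth rise toward the target angle and slow relaxation back to equilibrium described above are characteristic of second-order impedance dynamics. The paper's exact control law and parameters are not reproduced in this section, so the following is only a minimal mass-damper-spring sketch; the parameter values (M, B, K, theta_max) and the alpha-scaled target are our own assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_impedance(alpha, dt=0.005, M=0.01, B=0.2, K=2.0, theta_max=1.5):
    """Minimal impedance-model sketch (assumed parameters, not the paper's).

    alpha : sequence of muscle contraction levels in [0, 1]
    Returns the motor angle trace (rad); the equilibrium position is 0 rad.
    """
    theta, dtheta = 0.0, 0.0
    trace = []
    for a in alpha:
        # spring torque toward the alpha-scaled target angle, plus viscous damping
        tau = K * (a * theta_max - theta) - B * dtheta
        dtheta += (tau / M) * dt   # semi-implicit Euler integration
        theta += dtheta * dt
        trace.append(theta)
    return np.array(trace)
```

With these assumed values, the angle rises smoothly toward α·θ_max when α steps up and relaxes back to the 0 rad equilibrium when α drops to zero, mirroring the qualitative behavior reported for Fig. 3B.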

In Fig. 4, the muscle synergies were successfully extracted from the measured EMG signals for each motion. For single motions, the unique synergy composing each motion could be obtained (Fig. 4A). When combined motions were executed, multiple synergies were extracted with overlapping time series (Fig. 4B). For example, motion M6 (a combination of motions M1 and M2) was executed by the transition of the muscle synergies (synergies 1 and 2) constituting motions M1 and M2. This is because the participant performed such combined motions by transitionally combining all of the single motions that compose the target motion. Thus, the proposed system could classify unlearned combined motions based on the superimposed muscle synergies and their transitions. EMG pattern classification generally requires learning all target motions in advance. By contrast, the proposed system learned only basic motions and could combine them to express more complex motions. This mechanism therefore has an advantage in controlling a prosthetic hand with many degrees of freedom.

The classification accuracy for healthy participants was investigated under conditions without and with feedback of the classification results (Fig. 5, A and B). The participants with feedback (4 to 6) showed higher accuracies than the participants without feedback (1 to 3). This is because the participants with feedback could adjust their EMG patterns according to the classification results (i.e., the performed movement of the prosthetic hand), a situation closer to actual prosthesis use. Such differences due to the presence of feedback are in agreement with previous studies (33, 34). Even without feedback, the more challenging condition, the average classification accuracy exceeded 90%. Focusing on individual motions, the classification accuracies of some motions (M10 of participant 2 and M3 of participant 4) were relatively low, at around 60%. This problem could possibly be solved through sufficient training in operating the myoelectric prosthetic hand. These results show that the proposed system could classify unlearned combined motions by using only learned single motions based on muscle synergy extraction and the motion-generation model.

The classification accuracy for the amputee participant was >89% in both single and combined motions (Fig. 7, C and D), meaning that the motions of the amputee participant could be classified with almost the same level of accuracy as those of the healthy participants. This suggests that the proposed system could be applicable to amputee participants. However, when focusing on each motion, only motion M4 had a low accuracy of approximately 66% (Fig. 7C). In this experiment, the participant performed motion M4 while co-contracting wrist flexion and wrist extension movements (see Materials and Methods). The amputee participant did not normally perform the co-contraction motion while using his own prosthetic hand (MyoBock hand) in his day-to-day activities; thus, this motion was unusual for him and may have been difficult to perform. This can be confirmed from the extracted muscle synergies for M4 shown in Fig. 7A: although M4 is a single motion, the extracted synergy pattern varies greatly during the motion. As with the healthy participants, the classification accuracy could possibly be improved by training to generate/control voluntary EMG patterns. Such training may also increase the number of combined motions the participant can perform.

Furthermore, we tested the applicability of the proposed system through the control task conducted for the amputee participant. In this task, the participant was able to control the prosthetic hand while switching motions in response to different target objects (blocks, plastic bottle, and notebook). Although there were some momentary misclassifications in Fig. 8D and movie S4, their influence on the movement of the prosthetic hand was small because of smooth movements based on the biomimetic control. These results indicate that the proposed system can be applied to situations in which actual use of prosthetic hands is likely to occur.

Conclusion

In this study, we proposed a 3D-printed myoelectric prosthetic hand with five independently driven fingers. We also proposed a motion-classification method based on muscle synergy theory and a motion-generation model, thereby allowing the classification of unlearned combined motions using learned single motions. The biomimetic control based on an impedance model was included in the proposed prosthetic hand system so that the prosthetic hand can perform smooth movements similar to human hand flexion depending on force information.

In the operation experiments conducted for six intact participants and an amputee participant, we showed that the proposed system can classify 10 motions, including combined finger motions, at about 95% accuracy for the intact participants and about 92% accuracy for the amputee participant. The applicability of the proposed system to actual scenes showing use of the prosthetic hand was also demonstrated through control tasks conducted for the amputee participant.

Limitations and future work

In this study, we evaluated the proposed system through the operation experiments conducted for six intact participants and one amputee participant. The proposed next steps are (i) investigation of the long-term applicability of the proposed system, (ii) auto-optimization of preset parameters, (iii) development of a training environment for the proposed control system, and (iv) introduction of sensory feedback.

The duration of the longest experiment conducted in this study was 60 s. To investigate real-world applications of the prosthetic hand in detail, longer experiments (e.g., experiments that run for a few hours) and evaluation with more amputee participants are required. In addition, it is known that the classification accuracy of EMG patterns decreases with prolonged use of a prosthetic hand because of sweat, electrode shift, and muscle fatigue; accordingly, incorrect movements easily occur (35, 36). In the accuracy evaluation experiments, the participants were instructed to maintain a fixed posture. However, posture changes during prosthesis use are known to cause skin/muscle shift, influencing classification performance (37, 38). Considering these points, improving the robustness of the EMG pattern classification should be prioritized in future work.

Meanwhile, in the proposed system, the prosthesis is in the open (no-motion) state when the operator relaxes; classified motion is executed when the operator executes an action with a force higher than a specified threshold. The operator is therefore required to constantly exert force to keep executing a single motion for a certain period, e.g., holding an object with grasp. This may lead to an increase in the burden on the user over long, sustained use of the prosthesis. This was observed in the control task conducted for the amputee participant; misclassifications occurred after the 9-s mark and were not seen at the beginning of the task (Fig. 8). We therefore plan to develop a control scheme that can dynamically change the threshold according to classified motions.

In the proposed system, the values of the modifying vector in the motion-generation model have to be set in advance (see Materials and Methods). For the conducted experiments in this paper, these vector values were determined by trial and error and were common in each experiment. In experiment 1, the values were tuned for a certain participant and were reused with other participants; nevertheless, the results showed relatively high classification accuracy. Although this suggests that the modifying vector has transferability to some extent, there is still room for optimization in the vector values. If we could implement a methodology to automatically determine the modifying vectors during the training process of the system, then it would be possible to construct a motion-determination scheme optimized for each user, resulting in more accurate and intuitive control of the prosthetic hand.

Before amputees start using a prosthetic hand, they are generally required to undergo training at a medical institution. This is an important step for amputees to learn to exert EMG patterns and develop strong voluntary control. In previous studies, various training systems have been proposed for prosthetic hand control (39–41). We therefore plan to develop a training environment that conforms to the proposed prosthetic hand system.

Although the control tasks conducted in this study show the applicability of the proposed system to actual scenarios, the amputee must rely on visual feedback to adjust the motion or to regulate the grasping force of the prosthetic hand. The introduction of sensory feedback into a prosthesis is important for both movement execution and force regulation. In recent years, there has been major progress in providing sensory feedback to prostheses via invasive means using peripheral nerve electrodes (42, 43) and noninvasive means using vibration (44) or electrical stimulation (45). Such feedback enables the amputee to perceive the sense of touch on the target object through the prosthetic hand. To develop a more practical and intuitively controllable prosthetic hand, it is also necessary to incorporate such sensory feedback mechanisms.

MATERIALS AND METHODS

System architecture

The full system and hardware structure are shown in Figs. 1 and 2. The exterior and many other parts were printed using a 3D printer. To print the parts, we modified open-source data released by the Open Hand Project (24). The bipolar electrodes used for measuring EMG signals were the same as those used in the MyoBock hand (Ottobock 13E200, Ottobock HealthCare Deutschland GmbH). Each electrode performs differential amplification, bandwidth limitation (90 to 450 Hz), full-wave rectification, and smoothing on an internal analog circuit to extract the amplitude information of the EMG signals.

The prosthetic hand control system uses a microcomputer (mbed LPC1768, ARM Ltd.) for A/D conversion, feature extraction, motion classification, and control of the prosthetic hand. This microcomputer outputs pulse width modulation signals as control signals to the actuators; however, its maximum output voltage of 3.3 V is insufficient to drive the actuators directly. Therefore, we used a motor driver (DRV8835, Texas Instruments Inc.) to supply the drive voltage.

Each finger of the prosthetic hand has a wire wound on a spool (fig. S1, B and C). The spool is rotated by a DC motor so that the finger flexes. The wire is pressed against the spool and fastened with a spring. Each finger has its own DC motor and the same structure, allowing the hand to drive each finger independently. In addition, the prosthetic hand has a servomotor on the carpometacarpal (CM) joint of the thumb (fig. S1C), giving the hand six degrees of freedom.

EMG signal processing

The EMG signal processing consists of four parts: feature extraction, muscle synergy extraction, a motion-generation model, and motion determination.

Feature extraction

First, the EMG signals measured from the $L$ electrodes are digitized by A/D conversion at 200 Hz. The digitized signals are then filtered with second-order Butterworth low-pass filters with a cutoff frequency of $f_c$ (Hz). These signals are converted into $E_l(t)$ $(l = 1, 2, \cdots, L)$, normalized by the maximum value of each channel as follows:

$$E_l(t) = \frac{\mathrm{EMG}_l(t) - \overline{\mathrm{EMG}_l^{\mathrm{st}}}}{\mathrm{EMG}_l^{\max} - \overline{\mathrm{EMG}_l^{\mathrm{st}}}} \tag{1}$$

where $\overline{\mathrm{EMG}_l^{\mathrm{st}}}$ is the mean value of $\mathrm{EMG}_l(t)$ measured while the muscles are relaxed, and $\mathrm{EMG}_l^{\max}$ is the maximum value of the EMG signals, configured beforehand. To estimate the operator's motion, we then normalize $E_l(t)$ so that the sum over channels is 1.0. These normalized signals are defined as a time-series EMG pattern $\mathbf{x}(t) = [x_1(t), \cdots, x_l(t), \cdots, x_L(t)]^{\mathrm{T}}$:

$$x_l(t) = \frac{E_l(t)}{\sum_{l'=1}^{L} E_{l'}(t)} \tag{2}$$

Moreover, force information $F_{\mathrm{EMG}}(t)$ is calculated from the EMG signals as

$$F_{\mathrm{EMG}}(t) = \frac{1}{L} \sum_{l'=1}^{L} E_{l'}(t) \tag{3}$$

which is used to determine motion occurrence and to control the prosthetic hand. When $F_{\mathrm{EMG}}(t)$ is greater than the preset threshold $F_{\mathrm{th}}$, an occurrence of motion is recognized.
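The processing in Eqs. 1 to 3 can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code; the function name, the clipping step, and the example threshold value are our own choices.

```python
import numpy as np

def extract_features(emg, emg_rest_mean, emg_max, f_th=0.1):
    """Sketch of the feature extraction in Eqs. 1-3 (illustrative names).

    emg           : (L,) rectified, smoothed EMG amplitudes at time t
    emg_rest_mean : (L,) mean amplitude at rest (EMG^st, configured beforehand)
    emg_max       : (L,) per-channel maximum amplitude (configured beforehand)
    """
    # Eq. 1: per-channel normalization by the resting and maximum levels
    E = (emg - emg_rest_mean) / (emg_max - emg_rest_mean)
    E = np.clip(E, 0.0, None)  # guard against sub-rest readings (our choice)
    # Eq. 2: normalize so the EMG pattern x(t) sums to 1
    x = E / E.sum() if E.sum() > 0 else np.zeros_like(E)
    # Eq. 3: force information = mean normalized activity over channels
    F_emg = E.mean()
    motion_detected = F_emg > f_th  # threshold test for motion occurrence
    return x, F_emg, motion_detected
```

A pattern `x` whose components sum to 1 is what the synergy-extraction stage consumes, while `F_emg` gates motion occurrence and scales the prosthetic hand control.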

Muscle synergy extraction

In the muscle synergy extraction (46), the independent motion of each of the five fingers is regarded as one of several single motions (muscle synergies) that make up a combined motion, such as the opening and closing of a hand. Each muscle synergy pattern $\mathbf{u}(t)$ is extracted from the time-series EMG pattern $[\mathbf{x}(t), \mathbf{x}(t-1), \cdots, \mathbf{x}(t-T+1)] \in \Re^{L \times T}$:

$$\mathbf{u}(t) = F_{\mathrm{trans}}(\mathbf{x}(t), \mathbf{x}(t-1), \cdots, \mathbf{x}(t-T+1)) \tag{4}$$

In addition, $\mathbf{u}(t) = [u_1(t), \cdots, u_c(t), \cdots, u_C(t)]^{\mathrm{T}} \in \Re^C$ ($C$ is the number of single motions) satisfies the condition

$$\sum_{c=1}^{C} u_c(t) = 1 \tag{5}$$

$F_{\mathrm{trans}}(\cdot)$ is a function that transforms the time-series EMG pattern $[\mathbf{x}^{(c)}(t), \mathbf{x}^{(c)}(t-1), \cdots, \mathbf{x}^{(c)}(t-T+1)]$ of the $c$th single motion into $\mathbf{u}^{(c)}(t) = F_{\mathrm{trans}}(\mathbf{x}^{(c)}(t), \mathbf{x}^{(c)}(t-1), \cdots, \mathbf{x}^{(c)}(t-T+1)) \in \Re^C$, where $\mathbf{u}^{(c)}(t)$ is the unit vector whose $c$th value is one. Considering that $\mathbf{u}(t)$ during a combined motion is represented by a linear combination of the $\mathbf{u}^{(c)}(t)$, with combination ratios $\hat{a}_c$ as weight coefficients, $\mathbf{u}(t)$ can be expressed as

$$\mathbf{u}(t) = \sum_{c=1}^{C} \hat{a}_c \mathbf{u}^{(c)}(t) \tag{6}$$

$$= [\hat{a}_1, \cdots, \hat{a}_c, \cdots, \hat{a}_C]^{\mathrm{T}} \tag{7}$$

because the $\mathbf{u}^{(c)}(t)$ form an orthonormal system. Thus, if the function $F_{\mathrm{trans}}(\cdot)$ is found from the time-series EMG patterns of the single motions, the combination ratio $\hat{a}_c$ of each single motion can be calculated by converting the EMG patterns of the combined motion into $\mathbf{u}(t)$. Here, the recurrent log-linearized Gaussian mixture network (R-LLGMN) proposed by Tsuji et al. (47) is used to derive $F_{\mathrm{trans}}(\cdot)$. This network combines a Gaussian mixture model with a hidden Markov model and handles the time-series characteristics of the operator's motions. The R-LLGMN is trained on the time-series EMG patterns of the single motions, yielding an $F_{\mathrm{trans}}(\cdot)$ that has learned the relationship between the operator's EMG signals and muscle synergies.
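Because the single-motion synergy patterns are orthonormal unit vectors, recovering the combination ratios in Eq. 7 amounts to reading off the components of $\mathbf{u}(t)$. A minimal numerical check (the ratio values are hypothetical):

```python
import numpy as np

C = 5                   # number of single motions (synergies)
u_basis = np.eye(C)     # u^(c): unit vector for each single motion c

# Eq. 6: a combined motion's synergy pattern is a weighted sum of the
# single-motion unit vectors, with combination ratios a_hat as weights.
a_hat = np.array([0.6, 0.4, 0.0, 0.0, 0.0])  # hypothetical ratios (sum to 1)
u_combined = u_basis.T @ a_hat

# Eq. 7: because the basis is orthonormal, projecting u(t) onto each basis
# vector simply returns its components, i.e., the combination ratios.
recovered = u_basis @ u_combined
assert np.allclose(recovered, a_hat)
```

The check also shows why Eq. 5 holds for combined motions: if the ratios sum to 1, the resulting pattern's components do as well.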

Motion-generation model

When performing motions expressed by a combination of muscle synergies, it can be assumed that the synergies constituting the motion are not all generated simultaneously; rather, they are generated serially as individual synergies are combined. That is, if a movement is decomposed to the muscle synergy level and each synergy is estimated individually, then the process of motion generation can also be estimated. In the motion-generation model, the operator's motion is predicted from the generation history of the muscle synergies extracted in the muscle synergy extraction part, and a modifying vector corresponding to the estimated process is output.

Here, the motion-generation model for five-finger movements is based on an event-driven model (48) described using a Petri net. The general form of the motion-generation model is shown in fig. S2. In the motion-generation model, according to the operator's motion state, the modifying vector $\gamma_m$ is selected and sent to the motion determination part:

$$\gamma_m = [\gamma_{m1}, \gamma_{m2}, \dots, \gamma_{mg}, \dots, \gamma_{mG}]^T \tag{8}$$

where $m$ ($m = 0, 1, 2, \dots, G$; $G$ is the number of motions to classify) is the index of the operator's current motion and $\gamma_{mg}$ is the modifying value for the transition from the current motion $m$ to the next motion $g$. $m = 0$ denotes the initial state, that is, a no-motion state.
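The selection of $\gamma_m$ amounts to a table lookup indexed by the current motion state. A minimal sketch, with a made-up table (the actual participant-specific values appear in tables S2 and S5):

```python
import numpy as np

# Hypothetical modifying-vector table: row m holds gamma_m, the weights for
# transitions from the current motion m (m = 0 is the no-motion state) to
# each candidate next motion g = 1..G. These values are illustrative only.
G = 3                                 # number of motions to classify
gamma = np.ones((G + 1, G + 1))       # gamma[m][g]; column 0 is unused
gamma[1][2] = 0.0                     # e.g., suppress the transition 1 -> 2

def modifying_vector(m):
    """Return gamma_m = [gamma_m1, ..., gamma_mG]^T for current motion m (Eq. 8)."""
    return gamma[m, 1:]
```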

Motion determination

This part estimates the operator's motion by using the muscle synergy pattern $u(t)$ and the modifying vector $\gamma_m$ obtained from the motion-generation model part. The operator's motion is determined by comparing the muscle synergy pattern $u(t)$ with the preset basis pattern $\hat{u}(g)$ corresponding to motion $g$. The degree of similarity for motion $g$ is defined as

$$S_g(t) = 1 - \frac{1}{2}\sum_{c=1}^{C}\big(u_c(t) - \hat{u}_c(g)\big)^2 \tag{9}$$

Because the second term on the right-hand side of Eq. 9 represents the distance between $u_c(t)$ and $\hat{u}_c(g)$, the degree of similarity equals 1 when the two patterns agree. The product of $S_g(t)$ and the corresponding element of the modifying vector $\gamma_m$, the output of the motion-generation model, is

$$S'_g(t) = \gamma_{mg} S_g(t) \tag{10}$$

where $S'_g(t)$ is the degree of similarity that takes the history of motion generation into consideration. The operator's motion is determined as the motion $g$ with the maximum $S'_g(t)$. Through this procedure, the operator's motions can be classified by considering both the muscle synergies and the transitions among motions.
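The two-step scoring of Eqs. 9 and 10 can be sketched directly; note that the sketch uses 0-based motion indices, whereas the paper numbers motions from 1, and the basis patterns and modifying values here are illustrative.

```python
import numpy as np

def determine_motion(u, U_basis, gamma_m):
    """Pick the motion g maximizing S'_g(t) = gamma_mg * S_g(t) (Eq. 10),
    with S_g(t) = 1 - 0.5 * sum_c (u_c(t) - u_hat_c(g))^2 (Eq. 9).
    Rows of U_basis are the preset basis patterns u_hat(g)."""
    u = np.asarray(u, dtype=float)
    S = 1.0 - 0.5 * ((u - np.asarray(U_basis, dtype=float)) ** 2).sum(axis=1)
    S_mod = np.asarray(gamma_m, dtype=float) * S
    return int(np.argmax(S_mod)), S_mod

U = np.eye(3)                         # basis patterns for three motions
g, S_mod = determine_motion([0.9, 0.1, 0.0], U, [1.0, 1.0, 1.0])
```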

Impedance model–based control

The prosthetic hand control system drives its motors based on the forces acting on the prosthesis, the operator's motion estimated by the EMG signal processing, and biomimetic control (49). The block diagram of the biomimetic control is shown in fig. S3.

First, the motors to control are chosen on the basis of the estimated operator's motion, and the command angle of each motor is calculated from the impedance model. The motion equation for motor $j$ ($j = 1, 2, \dots, J$) is defined as

$$I_j \ddot{\theta}_j + B_j(\alpha)\dot{\theta}_j + K_j(\alpha)(\theta_j - \theta_{j0}) = \tau_j - \tau_j^{\mathrm{ex}} \tag{11}$$

where $I_j$, $B_j(\alpha)$, and $K_j(\alpha)$ are the inertia, viscosity, and stiffness, respectively, and $\theta_j$ and $\theta_{j0}$ are the rotation angle and equilibrium angle of motor $j$. Here, $B_j(\alpha)$ and $K_j(\alpha)$ are defined as

$$B_j(\alpha) = b_{j,1}\alpha^{b_{j,2}} + b_{j,3} \tag{12}$$
$$K_j(\alpha) = k_{j,1}\alpha^{k_{j,2}} + k_{j,3} \tag{13}$$

where $\{b_{j,1}, b_{j,2}, b_{j,3}\}$ and $\{k_{j,1}, k_{j,2}, k_{j,3}\}$ are the impedance parameters, and $\tau_j$ and $\tau_j^{\mathrm{ex}}$ are the motor torque and the external torque, respectively. The viscosity and stiffness are made to vary with the muscle contraction level $\alpha$ so that the model reflects characteristics accompanying muscle activity. $\alpha$ is derived from $F_{g'}^{\max}$, which is measured as $F_{\mathrm{EMG}}(t)$ during maximum muscle contraction:

$$\alpha(t) = \frac{F_{\mathrm{EMG}}(t)}{F_{g'}^{\max}} \tag{14}$$

where $g'$ is the operator's motion estimated by the EMG signal processing. Furthermore, the torque $\tau_j$ is calculated as

$$\tau_j(t) = \alpha(t)\tau_j^{\max} \tag{15}$$

where $\tau_j^{\max}$ is the preset maximum torque. The command angle $\theta_j$ of motor $j$ is obtained by solving Eq. 11, thereby reflecting the inertia, viscosity, and stiffness of the human hand. Therefore, the prosthetic hand can be controlled smoothly, like a human hand.
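One way to obtain the command angle is to integrate the impedance model numerically. The sketch below uses forward Euler at the paper's 5.0-ms control cycle; all impedance parameter values are illustrative assumptions, not those of table S3.

```python
def command_angle(alpha, steps=2000, dt=0.005):
    """Integrate the motor impedance model (Eq. 11) by forward Euler to get
    the command angle theta_j for a constant contraction level alpha.
    Parameter values below are made-up placeholders, not the paper's."""
    I = 0.01                           # inertia I_j
    b1, b2, b3 = 0.5, 1.2, 0.1         # viscosity parameters (Eq. 12)
    k1, k2, k3 = 5.0, 1.1, 0.5         # stiffness parameters (Eq. 13)
    B = b1 * alpha ** b2 + b3          # B_j(alpha)
    K = k1 * alpha ** k2 + k3          # K_j(alpha)
    tau_max, tau_ex, theta0 = 1.0, 0.0, 0.0
    tau = alpha * tau_max              # motor torque (Eq. 15)
    theta, omega = theta0, 0.0
    for _ in range(steps):             # I*th'' + B*th' + K*(th - th0) = tau - tau_ex
        acc = (tau - tau_ex - B * omega - K * (theta - theta0)) / I
        omega += acc * dt
        theta += omega * dt
    return theta
```

At steady state the angle settles to $\theta_{j0} + (\tau_j - \tau_j^{\mathrm{ex}})/K_j(\alpha)$, so a stronger contraction both increases the commanded torque and stiffens the joint.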

Participants

Six intact young adults (males, right-handed, mean age 23.7 ± 0.58 years) and one upper-limb amputee (male, age 52, amputation site 14 cm below the right elbow) were voluntarily recruited for experiments 1 and 2. The amputee participant had used a myoelectric prosthesis (MyoBock hand) for 17 years. All participants were informed of the aim of the study and provided written informed consent before participating in the experiments. This study was approved by the Hiroshima University Ethics Committee (registration number E-840) and the Human Research Ethics Committee of the Hyogo Institute of Assistive Technology (registration number R1701B).

Experiment 1

The six intact participants took part in experiment 1. The participants were seated in front of a table during the experiment, with the right elbow flexed at 90° and placed on the table. The participants were instructed to maintain this posture during EMG recordings. EMG signals were measured from five electrodes (L = 5) attached to the right forearm (fig. S4A). The target motions consisted of five single motions and five combined motions (table S1): thumb flexion (M1), index finger flexion (M2), middle finger flexion (M3), ring and little finger flexion (M4), grasp (M5), pinch with two fingers (M6), peace sign (M7), pinch with three fingers (M8), hold up an index finger (M9), and thumbs-up (M10). To examine the influence of feedback of the classification results on performance, we divided the participants into two groups according to whether feedback was provided.

In the experiment, the participants first performed each single motion for 2 s, and training data were recorded. For training of the R-LLGMN, 20 samples randomly drawn from the training data were used for each motion. The participants then performed the five single motions in succession to confirm the relationship between the classified motion, the normalized EMG signals El(t), the force information FEMG(t), the muscle contraction level α, and the motor rotation angle θ. After that, the participants performed all 10 motions, including the unlearned combined motions, in random order. For the participants without feedback (participants 1 to 3), the prosthetic hand did not operate during the experiment, so the classification results were not conveyed to them. In contrast, the participants with feedback (participants 4 to 6) could observe the classification results in real time because they controlled the prosthetic hand during the experiment. Each motion was recorded for 10 s, and five trials were performed.

Classification accuracy was defined as the proportion of time, after the first 5 s, during which the classified motion matched the target motion, and it was calculated after the experiments. In these experiments, the cutoff frequency was fc = 1.0 Hz and the external torque was τjex = 0 N·m. The values of the modifying vectors were determined by trial and error for the first participant (participant 1) and used in common for the other participants (table S2). The impedance parameters in the biomimetic control were set as shown in table S3; they were determined on the basis of experimental measurements of human wrist impedance (49). Furthermore, the control cycle of the microcomputer was 5.0 ms, and the derivative component of the PID controller was not used.
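The accuracy measure can be computed as below; this is one plausible reading of the paper's definition (discard the first 5 s, then score the remaining samples), not its exact implementation.

```python
import numpy as np

def classification_accuracy(classified, target_label, t, t_skip=5.0):
    """Proportion of samples after the first t_skip seconds for which the
    classified motion equals the target label. A sketch of one plausible
    implementation of the paper's accuracy measure."""
    classified = np.asarray(classified)
    t = np.asarray(t, dtype=float)
    mask = t >= t_skip                 # discard the initial transient
    return float(np.mean(classified[mask] == target_label))

# 10-s trial sampled at 1 Hz for brevity; target motion label is 2
t = np.arange(0.0, 10.0, 1.0)
acc = classification_accuracy([0, 0, 0, 0, 0, 2, 2, 2, 1, 2], 2, t)
```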

Experiment 2

This experiment was performed on the amputee participant. The participant was seated comfortably in front of a table and was fitted with the myoelectric prosthetic hand system (Fig. 6). The system used in this experiment consisted of the developed prosthetic hand, socket, battery, control circuit, and circuit box. The socket was custom designed for the amputee participant. The battery used in this system was an Ottobock EnergyPack 757B21 (Ottobock HealthCare Deutschland GmbH, Duderstadt, Germany), the same battery used for the MyoBock hand. EMG signals were measured from three electrodes (L = 3) embedded in the inside of the socket (Fig. 6B). We carefully selected the electrode positions, taking into consideration the amputation site of the participant and following the instructions of the occupational therapist (fig. S4B). Small pieces of silicone gel made from supersoft urethane resin (Shore hardness, 5) were glued to all fingertips of the prosthetic hand to improve the stability of object holding. In this experiment, the target motions consisted of four single motions and one combined motion (table S4): pinch with three fingers (M1), index finger pointing with thumb up (M2), thumb flexion (M3), grasp (M4), and index finger pointing (M5). These motions are not independent five-finger motions but were considered to be frequently used in everyday life. They were chosen because the condition of the participant's residual muscles made it difficult to realize many motions. In addition, because the participant could not perform each motion by directly imagining the target motion, we asked him to associate the target motions with execution motions that he could imagine, as follows: M1, wrist flexion; M2, wrist extension; M3, little finger flexion; M4, co-contraction of wrist flexion and wrist extension.

First, the training samples for each single motion (M1 to M4) were collected in the same way as in experiment 1. The participant was then asked to attempt the following two tasks: an evaluation task of classification accuracy and a control task of the prosthetic hand. In the evaluation task, to evaluate classification accuracy of the single and combined motions, the participant was asked to perform each motion, including the combined motion, for 10 s over two trials. The classification accuracy of motions was then calculated as in experiment 1. During the training data collection and the evaluation task, the participant was instructed to maintain his posture. In the control task, we gave the participant three types of objects to manipulate: blocks, a plastic bottle, and a notebook. The participant was asked to pick up or hold these objects with the prosthetic hand freely. The control task for each object was conducted for 60 s. An arbitrary rest was taken between tasks to avoid muscle fatigue. The cutoff frequency fc, external torque τjex, impedance parameters, control cycle of the microcomputer, and type of controller were the same as in experiment 1. The values of the modifying vectors used in this experiment are shown in table S5.

SUPPLEMENTARY MATERIALS

robotics.sciencemag.org/cgi/content/full/4/31/eaaw6339/DC1

Fig. S1. Exterior and inside of the prosthetic hand and structure of the finger.

Fig. S2. General form of motion-generation model.

Fig. S3. Biomimetic impedance control system.

Fig. S4. Locations of electrodes.

Table S1. List of target motions for classification in experiment 1.

Table S2. List of modifying vectors used in experiment 1.

Table S3. List of impedance parameters used in the experiments.

Table S4. List of target motions for classification in experiment 2.

Table S5. List of modifying vectors used in experiment 2.

Movie S1. Control of five fingers.

Movie S2. Grasping of plastic bottle.

Movie S3. Block-picking task.

Movie S4. Plastic bottle–picking task.

Movie S5. Notebook-holding task.

REFERENCES AND NOTES

Acknowledgments: We thank Y. Honda, F. Mizobe, T. Shibanoki, and T. Takaki for helpful comments leading to design approaches; Y. Yamada for help in developing the control system; and H. Hayashi for helpful suggestions on the manuscript. Funding: This work was partially supported by JSPS KAKENHI Grants-in-Aid for Scientific Research C number 26462242. Author contributions: A.F. designed experiments, developed experimental programs, performed experiment 2, analyzed data, and wrote the manuscript. S.E. designed and developed the prosthetic hand and control circuit, designed experiments, and wrote the initial draft. K.N. and K.S. developed experimental programs and performed experiments 1 and 2. G.N. designed and performed experiment 2 and edited the manuscript. A.M. designed and developed the socket for the prosthetic hand. T.C. and T.T. directed the study and edited the manuscript. Competing interests: The authors declare that they have no competing financial interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. Contact A.F. for source code and other materials.