Research Article | HUMAN-ROBOT INTERACTION

Noninvasive neuroimaging enhances continuous neural tracking for robotic device control


Science Robotics  19 Jun 2019:
Vol. 4, Issue 31, eaaw6844
DOI: 10.1126/scirobotics.aaw6844
  • Fig. 1 Source-based CP BCI robotic arm framework.

    The proposed framework addressed both user and machine learning aspects of BCI technology before being implemented in the control of a realistic robotic device. User learning was addressed by investigating the behavioral and physiological effects of BCI training using sensor-level neurofeedback with a traditional DT center-out task (n = 11) and a more realistic CP task (n = 11) (top left). The effects of BCI training were further tested in the CP task using source-level neurofeedback (n = 11) obtained through online ESI with user-specific anatomical models (center). This design allowed us to determine both the optimal task and neurofeedback domain for BCI skill acquisition. The machine learning aspect was further examined across the skill spectrum by testing the effects of source-level neurofeedback, compared with sensor-level neurofeedback, in naïve (n = 13) and experienced (n = 16) users in a randomized single-blinded design (top right). The user and machine learning components of the proposed framework were then combined to achieve real-time continuous source-based control of a robotic arm (n = 6) (bottom). Comparing BCI performance of robotic arm and virtual cursor control demonstrated the ease of translating neural control of a virtual object to a realistic assistive device useful for clinical applications.

  • Fig. 2 BCI performance and user engagement.

    (A) Depiction of the CP edge wrapping feature. (B) Tracking trajectory during an example 2D CP trial. (C) Training feature maps for the DT and CP training groups for horizontal (top) and vertical (bottom) cursor control. ρ2, squared correlation coefficient. (D and E) 2D BCI performance for the CP (D) and DT (E) task at baseline and evaluation for the CP and DT training groups. The red dotted line indicates chance level. The effect size, ∣r∣, is indicated under each pair of bars. (F) Task learning for the CP (top) and DT (bottom) tasks. (G) Eye blink EEG component scalp topography (top) and activity (bottom left) at baseline and evaluation, and activity during each task (CP versus DT) (bottom right). Bars indicate mean + SEM. Statistical analysis using a one- (F) or two-way repeated-measures (D, E, and G) ANOVA (n = 11 per group) with main effects of task, and time and task, respectively. Main effect of time: #P < 0.05, ###P < 0.005. Tukey’s HSD post hoc test: *P < 0.05, ***P < 0.005.

  • Fig. 3 Electrophysiological learning effects.

    (A and B) Left versus right MI task analysis. (A) Maximum sensorimotor R2 value for the CP and DT training groups for horizontal control task. The effect size, ∣r∣, is indicated under each pair of bars. (B) R2 topographies at baseline (top row) and evaluation (bottom row) for the CP and DT training groups for horizontal control tasks. (C and D) Both hands versus rest MI task analysis. Same as (A) and (B) for vertical control tasks. (E and F) Statistical topographies indicating electrodes that displayed a significant increase in R2 values for the horizontal (E) and vertical (F) control tasks. The electrode map in the middle provides a reference for the electrodes shown. Bar graphs below each topography provide a count for the number of electrodes meeting the various significance thresholds. Bars indicate mean + SEM. Statistical analysis using a one- (E and F) or two-way repeated-measures (A and C) ANOVA (n = 11 per group) with main effects of time [blue, P < 0.05; green, P < 0.01; yellow, P < 0.005; red outline, P < 0.05, false discovery rate (FDR) corrected] and time (#P < 0.05, ###P < 0.005) and training task, respectively. Tukey’s HSD post hoc test: *P < 0.05.

  • Fig. 4 Source-level neurofeedback.

    (A and B) 2D BCI performance for the CP (A) and DT (B) task at baseline and evaluation for the CP and source CP (sCP) training groups. The red dotted line indicates chance level. The effect size, ∣r∣, is indicated under each pair of bars. (C) Task learning for the CP (left) and DT (right) tasks. Bars indicate mean + SEM. Statistical analysis using a one- (C) or two-way repeated-measures (A and B) ANOVA (n = 11 per group) with main effects of training decoding domain, and time and training decoding domain, respectively. Main effect of time: #P < 0.05, ###P < 0.005. Tukey’s HSD post hoc test: *P < 0.05, ***P < 0.005. n.s., not significant. (D) Group-level training feature maps for the training groups for horizontal (top) and vertical (bottom) cursor control. User-specific features were projected onto a template brain for group averaging.

  • Fig. 5 Online 2D CP source versus sensor BCI performance.

    (A and B) Experienced user performance (n = 16). (A) Group-level MSE for source and sensor 2D CP cursor control. Light and dark gray blocks represent performance for the CP training group (n = 11; Fig. 2D) before (naïve) and after training (experienced). The effect size, ∣r∣, is indicated under the pair of bars. (B) Group-level squared-error histograms for 2D CP sensor and source cursor control. (C and D) Naïve user performance (n = 13). Same as (A) and (B) for naïve user data. (E) Scale drawing of the CP paradigm workspace displaying the spatial threshold derived from experienced (yellow) and naïve (green) user data (fig. S6). (F) Cursor dwell time within the spatial threshold for experienced (left) and naïve (right) users. (G) Group-level feature maps for horizontal (top) and vertical (bottom) cursor control for naïve (right) and experienced (left) users. User-specific features were projected onto a template brain for group averaging. (H) Feature spread analysis between experienced and naïve users for source (left) and sensor (right) features for horizontal (top) and vertical (bottom) control. Bars indicate mean + SEM. Statistical analysis using a one- (C and D) or two-way repeated-measures (A and B) ANOVA with main effects of decoding domain, and time and decoding domain, respectively. Main effect of decoding domain: ###P < 0.005 (A, C, and F), gray bar; P < 0.05 uncorrected, red bar; P < 0.05, FDR corrected (B and D). Mann-Whitney U test with Bonferroni correction for multiple comparisons (H): +P < 0.05, +++P < 0.005.

  • Fig. 6 Source-based CP BCI robotic arm control.

    (A) Robotic arm CP BCI setup. Users controlled the 2D continuous movement of a 7–degree of freedom robotic arm to track a randomly moving target on a computer screen. (B) Depiction of the CP edge repulsion feature (in contrast to the edge wrapping feature; Fig. 2A) used to accommodate the physical limitations of the robotic arm. (C) Group-level feature maps for the horizontal (top row) and vertical (bottom row) control dimensions projected onto a template brain. (D) Group-level 2D MSE for the various control conditions. Bars indicate mean + SEM. (E) Box-and-whisker plots for the group-level squared tracking correlation (ρ2) values for the horizontal (left) and vertical (right) dimensions during 2D CP control for the various control conditions. Blue lines indicate the medians, tops and bottoms of the boxes indicate the 25th and 75th percentiles, and the top and bottom whiskers indicate the respective minimum and maximum values. Control conditions include virtual cursor (white), hidden cursor (gray), and robotic arm (black). The red dotted line indicates chance level. Statistical analysis using a repeated-measures two-way ANOVA (n = 6 per condition) with main effects of time and control condition.

Supplementary Materials

  • robotics.sciencemag.org/cgi/content/full/4/31/eaaw6844/DC1

    Fig. S1. Example CP trajectories.

    Fig. S2. Squared tracking correlation histograms.

    Fig. S3. CP versus DT BCI learning.

    Fig. S4. Influence of eye activity on BCI control.

    Fig. S5. Source versus sensor BCI learning.

    Fig. S6. 2D CP source versus sensor spatial threshold.

    Fig. S7. Online 1D horizontal CP source versus sensor BCI performance.

    Fig. S8. Online 1D vertical CP source versus sensor BCI performance.

    Fig. S9. Offline source versus sensor sensorimotor modulation.

    Table S1. Absolute effect sizes for source-based versus sensor-based CP control.

    Table S2. Source-level sensorimotor ROI anatomical structures.

    Movie S1. 1D horizontal CP (unconstrained) BCI virtual cursor control example trial.

    Movie S2. 1D vertical CP (unconstrained) BCI virtual cursor control example trial.

    Movie S3. 2D CP (unconstrained) BCI virtual cursor control example trial.

    Movie S4. 1D horizontal CP (physically constrained) BCI robotic arm control example trial.

    Movie S5. 1D vertical CP (physically constrained) BCI robotic arm control example trial.

    Movie S6. 2D CP (physically constrained) BCI robotic arm control example trial.

    Movie S7. 2D CP (physically constrained) BCI virtual cursor control example trial.

