Stable Electromyographic Sequence Prediction During Movement Transitions using Temporal Convolutional Networks
Transient muscle movements influence the temporal structure of myoelectric signal patterns, often leading to unstable prediction behavior from movement-pattern classification methods. We show that temporal convolutional network (TCN) sequential models leverage the myoelectric signal's history to discover contextual temporal features that aid in correctly predicting movement intentions, especially during interclass transitions. We demonstrate myoelectric classification using TCNs to control three simultaneous hand and wrist degrees of freedom in an experiment involving nine human subjects. TCNs yield significant performance improvements over other state-of-the-art methods in both classification accuracy and stability.
Comment: 4 pages, 5 figures, accepted for the Neural Engineering (NER) 2019 Conference.
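The building block behind the TCN models above is the causal dilated convolution, which widens the receptive field over the myoelectric signal's history without letting future samples leak into a prediction. A minimal NumPy sketch of that single operation (not the paper's actual architecture; the kernel ordering and shapes are illustrative assumptions):

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution.

    Output at time t depends only on x[t], x[t-d], x[t-2d], ...;
    `kernel` is ordered oldest tap first, newest tap last.
    """
    k = len(kernel)
    pad = dilation * (k - 1)               # left-pad so output length == input length
    xp = np.concatenate([np.zeros(pad), x])
    y = np.empty(len(x))
    for t in range(len(x)):
        # taps at times t-(k-1)*d, ..., t-d, t (oldest first) in the padded signal
        taps = xp[t : t + pad + 1 : dilation]
        y[t] = np.dot(taps, kernel)
    return y

# Identity check: a kernel that weights only the newest tap reproduces x
x = np.arange(1.0, 6.0)
y = causal_dilated_conv1d(x, np.array([0.0, 0.0, 1.0]), dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets a TCN exploit long signal history during interclass transitions.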
Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS
Recent studies have shown that speech can be reconstructed and synthesized using only brain activity recorded with intracranial electrodes, but until now this has only been done using retrospective analyses of recordings from able-bodied patients temporarily implanted with electrodes for epilepsy surgery. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a clinical trial participant (ClinicalTrials.gov, NCT03567213) with dysarthria due to amyotrophic lateral sclerosis (ALS). We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the user from a vocabulary of 6 keywords originally designed to allow intuitive selection of items on a communication board. Our results show for the first time that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words that are intelligible to human listeners while preserving the participant's voice profile.
Online speech synthesis using a chronically implanted brain–computer interface in an individual with ALS
Brain–computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain–computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode, and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor, and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.
Stable Decoding from a Speech BCI Enables Control for an Individual with ALS without Recalibration for 3 Months
Brain-computer interfaces (BCIs) can be used to control assistive devices by patients with neurological disorders like amyotrophic lateral sclerosis (ALS) that limit speech and movement. For assistive control, it is desirable for BCI systems to be accurate and reliable, preferably with minimal setup time. In this study, a participant with severe dysarthria due to ALS operates computer applications with six intuitive speech commands via a chronic electrocorticographic (ECoG) implant over the ventral sensorimotor cortex. Speech commands are accurately detected and decoded (median accuracy: 90.59%) throughout a 3-month study period without model retraining or recalibration. Use of the BCI does not require exogenous timing cues, enabling the participant to issue self-paced commands at will. These results demonstrate that a chronically implanted ECoG-based speech BCI can reliably control assistive devices over long time periods with only initial model training and calibration, supporting the feasibility of unassisted home use.
Coarse Electrocorticographic Decoding of Ipsilateral Reach in Patients with Brain Lesions
In patients with unilateral upper limb paralysis from strokes and other brain lesions, strategies for functional recovery may eventually include brain-machine interfaces (BMIs) using control signals from residual sensorimotor systems in the damaged hemisphere. When voluntary movements of the contralateral limb are not possible due to brain pathology, initial training of such a BMI may require use of the unaffected ipsilateral limb. We conducted an offline investigation of the feasibility of decoding ipsilateral upper limb movements from electrocorticographic (ECoG) recordings in three patients with different lesions of sensorimotor systems associated with upper limb control. We found that the first principal component (PC) of unconstrained, naturalistic reaching movements of the upper limb could be decoded from ipsilateral ECoG using a linear model. ECoG signal features yielding the best decoding accuracy were different across subjects. Performance saturated with very few input features. Decoding performances of 0.77, 0.73, and 0.66 (median Pearson's r between the predicted and actual first PC of movement using nine signal features) were achieved in the three subjects. The performance achieved here with small numbers of electrodes and computationally simple decoding algorithms suggests that it may be possible to control a BMI using ECoG recorded from damaged sensorimotor brain systems.
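The decoding approach described above (take the first principal component of the reach kinematics, then predict it from a handful of ECoG signal features with a linear model, scoring with Pearson's r) can be sketched end-to-end on synthetic data. All shapes, noise levels, and the plain least-squares fit below are assumptions for illustration, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 200 time samples, 3-D kinematics, 9 ECoG features
T = 200
latent = np.sin(np.linspace(0, 6 * np.pi, T))        # shared movement drive
kin = np.outer(latent, [2.0, 1.0, 0.5]) + 0.05 * rng.standard_normal((T, 3))

# First principal component of the kinematics: the decoding target
kin_c = kin - kin.mean(axis=0)
_, _, vt = np.linalg.svd(kin_c, full_matrices=False)
pc1 = kin_c @ vt[0]

# Synthetic ECoG features correlated with the movement, plus noise
feats = np.outer(latent, rng.standard_normal(9)) + 0.1 * rng.standard_normal((T, 9))

# Linear decoding model: least-squares fit of PC1 from nine features
X = np.column_stack([feats, np.ones(T)])             # intercept column
w, *_ = np.linalg.lstsq(X, pc1, rcond=None)
pred = X @ w

# Score as Pearson's r between predicted and actual first PC
r = np.corrcoef(pred, pc1)[0, 1]
```

With strongly movement-driven features the fit recovers the first PC almost perfectly; real ECoG features would of course be far noisier.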
Stability of ECoG high gamma signals during speech and implications for a speech BCI system in an individual with ALS: a year-long longitudinal study
OBJECTIVE: Speech brain-computer interfaces (BCIs) have the potential to augment communication in individuals with impaired speech due to muscle weakness, for example in ALS and other neurological disorders. However, to achieve long-term, reliable use of a speech BCI, it is essential for speech-related neural signal changes to be stable over long periods of time. Here we study, for the first time, the stability of speech-related electrocorticographic (ECoG) signals recorded from a chronically implanted ECoG BCI over a 12-month period. APPROACH: ECoG signals were recorded by an ECoG array implanted over the ventral sensorimotor cortex (vSMC) in a clinical trial participant with ALS. Because ECoG-based speech decoding has most often relied on broadband high gamma signal changes relative to baseline (non-speech) conditions, we studied longitudinal changes of high gamma band (HG) power at baseline and during speech, and we compared these with residual high frequency (HF) noise levels at baseline. Stability was further assessed by longitudinal measurements of signal-to-noise ratio (SNR), activation ratio (ActR), and peak speech-related HG response magnitude (HG response peaks). Lastly, we analyzed the stability of the event-related HG power changes (HG responses) for individual syllables at each electrode. MAIN RESULTS: We found that speech-related ECoG signal responses were stable over a range of syllables activating different articulators for the first year after implantation. SIGNIFICANCE: Together, our results indicate that ECoG can be a stable recording modality for long-term speech BCI systems for those living with severe paralysis. CLINICAL TRIAL: ClinicalTrials.gov registration number NCT03567213.
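The stability measures named in the abstract (high-gamma band power at baseline and during speech, SNR, and the activation ratio) can be illustrated with a simple FFT periodogram on synthetic signals. The 70–170 Hz band and every signal parameter below are assumptions for the sketch, not the study's processing pipeline:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)

baseline = noise                                     # rest: broadband noise only
speech = noise + 2.0 * np.sin(2 * np.pi * 100 * t)   # speech: added high-gamma rhythm

hg_rest = band_power(baseline, fs, 70, 170)
hg_speech = band_power(speech, fs, 70, 170)

activation_ratio = hg_speech / hg_rest               # speech HG power / baseline HG power
snr_db = 10 * np.log10(activation_ratio)             # the same ratio in decibels
```

Tracking quantities like these session by session is one way to quantify whether an implant's speech-related responses drift over a year of use.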
Decoding model performance across sessions for the first PC of movement.
Distributions are displayed for the five cross-validations of the performance for one, two, and nine input features. S1–1 stands for subject 1, session 1; S1–2 stands for subject 1, session 2; etc. The maximum of the 1024 chance decoding attempts for all five folds with shuffled neural data is shown with an asterisk.
Presurgical MRI and brain reconstructions.
Reconstructions are shown for subject 1 (first row), subject 2 (bottom left and middle), and subject 3 (bottom right). The previous resection margins anterior to the precentral gyrus in subject 1 are highlighted in green in the upper right. Superior oblique and top axial views of the reconstruction for subject 2 show the lesion from different viewpoints. Pre-surgical MRI (FLAIR) of subject 3 reveals a lesion of the posterior left insula also involving the left internal capsule.
Correlation between actual and decoded kinematics with and without PCA.
The same methods were employed as with the first PC. The nine best-correlated neural features were selected for decoding in each dimension. The median of five folds of correlation is displayed for each session. Bold denotes a statistically significant difference from chance results (p < 0.05, Bonferroni-corrected Wilcoxon).
Subject Demographic and Clinical Information.