
    Unscented Kalman Filter for Brain-Machine Interfaces

    Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods for decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and made limited use of the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning, which describes neural activity significantly better than commonly used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th-order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation.
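The filter's two key features can be illustrated with a short sketch. This is a simplified, hypothetical rendering: the names `quadratic_tuning`, `augment_state`, and `unscented_points`, the per-dimension quadratic feature map, and the weight matrix `W` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quadratic_tuning(x, W):
    # Quadratic tuning model: firing rates are a linear function of
    # [1, x, x**2] rather than of x alone (a simplified form of the
    # non-linear tuning model described in the abstract).
    feats = np.concatenate(([1.0], x, x ** 2))
    return W @ feats

def augment_state(history):
    # n-th order state: stack the current kinematic state with the n-1
    # most recent states, so the tuning model can relate firing to
    # movement at multiple time offsets simultaneously.
    return np.concatenate(history)

def unscented_points(mean, cov, kappa=0.0):
    # Sigma points and weights of the unscented transform, which let the
    # Kalman filter propagate uncertainty through a non-linear tuning model.
    d = mean.size
    S = np.linalg.cholesky((d + kappa) * cov)
    pts = np.vstack([mean] + [mean + S[:, i] for i in range(d)]
                    + [mean - S[:, i] for i in range(d)])
    w = np.full(2 * d + 1, 1.0 / (2 * (d + kappa)))
    w[0] = kappa / (d + kappa)
    return pts, w
```

With `kappa = 0` the centre point receives zero weight, and the weighted mean of the sigma points reproduces the input mean exactly.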

    Brain-machine interface using electrocorticography in humans

    Paralysis has a severe impact on a patient’s quality of life and entails a high emotional burden and life-long social and financial costs. More than 5 million people in the USA suffer from some form of paralysis, and about 50% of people older than 65 experience difficulties or inabilities with movement. Restoring movement and communication for patients with neurological and motor disorders, stroke and spinal cord injuries remains a challenging clinical problem without an adequate solution. A brain-machine interface (BMI) allows subjects to control a device, such as a computer cursor or an artificial hand, exclusively by their brain activity. BMIs can be used to control communication and prosthetic devices, thereby restoring the communication and movement capabilities of paralyzed patients. So far, the most powerful BMIs have been realized by extracting movement parameters from the activity of single neurons. To record such activity, electrodes have to penetrate the brain tissue, thereby generating a risk of brain injury. In addition, recording instability, due to small movements of the electrodes within the brain and the neuronal tissue response to the electrode implant, is also an issue. In this thesis, I investigate whether electrocorticography (ECoG), an alternative recording technique, can be used to achieve BMIs with similar accuracy. First, I demonstrate a BMI based on the approach of extracting movement parameters from ECoG signals. Such an ECoG-based BMI can be further improved using supervised adaptive algorithms. To implement such algorithms, it is necessary to continuously receive feedback from the subject on whether the BMI-decoded trajectory was correct or incorrect. I show that, using the same ECoG recordings, neuronal responses to trajectory errors can be recorded, detected and differentiated from other types of errors. Finally, I devise a method that could be used to improve the detection of error-related neuronal responses.

    Using primary afferent neural activity for predicting limb kinematics in cat

    Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, these methods did not make efficient use of the information embedded in the firing rates of the neural population. This dissertation proposes new methods for decoding limb kinematics from primary afferent firing rates. We present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates of an ensemble of primary afferent neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. This thesis further explores the feasibility of decoding primary afferent firing rates in the presence of the stimulation artifact generated during functional electrical stimulation (FES). We show that kinematic information extracted from the firing rates of primary afferent neurons can be used in a 'real-time' application as feedback for control of FES in a neuroprosthesis. This work provides methods for decoding primary afferent neurons and sets a foundation for further development of closed-loop FES control of paralyzed extremities. Although a complete closed-loop neuroprosthesis for natural behavior seems far away, this work argues that an interface at the dorsal root ganglia should be considered a viable option.
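The state-space approach can be sketched as a standard Kalman filter whose state vector carries both position and velocity, so that neurons tuned to either parameter (or both) contribute jointly. This is a minimal sketch under an assumed linear-Gaussian model, not the dissertation's actual code.

```python
import numpy as np

def kalman_decode(y, A, C, Q, R, x0, P0):
    # State-space decoder: x_t = A x_{t-1} + w,  y_t = C x_t + v.
    # The state x holds limb position AND velocity, so neurons tuned to
    # either kinematic parameter contribute information jointly.
    x, P = x0, P0
    xs = []
    for yt in y:
        # Predict step: propagate state and uncertainty through the dynamics.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: correct the prediction with observed firing rates.
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (yt - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        xs.append(x.copy())
    return np.array(xs)
```

In a real decoder `C`, `Q`, and `R` would be fit from training data relating afferent firing rates to recorded limb kinematics.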

    Low-frequency local field potentials in primate motor cortex and their application to neural interfaces

    PhD Thesis. For patients with spinal cord injury and paralysis, there are currently very limited options for clinical therapy. Brain-machine interfaces (BMIs) are neuroprosthetic devices that are being developed to record from the motor cortex in such patients, bypass the spinal lesion, and use decoded signals to control an effector, such as a prosthetic limb. The ideal BMI would be durable, reliable, totally predictable, fully implantable, and have generous battery life. Current state-of-the-art BMIs are limited in all of these domains, partly because the typical signals used, neuronal action potentials or ‘spikes’, are very susceptible to micro-movement of recording electrodes. Recording spikes from the same neurons over many months is therefore difficult, and decoder behaviour may be unpredictable from day to day. Spikes also need to be digitized at high frequencies (~10⁴ Hz) and heavily processed. As a result, devices are energy-hungry and difficult to miniaturise. Low-frequency local field potentials (lf-LFPs; < 5 Hz) are an alternative cortical signal. They are more stable and can be captured and processed at much lower frequencies (~10¹ Hz). Here we investigate rhythmical lf-LFP activity, related to the firing of local cortical neurons, during isometric wrist movements in Rhesus macaques. Multichannel spike-related slow potentials (SRSPs) can be used to accurately decode the firing rates of individual motor cortical neurons, and subjects can control a BMI task using this synthetic signal as if they were controlling the actual firing rate. Lf-LFP-based firing rate estimates are stable over time, even once actual spike recordings have been lost. Furthermore, the dynamics of lf-LFPs are distinctive enough that an unsupervised approach can be used to train a decoder to extract movement-related features for use in biofeedback BMIs.
Novel electrode designs may help us optimise the recording of these signals, and facilitate progress towards a new generation of robust, implantable BMIs for patients. This work was supported by a Research Studentship from the MRC; Andy Jackson’s laboratory (and hence this work) is supported by the Wellcome Trust.
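As a rough, hypothetical illustration of estimating a neuron's firing rate from multichannel slow potentials, one could isolate the low-frequency band and fit a linear map by least squares. The moving-average filter, the assumed 100 Hz sampling rate, and the function names are stand-ins, not the SRSP method used in the thesis.

```python
import numpy as np

def lowpass(x, fs, cutoff=5.0):
    # Crude moving-average low-pass (a stand-in for a proper filter) to
    # isolate the < 5 Hz lf-LFP band at sampling rate fs.
    win = max(1, int(fs / cutoff))
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode='same')

def fit_rate_decoder(lfp_channels, rate):
    # Least-squares map from multichannel slow potentials to one neuron's
    # firing rate: a simplified analogue of the SRSP-based estimate above.
    X = np.column_stack([lowpass(ch, fs=100.0) for ch in lfp_channels])
    X = np.column_stack([np.ones(len(rate)), X])  # bias term
    w, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return w
```

Because the map is fit to the slow band only, the estimate does not depend on resolving individual spikes, which is the property that makes it robust to losing spike recordings.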

    Decoding motor intentions from human brain activity

    “You read my mind.” Although this simple everyday expression implies ‘knowledge or understanding’ of another’s thinking, true ‘mind-reading’ capabilities implicitly seem constrained to the domains of Hollywood and science-fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping characteristic changes in brain activity that occur prior to an action being initiated. For instance, invasive neural recordings in non-human primates have significantly increased our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain by showing that it is possible to decode or ‘predict’ upcoming sensorimotor behaviors (e.g., movements of the arm/eyes) based on preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a successful counterpart for this predictive ability and a similar detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable due to the limitations of non-invasive brain mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where in the human brain intentions or plans for action are coded is not only important for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking – movements critical to our interactions with the world – but also for understanding homologies between human and non-human primate brain areas, allowing the transfer of neural findings between species. In the current thesis, I employed multi-voxel pattern analysis (MVPA), a new fMRI technique that has made it possible to examine the coding of neural information at a more fine-grained level than that previously available. 
    I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex and specifically asked the question: What types of predictive information about a subject's upcoming movement can be decoded from preceding changes in neural activity? Project 1 first used fMRI MVPA to determine, largely as a proof-of-concept, whether or not specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans, along with their intended directions (left vs. right), could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals were represented independently from how those goals were to be implemented. To this end, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) could be maintained across actions, but the way in which those actions were kinematically achieved changed in accordance with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated planning and execution neural responses in time, allowing us to isolate the preparatory patterns of brain activity that form prior to movement. Project 1 found that the plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches). Moreover, we found that several parieto-frontal brain regions, similar to what had previously been demonstrated only in non-human primates, could actually be characterized according to the types of movements they can decode.
    Project 2 found a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and eye. This latter finding demonstrates, similar to that shown previously in non-human primates, that some brain areas code for the end motor goal (i.e., target location) independent of the effector used. Project 3 identified regions that decoded upcoming hand actions only, upcoming tool actions only, and, rather interestingly, areas that predicted actions with both effectors (hand and tool). Notably, some of these latter areas were found to represent the higher-level goals of the movement (grasping vs. reaching) instead of the specific lower-level kinematics (hand vs. tool) necessary to implement those goals. Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns and specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviors.
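The logic of such MVPA decoding can be sketched with a minimal nearest-centroid classifier over delay-period voxel patterns. This is an illustrative stand-in (correlation- and SVM-based classifiers are also common in MVPA); the names and shapes are assumptions, not the thesis's analysis pipeline.

```python
import numpy as np

def nearest_centroid_decode(train_patterns, train_labels, test_pattern):
    # Represent each planned action class by its mean delay-period voxel
    # pattern, then assign a new trial's pattern to the closest centroid.
    labels = sorted(set(train_labels))
    centroids = {c: train_patterns[train_labels == c].mean(axis=0) for c in labels}
    dists = {c: np.linalg.norm(test_pattern - m) for c, m in centroids.items()}
    return min(dists, key=dists.get)
```

Above-chance classification of held-out delay-period trials is what licenses the claim that a region carries predictive, intention-related information.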

    Inferring single-trial neural population dynamics using sequential auto-encoders

    Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, LFADS accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.
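LFADS itself is a deep sequential auto-encoder. As a greatly simplified stand-in for the underlying idea, that single-trial spiking is a noisy emission of low-dimensional latent dynamics, one can smooth single-trial counts and project them onto principal components. This sketch is illustrative only and is not the LFADS algorithm; all names and parameters are assumptions.

```python
import numpy as np

def recover_latents(spikes, n_latents=2, smooth_win=5):
    # spikes: (time, neurons) single-trial counts. Smooth each neuron's
    # count series in time, then project onto the top principal components
    # as a crude estimate of the low-dimensional latent trajectory
    # (LFADS replaces this with a learned nonlinear dynamical model).
    kernel = np.ones(smooth_win) / smooth_win
    rates = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, spikes)
    rates = rates - rates.mean(axis=0)
    _, _, vt = np.linalg.svd(rates, full_matrices=False)
    return rates @ vt[:n_latents].T
```

Even this crude version recovers a shared latent signal from noisy per-neuron observations, which is the premise that makes single-trial inference possible.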

    Network Modeling of Motor Pathways from Neural Recordings

    During cued motor tasks, for both speech and limb movement, information propagates from primary sensory areas, to association areas, to primary and supplementary motor and language areas. Through the recent advent of high density recordings at multiple scales, it has become possible to simultaneously observe activity occurring from these disparate regions at varying resolution. Models of brain activity generally used in brain-computer interface (BCI) control do not take into account the global differences in recording site function, or the interactions between them. Through the use of connectivity measures, however, it has been made possible to determine the contribution of individual recording sites to the global activity, as they vary with task progression. This dissertation extends those connectivity models to provide summary information about the importance of individual sites. This is achieved through the application of network measures on the adjacency structure determined by connectivity measures. Similarly, by analyzing the coordinated activity of all of the electrode sites simultaneously during task performance, it is possible to elucidate discrete functional units through clustering analysis of the electrode recordings. In this dissertation, I first describe a BCI system using simple motor movement imagination at single recording sites. I then incorporate connectivity through the use of TV-DBN modeling on higher resolution electrode recordings, specifically electrocorticography (ECoG). I show that PageRank centrality reveals information about task progression and regional specificity which was obscured by direct application of the connectivity measures, due to the combinatorial increase in feature dimensionality. I then show that clustering of ECoG recordings using a method to determine the inherent cluster count algorithmically provides insight into how network involvement in task execution evolves, though in a manner dependent on grid coverage. 
    Finally, I extend clustering analysis to show how individual neurons in motor cortex form distinct functional communities. These communities are shown to be task-specific, suggesting that neurons can form functional units with distinct neural populations across multiple recording sites in a context-dependent, impermanent manner. This work demonstrates that network measures of connectivity models of neurophysiological recordings are a rich source of information relevant to the field of neuroscience, as well as offering the promise of improved degrees of freedom and naturalness in direct BCI control. These models are shown to be useful at multiple recording scales, from cortical-area-level ECoG to highly localized single-unit microelectrode recordings.
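PageRank centrality over a connectivity-derived adjacency matrix can be computed by power iteration. A minimal sketch, where the adjacency convention (`adj[i, j]` = strength of the directed influence of site `i` on site `j`) and the damping value are assumptions:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    # Power iteration for PageRank centrality on a (possibly asymmetric)
    # connectivity adjacency matrix, e.g. one estimated by TV-DBN modeling.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                  # guard dangling (no-outflow) nodes
    P = adj / out                        # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

Computed in a sliding window over the task, the resulting per-site scores summarize each recording site's importance as it varies with task progression, collapsing the combinatorial pairwise connectivity features to one value per site.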

    Real-time brain-machine interface architectures: neural decoding from plan to movement

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 129-135). Brain-machine interfaces (BMIs) aim to enable motor function in individuals with neurological injury or disease by recording the neural activity, mapping or 'decoding' it into a motor command, and then controlling a device such as a computer interface or robotic arm. BMI research has largely focused on the problem of restoring the original motor function. The goal therefore has been to achieve a performance close to that of the healthy individual. There have been compelling proof-of-concept demonstrations of the utility of such BMIs in the past decade. However, the performance of these systems needs to be significantly improved before they become clinically viable. Moreover, while developing high-performance BMIs with the goal of matching the original motor function is indeed valuable, a compelling goal is that of designing BMIs that can surpass original motor function. In this thesis, we first develop a novel real-time BMI for restoration of natural motor function. We then introduce a BMI architecture aimed at enhancing original motor function. We implement both our designs in rhesus monkeys. To facilitate the restoration of lost motor function, BMIs have focused on either estimating the continuous movement trajectory or the target intent. However, natural movement often incorporates both. Moreover, both target and trajectory information are encoded in the motor cortical areas. These observations suggest that BMIs should be designed to combine these principal aspects of movement. We develop a novel two-stage BMI to jointly decode the target and trajectory of a reaching movement. First, we decode the intended target from neural spiking activity before movement initiation. Second, we combine the decoded target with the spiking activity during movement to estimate the trajectory.
    To do so, we use an optimal feedback-control design that aims to emulate the sensorimotor processing underlying actual motor control and directly processes the spiking activity using point process modeling in real time. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. This BMI also performs significantly better than linear regression approaches, demonstrating the advantage of a design that more closely mimics the sensorimotor system. While restoring the original motor function is indeed important, a compelling goal is the development of a truly "intelligent" BMI that can transcend such function by considering the higher-level goal of the motor activity, and reformulating the motor plan accordingly. This would allow, for example, a task to be performed more quickly than possible by natural movement, or more efficiently than originally conceived. Since a typical motor activity consists of a sequence of planned movements, such a BMI must be capable of analyzing the complete sequence before action. As such, its feasibility hinges fundamentally on whether all elements of the motor plan can be decoded concurrently from working memory. Here we demonstrate that such concurrent decoding is possible. In particular, we develop and implement a real-time BMI that accurately and simultaneously decodes in advance a sequence of planned movements from neural activity in the premotor cortex. In our experiments, monkeys were trained to add to working memory, in order, two distinct target locations on a screen, then move a cursor to each, in sequence. We find that the two elements of the motor plan, corresponding to the two targets, are encoded concurrently during the working memory period.
    Additionally, and interestingly, our results reveal that the elements of the plan are encoded by largely disjoint subpopulations of neurons; that surprisingly small subpopulations are sufficient for reliable decoding of the motor plan; and that the subpopulation dedicated to the first target and its responses are largely unchanged when the second target is added to working memory, so that the process of adding information does not compromise the integrity of existing information. The results have significant implications for the architecture and design of future generations of BMIs with enhanced motor function capabilities. By Maryam Modir Shanechi, Ph.D.
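The two-stage idea (decode the target first, then estimate the trajectory while using the decoded target) can be caricatured in a few lines. This sketch replaces the thesis's point-process, optimal-feedback-control decoder with a toy distance-based classifier and a simple pull-toward-target integrator; all names and parameters are hypothetical.

```python
import numpy as np

def decode_target(premove_counts, target_templates):
    # Stage 1: pick the target whose mean pre-movement firing pattern
    # best matches the observed spike counts.
    d = [np.linalg.norm(premove_counts - m) for m in target_templates]
    return int(np.argmin(d))

def decode_trajectory(velocities, target, n_steps, alpha=0.3):
    # Stage 2 (simplified): integrate decoded velocity commands while
    # pulling the estimate toward the decoded target, mimicking the idea
    # of combining target intent with moment-to-moment activity.
    pos = np.zeros(2)
    path = []
    for t in range(n_steps):
        pos = pos + velocities[t] + alpha * (target - pos)
        path.append(pos.copy())
    return np.array(path)
```

The compensation property reported above falls out of this structure: a correct Stage 1 target keeps the trajectory on course even when the velocity estimates are noisy, and vice versa.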

    A data-driven machine learning approach for brain-computer interfaces targeting lower limb neuroprosthetics

    Prosthetic devices that replace a lost limb have become increasingly performant in recent years. Advances in both software and hardware allow for the decoding of electroencephalogram (EEG) signals to improve the control of active prostheses with brain-computer interfaces (BCIs). Most BCI research is focused on the upper body. Although BCI research for the lower extremities has increased in recent years, there are still gaps in our knowledge of the neural patterns associated with lower limb movement. Therefore, the main objective of this study is to show the feasibility of decoding lower limb movements from EEG data recordings. The second aim is to investigate whether well-known neuroplastic adaptations in individuals with an amputation have an influence on decoding performance. To address this, we collected data from multiple individuals with lower limb amputation and a matched able-bodied control group. Using these data, we trained and evaluated common BCI methods that have already been proven effective for upper limb BCIs. With an average test decoding accuracy of 84% for both groups, our results show that it is possible to discriminate different lower extremity movements using EEG data with good accuracy. There are no significant differences (p = 0.99) in the decoding performance of these movements between healthy subjects and subjects with lower extremity amputation. These results show the feasibility of using BCIs for lower limb prosthesis control and indicate that decoding performance is not influenced by neuroplasticity-induced differences between the two groups.
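One of the standard BCI methods alluded to above is common spatial patterns (CSP), which finds spatial filters whose output variance discriminates two classes of EEG epochs. The abstract does not specify the study's exact pipeline, so the following is a generic, minimal sketch with assumed names and shapes:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    # Common spatial patterns: find spatial filters w maximizing the ratio
    # of band-power (variance) between class A and class B epochs.
    # trials_*: lists of (channels, samples) EEG epochs.
    def mean_cov(trials):
        # Trace-normalized average spatial covariance of one class.
        return sum((x @ x.T) / np.trace(x @ x.T) for x in trials) / len(trials)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w.
    evals, evecs = np.linalg.eig(np.linalg.solve(Ca + Cb, Ca))
    order = np.argsort(evals.real)
    # Keep the extreme eigenvectors: most class-A-like and most class-B-like.
    keep = np.concatenate([order[:n_filters // 2],
                           order[-(n_filters - n_filters // 2):]])
    return evecs.real[:, keep].T
```

Log-variances of the filtered signals then serve as features for a simple classifier (e.g. LDA), which is the classic motor-imagery BCI pipeline.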