
    Future developments in brain-machine interface research

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in brain-machine interfaces that enact upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.

    Improving Brain–Machine Interface Performance by Decoding Intended Future Movements

    Objective. A brain–machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time lead into the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Main Results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.
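The core idea of the abstract above, decoding intended movement at a time lead, can be sketched as a lag-shifted regression: train a decoder to map neural activity at bin t to kinematics at bin t + lead, so the BMI output anticipates the movement and offsets control-loop delay. The data here are synthetic stand-ins, not the paper's recordings, and the decoder is a plain least-squares fit rather than the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: firing rates of 30 units over 1000 time bins,
# and a 2-D cursor trajectory linearly related to them.
n_bins, n_units, lead = 1000, 30, 3   # lead = bins of future prediction (e.g. 3 x 100 ms)
W_true = rng.normal(size=(n_units, 2))
rates = rng.normal(size=(n_bins, n_units))
kinematics = rates @ W_true + 0.1 * rng.normal(size=(n_bins, 2))

# Standard decoder: activity at bin t -> kinematics at bin t.
# Future-intent decoder: activity at bin t -> kinematics at bin t + lead.
X = rates[:-lead]             # predictors: activity now
Y_future = kinematics[lead:]  # targets: movement `lead` bins ahead
W, *_ = np.linalg.lstsq(X, Y_future, rcond=None)

# At run time, the decoded *future* position drives the cursor immediately,
# so the user's intended trajectory is realized with less effective delay.
decoded = rates @ W
```

With real recordings the quality of such a fit at increasing leads is exactly the offline analysis the abstract describes.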

    Brain–machine interface for eye movements

    A number of studies in tetraplegic humans and healthy nonhuman primates (NHPs) have shown that neuronal activity from reach-related cortical areas can be used to predict reach intentions using brain–machine interfaces (BMIs) and therefore assist tetraplegic patients by controlling external devices (e.g., robotic limbs and computer cursors). However, to our knowledge, there have been no studies that have applied BMIs to eye movement areas to decode intended eye movements. In this study, we recorded the activity from populations of neurons from the lateral intraparietal area (LIP), a cortical node in the NHP saccade system. Eye movement plans were predicted in real time using Bayesian inference from small ensembles of LIP neurons without the animal making an eye movement. Learning, defined as an increase in the prediction accuracy, occurred at the level of neuronal ensembles, particularly for difficult predictions. Population learning had two components: an update of the parameters of the BMI based on its history and a change in the responses of individual neurons. These results provide strong evidence that the responses of neuronal ensembles can be shaped with respect to a cost function, here the prediction accuracy of the BMI. Furthermore, eye movement plans could be decoded without the animals emitting any actual eye movements and could be used to control the position of a cursor on a computer screen. These findings show that BMIs for eye movements are promising aids for assisting paralyzed patients.
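Bayesian decoding of a planned saccade target from a small neuronal ensemble, as described above, can be illustrated with a minimal Poisson model: each unit's spike count is treated as Poisson with a target-dependent mean, and the decoder picks the target with the highest likelihood under a flat prior. All numbers below (16 units, 8 targets, tuning ranges) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 16 LIP-like units, 8 possible saccade targets.
n_units, n_targets = 16, 8
lam = rng.uniform(2.0, 20.0, size=(n_targets, n_units))  # true mean counts per target

# Training data: spike counts for many planned saccades to known targets.
targets = rng.integers(0, n_targets, size=400)
counts = rng.poisson(lam[targets])

# Fit: per-target mean count (maximum-likelihood Poisson rate).
lam_hat = np.array([counts[targets == k].mean(axis=0) for k in range(n_targets)])

def decode(c, lam_hat):
    """Bayesian decoding with a flat prior: pick the target maximizing
    the Poisson log-likelihood of the observed spike-count vector."""
    loglik = (c * np.log(lam_hat) - lam_hat).sum(axis=1)
    return int(np.argmax(loglik))

# Held-out trials: decode the planned target from counts alone.
test_t = rng.integers(0, n_targets, size=200)
test_c = rng.poisson(lam[test_t])
acc = np.mean([decode(c, lam_hat) == t for c, t in zip(test_c, test_t)])
print(f"decoding accuracy: {acc:.2f} (chance = {1/n_targets:.2f})")
```

In a closed-loop setting, updating `lam_hat` from the BMI's own history corresponds to the first component of the population learning the abstract reports.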

    Incorporating Feedback from Multiple Sensory Modalities Enhances Brain–Machine Interface Control

    The brain typically uses a rich supply of feedback from multiple sensory modalities to control movement in healthy individuals. In many individuals, these afferent pathways, as well as their efferent counterparts, are compromised by disease or injury resulting in significant impairments and reduced quality of life. Brain–machine interfaces (BMIs) offer the promise of recovered functionality to these individuals by allowing them to control a device using their thoughts. Most current BMI implementations use visual feedback for closed-loop control; however, it has been suggested that the inclusion of additional feedback modalities may lead to improvements in control. We demonstrate for the first time that kinesthetic feedback can be used together with vision to significantly improve control of a cursor driven by neural activity of the primary motor cortex (MI). Using an exoskeletal robot, the monkey's arm was moved to passively follow a cortically controlled visual cursor, thereby providing the monkey with kinesthetic information about the motion of the cursor. When visual and proprioceptive feedback were congruent, both the time to successfully reach a target decreased and the cursor paths became straighter, compared with incongruent feedback conditions. This enhanced performance was accompanied by a significant increase in the amount of movement-related information contained in the spiking activity of neurons in MI. These findings suggest that BMI control can be significantly improved in paralyzed patients with residual kinesthetic sense and provide the groundwork for augmenting cortically controlled BMIs with multiple forms of natural or surrogate sensory feedback.

    State Variables of the Arm May Be Encoded by Single Neuron Activity in the Monkey Motor Cortex

    Revealing the type of information encoded by neuron activity in the motor cortex is essential not only for understanding the mechanism of motion control but also for developing a brain-machine interface. Thus far, the concept of the preferred direction vector (PD) has dominated the discussion regarding how neural activity encodes information; however, a unified view of exactly what information is encoded has not yet been established. In the present study, a model was constructed to describe temporal neuron activity by a dot product of the PD and a movement-variable vector consisting of joint torque and angular velocity. The plausibility of this model was tested by comparing estimated neural activity with that recorded from the monkey motor cortex, and it was found that this model was able to explain the temporal pattern of neuron activity irrespective of its passive responsiveness. The mean determination coefficients of neurons that responded to proprioceptive stimuli and that responded to visual stimuli were relatively high values of 0.57 and 0.58, respectively. These results suggest that neurons in the monkey motor cortex encode state variables of the arm in a framework of modern control theory and that this information could be decoded for controlling a brain-machine interface.
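The model above, firing rate as a dot product of a PD and a state vector of joint torques and angular velocities, amounts to a linear regression whose goodness of fit is the reported determination coefficient. A minimal sketch, with synthetic arm-state data standing in for the recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical state variables of a 2-joint arm over time:
# [torque1, torque2, angvel1, angvel2] per time bin.
n_bins = 500
state = rng.normal(size=(n_bins, 4))

# The model: firing rate = baseline + PD . state, where PD is the
# neuron's preferred-direction vector in state space (values illustrative).
pd_true = np.array([1.5, -0.8, 0.6, 2.0])
rate = 10.0 + state @ pd_true + rng.normal(scale=1.0, size=n_bins)

# Fit baseline and PD by least squares, then score with R^2
# (the "determination coefficient" reported as ~0.57-0.58 for real neurons).
X = np.column_stack([np.ones(n_bins), state])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print(f"fitted PD: {coef[1:].round(2)}, R^2 = {r2:.2f}")
```

With real spike trains the same fit, applied per neuron, yields the per-neuron determination coefficients the abstract summarizes.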

    Sensing with the Motor Cortex

    The primary motor cortex is a critical node in the network of brain regions responsible for voluntary motor behavior. It has been less appreciated, however, that the motor cortex exhibits sensory responses in a variety of modalities including vision and somatosensation. We review current work that emphasizes the heterogeneity in sensorimotor responses in the motor cortex and focus on its implications for cortical control of movement as well as for brain-machine interface development.

    Brain-machine interface using electrocorticography in humans

    Paralysis has a severe impact on a patient’s quality of life and entails a high emotional burden and life-long social and financial costs. More than 5 million people in the USA suffer from some form of paralysis and about 50% of the people older than 65 experience difficulties or inabilities with movement. Restoring movement and communication for patients with neurological and motor disorders, stroke and spinal cord injuries remains a challenging clinical problem without an adequate solution. A brain-machine interface (BMI) allows subjects to control a device, such as a computer cursor or an artificial hand, exclusively by their brain activity. BMIs can be used to control communication and prosthetic devices, thereby restoring the communication and movement capabilities of paralyzed patients. So far, the most powerful BMIs have been realized by extracting movement parameters from the activity of single neurons. To record such activity, electrodes have to penetrate the brain tissue, thereby generating risk of brain injury. In addition, recording instability, due to small movements of the electrodes within the brain and the neuronal tissue response to the electrode implant, is also an issue. In this thesis, I investigate whether electrocorticography (ECoG), an alternative recording technique, can be used to achieve BMIs with similar accuracy. First, I demonstrate a BMI based on the approach of extracting movement parameters from ECoG signals. Such an ECoG-based BMI can further be improved using supervised adaptive algorithms. To implement such algorithms, it is necessary to continuously receive feedback from the subject whether the BMI-decoded trajectory was correct or incorrect. I show that, by using the same ECoG recordings, neuronal responses to trajectory errors can be recorded, detected and differentiated from other types of errors. Finally, I devise a method that could be used to improve the detection of error-related neuronal responses.
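Detecting error-related neuronal responses of the kind the thesis describes is, at its simplest, a single-trial classification problem: epochs time-locked to feedback are scored against an error-response template and thresholded. The sketch below uses synthetic epochs and a mean-difference template, which is one common baseline approach and not necessarily the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-channel ECoG epochs time-locked to feedback:
# error trials carry an added evoked deflection, correct trials do not.
n_trials, n_samp = 200, 50
erp = np.sin(np.linspace(0, np.pi, n_samp))   # assumed error-related waveform
labels = rng.integers(0, 2, size=n_trials)    # 1 = decoding error occurred
epochs = rng.normal(size=(n_trials, n_samp)) + np.outer(labels, 2.0 * erp)

# Build a mean-difference template from training trials, then classify
# held-out trials by thresholding the projection onto that template.
train, test = slice(0, 150), slice(150, None)
template = (epochs[train][labels[train] == 1].mean(0)
            - epochs[train][labels[train] == 0].mean(0))
scores = epochs @ template
thresh = scores[train].mean()
pred = (scores > thresh).astype(int)
acc = (pred[test] == labels[test]).mean()
print(f"error-detection accuracy on held-out trials: {acc:.2f}")
```

Detected errors can then serve as the feedback signal that drives the supervised adaptation of the decoder.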

    Spintronic Nanodevices for Neuromorphic Sensing Chips

    Recent developments in spintronics materials and physics promise a new type of magnetic sensor that can be embedded into silicon chips. These neuromorphic sensing chips will be designed to capture biomagnetic signals from active biological tissue for use as a brain-machine interface. They could lead to machines that sense and interact with the world in humanlike ways, accelerating years of fitful advance in artificial intelligence. To detect the weak biomagnetic signals, this work aims to develop a CMOS-compatible spintronic sensor based on the magnetoresistive (MR) effect. As an alternative to bulky superconducting quantum interference device (SQUID) systems, miniaturised spintronic devices can be integrated with standard CMOS technologies, making it possible to detect weak biomagnetic signals with micron-sized, non-cooled and low-cost devices. Fig. 1 shows the finite element method (FEM)-based simulation results of a Tunnelling-Magnetoresistive (TMR) sensor with an optimal structure in COMSOL Multiphysics. The optimal geometry and material are demonstrated and compared with the state of the art. The proposed TMR sensor achieves a linear response with a high TMR ratio of 172% and sensitivity of 223 μV/Oe. The results are promising for utilizing TMR sensors in future miniaturized brain-machine interfaces, such as magnetoencephalography (MEG) systems for neuromorphic sensing.
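To put the reported 223 µV/Oe sensitivity in context, a back-of-the-envelope check assuming a purely linear sensor response V = S·H within the operating range. The field magnitudes are illustrative (cortical MEG signals are on the order of 100 fT), not figures from the work itself.

```python
# Linear-response estimate for the TMR sensor described above.
S = 223e-6            # reported sensitivity, V/Oe
oe_per_tesla = 1e4    # 1 T = 10^4 Oe (in air/vacuum)

def output_voltage(field_tesla):
    """Sensor output (V) for a given applied field, simple linear model."""
    return S * field_tesla * oe_per_tesla

# MEG-scale cortical field (~100 fT) vs an Earth-field-scale signal (~50 uT).
for field in (100e-15, 50e-6):
    print(f"{field:.0e} T -> {output_voltage(field):.3e} V")
```

The sub-nanovolt output at MEG-scale fields illustrates why noise floor and shielding, not raw sensitivity alone, dominate the design of such systems.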

    Brain Machine Interface for a Robotic Arm

    The purpose of this project is to expand the capabilities of an existing interface of controlling a static robotic arm with brainwaves. Brainwaves are collected with an Emotiv EPOC headset. The Emotiv headset utilizes electroencephalography (EEG) to collect the brain signals. This project makes use of the Emotiv software suites to classify the thoughts of a subject as a specific action. The software then sends a keystroke to the robotic interface to control the robotic arm. The team is to identify actions for mapping, implement these chosen actions, and evaluate the system’s performance. The actions chosen and their implementation would also test the limits of the interface, and provide groundwork for future research. This semester, we are actively working on creating our own, independent signal processing system for analysis on subjects' thought patterns.
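An independent EEG signal-processing system of the kind this project describes typically starts from band-power features. A minimal sketch of that building block, using the EPOC's 128 Hz sampling rate; the test signal, band edges, and function names are illustrative assumptions, not part of the project.

```python
import numpy as np

fs = 128                      # Emotiv EPOC sampling rate, Hz
t = np.arange(2 * fs) / fs    # 2 s analysis window
# Synthetic "EEG": a 10 Hz (alpha-band) tone plus noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(4).normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi) Hz band, from the FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(signal, fs, lo, hi) for name, (lo, hi) in bands.items()}
print(features)
```

Feature vectors like this, computed per channel and per window, are what a downstream classifier would map to the keystrokes driving the robotic arm.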