
    Algorithms for Neural Prosthetic Applications

    abstract: In the last 15 years, there has been a significant increase in the number of motor neural prostheses used to restore limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown that a prosthesis can be controlled to accomplish varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectory) form the core of the neural decoding system. The choice of algorithm is thus determined mainly by the neural signal of interest and the output parameter being decoded. The main stages of a neural decoding system are neural data acquisition, feature extraction, feature selection, and the machine learning algorithm. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed and taken into consideration. Three challenges in developing robust neural decoding systems were addressed by exploring neural variability in the peripheral nervous system for dexterous finger movements, feature selection methods based on clinically relevant metrics, and a novel method for decoding dexterous finger movements based on ensemble methods.
    Doctoral Dissertation, Bioengineering, 201
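The decoding stage described above, a learned mapping from recorded neural activity to limb kinematics, can be sketched minimally as a regularized linear regression. All data below are synthetic, and the channel count, bin structure, and ridge model are illustrative assumptions, not the dissertation's actual method.

```python
# Sketch: linear (ridge) decoding of finger kinematics from binned firing rates.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

n_bins, n_channels = 500, 32          # 500 time bins, 32 recording channels
true_w = rng.normal(size=(n_channels, 1))

# Synthetic neural data: Poisson spike counts driving a 1-D kinematic variable
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
kinematics = rates @ true_w + rng.normal(scale=0.5, size=(n_bins, 1))

# Ridge regression: w = (X^T X + lambda*I)^-1 X^T y
lam = 1.0
X, y = rates, kinematics
w_hat = solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

pred = X @ w_hat
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]
print(f"decoding correlation: {r:.2f}")
```

In a real system the regression inputs would be features extracted from the recorded signals (e.g. firing rates or band power) after the feature-selection stage the abstract lists.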

    Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity

    The effectiveness of today's human-machine interaction is limited by a communication bottleneck, as operators are required to translate high-level concepts into a machine-mandated sequence of instructions. In contrast, we demonstrate effective, goal-oriented control of a computer system without any form of explicit communication from the human operator. Instead, the system generated the necessary input itself, based on real-time analysis of brain activity. Specific brain responses were evoked by violating the operators' expectations to varying degrees. The evoked brain activity demonstrated detectable differences reflecting congruency with or deviations from the operators' expectations. Real-time analysis of this activity was used to build a user model of those expectations, thus representing the optimal (expected) state as perceived by the operator. Based on this model, which was continuously updated, the computer automatically adapted itself to the expectations of its operator. Further analyses showed this evoked activity to originate from the medial prefrontal cortex and to exhibit a linear correspondence to the degree of expectation violation. These findings extend our understanding of human predictive coding and provide evidence that the information used to generate the user model is task-specific and reflects goal congruency. This paper demonstrates a form of interaction without any explicit input by the operator, enabling computer systems to become neuroadaptive, that is, to automatically adapt to specific aspects of their operator's mindset. Neuroadaptive technology significantly widens the communication bottleneck and has the potential to fundamentally change the way we interact with technology.
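The closed loop described above can be caricatured as follows: each candidate action is scored by the expectation-violation magnitude decoded from brain activity, a running estimate per action forms the user model, and the system prefers actions with the lowest expected violation. The decoder is simulated here, and the exponential update rule and all names are illustrative assumptions, not the paper's actual model.

```python
# Sketch: building a user model from graded expectation-violation signals.
import numpy as np

rng = np.random.default_rng(1)
moves = ["up", "down", "left", "right"]
expected_violation = {m: 0.0 for m in moves}
alpha = 0.2  # learning rate of the user model

def simulated_errp_amplitude(move, goal="up"):
    # Larger decoded amplitude = stronger violation of the operator's expectation.
    base = 0.2 if move == goal else 1.0
    return base + rng.normal(scale=0.1)

# Observe many machine actions and update the model per action
for _ in range(200):
    m = rng.choice(moves)
    v = simulated_errp_amplitude(m)
    expected_violation[m] += alpha * (v - expected_violation[m])

# The system now adapts by choosing the least-violating action
best = min(expected_violation, key=expected_violation.get)
print("preferred move:", best)
```

The linear correspondence between brain response and degree of violation reported in the paper is what makes such a graded (rather than binary) update plausible.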

    Brain-machine interface using electrocorticography in humans

    Paralysis has a severe impact on a patient’s quality of life and entails a high emotional burden and life-long social and financial costs. More than 5 million people in the USA suffer from some form of paralysis, and about 50% of people older than 65 experience difficulties with movement. Restoring movement and communication for patients with neurological and motor disorders, stroke, and spinal cord injuries remains a challenging clinical problem without an adequate solution. A brain-machine interface (BMI) allows subjects to control a device, such as a computer cursor or an artificial hand, exclusively by their brain activity. BMIs can be used to control communication and prosthetic devices, thereby restoring the communication and movement capabilities of paralyzed patients. So far, the most powerful BMIs have been realized by extracting movement parameters from the activity of single neurons. To record such activity, electrodes have to penetrate the brain tissue, thereby generating a risk of brain injury. In addition, recording instability, due to small movements of the electrodes within the brain and the neuronal tissue response to the electrode implant, is also an issue. In this thesis, I investigate whether electrocorticography (ECoG), an alternative recording technique, can be used to achieve BMIs with similar accuracy. First, I demonstrate a BMI based on the approach of extracting movement parameters from ECoG signals. Such an ECoG-based BMI can be further improved using supervised adaptive algorithms. To implement such algorithms, it is necessary to continuously receive feedback from the subject on whether the BMI-decoded trajectory was correct or incorrect. I show that, using the same ECoG recordings, neuronal responses to trajectory errors can be recorded, detected, and differentiated from other types of errors. Finally, I devise a method that could be used to improve the detection of error-related neuronal responses.
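The second ingredient of the thesis, telling apart epochs that contain an error-related response from those that do not, can be sketched with time-window means as features and a linear classifier. The data below are synthetic, and the window lengths, simulated response shape, and choice of LDA are illustrative assumptions.

```python
# Sketch: single-trial detection of an error-related response in recorded epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
fs, n_epochs, n_samples = 200, 300, 200   # 1 s epochs sampled at 200 Hz

t = np.arange(n_samples) / fs
erp = np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # response peaking ~350 ms

labels = rng.integers(0, 2, n_epochs)               # 1 = error epoch
epochs = rng.normal(scale=1.0, size=(n_epochs, n_samples))
epochs[labels == 1] += 2.0 * erp                    # add the evoked response

# Features: mean amplitude in consecutive 50 ms windows (10 samples each)
feats = epochs.reshape(n_epochs, -1, 10).mean(axis=2)

clf = LinearDiscriminantAnalysis()
half = n_epochs // 2
clf.fit(feats[:half], labels[:half])
acc = clf.score(feats[half:], labels[half:])
print(f"single-trial error detection accuracy: {acc:.2f}")
```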

    Channel Selection and Feature Projection for Cognitive Load Estimation Using Ambulatory EEG

    We present an ambulatory cognitive state classification system to assess the subject's mental load based on EEG measurements. The ambulatory cognitive state estimator is utilized in the context of a real-time augmented cognition (AugCog) system that aims to enhance the cognitive performance of a human user through computer-mediated assistance based on assessments of cognitive states using physiological signals including, but not limited to, EEG. This paper focuses particularly on the offline channel selection and feature projection phases of the design and aims to present mutual-information-based techniques that use a simple sample estimator for this quantity. Analyses conducted on data collected from 3 subjects performing 2 tasks (n-back/Larson) at 2 difficulty levels (low/high) demonstrate that the proposed mutual-information-based dimensionality reduction scheme can achieve up to 94% cognitive load estimation accuracy.
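Mutual-information-based channel selection of the kind described above can be sketched by estimating I(feature; label) per channel with a simple histogram (plug-in) estimator and keeping the top-ranked channels. The data are synthetic, and the estimator and bin count are illustrative assumptions, not the paper's exact sample estimator.

```python
# Sketch: ranking EEG channels by mutual information with the load label.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels = 400, 16
labels = rng.integers(0, 2, n_trials)        # low / high cognitive load

# Only the first three channels carry load-related information
feats = rng.normal(size=(n_trials, n_channels))
feats[:, :3] += 1.5 * labels[:, None]

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (nats) between a continuous feature and binary labels."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins, 2))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

mi = np.array([mutual_information(feats[:, c], labels) for c in range(n_channels)])
selected = np.argsort(mi)[::-1][:3]
print("selected channels:", sorted(selected.tolist()))
```

A plug-in estimator like this is biased upward for small samples, which is one reason the paper's choice of sample estimator matters in practice.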

    A maximum margin dynamic model with its application to brain signal analysis

    Ph.D., Doctor of Philosophy

    Development of an Electroencephalography-Based Brain-Computer Interface Supporting Two-Dimensional Cursor Control

    This study aims to explore whether human intentions to move, or to cease moving, the right and left hands can be decoded from spatiotemporal features in non-invasive electroencephalography (EEG) in order to control discrete two-dimensional cursor movement for a potential multi-dimensional brain-computer interface (BCI). Five naïve subjects performed either a sustained or a ceased motor task, time-locked to a predefined time window, using motor execution with physical movement or motor imagery. Spatial filtering, temporal filtering, feature selection, and classification methods were explored. The performance of the proposed BCI was evaluated by both offline classification and online two-dimensional cursor control. Event-related desynchronization (ERD) and post-movement event-related synchronization (ERS) were observed on the hemisphere contralateral to the moved hand for both motor execution and motor imagery. Feature analysis showed that EEG beta band activity over the motor cortex of the contralateral hemisphere provided the best detection of either sustained or ceased movement of the right or left hand. The offline classification of four motor tasks (sustain or cease to move the right or left hand) provided 10-fold cross-validation accuracy as high as 88% for motor execution and 73% for motor imagery. The subjects participating in experiments with physical movement were able to complete the online game with motor execution at an average accuracy of 85.5±4.65%; subjects participating in the motor imagery study also completed the game successfully. The proposed BCI provides a new practical multi-dimensional method using noninvasive EEG signals associated with natural human behavior, which does not require long-term training.
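The beta-band ERD/ERS feature reported above can be sketched as follows: band-pass each trial to the beta band, take log-variance per channel as a band-power feature, and classify left- versus right-hand trials. The data are synthetic, and the filter order, band edges, channel labels, and LDA classifier are illustrative assumptions.

```python
# Sketch: beta-band log-variance features for left/right hand classification.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
fs, n_trials, n_samples = 250, 200, 500       # 2 s trials at 250 Hz

labels = rng.integers(0, 2, n_trials)         # 0 = left hand, 1 = right hand
t = np.arange(n_samples) / fs
beta = np.sin(2 * np.pi * 20 * t)             # 20 Hz beta rhythm

# Two channels (C3, C4): ERD = reduced beta power contralateral to the moved hand
trials = rng.normal(scale=0.5, size=(n_trials, 2, n_samples))
for i, lab in enumerate(labels):
    amp = np.array([0.4, 1.0]) if lab == 1 else np.array([1.0, 0.4])
    trials[i] += amp[:, None] * beta

b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trials, axis=-1)
feats = np.log(filtered.var(axis=-1))         # log band-power per channel

clf = LinearDiscriminantAnalysis()
half = n_trials // 2
clf.fit(feats[:half], labels[:half])
acc = clf.score(feats[half:], labels[half:])
print(f"left/right classification accuracy: {acc:.2f}")
```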

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS) – profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can give LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting in which this research can be translated from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximizing the efficacy of each decoded intention is therefore critical. This thesis addresses the challenge of implementing a BMI intended for practical use, with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assess ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects.
    Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot builds an internal model for motor planning and coordination based on real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
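The c-VEP paradigm mentioned above can be sketched compactly: each target flickers with a circularly shifted copy of one binary code, so the decoder correlates the measured response with every shifted template and picks the best match. The code sequence, shift values, and noise level below are illustrative assumptions rather than the thesis's actual stimulus design.

```python
# Sketch: c-VEP target identification by template matching over circular shifts.
import numpy as np

rng = np.random.default_rng(5)

# A 63-bit pseudo-random binary code (stand-in for an m-sequence), in {-1, +1}
code = rng.integers(0, 2, 63) * 2.0 - 1.0
n_targets = 4
shifts = [0, 15, 30, 45]                      # one circular shift per target
templates = np.stack([np.roll(code, s) for s in shifts])

def identify(epoch):
    """Return the target whose shifted template correlates best with the epoch."""
    scores = templates @ epoch
    return int(np.argmax(scores))

# Simulate a response evoked by target 2, plus measurement noise
true_target = 2
epoch = np.roll(code, shifts[true_target]) + rng.normal(scale=0.8, size=63)
print("identified target:", identify(epoch))
```

Real m-sequences are preferred precisely because their circular autocorrelation is near zero at all nonzero lags, which keeps the off-target scores small.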

    Errare machinale est: The use of error-related potentials in brain-machine interfaces

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments towards this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices and in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches.
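One of the strategies the review covers, correcting the machine's actions, can be caricatured as a veto loop: if the single-trial ErrP detector flags the last action as erroneous, the action is undone. The detector below is simulated by its hit and false-alarm rates, and all names, rates, and the cursor task are illustrative assumptions.

```python
# Sketch: ErrP-based action correction in a 1-D cursor task.
import numpy as np

rng = np.random.default_rng(6)

def errp_detected(action_was_wrong, hit_rate=0.8, false_alarm_rate=0.1):
    """Simulated single-trial ErrP detector with imperfect accuracy."""
    p = hit_rate if action_was_wrong else false_alarm_rate
    return rng.random() < p

goal, pos, n_actions = 5, 0, 0
for _ in range(100):
    step = 1 if rng.random() < 0.7 else -1    # imperfect BMI decodes the step
    wrong = step != (1 if pos < goal else -1)
    pos += step
    if errp_detected(wrong):
        pos -= step                            # ErrP-based correction: undo
    n_actions += 1
    if pos == goal:
        break

print("reached goal:", pos == goal, "after", n_actions, "actions")
```

Even with an imperfect detector, vetoing flagged actions raises the effective per-step accuracy, which is the basic argument for integrating ErrPs into a BMI loop.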