
    Algorithms for Neural Prosthetic Applications

    In the last 15 years, there has been a significant increase in the number of motor neural prostheses used to restore limb function lost to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis through their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have demonstrated control of a prosthesis for varied tasks such as self-feeding, typing, reaching, grasping, and fine dexterous movements. A neural decoding system comprises three main components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper-limb kinematics, and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to output kinematics (such as finger trajectory) form the core of the neural decoding system. The choice of algorithm is thus imposed mainly by the neural signal of interest and the output parameter being decoded. The main stages of a neural decoding system are neural data acquisition, feature extraction, feature selection, and the machine learning algorithm itself. There have been significant advances in the field of neural prosthetic applications, but challenges remain in translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed. Three challenges in developing robust neural decoding systems were addressed here: exploring neural variability in the peripheral nervous system during dexterous finger movements, developing feature selection methods based on clinically relevant metrics, and introducing a novel method for decoding dexterous finger movements based on ensemble methods. Doctoral Dissertation, Bioengineering, 201
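The core mapping described above, from recorded neural activity to limb kinematics, is often learned with a regularized linear regression before more elaborate decoders are tried. A minimal numpy sketch on synthetic data (the Poisson firing rates, the 1-D finger-position target, and the ridge penalty `lam` are all illustrative assumptions, not values from the thesis):

```python
import numpy as np

def fit_ridge_decoder(rates, kinematics, lam=1.0):
    """Least-squares map from firing rates (T x N) to kinematics (T x D),
    with an L2 penalty lam for stability when channels are correlated."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # append a bias column
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ kinematics)

def decode(rates, W):
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ W

# Synthetic example: 200 time bins, 30 channels, one finger-position trace.
rng = np.random.default_rng(0)
true_w = rng.normal(size=(30, 1))
rates = rng.poisson(5.0, size=(200, 30)).astype(float)
pos = rates @ true_w + rng.normal(scale=0.1, size=(200, 1))
W = fit_ridge_decoder(rates, pos, lam=0.1)
pred = decode(rates, W)
```

In practice the decoder would be fit on a training block and evaluated on held-out movements; the in-sample fit here only illustrates the pipeline shape.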

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. The literature frequently reports insufficient detail about SVM implementation and/or parameter selection, making it impossible to reproduce a study's analysis and results. To perform an optimized classification and properly report results, a comprehensive critical overview of SVM applications is necessary. The aim of this paper is to review the use of SVMs in determining brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, the basic principles of SVM theory are outlined, together with a description of several relevant implementations from the literature. Details of the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVMs for HCI is discussed, and critical comparisons with other classifiers are reported.
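The kind of linear SVM surveyed above can be illustrated with a from-scratch trainer using Pegasos-style stochastic sub-gradient descent on the hinge loss; a real EEG/EMG study would use a dedicated toolbox, and the two-dimensional feature values here are synthetic stand-ins for band-power features:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent on the regularized hinge loss.
    X: (n, d) features; y: labels in {-1, +1}. Returns weights incl. bias."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # bias folded into weights
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (Xb[i] @ w)
            w *= (1.0 - eta * lam)         # shrink (regularization step)
            if margin < 1:                 # hinge is active: move toward sample
                w += eta * y[i] * Xb[i]
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.sign(Xb @ w)

# Two well-separated synthetic classes of 2-D features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 0.5, (50, 2)), rng.normal(1.5, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = train_linear_svm(X, y)
acc = (predict(X, w) == y).mean()
```

Kernelized SVMs, soft-margin parameter selection, and cross-validation, which the review emphasizes as often under-reported, are deliberately left out of this sketch.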

    Limb-state information encoded by peripheral and central somatosensory neurons: Implications for an afferent interface

    A major issue to be addressed in the development of neural interfaces for prosthetic control is the need for somatosensory feedback. Here, we investigate two possible strategies: electrical stimulation of either the dorsal root ganglia (DRG) or primary somatosensory cortex (S1). For each approach, we must determine a model that reflects the representation of limb state in terms of neural discharge. This model can then be used to design stimuli that artificially activate the nervous system to convey information about limb state to the subject. Electrically activating DRG neurons using naturalistic stimulus patterns, modeled on recordings made during passive limb movement, evoked activity in S1 that was similar to that of the original movement. We also found that S1 neural populations could accurately discriminate different patterns of DRG stimulation across a wide range of stimulus pulse rates. In studying the neural coding in S1, we also decoded the kinematics of active limb movement using multi-electrode recordings in the monkey. Neurons having both proprioceptive and cutaneous receptive fields contributed equally to this decoding. Some neurons were most informative of limb state in the recent past, but many others appeared to signal upcoming movements, suggesting that they were also modulated by an efference copy signal. Finally, we show that a monkey was able to detect stimulation through a large percentage of electrodes implanted in area 2. We discuss the design of appropriate stimulus paradigms for conveying time-varying limb-state information, and the relative merits and limitations of central and peripheral approaches.
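The naturalistic stimulus patterns mentioned above boil down to converting a recorded firing-rate profile into a train of stimulation pulse times. A simple deterministic scheme (an illustration of the idea, not the authors' protocol) integrates the rate and emits a pulse each time the integral crosses one:

```python
def rate_to_pulse_times(rates, dt):
    """Convert a firing-rate profile (pulses/s, one value per time bin of
    width dt seconds) into pulse times: emit a pulse whenever the running
    integral of the rate reaches 1, so pulse count tracks total rate."""
    pulses = []
    acc = 0.0
    for k, r in enumerate(rates):
        acc += r * dt
        while acc >= 1.0:
            acc -= 1.0
            pulses.append(k * dt)  # time-stamp at the current bin
    return pulses

# A movement-like profile: 100 Hz for 0.5 s, then 20 Hz for 0.5 s, 1 ms bins.
rates = [100.0] * 500 + [20.0] * 500
pulses = rate_to_pulse_times(rates, dt=0.001)
```

The integral of this profile is 60 pulses over one second, so the train carries roughly 60 pulses, dense early and sparse late, mirroring the rate modulation a DRG recording would show during passive movement.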

    Decoding Distributed Neuronal Activity in Extrastriate Cortical Areas for Visual Prosthetic Applications

    Cortical visual prostheses are intended to restore vision to blind individuals by applying a pattern of electrical currents at discrete sites on the visual cortex. To date, the quality of vision reported in the literature is that of a small number of phosphenes (percepts of spatially localized spots of light) with no organization to generate a meaningful percept. The main challenge consists of developing methods to transfer information of a visual scene into a pattern of stimulation that is understandable to the brain.
The key to solving this challenge is understanding how phosphene characteristics (or, in general, visual characteristics) are represented in a distributed pattern of neural activity. One approach is to determine how well neural responses can detect changes in a specific characteristic of the stimuli. To this end, we studied the discrimination capability of the extrastriate cortical area V4 in macaque monkeys. Extrastriate cortical areas have compact retinotopic maps that provide an opportunity to sample a large region of visual space using standard devices such as Utah arrays, which helps to build minimally invasive prosthetic devices. Our contribution concerns the spatial resolution of local field potentials (LFPs) in area V4, used to determine the limits on the capability of visual prosthetic devices to generate phosphenes at multiple positions. LFPs were used because they represent neural activity over a scale of 400 microns, which is comparable to the spread of microstimulation effects in the cortex. Extrastriate visual area V4 also contains a retinotopic map of visual space and offers an opportunity to recover the location of static stimuli. We applied support vector machines (SVMs) to determine the capability of LFPs (compared to multi-unit activity, MUA) in discriminating responses to phosphene-like stimuli (probes) located at different spatial separations. We found that, despite the large receptive field sizes in V4, combined responses from multiple sites were capable of both fine and coarse discrimination of positions. We proposed an electrode selection strategy based on the linear weights of the decoder (using the highest weight values) that significantly reduced the number of electrodes required for discrimination while at the same time increasing performance. Applying this strategy has the potential to reduce tissue damage in real applications.
We concluded that, for the correct operation of prosthetic devices, electrical microstimulation should generate a pattern of neural activity similar to the evoked activity corresponding to an expected percept. Moreover, in the design of a visual prosthesis, the limits on the discrimination capability of the implanted brain areas should be taken into account. These limits may differ for MUA and LFP.
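The weight-based electrode selection strategy can be sketched as: train a linear decoder on all channels, rank channels by absolute weight, and keep only the top k. The sketch below uses regularized least squares rather than the SVM of the thesis, on synthetic LFP-like features in which only two of 64 channels carry the class signal:

```python
import numpy as np

def linear_weights(X, y, lam=1e-3):
    """Regularized least-squares decoder weights for labels y in {-1, +1}.
    (Bias term omitted for brevity; features are assumed roughly centered.)"""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def select_electrodes(X, y, k):
    """Rank channels by |weight| of a decoder trained on all channels,
    then keep the k channels with the largest absolute weights."""
    w = linear_weights(X, y)
    return np.argsort(-np.abs(w))[:k]

# 64 synthetic channels, 200 trials; only channels 3 and 17 are informative.
rng = np.random.default_rng(2)
y = np.repeat([-1.0, 1.0], 100)
X = rng.normal(size=(200, 64))
X[:, 3] += 2.0 * y
X[:, 17] += 1.5 * y
top = select_electrodes(X, y, k=2)
```

In a prosthetic setting, a reduced channel set found this way would then be re-validated with a decoder retrained on only those channels before any electrodes are deemed expendable.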

    Using primary afferent neural activity for predicting limb kinematics in cat

    Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, these methods did not make efficient use of the information embedded in the firing rates of the neural population. This dissertation proposes new methods for decoding limb kinematics from primary afferent firing rates. We present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates of an ensemble of primary afferent neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing-rate models significantly increases the accuracy of the decoded trajectory. This dissertation further explores the feasibility of decoding primary afferent firing rates in the presence of the stimulation artifact generated during functional electrical stimulation (FES). We show that kinematic information extracted from the firing rates of primary afferent neurons can be used in a real-time application as feedback for control of FES in a neuroprosthesis. This work provides methods for decoding primary afferent neurons and sets a foundation for further development of closed-loop FES control of paralyzed extremities. Although a complete closed-loop neuroprosthesis for natural behavior seems far away, the premise of this work argues that an interface at the dorsal root ganglia should be considered a viable option.
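State-space decoding of this kind is commonly realized as a Kalman filter whose hidden state is the limb kinematics (here position and velocity, so velocity enters the model as the abstract advocates) and whose observations are the afferent firing rates. A minimal numpy sketch on simulated data; the linear-Gaussian model and every parameter value below are illustrative assumptions, not figures from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden state: [position, velocity]; observations: 20 firing-rate channels.
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])          # kinematic dynamics (dt = 0.1 s)
Q = np.diag([1e-4, 1e-2])            # process-noise covariance
H = rng.normal(size=(20, 2))         # rate = H @ state + noise (tuning model)
R = 0.5 * np.eye(20)                 # observation-noise covariance

# Simulate a trajectory and the rates it evokes.
T = 300
x = np.zeros(2)
states, obs = [], []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    states.append(x)
    obs.append(H @ x + rng.multivariate_normal(np.zeros(20), R))
states, obs = np.array(states), np.array(obs)

# Standard Kalman filter over the observed rates.
m, P = np.zeros(2), np.eye(2)
est = []
for z in obs:
    m, P = A @ m, A @ P @ A.T + Q                      # predict
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(z)))   # Kalman gain
    m = m + K @ (z - H @ m)                            # update with rates
    P = (np.eye(2) - K @ H) @ P
    est.append(m)
est = np.array(est)
```

Because the filter fuses the dynamics prior with every channel's evidence at each step, neurons tuned to mixtures of position and velocity contribute usable information rather than being discarded, which is the efficiency argument made above.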

    Machine Learning Methods for Image Analysis in Medical Applications, from Alzheimer's Disease, Brain Tumors, to Assisted Living

    Healthcare has progressed greatly in recent years owing to technological advances, where machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related issues (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and to improve the performance of glioma classification. In the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.
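The "scale level" idea behind the multi-scale CNN above amounts to presenting the same image to parallel streams at several resolutions. A minimal numpy sketch of building such an input pyramid with average pooling (the pooling factors and the toy 8x8 image are illustrative assumptions; the thesis's actual preprocessing is not specified here):

```python
import numpy as np

def avg_pool2d(img, k):
    """Average-pool a 2-D array with a k x k window and stride k
    (any rows/columns that do not fill a full window are dropped)."""
    h, w = img.shape
    return img[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_stack(img, scales=(1, 2, 4)):
    """Return the image at several resolutions; scale 1 is the original.
    Each entry would feed one stream of a multi-scale CNN."""
    return [img if s == 1 else avg_pool2d(img, s) for s in scales]

img = np.arange(64.0).reshape(8, 8)   # toy stand-in for an MRI slice
pyramid = multiscale_stack(img)
```

Coarser levels summarize tissue over larger neighborhoods, so streams at different scales see complementary structure, which is the motivation for multi-scale feature learning.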

    Electroencephalography (EEG)-based Brain-Computer Interfaces

    Brain-Computer Interfaces (BCI) are systems that can translate the brain activity patterns of a user into messages or commands for an interactive application. The brain activity processed by BCI systems is usually measured using Electroencephalography (EEG). In this article, we aim to provide an accessible and up-to-date overview of EEG-based BCI, with a main focus on its engineering aspects. We notably introduce some basic neuroscience background and explain how to design an EEG-based BCI, in particular reviewing which signal processing, machine learning, software, and hardware tools to use. We present BCI applications, highlight some limitations of current systems, and suggest some perspectives for the field.
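The signal-processing stage mentioned above typically starts with band-power features, the energy of the EEG in physiologically meaningful frequency bands. A minimal numpy sketch using the FFT (the sampling rate, window length, and band edges are illustrative assumptions):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of a real-valued signal in the band [lo, hi) Hz:
    sum of squared FFT magnitudes over the bins in that range."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

# A pure 10 Hz "alpha" oscillation: 2 s sampled at 250 Hz.
fs = 250
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10.0 * t)
alpha = band_power(eeg, fs, 8, 13)    # alpha band: energy concentrates here
beta = band_power(eeg, fs, 13, 30)    # beta band: essentially empty
```

Per-channel band powers like these, computed over short sliding windows, are the feature vectors that the machine-learning stage of an EEG-based BCI then classifies.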

    Bayesian machine learning applied in a brain-computer interface for disabled users

    A brain-computer interface (BCI) is a system that enables control of devices or communication with other persons through cerebral activity alone, without using muscles. The main application for BCIs is assistive technology for disabled persons. Examples of devices that can be controlled by BCIs are artificial limbs, spelling devices, or environment control systems. BCI research has seen renewed interest in recent years, and it has been convincingly shown that communication via a BCI is in principle feasible. However, present-day systems still have shortcomings that prevent their widespread application. In part, these shortcomings are caused by limitations in the functionality of the pattern recognition algorithms used for discriminating brain signals in BCIs. Moreover, BCIs are often tested exclusively with able-bodied persons instead of with the target user group, namely disabled persons. The goal of this thesis is to extend the functionality of pattern recognition algorithms for BCI systems and to move towards systems that are helpful for disabled users. We discuss extensions of linear discriminant analysis (LDA), which is a simple but efficient method for pattern recognition. In particular, a framework from Bayesian machine learning, the so-called evidence framework, is applied to LDA. The resulting algorithm learns classifiers quickly, robustly, and fully automatically. An extension of this algorithm makes it possible to automatically reduce the number of sensors needed for acquisition of brain signals; more specifically, the algorithm performs electrode selection. The algorithm for electrode selection is based on a concept known as automatic relevance determination (ARD) in Bayesian machine learning. The last part of the algorithmic development in this thesis concerns methods for computing accurate estimates of class probabilities in LDA-like classifiers.
These probabilities are used to build a BCI that dynamically adapts the amount of acquired data, so that a preset, approximate bound on the probability of misclassification is not exceeded. To test the algorithms described in this thesis, a BCI specifically tailored for disabled persons is introduced. The system uses electroencephalogram (EEG) signals and is based on the P300 evoked potential. Datasets recorded from five disabled and four able-bodied subjects are used to show that the Bayesian version of LDA outperforms plain LDA in terms of classification accuracy. The impact of different static electrode configurations on classification accuracy is also tested. In addition, experiments with the same datasets demonstrate that the algorithm for electrode selection is computationally efficient, yields physiologically plausible results, and improves classification accuracy over static electrode configurations. Classification accuracy is further improved by dynamically adapting the amount of acquired data. Besides the datasets recorded from disabled and able-bodied subjects, benchmark datasets from BCI competitions are used to show that the algorithms discussed in this thesis are competitive with state-of-the-art EEG classification algorithms. While the experiments in this thesis are performed exclusively with P300 datasets, the presented algorithms might also be useful for other types of EEG-based BCI systems, because functionalities such as robust and automatic computation of classifiers, electrode selection, and estimation of class probabilities are useful in many BCI systems. Seen from a more general point of view, many applications that rely on the classification of cerebral activity could benefit from the methods developed in this thesis.
Among the potential applications are interrogative polygraphy ("lie detection") and clinical applications, for example coma outcome prognosis and depth-of-anesthesia monitoring.
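The dynamic-stopping idea described above, acquiring P300 repetitions until the estimated error probability falls below a preset bound, can be sketched by accumulating per-repetition class likelihoods into a posterior. The likelihood values in the example are synthetic; how the real system computes them from EEG is the Bayesian-LDA machinery of the thesis, not shown here:

```python
import numpy as np

def dynamic_stop(prob_stream, error_bound):
    """Accumulate per-repetition class likelihoods (in the log domain)
    until the posterior of the best class reaches 1 - error_bound.
    Returns (decided_class_index, repetitions_used)."""
    log_post = None
    for n, p in enumerate(prob_stream, start=1):
        lp = np.log(np.asarray(p, dtype=float))
        log_post = lp if log_post is None else log_post + lp
        post = np.exp(log_post - log_post.max())  # normalize stably
        post /= post.sum()
        if post.max() >= 1.0 - error_bound:       # bound satisfied: stop early
            return int(post.argmax()), n
    return int(post.argmax()), n                  # budget exhausted: best guess

# Noisy two-class outputs that mildly favour class 1 on every repetition.
stream = [[0.4, 0.6]] * 10
decision, used = dynamic_stop(stream, error_bound=0.05)
```

With a 0.6-vs-0.4 preference per repetition, the posterior for class 1 is 1 / (1 + (2/3)^n), which first exceeds 0.95 at n = 8, so the interface stops after eight repetitions instead of using all ten; cleaner single-trial evidence would let it stop even sooner.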