
    Detecting Limb Movements by Reading Minds

    Using EEG, electrical activity recorded on the human scalp can be used to control a computer. In recent years, machine learning techniques have made such systems more accurate and able to adapt to the individual. In this thesis, the "Sub-Band Common Spatial Patterns" method for EEG classification is implemented. The method was then extended in several ways with the aim of increasing accuracy. It was found that accuracy can be increased by: 1) regularising the covariance matrix estimate used in the Common Spatial Patterns algorithm, 2) adding the evolution of signal power over time as input to the classifier, 3) using L1-regularised Logistic Regression as the classifier and to eliminate classifier features, and 4) applying boosting to the final classifier. Together, these changes increased accuracy from 86.4% to 91.7% on the publicly available BCI Competition III (IVa) dataset.
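
    A minimal Python sketch of two of the extensions above, shrinkage-regularised covariance estimation for CSP and L1-regularised Logistic Regression, assuming band-pass-filtered trials of shape (n_trials, n_channels, n_samples); the function names, shrinkage scheme and parameter values are illustrative, not the thesis's exact implementation.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.linear_model import LogisticRegression

        def shrinkage_cov(trials, alpha=0.1):
            """Trial-averaged covariance, shrunk toward a scaled identity (assumed scheme)."""
            covs = [x @ x.T / np.trace(x @ x.T) for x in trials]   # x: (n_channels, n_samples)
            c = np.mean(covs, axis=0)
            return (1 - alpha) * c + alpha * np.trace(c) / c.shape[0] * np.eye(c.shape[0])

        def csp_filters(trials_a, trials_b, n_pairs=3, alpha=0.1):
            """Common Spatial Patterns via a generalised eigendecomposition."""
            ca, cb = shrinkage_cov(trials_a, alpha), shrinkage_cov(trials_b, alpha)
            _, vecs = eigh(ca, ca + cb)                            # eigenvalues sorted ascending
            keep = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
            return vecs[:, keep]                                   # (n_channels, 2 * n_pairs)

        def log_var_features(trials, w):
            """Log-variance (band power) of the spatially filtered trials."""
            z = np.einsum('ck,tcs->tks', w, trials)
            var = z.var(axis=2)
            return np.log(var / var.sum(axis=1, keepdims=True))

        # L1-regularised Logistic Regression both classifies and zeroes out weak features.
        clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)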

    Study of Adaptation Methods Towards Advanced Brain-computer Interfaces

    Ph.D. (Doctor of Philosophy)

    Machine learning based brain signal decoding for intelligent adaptive deep brain stimulation

    Sensing-enabled implantable devices and next-generation neurotechnology allow real-time adjustments of invasive neuromodulation. The identification of symptom- and disease-specific biomarkers in invasive brain signal recordings has inspired the idea of demand-dependent adaptive deep brain stimulation (aDBS). Expanding the clinical utility of aDBS with machine learning may hold the potential for the next breakthrough in the therapeutic success of clinical brain-computer interfaces. To this end, sophisticated machine learning algorithms optimized for decoding brain states from neural time series must be developed. To support this venture, this review summarizes the current state of machine learning studies for invasive neurophysiology. After a brief introduction to machine learning terminology, the transformation of brain recordings into meaningful features for decoding of symptoms and behavior is described. Commonly used machine learning models are explained and analyzed from the perspective of their utility for aDBS. This is followed by a critical review of good practices for training and testing to ensure conceptual and practical generalizability for real-time adaptation in clinical settings. Finally, first studies combining machine learning with aDBS are highlighted. This review takes a glimpse into the promising future of intelligent adaptive DBS (iDBS) and concludes by identifying four key ingredients on the road to successful clinical adoption: i) multidisciplinary research teams, ii) publicly available datasets, iii) open-source algorithmic solutions, and iv) strong worldwide research collaborations.
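
    As an illustration of the kind of feature-plus-classifier pipeline such studies describe, the Python sketch below extracts band-power features from short epochs of neural recordings and feeds them to a simple linear decoder; the frequency bands, sampling rate and variable names are hypothetical, not taken from any specific aDBS study.

        import numpy as np
        from scipy.signal import welch
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LogisticRegression

        BANDS = {'theta': (4, 8), 'beta': (13, 35), 'gamma': (60, 90)}   # assumed bands

        def band_power_features(epochs, fs):
            """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands)."""
            feats = []
            for x in epochs:
                # Per-channel power spectra; window capped at epoch length.
                f, pxx = welch(x, fs=fs, nperseg=min(x.shape[-1], int(fs)))
                feats.append(np.concatenate(
                    [pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in BANDS.values()]))
            return np.array(feats)

        # A real-time-friendly linear decoder over standardised band powers.
        decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        # decoder.fit(band_power_features(train_epochs, fs=1000), train_labels)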

    Data-Driven Transducer Design and Identification for Internally-Paced Motor Brain Computer Interfaces: A Review

    Brain-Computer Interfaces (BCIs) are systems that establish a direct communication pathway between the users' brain activity and external effectors. They offer the potential to improve the quality of life of motor-impaired patients. Motor BCIs aim to permit severely motor-impaired users to regain limb mobility by controlling orthoses or prostheses. In particular, motor BCI systems benefit patients if the decoded actions reflect the users' intentions with an accuracy that enables them to efficiently interact with their environment. One of the main challenges of BCI systems is to adapt the BCI's signal translation blocks to the user to reach a high decoding accuracy. This paper reviews the literature on data-driven and user-specific transducer design and identification approaches, focusing on internally-paced motor BCIs. In particular, continuous kinematic biomimetic and mental-task decoders are reviewed. Furthermore, static and dynamic decoding approaches, linear and non-linear decoding, and offline and real-time identification algorithms are considered. The current progress and challenges related to the design of clinically compatible motor BCI transducers are additionally discussed.
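
    A minimal sketch of one decoder family discussed here, a static linear (biomimetic) kinematic decoder: ridge regression from lagged neural features to continuous kinematics. The feature shapes, lag count and regularisation strength are illustrative assumptions rather than values from the reviewed systems.

        import numpy as np
        from sklearn.linear_model import Ridge

        def add_lags(features, n_lags=5):
            """Stack the current and n_lags past feature vectors at each time step."""
            n_t, n_f = features.shape
            lagged = np.zeros((n_t, n_f * (n_lags + 1)))
            for k in range(n_lags + 1):
                lagged[k:, k * n_f:(k + 1) * n_f] = features[:n_t - k]
            return lagged

        def fit_decoder(neural, kinematics, n_lags=5, alpha=1.0):
            """neural: (n_timesteps, n_features); kinematics: (n_timesteps, n_dims), e.g. 2-D velocity."""
            model = Ridge(alpha=alpha)
            model.fit(add_lags(neural, n_lags), kinematics)
            return model

        # At run time the same lag stacking is applied to the latest feature window and
        # model.predict() yields the instantaneous kinematic command for the effector.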

    Modelling and Classification of Motor Imagery EEG for BCI

    Ph.D. (Doctor of Philosophy)

    Data Augmentation for Deep-Learning-Based Electroencephalography

    Background: Data augmentation (DA) has recently been demonstrated to achieve considerable performance gains for deep learning (DL): increased accuracy and stability and reduced overfitting. Some electroencephalography (EEG) tasks suffer from a low samples-to-features ratio, severely reducing DL effectiveness. DA with DL thus holds transformative promise for EEG processing, much as DL has revolutionized computer vision. New method: We review trends and approaches to DA for DL in EEG to address: Which DA approaches exist and are common for which EEG tasks? What input features are used? And what kind of accuracy gain can be expected? Results: DA for DL on EEG began about five years ago and its use is steadily increasing. We grouped DA techniques (noise addition, generative adversarial networks, sliding windows, sampling, Fourier transform, recombination of segmentation, and others) and EEG tasks (seizure detection, sleep stages, motor imagery, mental workload, emotion recognition, motor tasks, and visual tasks). DA efficacy varied considerably across techniques. Noise addition and sliding windows provided the highest accuracy boost; mental workload benefitted most from DA. Sliding window, noise addition, and sampling methods were most common for seizure detection, mental workload, and sleep stages, respectively. Comparison with existing methods: The percentage of decoding accuracy explained by DA beyond unaugmented accuracy varied between 8% for recombination of segmentation and 36% for noise addition, and from 14% for motor imagery to 56% for mental workload (29% on average). Conclusions: DA is increasingly used and considerably improves DL decoding accuracy on EEG. Additional publications, if adhering to our reporting guidelines, will facilitate more detailed analysis.
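
    A short Python sketch of the two augmentation families reported as most effective, noise addition and sliding windows, for EEG trials shaped (n_trials, n_channels, n_samples); the noise scale, window length and step are illustrative parameters, not values taken from the reviewed studies.

        import numpy as np

        def add_gaussian_noise(trials, labels, noise_std=0.1, n_copies=1, seed=0):
            """Append noisy copies of each trial, noise scaled to each trial's own std."""
            rng = np.random.default_rng(seed)
            aug_x, aug_y = [trials], [labels]
            for _ in range(n_copies):
                noise = (rng.standard_normal(trials.shape)
                         * noise_std * trials.std(axis=-1, keepdims=True))
                aug_x.append(trials + noise)
                aug_y.append(labels)
            return np.concatenate(aug_x), np.concatenate(aug_y)

        def sliding_windows(trials, labels, win_len, step):
            """Cut each trial into overlapping windows that inherit the trial's label."""
            xs, ys = [], []
            for start in range(0, trials.shape[-1] - win_len + 1, step):
                xs.append(trials[..., start:start + win_len])
                ys.append(labels)
            return np.concatenate(xs), np.concatenate(ys)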

    Toward an Imagined Speech-Based Brain Computer Interface Using EEG Signals

    Individuals with physical disabilities face difficulties in communication. A number of neuromuscular impairments could prevent people from using available communication aids, because such aids require some degree of muscle movement. This makes brain-computer interfaces (BCIs) a potentially promising alternative communication technology for these people. Electroencephalographic (EEG) signals are commonly used in BCI systems to capture non-invasively the neural representations of intended, internal and imagined activities that are not physically or verbally evident. Examples include motor and speech imagery activities. Since 2006, researchers have become increasingly interested in classifying different types of imagined speech from EEG signals. However, the field still has a limited understanding of several issues, including experiment design, stimulus type, training, calibration and the examined features. The main aim of the research in this thesis is to advance automatic recognition of imagined speech using EEG signals by addressing a variety of issues that have not been solved in previous studies. These include (1) improving the discrimination between imagined speech and non-speech tasks, (2) examining temporal parameters to optimise the recognition of imagined words and (3) providing a new feature extraction framework for improving EEG-based imagined speech recognition by considering temporal information after reducing within-session temporal non-stationarities. For the discrimination of speech versus non-speech, EEG data was collected during the imagination of randomly presented and semantically varying words. The non-speech tasks involved attention to visual stimuli and resting. Time-domain and spatio-spectral features were examined in different time intervals. Above-chance-level classification accuracies were achieved for each word and for groups of words compared to the non-speech tasks. To classify imagined words, EEG data related to the imagination of five words was collected. In addition to word classification, the impact of experimental parameters on classification accuracy was examined. Optimising these parameters is important for improving the rate and speed of recognising unspoken speech in online applications. The parameters included the training size, the classification algorithm, the time interval used for feature extraction and the use of imagination time length as a classification feature. Our extensive results showed that a Random Forest classifier with features extracted using the Discrete Wavelet Transform from a fixed 4-second EEG time frame yielded the highest average classification accuracy of 87.93% for the five imagined words. To minimise within-class temporal variations, a novel feature extraction framework based on dynamic time warping (DTW) was developed. Using linear discriminant analysis as the classifier, the proposed framework yielded an average accuracy of 72.02% in the classification of imagined speech versus silence and 52.5% in the classification of five words. These results significantly outperformed a baseline configuration of state-of-the-art time-domain features.
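
    A hedged sketch of the word-classification pipeline summarised above: Discrete Wavelet Transform features from fixed-length EEG segments fed to a Random Forest. The wavelet family, decomposition level and summary statistics are illustrative choices, not necessarily the thesis's exact configuration.

        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier

        def dwt_features(epochs, wavelet='db4', level=4):
            """epochs: (n_epochs, n_channels, n_samples) -> per-sub-band summary statistics."""
            feats = []
            for x in epochs:
                coeffs = pywt.wavedec(x, wavelet, level=level, axis=-1)  # list of (n_channels, n_coeffs)
                stats = [np.concatenate([c.mean(axis=-1), c.std(axis=-1), (c ** 2).mean(axis=-1)])
                         for c in coeffs]
                feats.append(np.concatenate(stats))
            return np.array(feats)

        # One label per imagined word; the forest handles the high-dimensional DWT statistics.
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        # clf.fit(dwt_features(train_epochs), train_word_labels)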