
    Applying evolution strategies to preprocessing EEG signals for brain–computer interfaces

    Appropriate preprocessing of EEG signals is crucial for achieving high classification accuracy in Brain–Computer Interfaces (BCI). The raw EEG data are continuous time-domain signals that can be transformed by means of filters. Among these, spatial filters and the selection of appropriate frequency bands are known to improve classification accuracy. However, because of the high variability among users, the filters must be properly adjusted to every user's data before competitive results can be obtained. In this paper, we propose using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to tune the filters automatically. Spatial and frequency-selection filters are evolved to minimize both classification error and the number of frequency bands used. This evolutionary approach to filter optimization has been tested on data for different users from the BCI-III competition. The evolved filters provide higher accuracy than approaches used in the competition, and results are consistent across different runs of CMA-ES. This work has been funded by the Spanish Ministry of Science under contracts TIN2008-06491-C04-03 (MSTAR project) and TIN2011-28336 (MOVES project).
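
    A minimal sketch of the band-selection half of this idea, assuming the pycma library and synthetic data: a continuous CMA-ES genome is thresholded into a binary frequency-band mask, and fitness combines cross-validated classification error with a penalty on the number of active bands, mirroring the two objectives above. The band grid, filter order, penalty weight and LDA classifier are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
import cma  # pip install cma
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                    # sampling rate (Hz), assumed
X = rng.standard_normal((80, 8, 500))       # trials x channels x samples (synthetic)
y = rng.integers(0, 2, 80)                  # binary class labels (synthetic)
bands = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28)]

def band_logvar(trials, lo, hi):
    """Band-pass filter all trials and return log-variance per channel."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(filtered.var(axis=-1))

def fitness(genome):
    mask = np.asarray(genome) > 0           # continuous genome -> band on/off
    if not mask.any():
        return 2.0                          # no bands selected: worst fitness
    feats = np.concatenate(
        [band_logvar(X, *bands[i]) for i in np.flatnonzero(mask)], axis=1)
    acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
    return (1 - acc) + 0.01 * mask.sum()    # error + penalty on band count

es = cma.CMAEvolutionStrategy(np.zeros(len(bands)), 0.5, {"maxiter": 20})
es.optimize(fitness)
best_mask = np.asarray(es.result.xbest) > 0
print("selected bands:", [bands[i] for i in np.flatnonzero(best_mask)])
```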

    Object Segmentation in Images using EEG Signals

    This paper explores the potential of brain-computer interfaces in segmenting objects from images. Our approach is centered on designing an effective method for displaying image parts to users such that they generate measurable brain reactions. When an image region, specifically a block of pixels, is displayed, we estimate the probability that the block contains the object of interest using a score based on EEG activity. After several such blocks have been displayed, the resulting probability map is binarized and combined with the GrabCut algorithm to segment the image into object and background regions. This study shows that BCI and simple EEG analysis are useful in locating object boundaries in images. (Preprint, prior to peer review, of a paper accepted to the 22nd ACM International Conference on Multimedia, November 3-7, 2014, Orlando, Florida, USA, High Risk High Reward session; 10 pages.)
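
    A minimal sketch of the final segmentation step, assuming OpenCV's GrabCut seeded from a synthetic per-block probability map; the image, block grid, 0.6 threshold and iteration count are hypothetical stand-ins for the paper's pipeline.

```python
import numpy as np
import cv2  # pip install opencv-python

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (128, 128, 3), dtype=np.uint8)  # placeholder image
prob = rng.random((8, 8))                   # EEG score per 16x16-pixel block

# Binarize block probabilities and expand to a pixel-level GrabCut seed mask.
mask = np.where(prob > 0.6, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
mask = cv2.resize(mask, (128, 128), interpolation=cv2.INTER_NEAREST)

bgd_model = np.zeros((1, 65), np.float64)   # GrabCut's internal GMM buffers
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

# Pixels labeled (probably-)foreground form the object segment.
segment = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
print("foreground pixels:", int(segment.sum()))
```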

    A study on temporal segmentation strategies for extracting common spatial patterns for brain computer interfacing

    Brain computer interfaces (BCI) create a new approach to human-computer communication, allowing the user to control a system simply by performing mental tasks such as motor imagery. This paper proposes and analyses different strategies for temporal segmentation when extracting common spatial patterns from the brain signals associated with these tasks, leading to improved BCI performance.
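
    A small sketch of one such strategy comparison, assuming MNE's CSP implementation and synthetic motor-imagery data: each candidate time window is scored by the cross-validated accuracy of a CSP-plus-LDA pipeline. The window grid itself is an illustrative assumption.

```python
import numpy as np
from mne.decoding import CSP  # pip install mne
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 16, 750))      # trials x channels x samples
y = rng.integers(0, 2, 60)                  # two motor-imagery classes

# Candidate temporal segments (sample ranges), e.g. early/mid/late/centered.
windows = [(0, 250), (250, 500), (500, 750), (125, 625)]
for start, stop in windows:
    clf = make_pipeline(CSP(n_components=4, log=True),
                        LinearDiscriminantAnalysis())
    acc = cross_val_score(clf, X[:, :, start:stop], y, cv=5).mean()
    print(f"window [{start}, {stop}): CV accuracy = {acc:.2f}")
```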

    Evolving spatial and frequency selection filters for brain-computer interfaces

    Proceedings of: 2010 IEEE World Congress on Computational Intelligence (WCCI 2010), Barcelona, Spain, July 18-23, 2010. Abstract—Machine learning techniques are routinely applied to Brain Computer Interfaces in order to learn a classifier for a particular user. However, research has shown that classification techniques perform better if the EEG signal is first preprocessed to provide high-quality attributes to the classifier. Spatial and frequency-selection filters can be applied for this purpose. In this paper, we propose to automatically optimize these filters by means of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The technique has been tested on data from the BCI-III competition, which supplies both raw and manually filtered datasets, allowing a direct comparison. Results show that CMA-ES obtains higher accuracies than those achieved with the manually tuned filters. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
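
    Complementing the band-mask sketch above, a minimal sketch of the spatial-filter half: the filter matrix is flattened into the CMA-ES genome and scored by cross-validated LDA accuracy on log-variance features. All shapes, the number of filters and the optimizer settings are assumptions, not the paper's configuration.

```python
import numpy as np
import cma  # pip install cma
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8, 400))       # trials x channels x samples
y = rng.integers(0, 2, 60)
n_filters, n_channels = 2, X.shape[1]

def fitness(genome):
    W = np.asarray(genome).reshape(n_filters, n_channels)  # spatial filter
    projected = np.einsum("fc,tcs->tfs", W, X)             # apply to all trials
    feats = np.log(projected.var(axis=-1) + 1e-12)         # log-variance features
    acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
    return 1 - acc                                         # minimize CV error

xbest, es = cma.fmin2(fitness, np.zeros(n_filters * n_channels) + 0.1, 0.3,
                      {"maxiter": 15, "verbose": -9})
print("best CV error:", es.result.fbest)
```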

    True zero-training brain-computer interfacing: an online study

    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data are collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with limited ability to concentrate. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. By constantly re-analyzing the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach, and of its post-hoc variant, to the standard supervised calibration-based approach for n = 10 healthy users. To assess the learning behavior of our approach, it is trained unsupervised from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
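
    A deliberately simplified toy sketch of this relabel-and-refit loop on synthetic ERP-like features: a randomly initialized mean-difference decoder re-decodes all previously seen trials after each update (the post-hoc correction), then refits itself on its own labels. Because unsupervised decoders can converge with flipped labels, accuracy is reported polarity-free; none of the constants below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
mu = 1.2 * rng.standard_normal(d)           # assumed target-vs-nontarget ERP shift
y_true = rng.integers(0, 2, 300)            # ground truth (never shown to decoder)
Xf = rng.standard_normal((300, d)) + np.outer(y_true, mu)

w, b = rng.standard_normal(d), 0.0          # randomly initialized linear decoder
for trial in range(20, 301, 20):            # data arrive block by block
    seen = Xf[:trial]
    labels = (seen @ w + b > 0)             # decode everything seen so far
    if labels.all() or not labels.any():
        continue                            # degenerate split: wait for more data
    m1, m0 = seen[labels].mean(0), seen[~labels].mean(0)
    w, b = m1 - m0, -(m1 - m0) @ (m1 + m0) / 2   # mean-difference refit
    pred = (seen @ w + b > 0)               # post-hoc re-decoding of past trials
    acc = (pred == y_true[:trial].astype(bool)).mean()
    print(f"{trial:3d} trials: accuracy = {max(acc, 1 - acc):.2f} (polarity-free)")
```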

    Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method combines a linear mixed-effects statistical model, wavelet transform and spatial filtering, and aims at characterizing localized discriminant features in multi-sensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. a discriminant signal) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach that is proven to be effective. This paper improves on earlier results on similar problems, and the three main ingredients all play an important role.
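
    A skeletal sketch of such a pipeline, assuming PyWavelets for the DWT and a shrinkage LDA standing in for the Bayes plug-in classifier; the mixed-model decomposition is simplified away, the data are synthetic, and for brevity coefficient selection happens outside the cross-validation loop (which a rigorous evaluation would avoid).

```python
import numpy as np
import pywt  # pip install PyWavelets
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4, 256))       # small dataset: trials x channels x samples
y = (np.arange(50) % 5 == 0).astype(int)    # unbalanced labels: rare events
X[y == 1, :, 100:140] += 0.8                # toy "evoked" deflection in rare class

def dwt_features(trial):
    """DWT per channel, all coefficients flattened into one trial vector."""
    coeffs = [np.concatenate(pywt.wavedec(ch, "db4", level=4)) for ch in trial]
    return np.concatenate(coeffs)

F = np.array([dwt_features(t) for t in X])

# Keep the k coefficients with the largest between-class mean difference
# (a crude stand-in for the paper's discriminant subspace projection).
score = np.abs(F[y == 1].mean(0) - F[y == 0].mean(0))
top = np.argsort(score)[-30:]

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # robust small-sample fit
print("CV AUC:", cross_val_score(clf, F[:, top], y, cv=5,
                                 scoring="roc_auc").mean())
```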

    Learning Signal Representations for EEG Cross-Subject Channel Selection and Trial Classification

    EEG technology finds applications in several domains. Currently, most EEG systems require subjects to wear several electrodes on the scalp to be effective. However, many channels carry noisy or redundant information, lengthen preparation times and increase the computational cost of any automated EEG decoding system. One way to improve the signal-to-noise ratio and classification accuracy is to combine channel selection with feature extraction, but EEG signals are known to present high inter-subject variability. In this work we introduce a novel algorithm for subject-independent channel selection of EEG recordings. Considering multi-channel trial recordings as statistical units and the EEG decoding task as the class of reference, the algorithm (i) exploits channel-specific 1D Convolutional Neural Networks (1D-CNNs) as feature extractors trained in a supervised fashion to maximize class separability; (ii) reduces the high-dimensional multi-channel trial representation to a single trial vector by concatenating the channels' embeddings; and (iii) recovers the complex inter-channel relationships during channel selection by exploiting an ensemble of AutoEncoders (AE) to identify from these vectors the channels most relevant to classification. After training, the algorithm can be applied to signals from new subjects by transferring only the selected channel-specific 1D-CNNs, yielding low-dimensional, highly informative trial vectors that can be fed to any classifier.
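
    A minimal PyTorch sketch of stages (i) and (ii): one small 1D-CNN per channel encodes its signal, and the per-channel embeddings are concatenated into a single trial vector. Stage (iii), the autoencoder ensemble, is omitted, and all layer sizes are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ChannelEncoder(nn.Module):
    """Channel-specific 1D-CNN producing a fixed-size embedding."""
    def __init__(self, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(8 * 4, emb_dim))
    def forward(self, x):               # x: (batch, 1, samples)
        return self.net(x)

class TrialEncoder(nn.Module):
    """Concatenates all channel embeddings into one trial vector."""
    def __init__(self, n_channels=8, emb_dim=16):
        super().__init__()
        self.encoders = nn.ModuleList(ChannelEncoder(emb_dim)
                                      for _ in range(n_channels))
    def forward(self, x):               # x: (batch, channels, samples)
        embs = [enc(x[:, c:c + 1]) for c, enc in enumerate(self.encoders)]
        return torch.cat(embs, dim=1)   # (batch, channels * emb_dim)

trials = torch.randn(32, 8, 250)        # synthetic batch of multi-channel trials
print(TrialEncoder()(trials).shape)     # torch.Size([32, 128])
```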

    Deep recurrent–convolutional neural network for classification of simultaneous EEG–fNIRS signals

    Brain–computer interface (BCI) is a powerful system for communication between the brain and the outside world. Traditional BCI systems work with electroencephalogram (EEG) signals only. Recently, researchers have combined EEG with other signals to improve the performance of BCI systems; among these, the combination of EEG with functional near-infrared spectroscopy (fNIRS) has achieved favourable results. Most studies, however, treat EEG or fNIRS recordings as chain-like sequences and ignore the complex correlations between adjacent signals, both in time and across channel locations. In this study, a deep neural network model is introduced that decodes the brain's intended task from temporal and spatial features. The proposed model incorporates the spatial relationship between EEG and fNIRS signals by transforming these chain-like sequences into hierarchical rank-three tensors. Experiments show that the proposed model achieves a precision of 99.6%.
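
    A minimal PyTorch sketch of the recurrent-convolutional idea: EEG and fNIRS trials are stacked into a rank-three tensor per trial (modality x channel x time), a 2D convolution captures structure across modalities and channels, and an LSTM models the temporal sequence. The fusion scheme, layer sizes and the assumption that fNIRS is resampled to the EEG time grid are all illustrative.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=16, n_classes=3):
        super().__init__()
        # Convolve across both modalities and all channels at each time step.
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=(n_channels, 5), padding=(0, 2)),
            nn.ReLU())
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)
    def forward(self, x):               # x: (batch, 2, channels, time)
        h = self.conv(x).squeeze(2)     # -> (batch, 16, time)
        h = h.transpose(1, 2)           # -> (batch, time, 16) for the LSTM
        out, _ = self.lstm(h)
        return self.head(out[:, -1])    # classify from the last hidden state

eeg = torch.randn(8, 16, 200)           # batch x channels x time (synthetic)
fnirs = torch.randn(8, 16, 200)         # resampled to the same grid (assumed)
x = torch.stack([eeg, fnirs], dim=1)    # rank-three tensor per trial
print(CNNLSTM()(x).shape)               # torch.Size([8, 3])
```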

    Machine learning based brain signal decoding for intelligent adaptive deep brain stimulation

    Sensing-enabled implantable devices and next-generation neurotechnology allow real-time adjustment of invasive neuromodulation. The identification of symptom- and disease-specific biomarkers in invasive brain signal recordings has inspired the idea of demand-dependent adaptive deep brain stimulation (aDBS). Expanding the clinical utility of aDBS with machine learning may hold the potential for the next breakthrough in the therapeutic success of clinical brain computer interfaces. To this end, sophisticated machine learning algorithms optimized for decoding brain states from neural time series must be developed. To support this venture, this review summarizes the current state of machine learning studies for invasive neurophysiology. After a brief introduction to machine learning terminology, the transformation of brain recordings into meaningful features for decoding symptoms and behavior is described. Commonly used machine learning models are explained and analyzed from the perspective of their utility for aDBS. This is followed by a critical review of good practices for training and testing to ensure conceptual and practical generalizability for real-time adaptation in clinical settings. Finally, first studies combining machine learning with aDBS are highlighted. This review takes a glimpse into the promising future of intelligent adaptive DBS (iDBS) and concludes by identifying four key ingredients on the road to successful clinical adoption: i) multidisciplinary research teams, ii) publicly available datasets, iii) open-source algorithmic solutions, and iv) strong worldwide research collaborations.
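
    As a rough illustration of the feature step this review describes, the sketch below computes band-power features (including beta, a commonly cited aDBS biomarker) from synthetic one-second LFP windows and cross-validates a small decoder; the sampling rate, bands, toy beta signature and model choice are all assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 1000                                   # LFP sampling rate (Hz), assumed
segments = rng.standard_normal((200, fs))   # 200 one-second windows (synthetic)
state = rng.integers(0, 2, 200)             # e.g. movement vs. rest labels
t = np.arange(fs) / fs
segments[state == 1] += 0.5 * np.sin(2 * np.pi * 20 * t)  # toy beta signature

bands = {"theta": (4, 8), "beta": (13, 35), "gamma": (60, 90)}
f, psd = welch(segments, fs=fs, nperseg=256, axis=-1)
feats = np.column_stack([psd[:, (f >= lo) & (f < hi)].mean(-1)
                         for lo, hi in bands.values()])

# A small, regularized decoder, echoing the review's emphasis on
# generalizable training practice (here: plain cross-validation).
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      np.log(feats), state, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```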

    Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings requires adaptive strategies and scalable algorithms that need minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in the high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from the audio and video. To our knowledge, this is the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that this unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking and resting; we verify its accuracy by comparison with manual annotations. By projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
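
    A minimal sketch of the two-stage idea on synthetic data: cluster high-dimensional neural feature vectors without labels, then name each cluster by the automatically extracted behavior label that co-occurs with it most often. The feature construction and the three behaviors are illustrative stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
behaviors = np.array(["rest", "speak", "move"])[rng.integers(0, 3, 600)]
# Synthetic ECoG feature vectors with some behavior-dependent structure.
offsets = {"rest": 0.0, "speak": 2.0, "move": -2.0}
F = (rng.standard_normal((600, 40))
     + np.array([offsets[b] for b in behaviors])[:, None])

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(F)

# Annotate each discovered cluster with its most frequent co-occurring label.
for c in range(3):
    labels, counts = np.unique(behaviors[clusters == c], return_counts=True)
    print(f"cluster {c}: mostly '{labels[counts.argmax()]}' "
          f"({counts.max() / counts.sum():.0%} of windows)")
```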