2,493 research outputs found

    Translation of EEG spatial filters from resting to motor imagery using independent component analysis.

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) often use spatial filters to improve the signal-to-noise ratio of task-related EEG activity. To obtain robust spatial filters, large amounts of labeled data, which are often expensive and labor-intensive to collect, need to be gathered in a training procedure before online BCI control. Several studies have recently developed zero-training methods using a session-to-session scenario to alleviate this problem. To our knowledge, a state-to-state translation, which applies spatial filters derived from one state to another, has never been reported. This study proposes a state-to-state, zero-training method to construct spatial filters for extracting EEG changes induced by motor imagery. Independent component analysis (ICA) was separately applied to the multi-channel EEG in the resting and the motor imagery states to obtain motor-related spatial filters. The resultant spatial filters were then applied to single-trial EEG to differentiate left- and right-hand imagery movements. On a motor imagery dataset collected from nine subjects, comparable classification accuracies were obtained using ICA-based spatial filters derived from the two states (motor imagery: 87.0%, resting: 85.9%), both significantly higher than the accuracy achieved using monopolar scalp EEG data (80.4%). The proposed method considerably increases the practicality of BCI systems in real-world environments because it is less sensitive to electrode misalignment across different sessions or days and does not require annotated pilot data to derive spatial filters.
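
    A minimal sketch of the state-to-state idea described above, assuming scikit-learn's FastICA as the ICA implementation and log-variance of the filtered sources as a band-power proxy; the array names, shapes and the LDA classifier are illustrative placeholders, not the authors' exact processing chain.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data shapes (not from the paper):
#   resting_eeg: continuous resting-state EEG, shape (n_samples, n_channels)
#   mi_trials:   single-trial motor-imagery EEG, shape (n_trials, n_samples, n_channels)
#   labels:      1 = left-hand imagery, 2 = right-hand imagery

def fit_spatial_filters(resting_eeg, n_components=16):
    """Fit ICA on resting-state EEG; rows of the unmixing matrix act as spatial filters."""
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(resting_eeg)
    return ica.components_                      # shape (n_components, n_channels)

def log_variance_features(mi_trials, filters):
    """Apply the resting-state filters to each trial; log-variance serves as a band-power proxy."""
    feats = []
    for trial in mi_trials:                     # trial: (n_samples, n_channels)
        sources = trial @ filters.T             # (n_samples, n_components)
        feats.append(np.log(sources.var(axis=0) + 1e-12))
    return np.array(feats)

# Example run on random placeholder data.
rng = np.random.default_rng(0)
resting_eeg = rng.standard_normal((5000, 32))
mi_trials = rng.standard_normal((40, 500, 32))
labels = rng.integers(1, 3, size=40)

W = fit_spatial_filters(resting_eeg)            # filters derived from the resting state only
X = log_variance_features(mi_trials, W)         # features for single-trial motor-imagery EEG
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```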

    An uncued brain-computer interface using reservoir computing

    Brain-Computer Interfaces are an important and promising avenue for possible next-generation assistive devices. In this article, we show how Reservoir Computing – a computationally efficient way of training recurrent neural networks – combined with a novel feature selection algorithm based on Common Spatial Patterns can be used to drastically improve performance in an uncued motor-imagery-based Brain-Computer Interface (BCI). The objective of this BCI is to label each sample of EEG data as either motor imagery class 1 (e.g. left hand), motor imagery class 2 (e.g. right hand) or a rest state (i.e., no motor imagery). When comparing the results of the proposed method with the results from the BCI Competition IV (where this dataset was introduced), it turns out that the proposed method outperforms the winner of the competition.
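
    A toy echo state network in NumPy illustrating the reservoir computing idea above: a fixed random recurrent network whose states are read out by ridge regression and labeled sample-by-sample. The reservoir size, scaling and the assumption that each input sample is already a CSP-derived feature vector are illustrative, not the authors' settings.

```python
import numpy as np

class TinyESN:
    """Minimal echo state network: fixed random reservoir, ridge-regression readout."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, ridge=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale the recurrent weights so the largest eigenvalue magnitude equals spectral_radius.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.ridge = ridge

    def _states(self, U):
        """Run the reservoir over a (n_samples, n_in) sequence and collect its states."""
        x = np.zeros(self.W.shape[0])
        states = np.empty((len(U), len(x)))
        for t, u in enumerate(U):
            x = np.tanh(self.W_in @ u + self.W @ x)
            states[t] = x
        return states

    def fit(self, U, y, n_classes):
        S = self._states(U)
        Y = np.eye(n_classes)[y]                  # one-hot target per sample
        # Ridge regression for the readout weights (the only trained part of the network).
        A = S.T @ S + self.ridge * np.eye(S.shape[1])
        self.W_out = np.linalg.solve(A, S.T @ Y)
        return self

    def predict(self, U):
        return np.argmax(self._states(U) @ self.W_out, axis=1)

# Example: labels 0/1/2 could stand for left-hand MI, right-hand MI and rest.
rng = np.random.default_rng(1)
U = rng.standard_normal((1000, 6))                # e.g. 6 CSP band-power features per sample
y = rng.integers(0, 3, size=1000)
esn = TinyESN(n_in=6).fit(U, y, n_classes=3)
print("per-sample predictions:", esn.predict(U)[:10])
```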

    Sub-band common spatial pattern (SBCSP) for brain-computer interface

    A brain-computer interface (BCI) is a system that translates human thoughts into commands. For electroencephalography (EEG)-based BCIs, motor imagery is considered one of the most effective paradigms. Different imagery activities can be classified based on the changes in mu and/or beta rhythms and their spatial distributions. However, the change in these rhythmic patterns varies from one subject to another, which causes an unavoidable, time-consuming fine-tuning process when building a BCI for every subject. To address this issue, we propose a new method called sub-band common spatial pattern (SBCSP). First, we decompose the EEG signals into sub-bands using a filter bank. Subsequently, we apply a discriminative analysis to extract SBCSP features. The SBCSP features are then fed into linear discriminant analyzers (LDA) to obtain scores which reflect the classification capability of each frequency band. Finally, the scores are fused to make a decision. We evaluate two fusion methods: recursive band elimination (RBE) and meta-classifier (MC). We assess our approaches on a standard database from BCI Competition III. We also compare our method with two other approaches that address the same issue. The results show that our method outperforms the other two approaches and achieves a result similar to the best one in the literature, which was obtained by a time-consuming fine-tuning process.
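
    A compact sketch of the SBCSP flow as described: a band-pass filter bank, per-band CSP and LDA scores, then score fusion. It assumes SciPy filters and a classic two-class CSP via a generalized eigendecomposition; the band edges, component counts and the final LDA used as a stand-in fusion step are placeholders rather than the paper's RBE/MC settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical shapes: trials (n_trials, n_channels, n_samples), labels in {0, 1}.
FS = 250                                                   # sampling rate (assumed)
BANDS = [(4, 8), (8, 12), (12, 16), (16, 24), (24, 32)]    # illustrative sub-bands

def bandpass(trials, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials, labels, n_pairs=2):
    """Classic two-class CSP via a generalized eigendecomposition of class covariances."""
    covs = []
    for c in (0, 1):
        X = trials[labels == c]
        covs.append(np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0))
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    idx = np.argsort(vals)
    picks = np.r_[idx[:n_pairs], idx[-n_pairs:]]           # most discriminative directions
    return vecs[:, picks].T                                 # (2*n_pairs, n_channels)

def log_var_features(trials, W):
    Z = np.einsum("fc,ncs->nfs", W, trials)
    return np.log(Z.var(axis=-1))

def sbcsp_scores(trials, labels):
    """Per-band CSP + LDA, keeping each band's decision score for a later fusion stage."""
    scores = []
    for lo, hi in BANDS:
        Xb = bandpass(trials, lo, hi)
        W = csp_filters(Xb, labels)
        F = log_var_features(Xb, W)
        lda = LinearDiscriminantAnalysis().fit(F, labels)
        scores.append(lda.decision_function(F))
    return np.column_stack(scores)                          # (n_trials, n_bands)

rng = np.random.default_rng(2)
trials = rng.standard_normal((60, 22, 500))
labels = rng.integers(0, 2, size=60)
S = sbcsp_scores(trials, labels)
fused = LinearDiscriminantAnalysis().fit(S, labels)         # stand-in for the meta-classifier
print("fused training accuracy:", fused.score(S, labels))
```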

    Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

    One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of a mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field. Comment: To be published as a conference paper at ICLR 201
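
    A hedged PyTorch sketch of the recurrent-convolutional idea: a small CNN applied to each multi-spectral scalp "image" in the sequence, followed by an LSTM and a linear classifier. The topographic interpolation that produces the images is omitted, and the layer sizes are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RecurrentConvNet(nn.Module):
    """Conv-then-recurrent model over a sequence of multi-spectral EEG 'images'.

    Input: (batch, time_steps, 3, 32, 32), where the 3 channels are e.g. theta/alpha/beta
    power maps interpolated onto a 2-D scalp grid (the interpolation step is omitted here).
    """

    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                      # -> (64, 4, 4)
            nn.Flatten(),                                 # -> 1024 features per frame
        )
        self.rnn = nn.LSTM(input_size=1024, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                                 # x: (B, T, 3, 32, 32)
        B, T = x.shape[:2]
        frames = self.cnn(x.reshape(B * T, *x.shape[2:]))   # apply the CNN to every frame
        _, (h, _) = self.rnn(frames.reshape(B, T, -1))       # summarize the frame sequence
        return self.head(h[-1])                              # class logits from the last hidden state

# Example forward pass on placeholder data (7 time frames per trial).
model = RecurrentConvNet(n_classes=4)
x = torch.randn(8, 7, 3, 32, 32)
print(model(x).shape)                                     # torch.Size([8, 4])
```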

    Classifying motor imagery in presence of speech

    In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environments. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate the desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in the presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and that, regardless of the presence of speech, the proposed methods were able to separate left- and right-hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47%, and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.
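
    The abstract does not detail the two classification methods, so the following is only a generic stand-in showing how a multiclass (here 5-class) EEG separation might be cross-validated with LDA on per-trial features; the feature construction and class labels are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical 5-class setup: left MI, right MI, speech, left MI + speech, right MI + speech.
# X holds one feature vector per trial (e.g. log band power per channel); y holds class ids 0-4.
rng = np.random.default_rng(3)
X = rng.standard_normal((150, 64))
y = rng.integers(0, 5, size=150)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5)
print(f"5-class cross-validated accuracy: {acc.mean():.2f} (chance level is about 0.20)")
```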

    Unimanual versus bimanual motor imagery classifiers for assistive and rehabilitative brain computer interfaces

    Bimanual movements are an integral part of everyday activities and are often included in rehabilitation therapies. Yet electroencephalography (EEG)-based assistive and rehabilitative brain computer interface (BCI) systems typically rely on motor imagination (MI) of one limb at a time. In this study we present a classifier which discriminates between uni- and bimanual MI. Ten able-bodied participants took part in cue-based motor execution (ME) and MI tasks of the left (L), right (R) and both (B) hands. A 32-channel EEG was recorded. Three linear discriminant analysis classifiers, based on MI of L-B, R-B and L-R hands, were created, with features based on wide-band Common Spatial Patterns (CSP, 8-30 Hz) and band-specific Common Spatial Patterns (CSPb). Event-related desynchronization (ERD) was significantly stronger during bimanual compared to unimanual ME on both hemispheres. Bimanual MI resulted in bilateral, parietally shifted ERD of intensity similar to unimanual MI. The average classification accuracy for CSP and CSPb was comparable for the L-R task (73±9% and 75±10%, respectively) and for the L-B task (73±11% and 70±9%, respectively). However, for the R-B task (67±3% and 72±6%, respectively) it was significantly higher for CSPb (p=0.0351). Six participants whose L-R classification accuracy exceeded 70% were included in an on-line task a week later, using the unmodified offline CSPb classifier, and achieved 69±3% and 66±3% accuracy for the L-R and R-B tasks, respectively. A combined uni- and bimanual BCI could be used for restoring motor function in highly disabled patients and for motor rehabilitation of patients with motor deficits.
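
    A minimal sketch of the ERD quantification mentioned above, computed as the percentage band-power change relative to a pre-cue baseline using a Hilbert envelope in the 8-30 Hz band; the sampling rate, window choices and channel selection are assumptions, not the study's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                                   # sampling rate (assumed)

def erd_percent(trials, band=(8, 30), baseline=(0, int(1.0 * FS))):
    """ERD/ERS as percentage band-power change relative to a pre-cue baseline window.

    trials: (n_trials, n_channels, n_samples); negative values indicate desynchronization.
    """
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2     # instantaneous band power
    power = power.mean(axis=0)                          # average over trials -> (channels, samples)
    ref = power[:, baseline[0]:baseline[1]].mean(axis=-1, keepdims=True)
    return 100.0 * (power - ref) / ref

# Example: compare ERD for two conditions over a notional sensorimotor channel.
rng = np.random.default_rng(4)
unimanual = rng.standard_normal((30, 32, 5 * FS))
bimanual = rng.standard_normal((30, 32, 5 * FS))
print("unimanual ERD (ch 10, last sample): %.1f%%" % erd_percent(unimanual)[10, -1])
print("bimanual  ERD (ch 10, last sample): %.1f%%" % erd_percent(bimanual)[10, -1])
```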