
    Unimanual versus bimanual motor imagery classifiers for assistive and rehabilitative brain computer interfaces

    Bimanual movements are an integral part of everyday activities and are often included in rehabilitation therapies. Yet electroencephalography (EEG) based assistive and rehabilitative brain-computer interface (BCI) systems typically rely on motor imagination (MI) of one limb at a time. In this study we present a classifier which discriminates between uni- and bimanual MI. Ten able-bodied participants took part in cue-based motor execution (ME) and MI tasks of the left (L), right (R) and both (B) hands. A 32-channel EEG was recorded. Three linear discriminant analysis classifiers, based on MI of L-B, B-R and L-R hands, were created, with features based on wide-band (8-30 Hz) Common Spatial Patterns (CSP) and band-specific Common Spatial Patterns (CSPb). Event-related desynchronization (ERD) was significantly stronger during bimanual compared to unimanual ME on both hemispheres. Bimanual MI resulted in bilateral, parietally shifted ERD of similar intensity to unimanual MI. The average classification accuracy for CSP and CSPb was comparable for the L-R task (73±9% and 75±10% respectively) and the L-B task (73±11% and 70±9% respectively). For the R-B task (67±3% and 72±6% respectively), however, it was significantly higher for CSPb (p=0.0351). Six participants whose L-R classification accuracy exceeded 70% were included in an on-line task a week later, using the unmodified offline CSPb classifier, achieving 69±3% and 66±3% accuracy for the L-R and R-B tasks respectively. A combined uni- and bimanual BCI could be used for restoration of motor function in highly disabled patients and for motor rehabilitation of patients with motor deficits.
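The CSP-plus-LDA pipeline this abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names and the synthetic data are assumptions, and trials are assumed already band-pass filtered (e.g. 8-30 Hz as in the abstract).

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns (2 * n_pairs, n_channels) filters: the first rows maximize
    class-A variance, the last rows maximize class-B variance.
    """
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(whiten @ ca @ whiten.T)
    order = np.argsort(d)[::-1]           # descending class-A variance ratio
    w = v[:, order].T @ whiten
    return np.vstack([w[:n_pairs], w[-n_pairs:]])

def csp_features(trials, w):
    """Log-variance of spatially filtered trials (the usual CSP feature)."""
    z = np.einsum('fc,ncs->nfs', w, trials)
    return np.log(np.var(z, axis=2))
```

These log-variance features would then be fed to an LDA classifier, one per binary task (L-R, L-B, R-B).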

    Discriminative Tandem Features for HMM-based EEG Classification

    Abstract—We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features to the conventional HMM system. Two sets of tandem features are derived from linear discriminant analysis (LDA) projection output and multilayer perceptron (MLP) class-posterior probability, before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, resulting in significant relative improvements of 6.2% and 11.2% for the LDA and MLP features respectively. We also explore the portability of these features across different subjects. Index Terms—Artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
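The tandem-feature construction can be sketched as below: per-window AR coefficients are augmented with an LDA projection and a class posterior before they would go to the HMM stage (omitted here). All names and data are illustrative assumptions, and a single logistic unit stands in for the paper's MLP posterior estimator.

```python
import numpy as np

def ar_coeffs(x, order=6):
    """Least-squares autoregressive coefficients of a 1-D signal."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coef

def lda_direction(f0, f1):
    """Fisher discriminant direction for two classes of feature rows."""
    sw = np.cov(f0, rowvar=False) + np.cov(f1, rowvar=False)
    return np.linalg.solve(sw, f1.mean(axis=0) - f0.mean(axis=0))

def posterior_model(feats, y, lr=0.1, steps=500):
    """Tiny logistic unit standing in for the MLP posterior estimator."""
    w = np.zeros(feats.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - y
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def tandem_features(feats, w_lda, w_post, b_post):
    """Append the LDA projection and class posteriors to the AR features."""
    proj = feats @ w_lda
    p1 = 1.0 / (1.0 + np.exp(-(feats @ w_post + b_post)))
    return np.column_stack([feats, proj, p1, 1.0 - p1])
```

In the paper the discriminative extractors are trained separately and their outputs concatenated with the AR baseline features, exactly the shape this sketch produces.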

    Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals

    An electroencephalography (EEG) based Brain-Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.
    Comment: 10 pages
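The joint convolutional-recurrent idea, conv features over raw multichannel EEG feeding a recurrent embedding and a softmax, can be illustrated with a bare numpy forward pass. This is a conceptual sketch with random weights and invented shapes, not the paper's network (which is trained end to end and includes an autoencoder stage).

```python
import numpy as np

def conv1d_relu(x, kernels):
    """Valid 1-D convolution with ReLU.

    x: (channels, time); kernels: (filters, channels, width).
    """
    f, c, w = kernels.shape
    t = x.shape[1] - w + 1
    out = np.empty((f, t))
    for i in range(t):
        out[:, i] = np.tensordot(kernels, x[:, i:i + w],
                                 axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def rnn_final_state(seq, w_in, w_rec):
    """tanh RNN over seq (features, time); the last hidden state is the
    low-dimensional dense embedding."""
    h = np.zeros(w_rec.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(w_in @ seq[:, t] + w_rec @ h)
    return h

def classify(x, kernels, w_in, w_rec, w_out):
    """Forward pass: conv features -> recurrent embedding -> class softmax."""
    h = rnn_final_state(conv1d_relu(x, kernels), w_in, w_rec)
    z = w_out @ h
    e = np.exp(z - z.max())
    return e / e.sum()
```

A real implementation would use a deep-learning framework with trained weights; the point here is only the data flow from raw MI-EEG to class probabilities.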

    Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features

    Accurate, fast, and reliable multiclass classification of electroencephalography (EEG) signals is a challenging task towards the development of motor imagery brain-computer interface (MI-BCI) systems. We propose enhancements to different feature extractors, along with a support vector machine (SVM) classifier, to simultaneously improve classification accuracy and execution time during training and testing. We focus on the well-known common spatial pattern (CSP) and Riemannian covariance methods, and significantly extend these two feature extractors to multiscale temporal and spectral cases. The multiscale CSP features achieve 73.70±15.90% (mean±standard deviation across 9 subjects) classification accuracy that surpasses the state-of-the-art method [1], 70.6±14.70%, on the 4-class BCI competition IV-2a dataset. The Riemannian covariance features outperform the CSP by achieving 74.27±15.5% accuracy and executing 9x faster in training and 4x faster in testing. Using more temporal windows for Riemannian features results in 75.47±12.8% accuracy with 1.6x faster testing than CSP.
    Comment: Published as a conference paper at the IEEE European Signal Processing Conference (EUSIPCO), 201
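A common way to build Riemannian covariance features, and likely close in spirit to what this abstract uses, is to map each trial's spatial covariance matrix into the tangent space at a reference SPD matrix and vectorize it. The sketch below is an assumption-laden simplification: it uses the arithmetic mean as the reference point (a geometric mean is more standard) and omits the multiscale windowing.

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    d, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(d)) @ v.T

def tangent_features(trials, ref=None):
    """Project per-trial covariances to the tangent space at `ref`.

    trials: (n_trials, n_channels, n_samples). Returns one feature row per
    trial with n_channels * (n_channels + 1) / 2 entries.
    """
    covs = [x @ x.T / x.shape[1] for x in trials]
    if ref is None:
        ref = np.mean(covs, axis=0)   # cheap stand-in for the geometric mean
    d, v = np.linalg.eigh(ref)
    inv_sqrt = v @ np.diag(d ** -0.5) @ v.T
    iu = np.triu_indices(ref.shape[0])
    feats = []
    for c in covs:
        # Whiten by the reference, then take the log map.
        s = spd_log(inv_sqrt @ c @ inv_sqrt)
        feats.append(s[iu])
    return np.array(feats)
```

The resulting vectors live in a Euclidean space, so they can be fed directly to an SVM; the speed advantage over CSP comes from skipping per-class spatial-filter training.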