
    Spatial filters selection towards a rehabilitation BCI

    Introducing BCI technology to support motor imagery (MI) training has revealed the rehabilitative potential of MI, contributing to significantly better motor functional outcomes in stroke patients. To provide the most accurate and personalized feedback during treatment, several stages of electroencephalographic signal processing have to be optimized, including spatial filtering. This study focuses on data-independent approaches to optimizing the spatial filtering step. The specific aims were: i) assessment of spatial filters' performance in relation to the hand and foot scalp areas; ii) evaluation of the simultaneous use of multiple spatial filters; iii) minimization of the number of electrodes needed for training. Our findings indicate that the spatial filters performed differently depending on the scalp area considered. The simultaneous use of EEG signals conditioned with different spatial filters could either improve classification performance or, at the same level of performance, reduce the number of electrodes needed for subsequent training, thus improving the usability of BCIs in the clinical rehabilitation context.
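
    Data-independent spatial filters of the kind studied here (for example a common average reference, or small-Laplacian derivations over the hand and foot areas) are fixed linear combinations of channels that need no training data. The following Python sketch illustrates the idea with NumPy; the channel indices and neighbourhoods are hypothetical and do not reflect the montage used in the study.

    ```python
    import numpy as np

    def common_average_reference(eeg):
        """Common average reference: subtract the mean over all channels.
        eeg: array of shape (n_channels, n_samples)."""
        return eeg - eeg.mean(axis=0, keepdims=True)

    def laplacian_filter(eeg, center, neighbours):
        """Surface Laplacian for one electrode: the centre signal minus the mean
        of its neighbouring electrodes (e.g. the four nearest electrodes for a
        small Laplacian over a hand- or foot-area channel)."""
        return eeg[center] - eeg[neighbours].mean(axis=0)

    # Hypothetical usage on synthetic data (32 channels, 1000 samples);
    # the indices stand in for C3 (hand area) and its neighbours.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 1000))
    car_eeg = common_average_reference(eeg)
    c3_small_laplacian = laplacian_filter(eeg, center=8, neighbours=[7, 9, 3, 13])
    ```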

    Unimanual versus bimanual motor imagery classifiers for assistive and rehabilitative brain computer interfaces

    Bimanual movements are an integral part of everyday activities and are often included in rehabilitation therapies. Yet electroencephalography (EEG) based assistive and rehabilitative brain computer interface (BCI) systems typically rely on motor imagination (MI) of one limb at a time. In this study we present a classifier which discriminates between uni- and bimanual MI. Ten able-bodied participants took part in cue-based motor execution (ME) and MI tasks of the left (L), right (R) and both (B) hands. A 32-channel EEG was recorded. Three linear discriminant analysis classifiers, based on MI of the L-R, L-B and R-B hand pairs, were created, with features based on wide-band (8-30 Hz) Common Spatial Patterns (CSP) and band-specific Common Spatial Patterns (CSPb). Event-related desynchronization (ERD) was significantly stronger during bimanual than during unimanual ME over both hemispheres. Bimanual MI resulted in bilateral, parietally shifted ERD of similar intensity to unimanual MI. The average classification accuracy of CSP and CSPb was comparable for the L-R task (73±9% and 75±10% respectively) and the L-B task (73±11% and 70±9% respectively). However, for the R-B task (67±3% and 72±6% respectively) it was significantly higher for CSPb (p=0.0351). Six participants whose L-R classification accuracy exceeded 70% took part in an online task a week later, using the unmodified offline CSPb classifier, and achieved 69±3% and 66±3% accuracy for the L-R and R-B tasks respectively. A combined uni- and bimanual BCI could be used for restoration of motor function in highly disabled patients and for motor rehabilitation of patients with motor deficits.
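
    The classification approach described here (CSP features followed by a linear discriminant analysis classifier) can be sketched with standard Python tooling. The snippet below, using MNE's CSP implementation and scikit-learn on synthetic data, is an illustrative reconstruction rather than the authors' code; the class labels, trial counts and hyperparameters are assumptions.

    ```python
    import numpy as np
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for band-pass filtered (8-30 Hz) epochs:
    # X has shape (n_trials, n_channels, n_samples); y encodes the two MI
    # classes, e.g. 0 = right-hand MI, 1 = bimanual MI.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 32, 500))
    y = rng.integers(0, 2, 80)

    # Wide-band CSP features followed by LDA; for a band-specific variant
    # (CSPb) one CSP would be fitted per frequency band and the resulting
    # features concatenated before the LDA stage.
    clf = make_pipeline(CSP(n_components=6, log=True), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
    ```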

    GUIDER: a GUI for semiautomatic, physiologically driven EEG feature selection for a rehabilitation BCI

    GUIDER is a graphical user interface developed in the MATLAB environment to identify electroencephalography (EEG)-based brain computer interface (BCI) control features for a rehabilitation application (i.e. post-stroke motor imagery training). In this context, GUIDER aims to combine physiological and machine learning approaches: it allows therapists to set parameters and constraints according to rehabilitation principles (e.g. affected hemisphere, sensorimotor-relevant frequencies) and then applies an automatic method to select features within the defined subset. As a proof of concept, we compared offline performance between manual feature selection, based solely on the operator's expertise and experience, and GUIDER's semiautomatic selection, using BCI data collected from stroke patients during BCI-supported motor imagery training. Preliminary results suggest that this semiautomatic approach could successfully support human selection, reducing operator-dependent variability in view of future multi-centric clinical trials.
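
    GUIDER itself is a MATLAB GUI, but the two-stage idea it implements, physiologically motivated constraints followed by automatic ranking within the constrained subset, can be illustrated with a short Python sketch. The Fisher-score criterion, function names and synthetic data below are assumptions for illustration, not GUIDER's actual selection method.

    ```python
    import numpy as np

    def fisher_score(feature, labels):
        """Fisher score of a single feature for a two-class problem."""
        a, b = feature[labels == 0], feature[labels == 1]
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    def constrained_selection(power, labels, channels, freqs,
                              allowed_channels, band, n_features=4):
        """Keep only the (channel, frequency) features allowed by the
        therapist-set constraints, rank them by Fisher score, and return the
        best ones. power: band-power features, shape (n_trials, n_channels, n_freqs)."""
        candidates = [(c, f) for c, ch in enumerate(channels)
                      for f, hz in enumerate(freqs)
                      if ch in allowed_channels and band[0] <= hz <= band[1]]
        ranked = sorted(candidates, reverse=True,
                        key=lambda cf: fisher_score(power[:, cf[0], cf[1]], labels))
        return [(channels[c], freqs[f]) for c, f in ranked[:n_features]]

    # Hypothetical usage with synthetic data: constrain the search to two
    # ipsilesional sensorimotor channels and the 8-30 Hz range.
    rng = np.random.default_rng(0)
    power = rng.standard_normal((60, 4, 20))      # 60 trials, 4 channels, 20 bins
    labels = rng.integers(0, 2, 60)
    channels = ["C3", "CP3", "C4", "CP4"]
    freqs = list(range(2, 42, 2))                 # 2-40 Hz in 2 Hz bins
    print(constrained_selection(power, labels, channels, freqs,
                                allowed_channels={"C3", "CP3"}, band=(8, 30)))
    ```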

    Is implicit motor imagery a reliable strategy for a brain computer interface?

    Explicit motor imagery (eMI) is a widely used brain computer interface (BCI) paradigm, but not everybody can accomplish the task. Here we propose a BCI based on implicit motor imagery (iMI). We compared classification accuracy between eMI and iMI of the hands. Fifteen able-bodied people were asked to judge the laterality of hand images presented on a computer screen in a lateral or medial orientation. This judgement task is known to require mental rotation of one's own hands, which in turn is thought to involve iMI. The subjects were also asked to perform eMI of the hands. Their electroencephalography (EEG) was recorded. Linear classifiers were designed based on common spatial patterns. For discrimination between the left and right hand, the classifier achieved a maximum accuracy of 81 ± 8% for eMI and 83 ± 3% for iMI. These results show that iMI can achieve classification accuracy similar to eMI. Additional classification was performed between iMI in the medial and lateral orientations of a single hand; the classifier achieved 81 ± 7% for the left and 78 ± 7% for the right hand, which indicates distinct spatial patterns of cortical activity for iMI of a single hand in different orientations. These results suggest that a brain computer interface based on iMI may be constructed for people who cannot perform explicit imagery, for rehabilitation of movement or for treatment of bodily spatial neglect.
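
    The same CSP-plus-LDA machinery sketched for the previous study applies here; what changes is only where the labels come from (a cued imagery class for eMI, the laterality of the presented hand image for iMI). A minimal Python sketch, with hypothetical variable names, of how the two paradigms could be compared:

    ```python
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    def left_right_accuracy(X, y, cv=5):
        """Cross-validated left- vs right-hand accuracy for one paradigm.
        X: epochs (n_trials, n_channels, n_samples); y: 0 = left, 1 = right."""
        clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
        return cross_val_score(clf, X, y, cv=cv).mean()

    # Hypothetical usage: epochs cut around the imagery cue for eMI, and around
    # the hand-image onset for the laterality-judgement (iMI) task, with labels
    # taken from the imagined hand or from the laterality of the image.
    # acc_emi = left_right_accuracy(X_emi, y_emi)
    # acc_imi = left_right_accuracy(X_imi, y_imi)
    ```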

    Spatial Filtering Pipeline Evaluation of Cortically Coupled Computer Vision System for Rapid Serial Visual Presentation

    Rapid Serial Visual Presentation (RSVP) is a paradigm that supports the application of cortically coupled computer vision to rapid image search. In RSVP, images are presented to participants in a rapid serial sequence which can evoke Event-Related Potentials (ERPs) detectable in their electroencephalogram (EEG). The contemporary approach to this problem involves supervised spatial filtering techniques applied to enhance the discriminative information in the EEG data. In this paper we make two primary contributions to that field: 1) we propose a novel spatial filtering method which we call the Multiple Time Window LDA Beamformer (MTWLB) method; 2) we provide a comprehensive comparison of nine spatial filtering pipelines built from three spatial filtering schemes, namely MTWLB, xDAWN and Common Spatial Pattern (CSP), and three linear classification methods, namely Linear Discriminant Analysis (LDA), Bayesian Linear Regression (BLR) and Logistic Regression (LR). Three pipelines without spatial filtering are used as a baseline comparison. The Area Under the Curve (AUC) is used as the evaluation metric. The results reveal that the MTWLB and xDAWN spatial filtering techniques enhance the classification performance of the pipeline but CSP does not. The results also support the conclusion that LR can be effective for RSVP-based BCI if discriminative features are available.
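
    As an illustration of how such pipelines are scored, the sketch below evaluates one of the baseline (no spatial filtering) pipelines with logistic regression and cross-validated AUC on synthetic epoched data; a supervised spatial filter such as xDAWN or the proposed MTWLB would be inserted before the vectorization step. Shapes, labels and parameters are assumptions, not the paper's setup.

    ```python
    import numpy as np
    from mne.decoding import Vectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: epochs around each image onset (n_epochs, n_channels, n_samples);
    # y: 1 for target images, 0 for distractors (synthetic data here).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 32, 128))
    y = rng.integers(0, 2, 200)

    # Baseline pipeline: flatten each epoch, standardize, classify with LR.
    pipeline = make_pipeline(Vectorizer(), StandardScaler(),
                             LogisticRegression(max_iter=1000))
    auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
    print(f"mean AUC: {auc.mean():.2f}")
    ```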

    An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing

    This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet, matches the memory footprint and computational requirements of low-power microcontroller units (MCUs), such as the ARM Cortex-M family. Furthermore, the paper presents a set of methods, including temporal downsampling, channel selection, and narrowing of the classification window, to further scale down the model and relax memory requirements with negligible accuracy degradation. Experimental results on the Physionet EEG Motor Movement/Imagery Dataset show that standard EEGNet achieves 82.43%, 75.07%, and 65.07% classification accuracy on 2-, 3-, and 4-class MI tasks in global validation, outperforming the state-of-the-art (SoA) convolutional neural network (CNN) by 2.05%, 5.25%, and 5.48%. Our novel method further scales down the standard EEGNet with a negligible accuracy loss of 0.31% at a 7.6x memory footprint reduction, and a small accuracy loss of 2.51% at a 15x reduction. The scaled models are deployed on a commercial Cortex-M4F MCU, taking 101 ms and consuming 4.28 mJ per inference for the smallest model, and on a Cortex-M7, taking 44 ms and 18.1 mJ per inference for the medium-sized model, enabling a fully autonomous, wearable, and accurate low-power BCI.
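
    EEGNet's structure (a temporal convolution, a depthwise spatial convolution over the electrodes, and a separable convolution before the classifier) is what makes it compact enough for MCU deployment. The PyTorch sketch below is an illustrative EEGNet-style model with assumed hyperparameters, not the exact network or the quantized embedded implementation evaluated in the paper.

    ```python
    import torch
    import torch.nn as nn

    class EEGNetSketch(nn.Module):
        """Compact EEGNet-style network: temporal conv -> depthwise spatial conv
        -> separable conv -> linear classifier. Hyperparameters are illustrative."""
        def __init__(self, n_channels=64, n_classes=4,
                     f1=8, d=2, f2=16, kern_len=64, dropout=0.25):
            super().__init__()
            self.block1 = nn.Sequential(
                # temporal convolution over the time axis
                nn.Conv2d(1, f1, (1, kern_len), padding=(0, kern_len // 2), bias=False),
                nn.BatchNorm2d(f1),
                # depthwise convolution across EEG channels (learned spatial filters)
                nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
                nn.BatchNorm2d(f1 * d),
                nn.ELU(),
                nn.AvgPool2d((1, 4)),
                nn.Dropout(dropout),
            )
            self.block2 = nn.Sequential(
                # separable convolution = depthwise temporal conv + pointwise conv
                nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8), groups=f1 * d, bias=False),
                nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
                nn.BatchNorm2d(f2),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
                nn.Dropout(dropout),
            )
            self.classify = nn.LazyLinear(n_classes)

        def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
            x = self.block2(self.block1(x))
            return self.classify(x.flatten(start_dim=1))

    # Hypothetical usage: a batch of 2 trials, 64 channels, 480 samples (3 s at 160 Hz).
    model = EEGNetSketch()
    logits = model(torch.randn(2, 1, 64, 480))   # -> shape (2, 4)
    ```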