1,127 research outputs found

    Motor imagery based EEG features visualization for BCI applications

    Over recent years, the use of electroencephalography (EEG) in state-of-the-art brain-computer interface (BCI) technology has broadened to augment quality of life in both medical and non-medical applications. For medical applications, the availability of real-time data for processing, which could be used as command signals to control robotic devices, is limited to specific platforms. This paper focuses on the possibility of analysing and visualizing EEG signal features with the OpenViBE acquisition platform in offline mode, apart from its default real-time processing capability, and on the options available for processing data offline. We employed the OpenViBE platform to acquire EEG signals, pre-process them and extract features for a BCI system. For testing purposes, we analysed and visualized EEG data offline by developing scenarios that use a method for quantification of event-related (de)synchronization (ERD/ERS) patterns, as well as the built-in signal processing algorithms available in the OpenViBE Designer toolbox. The acquired data were based on the standard Graz BCI experimental protocol for foot kinaesthetic motor imagery (KMI). Results clearly show that OpenViBE is a streaming tool geared towards online processing and analysis of EEG data rather than offline or global analysis and visualization. For offline analysis and visualization of data, other relevant platforms are discussed. For online BCI execution, OpenViBE is a potential tool for controlling wearable lower-limb devices, robotic vehicles and rehabilitation equipment. Other applications include remote control of mechatronic devices or driving of passenger cars by human thought.
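The ERD/ERS quantification mentioned in the abstract is commonly computed as the relative change of band power during a task with respect to a resting baseline (Pfurtscheller's classical formulation). A minimal sketch, with purely illustrative power values rather than the paper's recorded data:

```python
import numpy as np

def erd_ers_percent(trial_power, baseline_power):
    """Relative band-power change in percent.

    Negative values indicate event-related desynchronization (ERD),
    positive values indicate event-related synchronization (ERS).
    """
    return (trial_power - baseline_power) / baseline_power * 100.0

# Illustrative numbers: mean mu-band power at rest vs. during
# kinaesthetic motor imagery, in arbitrary units.
baseline = 10.0
during_mi = 7.0   # mu power is typically suppressed during imagery

print(erd_ers_percent(during_mi, baseline))  # -30.0 -> 30% desynchronization
```

In a real pipeline the two power values would be obtained by band-pass filtering the EEG (e.g. in the mu band), squaring the samples, and averaging over trials and over the baseline/task intervals.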

    Translation of EEG spatial filters from resting to motor imagery using independent component analysis.

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) often use spatial filters to improve signal-to-noise ratio of task-related EEG activities. To obtain robust spatial filters, large amounts of labeled data, which are often expensive and labor-intensive to obtain, need to be collected in a training procedure before online BCI control. Several studies have recently developed zero-training methods using a session-to-session scenario in order to alleviate this problem. To our knowledge, a state-to-state translation, which applies spatial filters derived from one state to another, has never been reported. This study proposes a state-to-state, zero-training method to construct spatial filters for extracting EEG changes induced by motor imagery. Independent component analysis (ICA) was separately applied to the multi-channel EEG in the resting and the motor imagery states to obtain motor-related spatial filters. The resultant spatial filters were then applied to single-trial EEG to differentiate left- and right-hand imagery movements. On a motor imagery dataset collected from nine subjects, comparable classification accuracies were obtained by using ICA-based spatial filters derived from the two states (motor imagery: 87.0%, resting: 85.9%), which were both significantly higher than the accuracy achieved by using monopolar scalp EEG data (80.4%). The proposed method considerably increases the practicality of BCI systems in real-world environments because it is less sensitive to electrode misalignment across different sessions or days and does not require annotated pilot data to derive spatial filters.
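The core idea of the state-to-state translation can be sketched as fitting ICA on resting-state data alone and then reusing the resulting unmixing matrix as a fixed spatial filter on new single-trial data. The sketch below uses synthetic mixed signals and scikit-learn's FastICA as a stand-in for the paper's ICA procedure; the data, dimensions, and channel count are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(42)

# Hypothetical resting-state recording: 2000 samples x 4 channels,
# generated from 4 non-Gaussian latent sources via a fixed mixing matrix.
sources_rest = rng.standard_normal((2000, 4)) ** 3
mixing = rng.standard_normal((4, 4))
rest_eeg = sources_rest @ mixing.T

# Derive the unmixing (spatial filter) matrix from the resting state only.
ica = FastICA(n_components=4, random_state=0, whiten="unit-variance")
ica.fit(rest_eeg)
W = ica.components_                     # 4 x 4 resting-derived spatial filters

# Apply the resting-derived filters to a new "motor imagery" trial
# (ignoring the mean offset for brevity).
trial = rng.standard_normal((500, 4)) @ mixing.T
trial_sources = trial @ W.T             # unmixed single-trial activity
print(trial_sources.shape)              # (500, 4)
```

Features (e.g. band power of the motor-related components) would then be extracted from `trial_sources` and fed to a classifier, without any labeled motor imagery data having been used to derive `W`.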

    An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing

    This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet, matches the memory footprint and computational resources of low-power microcontroller units (MCUs), such as the ARM Cortex-M family. Furthermore, the paper presents a set of methods, including temporal downsampling, channel selection, and narrowing of the classification window, to further scale down the model and relax memory requirements with negligible accuracy degradation. Experimental results on the Physionet EEG Motor Movement/Imagery Dataset show that standard EEGNet achieves 82.43%, 75.07%, and 65.07% classification accuracy on 2-, 3-, and 4-class MI tasks in global validation, outperforming the state-of-the-art (SoA) convolutional neural network (CNN) by 2.05%, 5.25%, and 5.48%. Our novel method further scales down the standard EEGNet at a negligible accuracy loss of 0.31% with 7.6x memory footprint reduction, and a small accuracy loss of 2.51% with 15x reduction. The scaled models are deployed on a commercial Cortex-M4F MCU, taking 101 ms and consuming 4.28 mJ per inference for the smallest model, and on a Cortex-M7, taking 44 ms and 18.1 mJ per inference for the medium-sized model, enabling a fully autonomous, wearable, and accurate low-power BCI.
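The three scaling methods named in the abstract all shrink the network's input tensor, which in turn shrinks the first layers of an EEGNet-style model. A minimal sketch of the input-side reductions, with channel subset, window, and downsampling factor chosen as illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def scale_input(eeg, ds_factor=2, keep_channels=None, window=None):
    """Apply the three reduction steps to one raw trial.

    eeg: array of shape (channels, samples)
    ds_factor: temporal downsampling factor (plain decimation here;
               a real pipeline would low-pass filter first)
    keep_channels: indices of the electrode subset to retain
    window: (start, stop) sample range of the narrowed classification window
    """
    if window is not None:
        eeg = eeg[:, window[0]:window[1]]
    if keep_channels is not None:
        eeg = eeg[keep_channels, :]
    return eeg[:, ::ds_factor]

# Physionet-like trial: 64 channels, 3 s at 160 Hz = 480 samples.
trial = np.zeros((64, 480), dtype=np.float32)
reduced = scale_input(trial, ds_factor=2,
                      keep_channels=list(range(19)),  # hypothetical 19-channel subset
                      window=(0, 320))                # narrow to the first 2 s

print(reduced.shape)                 # (19, 160)
print(trial.size / reduced.size)     # roughly 10x fewer input values
```

Because EEGNet's temporal and spatial convolutions are sized by the input's sample count and channel count, reducing the input this way directly reduces activation memory and multiply-accumulate operations on the MCU.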