Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features
Accurate, fast, and reliable multiclass classification of
electroencephalography (EEG) signals is a challenging task towards the
development of motor imagery brain-computer interface (MI-BCI) systems. We
propose enhancements to different feature extractors, along with a support
vector machine (SVM) classifier, to simultaneously improve classification
accuracy and execution time during training and testing. We focus on the
well-known common spatial pattern (CSP) and Riemannian covariance methods, and
significantly extend these two feature extractors to multiscale temporal and
spectral cases. The multiscale CSP features achieve 73.70±15.90% (mean ±
standard deviation across 9 subjects) classification accuracy that surpasses
the state-of-the-art method [1], 70.6±14.70%, on the 4-class BCI
competition IV-2a dataset. The Riemannian covariance features outperform the
CSP by achieving 74.27±15.5% accuracy and executing 9x faster in training
and 4x faster in testing. Using more temporal windows for Riemannian features
results in 75.47±12.8% accuracy with 1.6x faster testing than CSP.
Comment: Published as a conference paper at the IEEE European Signal
Processing Conference (EUSIPCO), 201
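As an illustration of the single-scale CSP features that the paper extends to multiscale temporal and spectral cases, here is a minimal sketch with synthetic data; the function names, toy covariances, and filter count are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b, n_filters=4):
    """Common spatial patterns: generalized eigendecomposition of the
    class-covariance pair; keep filters from both ends of the spectrum,
    which maximize the variance ratio between the two classes."""
    w, v = eigh(cov_a, cov_a + cov_b)            # ascending eigenvalues
    idx = np.concatenate([np.arange(n_filters // 2),
                          np.arange(len(w) - n_filters // 2, len(w))])
    return v[:, idx]                             # (channels, n_filters)

def csp_features(trial, filters):
    """Log-variance of the spatially filtered EEG: the standard CSP feature."""
    projected = filters.T @ trial                # (n_filters, samples)
    return np.log(projected.var(axis=1))

rng = np.random.default_rng(0)
n_ch, n_samp = 8, 256
# Synthetic trials: class A has extra power in channel 0, class B in channel 1.
trials_a = rng.standard_normal((20, n_ch, n_samp)); trials_a[:, 0] *= 3.0
trials_b = rng.standard_normal((20, n_ch, n_samp)); trials_b[:, 1] *= 3.0
cov = lambda trials: np.mean([x @ x.T / n_samp for x in trials], axis=0)
W = csp_filters(cov(trials_a), cov(trials_b))
f = csp_features(trials_a[0], W)
print(f.shape)  # (4,) -> one log-variance feature per spatial filter
```

The multiscale extension in the paper would repeat this over multiple time windows and frequency bands and concatenate the resulting feature vectors.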
An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing
This paper presents an accurate and robust embedded motor-imagery
brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet,
matches the requirements of memory footprint and computational resources of
low-power microcontroller units (MCUs), such as the ARM Cortex-M family.
Furthermore, the paper presents a set of methods, including temporal
downsampling, channel selection, and narrowing of the classification window, to
further scale down the model to relax memory requirements with negligible
accuracy degradation. Experimental results on the Physionet EEG Motor
Movement/Imagery Dataset show that standard EEGNet achieves 82.43%, 75.07%, and
65.07% classification accuracy on 2-, 3-, and 4-class MI tasks in global
validation, outperforming the state-of-the-art (SoA) convolutional neural
network (CNN) by 2.05%, 5.25%, and 5.48%. Our novel method further scales down
the standard EEGNet at a negligible accuracy loss of 0.31% with 7.6x memory
footprint reduction and a small accuracy loss of 2.51% with 15x reduction. The
scaled models are deployed on a commercial Cortex-M4F MCU, taking 101ms and
consuming 4.28mJ per inference for the smallest model, and on a Cortex-M7,
taking 44ms and 18.1mJ per inference for the medium-sized model,
enabling a fully autonomous, wearable, and accurate low-power BCI.
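The scaling methods described above (temporal downsampling, channel selection, narrowing the classification window) all act on the input tensor before the network sees it. A rough sketch follows; the channel subset and window bounds below are hypothetical placeholders, since the paper chooses them by its own criteria:

```python
import numpy as np

def scale_down_input(eeg, ds_factor=1, keep_channels=None, window=None):
    """Shrink the network input (and with it, weight and activation memory)
    by temporal downsampling, channel selection, and a narrower window."""
    x = eeg[:, ::ds_factor]                  # temporal downsampling
    if keep_channels is not None:
        x = x[keep_channels]                 # keep an informative channel subset
    if window is not None:
        x = x[:, window[0]:window[1]]        # narrower classification window
    return x

eeg = np.zeros((64, 480))                    # 64 channels, 3 s at 160 Hz
x = scale_down_input(eeg, ds_factor=2,
                     keep_channels=np.arange(0, 64, 3),  # hypothetical subset
                     window=(0, 160))
print(x.shape)  # (22, 160): far fewer input values than (64, 480)
```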
EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces
In recent years, deep learning (DL) has contributed significantly to the
improvement of motor-imagery brain-machine interfaces (MI-BMIs) based on
electroencephalography(EEG). While achieving high classification accuracy, DL
models have also grown in size, requiring a vast amount of memory and
computational resources. This poses a major challenge to an embedded BMI
solution that guarantees user privacy, reduced latency, and low power
consumption by processing the data locally. In this paper, we propose
EEG-TCNet, a novel temporal convolutional network (TCN) that achieves
outstanding accuracy while requiring few trainable parameters. Its low memory
footprint and low computational complexity for inference make it suitable for
embedded classification on resource-limited devices at the edge. Experimental
results on the BCI Competition IV-2a dataset show that EEG-TCNet achieves
77.35% classification accuracy in 4-class MI. By finding the optimal network
hyperparameters per subject, we further improve the accuracy to 83.84%.
Finally, we demonstrate the versatility of EEG-TCNet on the Mother of All BCI
Benchmarks (MOABB), a large scale test benchmark containing 12 different EEG
datasets with MI experiments. The results indicate that EEG-TCNet successfully
generalizes beyond one single dataset, outperforming the current
state-of-the-art (SoA) on MOABB by a meta-effect of 0.25.
Comment: 8 pages, 6 figures, 5 tables
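A TCN's defining ingredient is the causal dilated convolution, which lets the receptive field grow exponentially with depth while keeping the parameter count small. A minimal NumPy sketch of one such convolution (an illustration of the building block, not EEG-TCNet itself):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """One causal dilated 1-D convolution: the output at time t depends
    only on inputs at t, t-d, t-2d, ... (zero-padded on the left)."""
    k = len(kernel)
    pad = (k - 1) * dilation                 # left-pad so the output is causal
    xp = np.concatenate([np.zeros(pad), x])
    t = np.arange(len(x))
    return sum(kernel[i] * xp[pad + t - (k - 1 - i) * dilation]
               for i in range(k))

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, kernel=np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2], with zeros before t = 0
```

Stacking such layers with dilations 1, 2, 4, ... is what gives a TCN a long temporal context at low computational cost.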
Mixed-Precision Quantization and Parallel Implementation of Multispectral Riemannian Classification for Brain--Machine Interfaces
With Motor-Imagery (MI) Brain--Machine Interfaces (BMIs) we may control
machines by merely thinking of performing a motor action. Practical use cases
require a wearable solution where the classification of the brain signals is
done locally near the sensor using machine learning models embedded on
energy-efficient microcontroller units (MCUs), for assured privacy, user
comfort, and long-term usage. In this work, we provide practical insights on
the accuracy-cost tradeoff for embedded BMI solutions. Our proposed
Multispectral Riemannian Classifier reaches 75.1% accuracy on the 4-class MI task.
We further scale down the model by quantizing it to mixed-precision
representations with a minimal accuracy loss of 1%, which is still 3.2% more
accurate than the state-of-the-art embedded convolutional neural network. We
implement the model on a low-power MCU with parallel processing units taking
only 33.39ms and consuming 1.304mJ per classification.
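As a sketch of the quantization step, here is a symmetric linear quantizer. In the paper's mixed-precision scheme each tensor would be assigned its own bit width based on sensitivity; this toy example only hints at that by comparing 8 and 16 bits on the same weights:

```python
import numpy as np

def quantize(x, n_bits):
    """Symmetric linear quantization to signed n_bits integers."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Map integers back to floats to measure the quantization error."""
    return q.astype(np.float64) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(1000)               # stand-in for a weight tensor
# Mixed precision: sensitive tensors get more bits, robust ones fewer.
for bits in (8, 16):
    q, s = quantize(w, bits)
    err = np.abs(dequantize(q, s) - w).max()
    print(f"{bits}-bit max error: {err:.5f}")
```

The rounding error is bounded by half the quantization step, which is why 16-bit tensors track the float values far more closely than 8-bit ones.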
MI-BMInet: An Efficient Convolutional Neural Network for Motor Imagery Brain--Machine Interfaces with EEG Channel Selection
A brain--machine interface (BMI) based on motor imagery (MI) enables the
control of devices using brain signals while the subject imagines performing a
movement. It plays an important role in prosthesis control and motor
rehabilitation and is a crucial element towards the future Internet of Minds
(IoM). To improve user comfort, preserve data privacy, and reduce the system's
latency, a new trend in wearable BMIs is to embed algorithms on low-power
microcontroller units (MCUs) to process the electroencephalographic (EEG) data
in real-time close to the sensors into the wearable device. However, most of
the classification models present in the literature are too resource-demanding,
making them unfit for low-power MCUs. This paper proposes an efficient
convolutional neural network (CNN) for EEG-based MI classification that
achieves comparable accuracy while being orders of magnitude less
resource-demanding and significantly more energy-efficient than
state-of-the-art (SoA) models for a long-lifetime battery operation. We propose
an automatic channel selection method based on spatial filters and quantize
both weights and activations to 8-bit precision to further reduce the model
complexity with negligible accuracy loss. Finally, we efficiently implement and
evaluate the proposed models on a parallel ultra-low power (PULP) MCU. The most
energy-efficient solution consumes only 50.10 uJ with an inference runtime of
5.53 ms and an accuracy of 82.51% while using 6.4x fewer EEG channels, becoming
the new SoA for embedded MI-BMI and defining a new Pareto frontier in the
three-way trade-off among accuracy, resource cost, and power usage.
Feature Extraction Method Based on Filter Banks and Riemannian Tangent Space in Motor-Imagery BCI.
Optimal feature extraction for multi-category motor imagery brain-computer interfaces (MI-BCIs) is a research hotspot. The common spatial pattern (CSP) algorithm is one of the most widely used methods in MI-BCIs. However, its performance is adversely affected by variance in the operational frequency band and noise interference. Furthermore, the performance of CSP is not satisfactory when addressing multi-category classification problems. In this work, we propose a fusion method combining Filter Banks and Riemannian Tangent Space (FBRTS) in multiple time windows. FBRTS uses multiple filter banks to overcome the problem of variance in the operational frequency band. It also applies the Riemannian method to the covariance matrix extracted by the spatial filter to obtain more robust features in order to overcome the problem of noise interference. In addition, we use a One-Versus-Rest support vector machine (OVR-SVM) model to classify multi-category features. We evaluate our FBRTS method using BCI competition IV datasets 2a and 2b. The experimental results show that the average classification accuracy of our FBRTS method is 77.7% and 86.9% in datasets 2a and 2b, respectively. By analyzing the influence of different numbers of filter banks and time windows on the performance of our FBRTS method, we can identify the optimal number of filter banks and time windows. Additionally, our FBRTS method can obtain more distinctive features than the filter banks common spatial pattern (FBCSP) method in two-dimensional embedding space. These results show that our proposed method can improve the performance of MI-BCIs.
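The Riemannian tangent-space mapping at the core of FBRTS can be sketched as follows. This toy version uses the arithmetic mean of the covariances as the reference point and omits the usual sqrt(2) weighting of off-diagonal entries, so it is an illustration of the idea rather than the authors' exact method:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def tangent_space_features(covs, ref):
    """Project SPD covariance matrices onto the tangent space at `ref`:
    vectorize logm(ref^{-1/2} C ref^{-1/2}), keeping the upper triangle."""
    ref_isqrt = fractional_matrix_power(ref, -0.5)
    iu = np.triu_indices(ref.shape[0])
    feats = []
    for c in covs:
        s = logm(ref_isqrt @ c @ ref_isqrt)   # symmetric, so logm is real
        feats.append(np.real(s[iu]))
    return np.array(feats)

rng = np.random.default_rng(0)
n_ch = 4
covs = []
for _ in range(10):
    a = rng.standard_normal((n_ch, 64))
    covs.append(a @ a.T / 64)                 # SPD covariance estimates
ref = np.mean(covs, axis=0)                   # simplified reference point
f = tangent_space_features(covs, ref)
print(f.shape)  # (10, 10): n_ch*(n_ch+1)/2 = 10 features per trial
```

These Euclidean feature vectors are what a linear classifier such as the OVR-SVM then operates on; FBRTS computes them per filter bank and time window and concatenates the results.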
Effective EEG analysis for advanced AI-driven motor imagery BCI systems
Developing effective signal processing for brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) involves factoring in three aspects of functionality: classification performance, execution time, and the number of data channels used. The contributions in this thesis are centered on these three issues. Contributions are focused on the classification of motor imagery (MI) data, which is generated during imagined movements. Typically, EEG time-series data is segmented for data augmentation or to mimic buffering that happens in an online BCI. A multi-segment decision fusion approach is presented, which takes consecutive temporal segments of EEG data, and uses decision fusion to boost classification performance. It was computationally lightweight and improved the performance of four conventional classifiers. Also, an analysis of the contributions of electrodes from different scalp regions is presented, and a subset of channels is recommended. Sparse learning (SL) classifiers have exhibited strong classification performance in the literature. However, they are computationally expensive. To reduce the test-set execution times, a novel EEG classification pipeline consisting of a genetic-algorithm (GA) for channel selection and a dictionary-based SL module for classification, called GABSLEEG, is presented. Subject-specific channel selection was carried out, in which the channels are selected based on training data from the subject. Using the GA-recommended subset of EEG channels reduced the execution time by 60% whilst preserving classification performance.
Although subject-specific channel selection is widely used in the literature, effective subject-independent channel selection, in which channels are detected using data from other subjects, is an ideal aim because it leads to lower training latency and reduces the number of electrodes needed. A novel convolutional neural network (CNN)-based subject-independent channel selection method is presented, called the integrated channel selection (ICS) layer. It performed on a par with or better than subject-specific channel selection. It was computationally efficient, operating 12-17 times faster than the GA channel selection module. The ICS layer method was versatile, performing well with two different CNN architectures and datasets.
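The multi-segment decision fusion idea from the thesis can be sketched as simple soft voting over per-segment classifier outputs; the probabilities below are made up for illustration:

```python
import numpy as np

def fuse_segments(segment_probs):
    """Decision fusion over consecutive temporal segments of one trial:
    average the per-segment class probabilities and take the argmax."""
    return int(np.argmax(np.mean(segment_probs, axis=0)))

# Three segments of one trial, four MI classes. Segment 2 is ambiguous,
# but fusion recovers the majority evidence for class index 2.
probs = np.array([[0.1, 0.2, 0.6, 0.1],
                  [0.3, 0.3, 0.2, 0.2],
                  [0.1, 0.1, 0.7, 0.1]])
print(fuse_segments(probs))  # 2
```

Because fusion happens after classification, it adds almost no computation, which is consistent with the thesis's observation that the approach is lightweight and improves several conventional classifiers.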