Monte Carlo dropout for uncertainty estimation and motor imagery classification
Motor Imagery (MI)-based Brain–Computer Interfaces (BCIs) have been widely used as an alternative communication channel for patients with severe motor disabilities, achieving high classification accuracy through machine learning techniques. Recently, deep learning techniques have advanced the state of the art of MI-based BCIs. However, these techniques still lack strategies to quantify predictive uncertainty and may produce overconfident predictions. In this work, methods to enhance the performance of existing MI-based BCIs are proposed in order to obtain a more reliable system for real application scenarios. First, the Monte Carlo dropout (MCD) method is applied to MI deep neural models to improve classification and provide uncertainty estimation. This approach was implemented using a Shallow Convolutional Neural Network (SCNN-MCD) and an ensemble model (E-SCNN-MCD). As a further contribution, a threshold approach is introduced to discriminate MI task predictions of high uncertainty, and it is tested for both the SCNN-MCD and E-SCNN-MCD approaches. The BCI Competition IV Datasets 2a and 2b were used to evaluate the proposed methods under both subject-specific and non-subject-specific strategies, obtaining encouraging results for MI recognition.
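The core idea of Monte Carlo dropout is to keep dropout active at test time, average the class probabilities over several stochastic forward passes, and treat the spread (here, predictive entropy) as an uncertainty score; predictions above an entropy threshold are rejected. A minimal numpy sketch of that inference loop, assuming a single linear layer for illustration (the network shape, threshold value, and function names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, b, n_passes=100, p_drop=0.5):
    """Average class probabilities over stochastic forward passes with
    dropout kept active at test time (Monte Carlo dropout), and return
    the predictive entropy of the mean as an uncertainty score."""
    probs = []
    for _ in range(n_passes):
        # Inverted dropout: random Bernoulli mask, rescaled to keep expectations
        mask = (rng.random(x.shape) > p_drop) / (1.0 - p_drop)
        logits = (x * mask) @ W + b
        e = np.exp(logits - logits.max())        # stable softmax
        probs.append(e / e.sum())
    mean_p = np.mean(probs, axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    return mean_p, entropy

def predict_or_reject(x, W, b, tau=0.6):
    """Threshold rule: abstain (return None) when uncertainty is too high."""
    mean_p, h = mc_dropout_predict(x, W, b)
    return int(np.argmax(mean_p)) if h < tau else None
```

In a real SCNN the same loop would simply run the full convolutional model with its dropout layers left in training mode for each pass.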
Classifying music perception and imagination using EEG
This study explored whether we could accurately classify perceived and imagined musical stimuli from EEG data. Successful EEG-based classification of what an individual is imagining could pave the way for novel communication techniques, such as brain-computer interfaces. We recorded EEG with a 64-channel BioSemi system while participants heard or imagined different musical stimuli. Using principal components analysis, we identified components common to both the perception and imagination conditions; however, the time courses of the components did not allow for stimulus classification. We then applied deep learning techniques using a convolutional neural network. This technique enabled us to classify perception of music with a statistically significant accuracy of 28.7%, but we were unable to classify imagination of music (accuracy = 7.41%). Future studies should aim to determine which characteristics of music drive perception classification rates, and to capitalize on these characteristics to raise imagination classification rates.
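The first analysis step described above, finding principal components shared by the perception and imagination conditions, amounts to fitting PCA on the two conditions stacked together. A minimal numpy sketch (random arrays stand in for the 64-channel BioSemi recordings; the shapes and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def principal_components(X, k):
    """PCA via SVD. X has shape (n_samples, n_channels); returns the
    top-k component directions (k, n_channels) and the projected
    time courses (n_samples, k)."""
    Xc = X - X.mean(axis=0)                      # center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k], Xc @ Vt[:k].T

# Components common to both conditions: fit PCA on the stacked data
perceived = rng.normal(size=(500, 64))   # placeholder perception-condition EEG
imagined = rng.normal(size=(500, 64))    # placeholder imagination-condition EEG
comps, scores = principal_components(np.vstack([perceived, imagined]), k=5)
```

The per-condition time courses are then just the per-condition rows of `scores`, which is what the study compared before moving on to the CNN.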
Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces
With the advent of advanced machine learning methods, the performance of brain-computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a brain imaging method commonly used for BCIs, is characterized by a tedious experimental setup and frequent data loss due to artifacts, and recording trials in bulk to exploit the capabilities of deep learning classifiers is time-consuming. Some studies have tried to address this issue by generating artificial EEG signals. However, a few of these methods are limited in retaining the prominent features or biomarkers of the signal, and other deep learning-based generative methods require a huge number of training samples, with most models able to handle data augmentation for only one category or class of data per training session. There is therefore a need for a generative model that can generate synthetic, multi-class EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since the EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since spiking neural networks (SNNs), a biologically closer class of artificial neural networks, communicate via spiking behavior, we propose an SNN-based approach using surrogate-gradient descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed to augment motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. The artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhanced MI classification performance.
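Surrogate-gradient learning, as named above, trains spiking networks by using the exact (non-differentiable) spike function in the forward pass and a smooth stand-in for its derivative in the backward pass. A minimal numpy sketch of one leaky integrate-and-fire (LIF) neuron and a fast-sigmoid surrogate (the time constant, threshold, and surrogate shape are common choices, not specifics from the paper):

```python
import numpy as np

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the non-differentiable
    spike function; used in place of its true gradient during backprop."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

def lif_forward(inputs, w, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron. inputs is (T, n_in), w is (n_in,).
    Returns the binary spike train emitted over the T time steps."""
    v, spikes = 0.0, []
    for x_t in inputs:
        v = tau * v + w @ x_t          # leaky membrane integration
        s = 1.0 if v >= threshold else 0.0
        v *= (1.0 - s)                 # reset the membrane after a spike
        spikes.append(s)
    return np.array(spikes)
```

In the generative setting described in the abstract, the loss between the decoded spike train and an original EEG trial would be backpropagated through `surrogate_grad` rather than through the hard threshold.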
Upper Limb Movement Execution Classification using Electroencephalography for Brain Computer Interface
An accurate classification of upper limb movements using electroencephalography (EEG) signals has gained significant importance in recent years due to the prevalence of brain-computer interfaces. The upper limbs in the human body are crucial since different skeletal segments combine to produce a range of motion that helps us in our everyday tasks. Decoding EEG-based upper limb movements can be of great help to people with spinal cord injury (SCI) or other neuro-muscular diseases such as amyotrophic lateral sclerosis (ALS), primary lateral sclerosis, and periodic paralysis. These conditions can manifest in a loss of sensory and motor function, which can make a person reliant on others for care in day-to-day activities. Upper limb movement activities, whether executed or imagined, can be detected and classified using an EEG-based brain-computer interface (BCI). Toward this goal, we focus our attention on decoding movement execution (ME) of the upper limb in this study. For this purpose, we utilize a publicly available EEG dataset that contains recordings from fifteen subjects acquired using a 61-channel EEG device. We propose a method to classify four ME classes for different subjects using spectrograms of the EEG data through pre-trained deep learning (DL) models. Our proposed method of using EEG spectrograms for the classification of ME has shown significant results: the highest average classification accuracy (over the four ME classes) obtained is 87.36%, with one subject achieving the best classification accuracy of 97.03%.
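The preprocessing step named above, turning each EEG channel into a spectrogram image for a pre-trained vision model, can be sketched with a Hann-windowed short-time Fourier transform. A numpy-only version, assuming a 250 Hz sampling rate and a synthetic 10 Hz sine in place of real EEG (window and hop lengths are illustrative, not the paper's settings):

```python
import numpy as np

def spectrogram(signal, fs, win_len=128, hop=64):
    """Power spectrogram via a Hann-windowed STFT (numpy only).
    Returns the frequency axis (Hz) and a (n_freqs, n_frames) array."""
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return np.fft.rfftfreq(win_len, d=1.0 / fs), spec.T

# A 10 Hz sine sampled at 250 Hz, standing in for an alpha-band EEG rhythm
fs = 250.0
t = np.arange(0, 4, 1 / fs)
freqs, spec = spectrogram(np.sin(2 * np.pi * 10 * t), fs)
```

The resulting 2-D array is what would be rescaled to an image (e.g. replicated across RGB channels) before being fed to a pre-trained convolutional model.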
Graph Neural Networks on SPD Manifolds for Motor Imagery Classification: A Perspective from the Time-Frequency Analysis
Motor imagery (MI) classification is one of the most widely studied research topics in electroencephalography (EEG)-based brain-computer interfaces (BCIs), with substantial industrial value. MI-EEG classifier design has changed fundamentally over the past twenty years, and classifier performance has gradually improved. In particular, owing to the need to characterize the non-Euclidean nature of the signals, the first geometric deep learning (GDL) framework, Tensor-CSPNet, has recently emerged in BCI research. In essence, Tensor-CSPNet is a deep learning-based classifier operating on the second-order statistics of EEGs. In contrast to first-order statistics, using second-order statistics is the classical treatment of EEG signals, and the discriminative information they contain is adequate for MI-EEG classification. In this study, we present another GDL classifier for MI-EEG classification, called Graph-CSPNet, which uses graph-based techniques to simultaneously characterize the EEG signals in both the time and frequency domains. It is realized from the perspective of time-frequency analysis, which profoundly influences signal processing and BCI studies. Compared with Tensor-CSPNet, the architecture of Graph-CSPNet is further simplified, with more flexibility to cope with variable time-frequency resolution in signal segmentation and capture localized fluctuations. In the experiments, Graph-CSPNet is evaluated in subject-specific scenarios on two widely used MI-EEG datasets and produces near-optimal classification accuracies.
Comment: 16 pages, 5 figures, 9 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
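The "second-order statistics" that both Tensor-CSPNet and Graph-CSPNet operate on are spatial covariance matrices computed per time (or time-frequency) segment; each such matrix is symmetric positive definite (SPD), which is what places them on an SPD manifold. A minimal numpy sketch of that segmentation step, with a random array standing in for one EEG trial (channel count, trial length, and segment count are illustrative, not the papers' settings):

```python
import numpy as np

rng = np.random.default_rng(2)

def segment_covariances(trial, n_segments, eps=1e-6):
    """Split a (channels, samples) EEG trial into equal time segments and
    return one regularized spatial covariance matrix per segment; the small
    eps * I term keeps each matrix symmetric positive definite (SPD)."""
    C, T = trial.shape
    seg_len = T // n_segments
    covs = []
    for i in range(n_segments):
        X = trial[:, i * seg_len:(i + 1) * seg_len]
        X = X - X.mean(axis=1, keepdims=True)        # center each channel
        covs.append(X @ X.T / (X.shape[1] - 1) + eps * np.eye(C))
    return np.stack(covs)

trial = rng.normal(size=(22, 750))   # e.g. 22 channels, 3 s at 250 Hz
covs = segment_covariances(trial, n_segments=5)
```

A graph-based classifier such as the one described would then learn over this sequence of SPD matrices, with edges encoding relations between time-frequency segments.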