A large-scale evaluation framework for EEG deep learning architectures
EEG is the most common signal source for noninvasive BCI applications. For
such applications, the EEG signal needs to be decoded and translated into
appropriate actions. A recently emerging EEG decoding approach is deep learning
with Convolutional or Recurrent Neural Networks (CNNs, RNNs) with many
different architectures already published. Here we present a novel framework
for the large-scale evaluation of different deep-learning architectures on
different EEG datasets. This framework comprises (i) a collection of EEG
datasets currently including 100 examples (recording sessions) from six
different classification problems, (ii) a collection of different EEG decoding
algorithms, and (iii) a wrapper linking the decoders to the data as well as
handling structured documentation of all settings and (hyper-) parameters and
statistics, designed to ensure transparency and reproducibility. As an
application example, we used our framework to compare three publicly
available CNN architectures: the Braindecode Deep4 ConvNet, the Braindecode
Shallow ConvNet, and two versions of EEGNet. We also show how our framework can be used
to study similarities and differences in the performance of different decoding
methods across tasks. We argue that the deep learning EEG framework as
described here could help to tap the full potential of deep learning for BCI
applications.

Comment: 7 pages, 3 figures; final version accepted for presentation at the IEEE SMC 2018 conference
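The wrapper in part (iii) of the framework, which links decoders to datasets while logging every setting, can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: the decoder callables, dataset handles, and the `EvalRecord` fields are hypothetical stand-ins.

```python
import json
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalRecord:
    """Structured record of one decoder/dataset run, kept for reproducibility."""
    decoder: str
    dataset: str
    hyperparams: Dict[str, float]
    accuracy: float

def run_benchmark(decoders: Dict[str, Callable],
                  datasets: Dict[str, object],
                  hyperparams: Dict[str, Dict[str, float]]) -> List[EvalRecord]:
    """Run every decoder on every dataset and document all settings."""
    records = []
    for dec_name, decode in decoders.items():
        for ds_name, data in datasets.items():
            acc = decode(data, **hyperparams[dec_name])
            records.append(EvalRecord(dec_name, ds_name, hyperparams[dec_name], acc))
    return records

# Toy decoders standing in for e.g. Deep4 ConvNet and EEGNet:
decoders = {"deep4": lambda d, lr: 0.85, "eegnet": lambda d, lr: 0.80}
datasets = {"session_01": None, "session_02": None}
hp = {"deep4": {"lr": 1e-3}, "eegnet": {"lr": 1e-3}}
records = run_benchmark(decoders, datasets, hp)
print(json.dumps([r.__dict__ for r in records], indent=2))
```

Serializing every run record to JSON is one way to get the structured, transparent documentation of settings the abstract calls for.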
Online multiclass EEG feature extraction and recognition using modified convolutional neural network method
Many techniques have been introduced to improve both brain-computer interface (BCI) steps: feature extraction and classification. One of the emerging trends in this field is the implementation of deep learning algorithms. Only a limited number of studies have investigated the application of deep learning techniques to electroencephalography (EEG) feature extraction and classification. This work applies deep learning to both stages: feature extraction and classification. This paper proposes a modified convolutional neural network (CNN) feature extractor-classifier algorithm to recognize four different EEG motor imagery (MI) classes. In addition, a four-class linear discriminant analysis (LDA) classifier model was built and compared to the proposed CNN model. The paper reports strong results, with 92.8% accuracy for one four-class EEG MI set and 85.7% for another set. The results show that the proposed CNN model outperforms multi-class linear discriminant analysis, with accuracy increases of 28.6% and 17.9% for the two MI sets, respectively. Moreover, majority voting over five repetitions introduced an accuracy advantage of 15% and 17.2% for the two EEG sets, compared with single trials. This confirms that increasing the number of trials for the same MI gesture improves the recognition accuracy.
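The majority-voting step over repeated trials is straightforward to illustrate. A minimal sketch (the class labels below are invented for demonstration; ties fall to the earliest-seen class, since `Counter` preserves insertion order for equal counts):

```python
from collections import Counter

def majority_vote(trial_predictions):
    """Aggregate per-trial class predictions for the same MI gesture."""
    return Counter(trial_predictions).most_common(1)[0][0]

# Five repeated single-trial predictions for one imagined movement;
# the single-trial classifier errs twice, but the vote recovers class 2.
preds = [2, 2, 1, 2, 3]
label = majority_vote(preds)  # -> 2
```

With five repetitions, up to two single-trial errors per gesture can be absorbed, which is consistent with the accuracy gains the abstract reports.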
Towards Real-World BCI: CCSPNet, A Compact Subject-Independent Motor Imagery Framework
A conventional subject-dependent (SD) brain-computer interface (BCI) requires
a complete data-gathering, training, and calibration phase for each user before
it can be used. In recent years, a number of subject-independent (SI) BCIs have
been developed. However, there are many problems preventing them from being
used in real-world BCI applications. The most important are weaker
performance compared with the SD approach and relatively large models
requiring high computational power. Therefore, a potential
real-world BCI would greatly benefit from a compact low-power
subject-independent BCI framework, ready to be used immediately after the user
puts it on. To move towards this goal, we propose a novel subject-independent
BCI framework named CCSPNet (Convolutional Common Spatial Pattern Network)
trained on the motor imagery (MI) paradigm of a large-scale
electroencephalography (EEG) signals database consisting of 21600 trials for 54
subjects performing two-class hand-movement MI tasks. The proposed framework
applies a wavelet kernel convolutional neural network (WKCNN) and a temporal
convolutional neural network (TCNN) in order to represent and extract the
diverse spectral features of EEG signals. The outputs of the convolutional
layers go through a common spatial pattern (CSP) algorithm for spatial feature
extraction. The number of CSP features is reduced by a dense neural network,
and the final class label is determined by a linear discriminative analysis
(LDA) classifier. The CCSPNet framework evaluation results show that it is
possible to have a compact, low-power BCI that achieves both SD and SI
performance comparable to that of complex and computationally expensive models.

Comment: 15 pages, 6 figures, 6 tables, 1 algorithm
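The CSP stage at the heart of this pipeline can be sketched with NumPy alone. This is a generic common-spatial-pattern computation under standard assumptions (two classes, whitening-based diagonalization), not the CCSPNet implementation; the WKCNN/TCNN front end and LDA back end are omitted, and the array shapes are illustrative.

```python
import numpy as np

def csp_filters(X_a, X_b, n_components=4):
    """Spatial filters maximizing variance for class a, minimizing it for class b.

    X_a, X_b: arrays of shape (trials, channels, samples), one per class.
    Returns an array of shape (n_components, channels).
    """
    def mean_cov(X):
        # Trace-normalized average spatial covariance across trials.
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    Ca, Cb = mean_cov(X_a), mean_cov(X_b)
    # Whiten the composite covariance, then diagonalize class a in that space.
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5)               # whitening transform
    lam, B = np.linalg.eigh(P.T @ Ca @ P)    # ascending eigenvalues
    order = np.argsort(lam)
    # Filters from both ends of the spectrum discriminate best.
    picks = np.r_[order[:n_components // 2], order[-(n_components // 2):]]
    return (P @ B[:, picks]).T

rng = np.random.default_rng(0)
X_a = rng.standard_normal((20, 8, 128))      # 20 trials, 8 channels, 128 samples
X_b = rng.standard_normal((20, 8, 128))
W = csp_filters(X_a, X_b)
feats = np.log(np.var(W @ X_a[0], axis=1))   # log-variance CSP features for one trial
```

In the CCSPNet design, the inputs to this stage would be convolutional feature maps rather than raw channels, and the resulting features would feed the dense reduction layer and LDA classifier.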