RFNet: Riemannian Fusion Network for EEG-based Brain-Computer Interfaces
This paper presents the novel Riemannian Fusion Network (RFNet), a deep
neural architecture for learning spatial and temporal information from
Electroencephalogram (EEG) signals for a range of EEG-based Brain-Computer
Interface (BCI) tasks. The spatial information relies on
Spatial Covariance Matrices (SCMs) of multi-channel EEG, which lie on a
Riemannian manifold owing to their Symmetric Positive Definite (SPD) structure. We
exploit a Riemannian approach to map spatial information onto feature vectors
in Euclidean space. The temporal information characterized by features based on
differential entropy and logarithm power spectrum density is extracted from
different windows through time. Our network then learns the temporal
information by employing a deep long short-term memory network with a soft
attention mechanism. The output of the attention mechanism is used as the
temporal feature vector. To fuse spatial and temporal information, we use a
fusion strategy that learns attention weights over embedding-specific
features for decision making. We evaluate our proposed
framework on four public datasets from three popular fields of BCI, namely
emotion recognition, vigilance estimation, and motor imagery classification,
containing various types of tasks such as binary classification, multi-class
classification, and regression. RFNet approaches the state-of-the-art on one
dataset (SEED) and outperforms other methods on the other three datasets
(SEED-VIG, BCI-IV 2A, and BCI-IV 2B), setting new state-of-the-art results and
demonstrating the robustness of our framework in EEG representation learning.
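The Riemannian mapping of spatial information described above can be sketched as follows. This is a minimal illustration of one common approach (tangent-space projection of SPD covariance matrices via the matrix logarithm, here taken at the identity reference point); the paper's exact mapping, reference point, and channel count are not specified in the abstract, so all concrete values below are illustrative assumptions.

```python
import numpy as np

def scm(eeg):
    """Spatial Covariance Matrix of a (channels, samples) EEG window."""
    eeg = eeg - eeg.mean(axis=1, keepdims=True)   # remove per-channel mean
    return eeg @ eeg.T / (eeg.shape[1] - 1)

def tangent_vector(spd):
    """Map an SPD matrix to a Euclidean vector via its matrix logarithm.

    The log of an SPD matrix is symmetric, so keeping only the upper
    triangle yields a vector of n*(n+1)/2 features without redundancy.
    """
    w, v = np.linalg.eigh(spd)                    # eigendecomposition
    log_s = (v * np.log(w)) @ v.T                 # matrix logarithm
    idx = np.triu_indices(spd.shape[0])
    return log_s[idx]

# Hypothetical EEG window: 8 channels, 256 samples (assumed, not the paper's).
rng = np.random.default_rng(0)
window = rng.standard_normal((8, 256))
vec = tangent_vector(scm(window))
print(vec.shape)   # 8 channels -> 8*9/2 = 36 Euclidean features
```

Because the matrix logarithm flattens the manifold around the reference point, ordinary Euclidean operations (e.g. feeding the vector to a dense layer) become geometrically meaningful on these features.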
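The two temporal features named in the abstract can also be sketched. Under a Gaussian assumption, the differential entropy of a band-filtered signal reduces to the closed form 0.5*ln(2*pi*e*sigma^2); the log power spectral density can be estimated with Welch's method. The sampling rate, window length, and Welch segment size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def differential_entropy(x):
    """Differential entropy of a 1-D signal assumed Gaussian:
    DE = 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def log_psd(x, fs=128):
    """Logarithm of the power spectral density, via Welch's method."""
    _, psd = welch(x, fs=fs, nperseg=min(len(x), 128))
    return np.log(psd + 1e-12)   # small epsilon avoids log(0)

# One channel of one hypothetical time window (512 samples, assumed).
rng = np.random.default_rng(0)
sig = rng.standard_normal(512)
de = differential_entropy(sig)   # scalar feature per window
psd_feat = log_psd(sig)          # one value per frequency bin
print(de, psd_feat.shape)
```

In a setup like the one the abstract describes, such per-window features would be stacked across successive windows and fed to the LSTM, which then attends over time steps.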