EEG-Based Emotion Recognition Using Regularized Graph Neural Networks
Electroencephalography (EEG) measures the neuronal activities in different
brain regions via electrodes. Many existing studies on EEG-based emotion
recognition do not fully exploit the topology of EEG channels. In this paper,
we propose a regularized graph neural network (RGNN) for EEG-based emotion
recognition. RGNN considers the biological topology among different brain
regions to capture both local and global relations among different EEG
channels. Specifically, we model the inter-channel relations in EEG signals via
an adjacency matrix in a graph neural network where the connection and
sparseness of the adjacency matrix are inspired by neuroscience theories of
human brain organization. In addition, we propose two regularizers, namely
node-wise domain adversarial training (NodeDAT) and emotion-aware distribution
learning (EmotionDL), to better handle cross-subject EEG variations and noisy
labels, respectively. Extensive experiments on two public datasets, SEED and
SEED-IV, demonstrate the superior performance of our model over
state-of-the-art models in most experimental settings. Moreover, ablation
studies show that the proposed adjacency matrix and two regularizers contribute
consistent and significant gain to the performance of our RGNN model. Finally,
investigations of the neuronal activities reveal important brain regions and
inter-channel relations for EEG-based emotion recognition.
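The distance-informed adjacency matrix and the EmotionDL regularizer lend themselves to a short sketch. The following Python snippet is a minimal illustration only: the inverse-squared-distance kernel, the `delta` and `threshold` constants, and the uniform label-smoothing scheme are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def distance_adjacency(coords, delta=5.0, threshold=0.1):
    """Distance-informed adjacency matrix over EEG channels.

    coords: (N, 3) array of 3-D electrode positions.
    delta, threshold: illustrative constants controlling connection
    strength and sparseness (not calibrated values from the paper).
    """
    n = len(coords)
    adj = np.eye(n)  # self-connections
    for i in range(n):
        for j in range(i + 1, n):
            d2 = np.sum((coords[i] - coords[j]) ** 2)
            w = min(1.0, delta / d2)       # nearby channels couple more strongly
            adj[i, j] = adj[j, i] = w
    adj[adj < threshold] = 0.0             # sparsify: mostly local connections survive
    return adj

def soft_targets(labels, n_classes, eps=0.1):
    """EmotionDL-flavoured targets: spread eps of the probability mass
    over the other classes to absorb noisy labels (hypothetical scheme)."""
    t = np.full((len(labels), n_classes), eps / (n_classes - 1))
    t[np.arange(len(labels)), labels] = 1.0 - eps
    return t
```

Training against such soft targets with a KL-divergence loss, rather than hard one-hot labels, is one way a noisy-label regularizer like EmotionDL can be realized; NodeDAT would additionally pit a per-node domain classifier against the feature extractor to suppress cross-subject variation.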
Multi-modal Approach for Affective Computing
Throughout the past decade, many studies have classified human emotions using
only a single sensing modality such as face video, electroencephalogram (EEG),
electrocardiogram (ECG), galvanic skin response (GSR), etc. The results of
these studies are constrained by the limitations of these modalities, such as
the absence of physiological biomarkers in face-video analysis, the poor
spatial resolution of EEG, and the poor temporal resolution of GSR. Scant
research has been conducted to compare the merits of these modalities and
understand how to best use them individually and jointly. Using the
multi-modal AMIGOS dataset, this study compares the performance of human emotion
classification using multiple computational approaches applied to face videos
and various bio-sensing modalities. Using a novel method for compensating the
physiological baseline, we show an increase in the classification accuracy of
the various approaches that we use. Finally, we present a multi-modal
emotion-classification approach in the domain of affective computing research.
Comment: Published in the IEEE 40th International Engineering in Medicine and Biology Conference (EMBC), 2018
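The baseline-compensation idea can be sketched as follows. This is a hedged illustration assuming per-channel features computed over pre-stimulus rest segments, not the paper's exact method.

```python
import numpy as np

def compensate_baseline(trial_feats, baseline_segments):
    """Remove a subject's resting-state offset from trial features.

    trial_feats:       (channels, n_feats) features from the stimulus window.
    baseline_segments: (n_segments, channels, n_feats) features from
                       pre-stimulus rest segments (assumed layout).
    """
    baseline = baseline_segments.mean(axis=0)  # average resting-state feature per channel
    return trial_feats - baseline              # what remains is stimulus-driven change
```

Subtracting a per-subject baseline in this way removes physiological offsets that differ between people, which is one plausible reason such compensation improves classification accuracy across approaches.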
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and mining inter-channel and inter-frequency correlation, while a Recurrent Neural Network (RNN) is appended to integrate contextual information from the frame cube sequence. Experiments are carried out on a trial-level emotion recognition task on the DEAP benchmarking dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both the Valence and Arousal emotional dimensions.
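A minimal PyTorch sketch of such a CNN-RNN hybrid is shown below. The frame-cube layout (5 frequency bands on a 9x9 electrode grid), the layer sizes, and the choice of a GRU are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Hybrid CNN-RNN sketch: a CNN encodes each 2-D frame of the EEG
    frame cube (electrodes mapped to a spatial grid, one input plane per
    frequency band), and a GRU integrates the frame sequence over time."""

    def __init__(self, in_bands=5, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),  # inter-channel correlation
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),        # inter-frequency correlation
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                            # one 64-d vector per frame
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)         # contextual integration
        self.head = nn.Linear(hidden, n_classes)                # trial-level prediction

    def forward(self, x):
        # x: (batch, time, bands, grid_h, grid_w)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).flatten(1)  # (batch * time, 64)
        f = f.view(b, t, -1)                      # restore the frame sequence
        _, h = self.rnn(f)                        # final hidden state summarizes the trial
        return self.head(h[-1])
```

For example, `CRNN()(torch.randn(8, 10, 5, 9, 9))` yields trial-level logits for a batch of 8 ten-frame sequences; one such head per dimension (Valence, Arousal) matches the trial-level setup described in the abstract.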