Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is an emerging hot topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) extracts task-related features and mines inter-channel and inter-frequency correlations, while a concatenated Recurrent Neural Network (RNN) integrates contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task on the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both emotional dimensions, Valence and Arousal.
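The CNN-then-RNN pipeline this abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the paper's architecture: all shapes, layer sizes, and weights are assumed for the example. Each "frame cube" (channels and frequency components arranged as a 2D map) passes through a convolution to extract inter-channel/inter-frequency features, and a vanilla RNN then fuses the frame sequence into trial-level class scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Valid 2D cross-correlation of feature map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cnn_rnn_trial(frames, kernel, W_xh, W_hh, W_hy):
    """frames: (T, H, W) sequence of frame cubes -> trial-level logits."""
    h = np.zeros(W_hh.shape[0])
    for frame in frames:
        # CNN stage: per-frame feature extraction with ReLU
        feat = np.maximum(conv2d_valid(frame, kernel), 0).ravel()
        # RNN stage: integrate contextual information across frames
        h = np.tanh(W_xh @ feat + W_hh @ h)
    return W_hy @ h  # scores for the trial (e.g. high/low valence)

T, H, W = 10, 9, 9            # 10 frames, 9x9 channel/frequency map (assumed)
kernel = rng.standard_normal((3, 3)) * 0.1
feat_dim = (H - 2) * (W - 2)  # output size of the 3x3 valid convolution
hidden = 16
W_xh = rng.standard_normal((hidden, feat_dim)) * 0.05
W_hh = rng.standard_normal((hidden, hidden)) * 0.05
W_hy = rng.standard_normal((2, hidden)) * 0.1

frames = rng.standard_normal((T, H, W))
logits = cnn_rnn_trial(frames, kernel, W_xh, W_hh, W_hy)
print(logits.shape)  # (2,)
```

The point of the sketch is the division of labour: the convolution sees one frame cube at a time, while the recurrent state `h` is the only place where information from earlier frames accumulates.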
EEG-Based Emotion Recognition Using Regularized Graph Neural Networks
Electroencephalography (EEG) measures the neuronal activities in different
brain regions via electrodes. Many existing studies on EEG-based emotion
recognition do not fully exploit the topology of EEG channels. In this paper,
we propose a regularized graph neural network (RGNN) for EEG-based emotion
recognition. RGNN considers the biological topology among different brain
regions to capture both local and global relations among different EEG
channels. Specifically, we model the inter-channel relations in EEG signals via
an adjacency matrix in a graph neural network where the connection and
sparseness of the adjacency matrix are inspired by neuroscience theories of
human brain organization. In addition, we propose two regularizers, namely
node-wise domain adversarial training (NodeDAT) and emotion-aware distribution
learning (EmotionDL), to better handle cross-subject EEG variations and noisy
labels, respectively. Extensive experiments on two public datasets, SEED and
SEED-IV, demonstrate the superior performance of our model over
state-of-the-art models in most experimental settings. Moreover, ablation
studies show that the proposed adjacency matrix and two regularizers contribute
consistent and significant gain to the performance of our RGNN model. Finally,
investigations on the neuronal activities reveal important brain regions and
inter-channel relations for EEG-based emotion recognition.
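The core mechanism this abstract describes, inter-channel relations encoded in an adjacency matrix and propagated by a graph neural network, can be sketched with a single symmetrically normalized graph-convolution step, H' = D^(-1/2)(A + I)D^(-1/2)HW. The electrode coordinates, distance threshold, and dimensions below are illustrative assumptions standing in for the paper's neuroscience-inspired connectivity, not its actual values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_channels, in_dim, out_dim = 8, 5, 3
coords = rng.random((n_channels, 2))  # assumed 2D electrode positions

# Sparse adjacency: connect channels closer than a distance threshold,
# a stand-in for biologically inspired local connectivity.
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
A = (dist < 0.5).astype(float)
np.fill_diagonal(A, 0.0)

# Symmetric normalization with self-loops: D^(-1/2) (A + I) D^(-1/2)
A_hat = A + np.eye(n_channels)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H = rng.standard_normal((n_channels, in_dim))  # per-channel EEG features
W = rng.standard_normal((in_dim, out_dim)) * 0.1
H_out = np.maximum(A_norm @ H @ W, 0)          # one ReLU graph-conv layer
print(H_out.shape)  # (8, 3)
```

Because `A_norm` mixes each channel's features with those of its neighbours before the shared weight `W` is applied, the layer captures local relations through the sparse edges and global relations through repeated propagation when layers are stacked.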
Continuous Capsule Network Method for Improving Electroencephalogram-Based Emotion Recognition
Although the Capsule Network method can characterize spatial information from Electroencephalogram (EEG) signals, its convolution process can cause a loss of spatial information. This study therefore applied the Continuous Capsule Network method to overcome problems associated with emotion recognition based on EEG signals, using an optimal architecture with (1) values of 64, 128, 256, and 64 for the 1st, 2nd, 3rd, and 4th Continuous Convolution layers, respectively, and (2) kernel sizes of 2×2×4, 2×2×64, and 2×2×128 for the 1st, 2nd, and 3rd Continuous Convolution layers and 1×1×256 for the 4th. Several methods were also used to support the Continuous Capsule Network process, such as the Differential Entropy and 3D Cube methods for feature extraction and representation, respectively; both were chosen for their ability to characterize spatial and low-frequency information in EEG signals. On the DEAP dataset, the proposed methods achieved accuracies of 91.35%, 93.67%, and 92.82% for the four categories of emotions, the two categories of arousal, and the two categories of valence, respectively. On the DREAMER dataset, they achieved accuracies of 94.23%, 96.66%, and 96.05%, and on the AMIGOS dataset, accuracies of 96.20%, 97.96%, and 97.32%, for the same three tasks. Doi: 10.28991/ESJ-2023-07-01-09
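The Differential Entropy (DE) feature mentioned above has a convenient closed form for band-limited EEG segments that are approximately Gaussian: h = 0.5·ln(2πe·σ²), i.e. a log-variance feature per frequency band. The sketch below computes DE over common EEG bands; the sampling rate, band edges, and FFT-mask band split are illustrative assumptions (a real pipeline would typically use a proper band-pass filter).

```python
import numpy as np

def differential_entropy(x):
    """DE of a (near-)Gaussian signal segment: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(2)
fs = 128                            # assumed sampling rate (Hz)
segment = rng.standard_normal(fs)   # one 1-second single-channel segment

# Common EEG band edges (Hz), assumed for illustration.
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
spectrum = np.fft.rfft(segment)
freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)

de_features = {}
for name, (lo, hi) in bands.items():
    # Crude band isolation: zero out all FFT bins outside the band.
    masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
    band_signal = np.fft.irfft(masked, n=len(segment))
    de_features[name] = differential_entropy(band_signal)

print(de_features)
```

A per-channel, per-band DE vector like this is what gets stacked into the 3D cube representation that the capsule layers then consume.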
Spatial-temporal Transformers for EEG Emotion Recognition
Electroencephalography (EEG) is a popular and effective tool for emotion
recognition. However, the propagation mechanisms of EEG in the human brain and
its intrinsic correlation with emotions are still obscure to researchers. This
work proposes four variant transformer frameworks (spatial attention, temporal
attention, sequential spatial-temporal attention, and simultaneous
spatial-temporal attention) for EEG emotion recognition to explore the
relationship between emotion and spatial-temporal EEG features. Specifically,
spatial attention and temporal attention learn the topological structure
information and the time-varying EEG characteristics for emotion recognition,
respectively. Sequential spatial-temporal attention performs spatial attention
within each one-second segment and then temporal attention within one sample,
to explore the degree to which emotional stimulation influences the EEG
signals of different electrodes in the same temporal segment. Simultaneous
spatial-temporal attention, in which spatial and temporal attention are
performed at the same time, models the relationship between different spatial
features in different time segments. The experimental results demonstrate that
simultaneous spatial-temporal attention yields the best emotion recognition
accuracy among the design choices, indicating that modeling the correlation
between the spatial and temporal features of EEG signals is significant for
emotion recognition.
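The spatial vs. temporal contrast above can be made concrete with a single scaled dot-product self-attention routine applied along two different axes: across electrodes within a segment (spatial) and across segments per electrode (temporal). This is a hypothetical single-head sketch; all dimensions are assumed, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def attention(X):
    """Single-head scaled dot-product self-attention over rows of X (n, d)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

n_segments, n_electrodes, d = 6, 32, 8
eeg = rng.standard_normal((n_segments, n_electrodes, d))

# Spatial attention: mix information across electrodes within each segment.
spatial = np.stack([attention(seg) for seg in eeg])

# Temporal attention: mix information across segments for each electrode.
temporal = np.stack([attention(eeg[:, e]) for e in range(n_electrodes)], axis=1)

print(spatial.shape, temporal.shape)  # (6, 32, 8) (6, 32, 8)
```

Sequential spatial-temporal attention would chain the two (`temporal` applied to the output of `spatial`), while the simultaneous variant attends over the flattened segment-electrode axis so spatial and temporal mixing happen in one step.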
The challenges of emotion recognition methods based on electroencephalogram signals: a literature review
Electroencephalogram (EEG) signals offer several advantages for emotion recognition. Still, success is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three important points that affect the success of emotion recognition, packaged as research questions: i) What factors need to be considered to generate and distribute EEG data? ii) How can EEG signals be processed with consideration of differences in participant characteristics? and iii) How do the characteristics of EEG signals manifest among their features for emotion recognition? The results indicate several important challenges for further study in EEG-based emotion recognition research: i) determining robust methods for imbalanced EEG data, ii) determining appropriate smoothing methods to eliminate disturbances in the baseline signals, iii) determining the best baseline-reduction methods to reduce the effect of participant differences on the EEG signals, and iv) determining a robust Capsule Network architecture that overcomes the loss of knowledge information and applying it to more diverse datasets.