Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis
Multimodal signals are more powerful than unimodal data for emotion
recognition since they can represent emotions more comprehensively. In this
paper, we introduce deep canonical correlation analysis (DCCA) to multimodal
emotion recognition. The basic idea behind DCCA is to transform each modality
separately and coordinate different modalities into a hyperspace by using
specified canonical correlation analysis constraints. We evaluate the
performance of DCCA on five multimodal datasets: the SEED, SEED-IV, SEED-V,
DEAP, and DREAMER datasets. Our experimental results demonstrate that DCCA
achieves state-of-the-art recognition accuracy rates on all five datasets:
94.58% on the SEED dataset, 87.45% on the SEED-IV dataset, 83.08% on the
SEED-V dataset, 84.33% and 85.62% for the two binary (arousal/valence)
classification tasks and 88.51% for the four-category classification task on
the DEAP dataset, and 88.99%, 90.57%, and 90.67% for the three binary
(arousal/valence/dominance) classification tasks on the DREAMER
dataset. We also compare the noise robustness of DCCA with that of existing
methods when adding various amounts of noise to the SEED-V dataset. The
experimental results indicate that DCCA has greater robustness. By visualizing
feature distributions with t-SNE and calculating the mutual information between
different modalities before and after using DCCA, we find that the features
transformed by DCCA from different modalities are more homogeneous and
discriminative across emotions.
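
The coordination step described above can be illustrated with a short sketch
of the CCA-based objective used to train the two modality networks. This is a
minimal PyTorch reimplementation under common DCCA conventions, not the
authors' code; the function names, the ridge term eps, and the output
dimension out_dim are illustrative assumptions.

```python
import torch

def _inv_sqrt(M: torch.Tensor) -> torch.Tensor:
    # Symmetric inverse square root via eigendecomposition.
    w, V = torch.linalg.eigh(M)
    return V @ torch.diag(w.clamp(min=1e-12).rsqrt()) @ V.t()

def cca_loss(H1: torch.Tensor, H2: torch.Tensor,
             out_dim: int, eps: float = 1e-4) -> torch.Tensor:
    """Negative sum of the top-out_dim canonical correlations between the
    outputs of the two modality-specific networks (batch x features each)."""
    n = H1.size(0)
    H1c = H1 - H1.mean(dim=0, keepdim=True)   # center each view
    H2c = H2 - H2.mean(dim=0, keepdim=True)

    # Covariance estimates, with a small ridge on the auto-covariances
    # so the matrix square roots stay well conditioned.
    S12 = H1c.t() @ H2c / (n - 1)
    S11 = H1c.t() @ H1c / (n - 1) + eps * torch.eye(H1.size(1), device=H1.device)
    S22 = H2c.t() @ H2c / (n - 1) + eps * torch.eye(H2.size(1), device=H2.device)

    # The singular values of S11^{-1/2} S12 S22^{-1/2} are the canonical
    # correlations; maximizing their sum coordinates the two modalities.
    T = _inv_sqrt(S11) @ S12 @ _inv_sqrt(S22)
    return -torch.linalg.svdvals(T)[:out_dim].sum()
```

In practice each branch (e.g., one network over EEG features and one over eye
movement features) is trained by backpropagating this loss through both
branches, and a classifier is then fit on the coordinated representations.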
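
The mutual-information comparison can be sketched similarly. The abstract
does not name an estimator, so the snippet below uses scikit-learn's
nearest-neighbor MI estimator averaged over matched feature dimensions;
avg_pairwise_mi and that averaging scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def avg_pairwise_mi(X: np.ndarray, Y: np.ndarray) -> float:
    """Average MI between matched feature dimensions of two modalities
    (samples x features, same shape). Comparing this value before and
    after the DCCA transform indicates how much more homogeneous the
    coordinated features are."""
    assert X.shape == Y.shape
    mis = [mutual_info_regression(X[:, [j]], Y[:, j])[0]
           for j in range(X.shape[1])]
    return float(np.mean(mis))
```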
Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition
Compared with the extensive studies on the motor brain-computer interface
(BCI), the recently emerging affective BCI presents distinct challenges,
since the brain functional connectivity networks underlying emotion are not
well investigated. Previous studies on emotion recognition based on
electroencephalography (EEG) signals mainly rely on single-channel-based
feature extraction methods. In this paper, we propose a novel emotion-relevant
critical subnetwork selection algorithm and investigate three EEG functional
connectivity network features: strength, clustering coefficient, and
eigenvector centrality. The discrimination ability of the EEG connectivity
features in emotion recognition is evaluated on three public emotion EEG
datasets: SEED, SEED-V, and DEAP. The strength feature achieves the best
classification performance and outperforms the state-of-the-art differential
entropy feature based on single-channel analysis. The experimental results
reveal that distinct functional connectivity patterns are exhibited for the
five emotions of disgust, fear, sadness, happiness, and neutrality.
Furthermore, we construct a multimodal emotion recognition model by combining
the functional connectivity features from EEG and the features from eye
movements or physiological signals using deep canonical correlation analysis.
The classification accuracies of multimodal emotion recognition (mean ±
standard deviation) are 95.08 ± 6.42% on the SEED dataset, 84.51 ± 5.11% on
the SEED-V dataset, and 85.34 ± 2.90% and 86.61 ± 3.76% for arousal and
valence on the DEAP dataset, respectively. The
results demonstrate the complementary representation properties of the EEG
connectivity features and the eye movement data. In addition, we find that
brain networks constructed with 18 channels achieve performance comparable to
that of the 62-channel networks in multimodal emotion recognition, enabling
easier setups for BCI systems in real scenarios.
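
The three connectivity network features can be sketched from a precomputed
symmetric channel-by-channel connectivity matrix. This is a minimal
illustration with networkx, not the authors' pipeline; the function name and
the absolute-correlation connectivity measure in the example are assumptions,
and the paper's emotion-relevant critical subnetwork selection step is
omitted.

```python
import numpy as np
import networkx as nx

def connectivity_features(conn: np.ndarray) -> np.ndarray:
    """Per-channel strength, clustering coefficient, and eigenvector
    centrality from a symmetric weighted connectivity matrix
    (channels x channels), concatenated into one feature vector."""
    conn = conn.copy()
    np.fill_diagonal(conn, 0.0)            # drop self-connections
    G = nx.from_numpy_array(conn)          # weighted undirected graph

    strength = conn.sum(axis=1)            # sum of edge weights per node
    clust = nx.clustering(G, weight="weight")
    clustering = np.array([clust[i] for i in G.nodes])
    cent = nx.eigenvector_centrality_numpy(G, weight="weight")
    centrality = np.array([cent[i] for i in G.nodes])
    return np.concatenate([strength, clustering, centrality])

# Example: features from the absolute Pearson correlation between channels
# of one EEG window (channels x samples); both names are illustrative.
eeg_window = np.random.randn(62, 1000)
features = connectivity_features(np.abs(np.corrcoef(eeg_window)))
```

Restricting conn to the edges retained by the paper's critical subnetwork
selection, before computing these features, would reproduce the intended
pipeline more closely.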