Despite recent progress in speech emotion recognition (SER), state-of-the-art
systems struggle to maintain their performance in cross-language settings. In
this paper, we propose a Multimodal Dual Attention Transformer (MDAT) model to
improve cross-language SER. Our model utilises
pre-trained models for multimodal feature extraction and is equipped with a
dual attention mechanism, comprising graph attention and co-attention, to
capture complex dependencies across modalities and achieve improved
cross-language SER results with minimal target-language data. In addition, our
model employs a transformer encoder layer to learn high-level feature
representations that improve emotion classification accuracy. In this way, MDAT
refines feature representations at multiple stages and provides emotionally
salient features to the classification layer. This novel approach preserves
modality-specific emotional information while enhancing cross-modality and
cross-language interactions. We assess our model's
performance on four publicly available SER datasets and demonstrate its
superiority over recent approaches and baseline models.
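For concreteness, the sketch below illustrates one way a dual attention block of this kind (graph attention over modality-specific feature nodes plus cross-modal co-attention) could feed a transformer encoder and a classification layer. It is a minimal PyTorch illustration under assumed names and dimensions (GraphAttention, CoAttention, DualAttentionSER, 256-dimensional features, four emotion classes), not the authors' MDAT implementation.

```python
# Illustrative sketch only: module names, dimensions, and wiring are assumptions,
# not the authors' released MDAT implementation.
import torch
import torch.nn as nn


class GraphAttention(nn.Module):
    """Single-head graph attention over a fully connected graph of feature nodes."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, x):                      # x: (batch, nodes, dim)
        b, n, d = x.shape
        h = self.proj(x)
        # Pairwise concatenation of node features -> attention logits.
        hi = h.unsqueeze(2).expand(b, n, n, d)
        hj = h.unsqueeze(1).expand(b, n, n, d)
        e = torch.tanh(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = torch.softmax(e, dim=-1)       # (batch, nodes, nodes)
        return torch.relu(alpha @ h)           # aggregated node features


class CoAttention(nn.Module):
    """Cross-modal co-attention: each modality attends to the other."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.a2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, text):
        a, _ = self.a2t(audio, text, text)     # audio queries attend to text
        t, _ = self.t2a(text, audio, audio)    # text queries attend to audio
        return a, t


class DualAttentionSER(nn.Module):
    """Graph attention + co-attention, then a transformer encoder and classifier."""

    def __init__(self, dim=256, num_classes=4):
        super().__init__()
        self.graph_audio = GraphAttention(dim)
        self.graph_text = GraphAttention(dim)
        self.co_attn = CoAttention(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio_feats, text_feats):
        # Modality-specific refinement with graph attention.
        a = self.graph_audio(audio_feats)
        t = self.graph_text(text_feats)
        # Cross-modal refinement with co-attention.
        a, t = self.co_attn(a, t)
        # Joint high-level representation via a transformer encoder.
        fused = self.encoder(torch.cat([a, t], dim=1))
        return self.classifier(fused.mean(dim=1))  # pooled emotion logits


if __name__ == "__main__":
    model = DualAttentionSER()
    audio = torch.randn(2, 50, 256)   # e.g. pre-trained speech embeddings
    text = torch.randn(2, 30, 256)    # e.g. pre-trained text embeddings
    print(model(audio, text).shape)   # torch.Size([2, 4])
```

In this sketch, graph attention refines each modality independently before co-attention exchanges information across modalities, mirroring the staged refinement of modality-specific and cross-modal features described above.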