Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition
Despite the emerging importance of Speech Emotion Recognition (SER), the
state-of-the-art accuracy is quite low and needs improvement to make commercial
applications of SER viable. A key underlying reason for the low accuracy is the
scarcity of emotion datasets, which is a challenge for developing any robust
machine learning model in general. In this paper, we propose a solution to this
problem: a multi-task learning framework that uses auxiliary tasks for which
data is abundantly available. We show that utilisation of this additional data
can improve the primary task of SER for which only limited labelled data is
available. In particular, we use gender identification and speaker recognition
as auxiliary tasks, which allow the use of very large datasets, e.g., speaker
classification datasets. To maximise the benefit of multi-task learning, we
further use an adversarial autoencoder (AAE) within our framework, which has a
strong capability to learn powerful and discriminative features. Furthermore,
the unsupervised AAE in combination with the supervised classification networks
enables semi-supervised learning which incorporates a discriminative component
in the AAE unsupervised training pipeline. This semi-supervised learning
essentially helps to improve generalisation of our framework and thus leads to
improvements in SER performance. The proposed model is rigorously evaluated for
categorical and dimensional emotion recognition, as well as in cross-corpus
scenarios. Experimental
results demonstrate that the proposed model achieves state-of-the-art
performance on two publicly available datasets.
Comment: Accepted in IEEE Transactions on Affective Computing
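
To make the described framework concrete, below is a minimal sketch in PyTorch. It is illustrative only, not the authors' implementation: the layer sizes, class counts, and the Gaussian latent prior are assumptions, and the actual system operates on acoustic feature representations of utterances rather than the fixed-size vectors used here.

# Minimal sketch (not the authors' code) of a multi-task semi-supervised
# adversarial autoencoder: a shared encoder feeds a reconstruction decoder,
# an adversarial discriminator that pushes latent codes toward a prior,
# and three classifier heads (emotion = primary task; gender and speaker
# identity = auxiliary tasks with abundant labelled data).
import torch
import torch.nn as nn

LATENT_DIM = 128    # illustrative sizes, not taken from the paper
FEATURE_DIM = 40    # e.g. a 40-dim acoustic feature vector per utterance

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, FEATURE_DIM),
        )

    def forward(self, z):
        return self.net(z)

def head(n_classes):
    # Small classifier head on top of the shared latent code.
    return nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                         nn.Linear(64, n_classes))

encoder, decoder = Encoder(), Decoder()
discriminator = head(1)      # prior sample vs. encoded latent
emotion_head = head(4)       # primary task (4 emotion classes, assumed)
gender_head = head(2)        # auxiliary task: gender identification
speaker_head = head(1000)    # auxiliary task: speaker recognition (assumed count)

recon_loss = nn.MSELoss()
ce = nn.CrossEntropyLoss()
bce = nn.BCEWithLogitsLoss()

def training_step(x, y_emotion, y_gender, y_speaker, has_emotion_label):
    # One semi-supervised step: every sample contributes to the
    # unsupervised AAE losses; only labelled samples contribute to the
    # emotion head (emotion labels are scarce, auxiliary labels abundant).
    z = encoder(x)

    # 1) Unsupervised reconstruction (all samples).
    loss = recon_loss(decoder(z), x)

    # 2) Adversarial regularisation: the encoder tries to make its codes
    #    indistinguishable from samples drawn from a Gaussian prior.
    prior = torch.randn_like(z)
    d_real = discriminator(prior)
    d_fake = discriminator(z.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    g_fake = discriminator(z)
    loss = loss + bce(g_fake, torch.ones_like(g_fake))

    # 3) Supervised multi-task heads; the emotion loss is masked so that
    #    utterances without emotion labels still train the shared encoder.
    if has_emotion_label.any():
        m = has_emotion_label
        loss = loss + ce(emotion_head(z[m]), y_emotion[m])
    loss = loss + ce(gender_head(z), y_gender) + ce(speaker_head(z), y_speaker)
    return loss, d_loss

The point the sketch illustrates is that all output branches share one encoder: the abundant auxiliary labels and the unsupervised AAE losses shape the shared latent space, so the scarce emotion labels are spent on a representation that is already discriminative.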