1,158 research outputs found
Nonparallel Emotional Speech Conversion
We propose a nonparallel data-driven emotional speech conversion method. It
enables the transfer of emotion-related characteristics of a speech signal
while preserving the speaker's identity and linguistic content. Most existing
approaches require parallel data and time alignment, which are not available in
most real applications. We achieve nonparallel training based on an
unsupervised style transfer technique, which learns a translation model between
two distributions instead of a deterministic one-to-one mapping between paired
examples. The conversion model consists of an encoder and a decoder for each
emotion domain. We assume that the speech signal can be decomposed into an
emotion-invariant content code and an emotion-related style code in latent
space. Emotion conversion is performed by extracting and recombining the
content code of the source speech and the style code of the target emotion. We
tested our method on a nonparallel corpus with four emotions. Both subjective
and objective evaluations show the effectiveness of our approach.
Comment: Published in INTERSPEECH 2019, 5 pages, 6 figures. Simulation
available at http://www.jian-gao.org/emoga
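As a rough sketch of the conversion mechanism described above, the following PyTorch-style code recombines an emotion-invariant content code with an emotion-related style code; the GRU encoders, dimensions, and spectrogram-like input are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class EmotionAutoencoder(nn.Module):
    """One encoder/decoder pair per emotion domain (hypothetical layout)."""
    def __init__(self, feat_dim=80, content_dim=64, style_dim=8):
        super().__init__()
        # Content encoder: frame-level codes, intended to be emotion-invariant.
        self.content_enc = nn.GRU(feat_dim, content_dim, batch_first=True)
        # Style encoder: one emotion-related code per utterance.
        self.style_enc = nn.Linear(feat_dim, style_dim)
        self.dec = nn.GRU(content_dim + style_dim, feat_dim, batch_first=True)

    def encode(self, x):                          # x: (B, T, feat_dim)
        content, _ = self.content_enc(x)          # (B, T, content_dim)
        style = self.style_enc(x.mean(dim=1))     # (B, style_dim), time-pooled
        return content, style

    def decode(self, content, style):
        # Broadcast the utterance-level style code across all frames.
        style_seq = style.unsqueeze(1).expand(-1, content.size(1), -1)
        out, _ = self.dec(torch.cat([content, style_seq], dim=-1))
        return out                                # (B, T, feat_dim)

def convert(src_model, tgt_model, src_utt, tgt_emotion_utt):
    """Extract content from the source, style from the target emotion."""
    content, _ = src_model.encode(src_utt)
    _, style = tgt_model.encode(tgt_emotion_utt)
    return tgt_model.decode(content, style)
```

Conversion in this sketch never needs paired, time-aligned utterances: any utterance from the target emotion domain can supply the style code.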
Semi-supervised Multi-modal Emotion Recognition with Cross-Modal Distribution Matching
Automatic emotion recognition is an active research topic with a wide range of
applications. Due to the high cost of manual annotation and inevitable label
ambiguity, emotion recognition datasets are limited in both scale and quality.
Therefore, one of the key challenges is how to build effective models with
limited data resources. Previous works have explored different approaches to
tackle this challenge, including data enhancement, transfer learning, and
semi-supervised learning. However, these existing approaches suffer from
weaknesses such as training instability, large performance loss during
transfer, or only marginal improvement.
In this work, we propose a novel semi-supervised multi-modal emotion
recognition model based on cross-modal distribution matching, which leverages
abundant unlabeled data to enhance model training under the assumption that
the underlying emotional state is consistent at the utterance level across
modalities.
We conduct extensive experiments to evaluate the proposed model on two
benchmark datasets, IEMOCAP and MELD. The experimental results show that the
proposed semi-supervised learning model can effectively utilize unlabeled data
and combine multiple modalities to boost emotion recognition performance,
outperforming other state-of-the-art approaches under the same conditions.
The proposed model also achieves competitive performance compared with
existing approaches that take advantage of additional auxiliary information
such as speaker and interaction context.
Comment: 10 pages, 5 figures, to be published in ACM Multimedia 2020
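The cross-modal consistency assumption can be illustrated with a hedged loss sketch in PyTorch: on unlabeled utterances, each modality predicts an emotion distribution, and a symmetric KL term pulls the distributions together. The modality names, the symmetric-KL form, and the weighting are assumptions for illustration, not the paper's exact objective:

```python
import torch.nn.functional as F

def distribution_matching_loss(logits_audio, logits_text):
    """Symmetric KL between per-modality emotion distributions."""
    log_p_a = F.log_softmax(logits_audio, dim=-1)
    log_p_t = F.log_softmax(logits_text, dim=-1)
    kl_at = F.kl_div(log_p_a, log_p_t.exp(), reduction="batchmean")
    kl_ta = F.kl_div(log_p_t, log_p_a.exp(), reduction="batchmean")
    return 0.5 * (kl_at + kl_ta)

def semi_supervised_loss(labeled, unlabeled, lam=0.1):
    """Supervised cross-entropy on labeled data plus cross-modal
    consistency on unlabeled data (lam is a hypothetical weight)."""
    logits_a, logits_t, y = labeled      # utterance-level logits and labels
    sup = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_t, y)
    u_logits_a, u_logits_t = unlabeled   # no labels needed here
    return sup + lam * distribution_matching_loss(u_logits_a, u_logits_t)
```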
Learning deep physiological models of affect
Feature extraction and feature selection are crucial
phases in the process of affective modeling. Both, however,
incorporate substantial limitations that hinder the development
of reliable and accurate models of affect. For the purpose of
modeling affect manifested through physiology, this paper builds
on recent advances in machine learning with deep learning
(DL) approaches. The efficiency of DL algorithms that train
artificial neural network models is tested and compared against
standard feature extraction and selection approaches followed
in the literature. Results on a game data corpus — containing
players’ physiological signals (i.e. skin conductance and blood
volume pulse) and subjective self-reports of affect — reveal that
DL outperforms manual ad-hoc feature extraction as it yields
significantly more accurate affective models. Moreover, it appears that DL
matches and, for several of the scenarios examined, even outperforms affective
models that are boosted by automatic feature selection. As the DL method is
generic and applicable to any affective modeling task, the key findings of the
paper suggest that ad-hoc feature extraction, and to a lesser degree feature
selection, could be bypassed.
The authors would like to thank Tobias Mahlmann for his
work on the development and administration of the cluster
used to run the experiments. Special thanks for proofreading
goes to Yana Knight. Thanks also go to the Theano development
team, to all participants in our experiments, and to
Ubisoft, NSERC and Canada Research Chairs for funding.
This work is funded, in part, by the ILearnRW (project no. 318803) and the
C2Learn (project no. 318480) FP7 ICT EU
projects.
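To make the paper's core contrast concrete, below is a hedged sketch of a small 1D CNN that learns features directly from raw physiological signal windows, shown next to the kind of ad-hoc statistics it is meant to replace. Treating skin conductance and blood volume pulse as two input channels, along with all layer sizes, is an assumption for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PhysioCNN(nn.Module):
    """Learns features end-to-end from raw physiological signal windows."""
    def __init__(self, n_channels=2, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))        # pool over the time axis
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (B, 2, T) raw SC + BVP
        return self.classifier(self.features(x).squeeze(-1))

def manual_features(x):
    """The kind of ad-hoc statistics a learned model would replace."""
    return torch.cat([x.mean(dim=-1), x.std(dim=-1)], dim=-1)  # (B, 4)
```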
- …