Emotion Invariant Speaker Embeddings for Speaker Identification with Emotional Speech
The emotional state of a speaker is found to have a significant effect on speech
production, causing speech to deviate from that produced in a neutral state. This
makes identifying speakers under different emotions a challenging task, as
speaker models are generally trained on neutral speech. In this work, we
propose to overcome this problem by creating emotion-invariant speaker
embeddings. We learn an extractor network that maps test embeddings with
different emotions, obtained using an i-vector based system, to an
emotion-invariant space. The resulting test embeddings thus become emotion
invariant and thereby compensate for the mismatch between the various emotional
states. The studies are conducted using four emotion classes from the IEMOCAP
database. We obtain an absolute improvement of 2.6% in accuracy for speaker
identification using emotion-invariant speaker embeddings over an average
speaker model based framework with different emotions.
Comment: Accepted for publication in APSIPA ASC 202
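The core idea above is a learned mapping from emotion-coloured embeddings to an emotion-invariant space. A minimal sketch of that idea follows, with a linear least-squares map standing in for the learned extractor network; the embedding dimensionality, the synthetic emotion shift, and the linear form are all illustrative assumptions, not the authors' actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # illustrative embedding dimensionality (assumption)

# Toy training pairs: an "emotional" embedding for each utterance and the
# same speaker's "neutral" embedding it should be mapped onto.
n = 500
neutral = rng.normal(size=(n, DIM))
shift = rng.normal(scale=0.3, size=(DIM, DIM)) / np.sqrt(DIM)  # synthetic emotion effect
emotional = neutral + neutral @ shift + 0.1 * rng.normal(size=(n, DIM))

# Linear "extractor" W fitted so that emotional @ W ~ neutral
# (a least-squares stand-in for the paper's extractor network).
W, *_ = np.linalg.lstsq(emotional, neutral, rcond=None)

# At test time, map unseen emotional embeddings into the invariant space.
test_neutral = rng.normal(size=(100, DIM))
test_emotional = (test_neutral + test_neutral @ shift
                  + 0.1 * rng.normal(size=(100, DIM)))
mapped = test_emotional @ W

# Distance to the neutral targets before and after mapping: the mapped
# embeddings sit closer, i.e. the emotion-induced mismatch is reduced.
before = np.mean(np.linalg.norm(test_emotional - test_neutral, axis=1))
after = np.mean(np.linalg.norm(mapped - test_neutral, axis=1))
```

In this toy setup the mapped embeddings land much closer to the neutral targets, which is the same compensation effect the abstract measures as an identification-accuracy gain.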
Towards Relevance and Sequence Modeling in Language Recognition
The task of automatic language identification (LID) involving multiple
dialects of the same language family in the presence of noise is a challenging
problem. In these scenarios, the identity of the language/dialect may be
reliably present only in parts of the temporal sequence of the speech signal.
Conventional approaches to LID (and to speaker recognition) ignore sequence
information by extracting a long-term statistical summary of the recording,
assuming independence of the feature frames. In this paper, we propose a
neural network framework that utilizes short-sequence information in
language recognition. In particular, a new model is proposed for incorporating
relevance in language recognition, where parts of the speech data are weighted
more heavily based on their relevance to the recognition task. This relevance
weighting is achieved using the bidirectional long short-term memory (BLSTM)
network with attention modeling. We explore two approaches: the first
aggregates segment-level i-vector/x-vector representations in the neural
model, while the second models the acoustic features directly in an
end-to-end neural model. Experiments are performed on the language
recognition task of the NIST LRE 2017 Challenge using clean, noisy, and
multi-speaker speech data, as well as on the RATS language recognition corpus.
In these experiments on the noisy LRE tasks as well as the RATS dataset, the
proposed approach yields significant improvements over conventional
i-vector/x-vector based language recognition approaches, as well as over
other previous models incorporating sequence information.
Comment: https://github.com/iiscleap/lre-relevance-weighting Accepted to IEEE
Transactions on Audio, Speech and Language Processing
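The first approach above amounts to attention pooling over segment-level representations: each segment embedding receives a scalar relevance score, the scores are softmax-normalized, and the utterance representation is the weighted sum. A minimal sketch of that pooling step follows; the dot-product scoring vector and the dimensions are illustrative assumptions (the paper's relevance weights come from a BLSTM with attention, and the weights here are random rather than trained).

```python
import numpy as np

rng = np.random.default_rng(1)

SEG_DIM = 32  # illustrative segment-embedding size (assumption)
T = 10        # number of short segments in the utterance

segments = rng.normal(size=(T, SEG_DIM))  # e.g. segment i-vectors/x-vectors
w = rng.normal(size=SEG_DIM)              # attention scoring vector (would be learned)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = segments @ w        # one scalar relevance score per segment
alpha = softmax(scores)      # relevance weights, non-negative and summing to 1
utterance = alpha @ segments # weighted pooling -> utterance-level embedding

print(utterance.shape)  # (32,)
```

Segments the scorer deems relevant dominate the pooled representation, unlike the conventional long-term statistical summary, which weights all frames equally.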