Fusing ASR Outputs in Joint Training for Speech Emotion Recognition
Alongside acoustic information, linguistic features based on speech
transcripts have been proven useful in Speech Emotion Recognition (SER).
However, due to the scarcity of emotion labelled data and the difficulty of
recognizing emotional speech, it is hard to obtain reliable linguistic features
and models in this research area. In this paper, we propose to fuse Automatic
Speech Recognition (ASR) outputs into the pipeline for joint ASR-SER training. The
relationship between ASR and SER is understudied, and it is unclear which ASR
features benefit SER and how. By examining various ASR outputs and fusion
methods, our experiments show that in joint ASR-SER training, incorporating
both ASR hidden and text output using a hierarchical co-attention fusion
approach improves the SER performance the most. On the IEMOCAP corpus, our
approach achieves 63.4% weighted accuracy, which is close to the baseline
results achieved by combining ground-truth transcripts. In addition, we also
present novel word error rate analysis on IEMOCAP and layer-difference analysis
of the Wav2vec 2.0 model to better understand the relationship between ASR and
SER.

Comment: Accepted for ICASSP 202
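As a minimal sketch of the hierarchical co-attention fusion described above (not the authors' implementation), acoustic features can attend separately over ASR hidden states and ASR text embeddings, and the attended summaries can be pooled for emotion classification. The dimensions, head count, and module names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoAttentionFusionSER(nn.Module):
    """Sketch: acoustic features query ASR hidden states and ASR text
    embeddings via two attention blocks; the pooled streams are fused
    and classified into emotion categories."""

    def __init__(self, dim=768, n_heads=8, n_emotions=4):
        super().__init__()
        self.attn_hidden = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_text = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(dim * 3, n_emotions)

    def forward(self, acoustic, asr_hidden, asr_text_emb):
        # acoustic: (B, T_a, dim); asr_hidden: (B, T_h, dim); asr_text_emb: (B, T_t, dim)
        attended_hidden, _ = self.attn_hidden(acoustic, asr_hidden, asr_hidden)
        attended_text, _ = self.attn_text(acoustic, asr_text_emb, asr_text_emb)
        pooled = torch.cat(
            [acoustic.mean(1), attended_hidden.mean(1), attended_text.mean(1)], dim=-1
        )
        return self.classifier(pooled)
```

In joint training, the emotion loss from this head would be combined with the ASR objective (for example a CTC loss on the transcripts); the weighting between the two losses is a design choice not specified in the abstract.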
Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning
Various psychological factors affect how individuals express emotions. Yet,
when we collect data intended for use in building emotion recognition systems,
we often do so with paradigms designed solely to elicit emotional behavior.
Algorithms trained on such data
are unlikely to function outside of controlled environments because our
emotions naturally change as a function of these other factors. In this work,
we study how the multimodal expressions of emotion change when an individual is
under varying levels of stress. We hypothesize that stress produces modulations
that can hide the true underlying emotions of individuals and that we can make
emotion recognition algorithms more generalizable by controlling for variations
in stress. To this end, we use adversarial networks to decorrelate stress
modulations from emotion representations. We study how stress alters acoustic
and lexical emotional predictions, paying special attention to how modulations
due to stress affect the transferability of learned emotion recognition models
across domains. Our results show that stress is indeed encoded in trained
emotion classifiers and that this encoding varies across levels of emotions and
across the lexical and acoustic modalities. Our results also show that emotion
recognition models that control for stress during training have better
generalizability when applied to new domains, compared to models that do not
control for stress during training. We conclude that it is necessary to
consider the effect of extraneous psychological factors when building and
testing emotion recognition models.

Comment: 10 pages, ICMI 201
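One common way to realise the adversarial decorrelation described above is a gradient-reversal layer: an auxiliary stress classifier is trained on the shared representation while its reversed gradients push the encoder to discard stress information. The sketch below assumes this setup; the feature dimensions and the stress/emotion label spaces are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class StressAdversarialSER(nn.Module):
    def __init__(self, input_dim=40, feat_dim=128, n_emotions=4, n_stress_levels=3, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, feat_dim), nn.ReLU())
        self.emotion_head = nn.Linear(feat_dim, n_emotions)
        self.stress_head = nn.Linear(feat_dim, n_stress_levels)  # adversary
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        emotion_logits = self.emotion_head(z)
        # The adversary sees reversed gradients, so minimising its loss
        # drives the encoder to remove stress cues from z.
        stress_logits = self.stress_head(GradReverse.apply(z, self.lambd))
        return emotion_logits, stress_logits
```

Training then minimises the sum of the emotion loss and the (gradient-reversed) stress loss, and the same scheme can be applied to both the acoustic and lexical branches.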
Teach me with a Whisper: Enhancing Large Language Models for Analyzing Spoken Transcripts using Speech Embeddings
Speech data has rich acoustic and paralinguistic information with important
cues for understanding a speaker's tone, emotion, and intent, yet traditional
large language models such as BERT do not incorporate this information. There
has been an increased interest in multi-modal language models leveraging audio
and/or visual information and text. However, current multi-modal language
models require both text and audio/visual data streams during inference/test
time. In this work, we propose a methodology for training language models
leveraging spoken language audio data but without requiring the audio stream
during prediction time. This leads to an improved language model for analyzing
spoken transcripts while avoiding an audio processing overhead at test time. We
achieve this via an audio-language knowledge distillation framework, where we
transfer acoustic and paralinguistic information from a pre-trained speech
embedding (OpenAI Whisper) teacher model to help train a student language model
on an audio-text dataset. In our experiments, the student model achieves
consistent improvement over traditional language models on tasks analyzing
spoken transcripts.

Comment: 11 page
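A minimal sketch of the distillation objective implied by the abstract: a frozen Whisper-style teacher provides an utterance-level speech embedding, and the student language model is trained on text with an extra alignment term pulling its pooled representation toward that embedding. The loss weighting, projection layer, and cosine-similarity choice below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(task_logits, labels, student_pooled, teacher_speech_emb,
                      proj: nn.Module, alpha: float = 0.5):
    """Combine the downstream task loss with an embedding-alignment loss.

    task_logits:        (B, C) student predictions on the text task
    labels:             (B,)   gold task labels
    student_pooled:     (B, D_s) pooled student hidden state (e.g. [CLS])
    teacher_speech_emb: (B, D_t) frozen speech-teacher utterance embedding
    proj:               linear map from D_s to D_t
    """
    task_loss = F.cross_entropy(task_logits, labels)
    align_loss = 1.0 - F.cosine_similarity(
        proj(student_pooled), teacher_speech_emb, dim=-1
    ).mean()
    return alpha * task_loss + (1.0 - alpha) * align_loss
```

At test time only the student and the text input are needed, which is the point of the approach: the speech teacher is used solely during training.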
Deep learning techniques for computer audition
Automatically recognising audio signals plays a crucial role in the development of intelligent computer audition systems. In particular, audio signal classification, which aims to predict a label for an audio wave, has enabled many real-life applications. Considerable effort has been made to develop effective audio signal classification systems for the real world. However, several challenges in deep learning techniques for audio signal classification remain to be addressed. For instance, training a deep neural network (DNN) from scratch to extract high-level deep representations is time-consuming. Furthermore, DNNs remain poorly explained, which hinders building trust between humans and machines and developing realistic intelligent systems. Moreover, most DNNs are vulnerable to adversarial attacks, resulting in many misclassifications.
To deal with these challenges, this thesis proposes and presents a set of deep-learning-based approaches for audio signal classification. In particular, to tackle the challenge of extracting high-level deep representations, transfer learning frameworks that benefit from models pre-trained on large-scale image datasets are introduced to produce effective deep spectrum representations. Furthermore, attention mechanisms at both the frame level and the time-frequency level are proposed to explain the DNNs by estimating the contributions of each frame and each time-frequency bin to the predictions, respectively. Likewise, convolutional neural networks (CNNs) with an attention mechanism at the time-frequency level are extended to atrous CNNs with attention, aiming to explain the CNNs by visualising high-resolution attention tensors. Additionally, to interpret the CNNs evaluated on multi-device datasets, the atrous CNNs with attention are trained in conditional training frameworks. Moreover, to improve the robustness of the DNNs against adversarial attacks, models are trained in adversarial training frameworks. In addition, the transferability of adversarial attacks is enhanced by a lifelong learning framework. Finally, experiments conducted with various datasets demonstrate that the presented approaches effectively address these challenges.
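As an illustration of the frame-level attention mechanism mentioned in the thesis summary, the sketch below pools frame features with learned attention weights; the weights both aggregate the sequence and indicate how much each frame contributes to the prediction, which is the basis of the explanation. The feature and class dimensions are placeholders rather than the thesis's settings.

```python
import torch
import torch.nn as nn

class FrameAttentionPooling(nn.Module):
    """Attention pooling over frames: the softmax weights act both as the
    aggregation mechanism and as a per-frame contribution estimate."""

    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, frames):                             # frames: (B, T, feat_dim)
        weights = torch.softmax(self.attn(frames), dim=1)  # (B, T, 1), sums to 1 over time
        utterance = (weights * frames).sum(dim=1)          # (B, feat_dim)
        return self.classifier(utterance), weights.squeeze(-1)
```

The time-frequency-level variant applies the same idea to a 2-D attention map over spectrogram bins rather than a 1-D map over frames.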