Semi-Supervised Speech Emotion Recognition with Ladder Networks
Speech emotion recognition (SER) systems find applications in various fields
such as healthcare, education, and security and defense. A major drawback of
these systems is their lack of generalization across different conditions. This
problem can be solved by training models on large amounts of labeled data from
the target domain, which is expensive and time-consuming. Another approach is
to increase the generalization of the models. An effective way to achieve this
goal is by regularizing the models through multitask learning (MTL), where
auxiliary tasks are learned along with the primary task. These methods often
require labeled data for the auxiliary tasks (e.g., gender, speaker identity,
age, or other emotional descriptors), which is costly to collect. This study
proposes the use of ladder networks for emotion
recognition, which utilizes an unsupervised auxiliary task. The primary task is
a regression problem to predict emotional attributes. The auxiliary task is the
reconstruction of intermediate feature representations using a denoising
autoencoder. This auxiliary task does not require labels so it is possible to
train the framework in a semi-supervised fashion with abundant unlabeled data
from the target domain. This study shows that the proposed approach creates a
powerful framework for SER, outperforming fully supervised single-task
learning (STL) and MTL baselines. The approach is
implemented with several acoustic features, showing that ladder networks
generalize significantly better in cross-corpus settings. Compared to the STL
baselines, the proposed approach achieves relative gains in concordance
correlation coefficient (CCC) between 3.0% and 3.5% for within-corpus
evaluations, and between 16.1% and 74.1% for cross-corpus evaluations,
highlighting the power of the architecture.
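To make the ladder-network setup concrete, the following minimal PyTorch sketch pairs an attribute-regression head with a denoising reconstruction objective, so unlabeled batches can contribute to training. It is a simplified illustration, not the paper's implementation: the layer sizes, noise level, loss weight, and three-attribute output are assumptions, MSE stands in for the CCC-based objective, and the lateral skip connections of the full ladder architecture are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LadderSER(nn.Module):
        """Noisy encoder feeds both the regression head and a decoder that
        reconstructs the clean activations of each layer (denoising task)."""
        def __init__(self, in_dim=128, hidden=(256, 128), out_dim=3, noise_std=0.3):
            super().__init__()
            dims = (in_dim,) + tuple(hidden)
            self.enc = nn.ModuleList(
                nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1))
            self.dec = nn.ModuleList(
                nn.Linear(dims[i + 1], dims[i]) for i in reversed(range(len(dims) - 1)))
            self.head = nn.Linear(dims[-1], out_dim)  # e.g., arousal/valence/dominance
            self.noise_std = noise_std

        def encode(self, x, noisy):
            acts, h = [x], x
            for layer in self.enc:
                if noisy:
                    h = h + self.noise_std * torch.randn_like(h)
                h = torch.relu(layer(h))
                acts.append(h)
            return acts

        def forward(self, x):
            clean = self.encode(x, noisy=False)  # reconstruction targets
            noisy = self.encode(x, noisy=True)   # corrupted path
            h, recon = noisy[-1], 0.0
            for i, layer in enumerate(self.dec):
                h = layer(h)
                if i < len(self.dec) - 1:
                    h = torch.relu(h)
                recon = recon + F.mse_loss(h, clean[-(i + 2)])
            return self.head(noisy[-1]), recon

    # Labeled batches contribute both losses; unlabeled batches only the
    # reconstruction loss, which is what enables semi-supervised training.
    model = LadderSER()
    x_lab, y_lab = torch.randn(8, 128), torch.randn(8, 3)
    x_unlab = torch.randn(32, 128)
    pred, rec_l = model(x_lab)
    _, rec_u = model(x_unlab)
    loss = F.mse_loss(pred, y_lab) + 0.1 * (rec_l + rec_u)
    loss.backward()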
End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent Neural Models
Speech activity detection (SAD) plays an important role in current speech
processing systems, including automatic speech recognition (ASR). SAD is
particularly difficult in environments with acoustic noise. A practical
solution is to incorporate visual information, increasing the robustness of the
SAD approach. An audiovisual system has the advantage of being robust to
different speech modes (e.g., whisper speech) or background noise. Recent
advances in audiovisual speech processing using deep learning have opened
opportunities to capture in a principled way the temporal relationships between
acoustic and visual features. This study explores this idea, proposing a
bimodal recurrent neural network (BRNN) framework for SAD. The approach
models the temporal dynamics of the sequential audiovisual data, improving the
accuracy and robustness of the proposed SAD system. Instead of relying on
hand-crafted features, the study investigates an end-to-end training approach
where acoustic and visual features are learned directly from the raw data
during training. The experimental evaluation considers a large audiovisual
corpus with over 60.8 hours of recordings, collected from 105 speakers. The
results demonstrate that the proposed framework leads to absolute improvements
of up to 1.2% under practical scenarios over an audio-only voice activity
detection (VAD) baseline implemented with a deep neural network (DNN). The
proposed approach achieves
a 92.7% F1-score when evaluated using the sensors of a portable tablet in a
noisy acoustic environment, only 1.0% lower than the performance obtained
under ideal conditions (e.g., clean speech captured with a high-definition
camera and a close-talking microphone).
Comment: Submitted to Speech Communication
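As a concrete reading of the bimodal recurrent design, here is a minimal PyTorch sketch with one LSTM per modality and a fusion LSTM over the concatenated streams, producing per-frame speech/non-speech logits. All dimensions and the single-layer fusion topology are illustrative assumptions, not the paper's exact architecture, and the end-to-end front ends that learn features from raw audio and video are replaced here by generic feature tensors.

    import torch
    import torch.nn as nn

    class BimodalRNN(nn.Module):
        """One recurrent stream per modality; their outputs are fused by a
        second LSTM driving a frame-level speech/non-speech classifier."""
        def __init__(self, audio_dim=40, visual_dim=64, hidden=128):
            super().__init__()
            self.audio_rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
            self.visual_rnn = nn.LSTM(visual_dim, hidden, batch_first=True)
            self.fusion_rnn = nn.LSTM(2 * hidden, hidden, batch_first=True)
            self.classifier = nn.Linear(hidden, 1)  # per-frame speech logit

        def forward(self, audio, visual):
            # audio: (batch, frames, audio_dim); visual: (batch, frames, visual_dim)
            a, _ = self.audio_rnn(audio)
            v, _ = self.visual_rnn(visual)
            fused, _ = self.fusion_rnn(torch.cat([a, v], dim=-1))
            return self.classifier(fused).squeeze(-1)  # (batch, frames)

    model = BimodalRNN()
    logits = model(torch.randn(2, 100, 40), torch.randn(2, 100, 64))
    labels = torch.randint(0, 2, (2, 100)).float()
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)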
Learning General Audio Representations with Large-Scale Training of Patchout Audio Transformers
The success of supervised deep learning methods is largely due to their
ability to learn relevant features from raw data. Deep Neural Networks (DNNs)
trained on large-scale datasets are capable of capturing a diverse set of
features, and learning a representation that can generalize to unseen tasks
and datasets from the same domain. Hence, these models can be used as
powerful feature extractors, in combination with shallower models as
classifiers, for smaller tasks and datasets where the amount of training data
is insufficient for training an end-to-end model from scratch. In recent
years, Convolutional Neural Networks (CNNs) have largely been the method of
choice for audio processing. Recently, however, attention-based transformer
models have demonstrated great potential in supervised settings, outperforming
CNNs. In this work, we investigate the use of audio transformers trained on
large-scale datasets to learn general-purpose representations. We study how the
different setups in these audio transformers affect the quality of their
embeddings. We experiment with the models' time resolution, extracted embedding
level, and receptive fields in order to see how they affect performance on a
variety of tasks and datasets, following the HEAR 2021 NeurIPS challenge
evaluation setup. Our results show that representations extracted by audio
transformers outperform CNN representations. Furthermore, we show that
transformers trained on AudioSet can be extremely effective representation
extractors for a wide range of downstream tasks.
Comment: To appear in HEAR: Holistic Evaluation of Audio Representations,
Proceedings of Machine Learning Research, PMLR 166. Source code:
https://github.com/kkoutini/passt_hear2
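Since the comment above links the passt_hear2 source, a short sketch of the feature-extractor usage pattern the abstract describes may help. The function names follow the HEAR 2021 API that the repository implements; treat the exact signatures, the 32 kHz input rate, and the 10-class linear probe as assumptions.

    import torch
    # HEAR 2021 API as implemented by the linked passt_hear2 repository
    # (assumed signatures; check the repo's README for the exact interface).
    from hear21passt.base import load_model, get_scene_embeddings

    model = load_model()                   # PaSST pretrained on AudioSet
    audio = torch.randn(4, 32000 * 5)      # four 5-second clips at 32 kHz
    with torch.no_grad():
        emb = get_scene_embeddings(audio, model)  # (4, embed_dim)

    # A shallow classifier on the frozen embeddings stands in for the
    # downstream-task evaluations described above (class count illustrative).
    probe = torch.nn.Linear(emb.shape[-1], 10)
    logits = probe(emb)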
Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation
Recent advancements in diffusion models and large language models (LLMs) have
significantly propelled the field of AI-generated content (AIGC).
Text-to-Audio (TTA), a burgeoning
AIGC application designed to generate audio from natural language prompts, is
attracting increasing attention. However, existing TTA studies often struggle
with generation quality and text-audio alignment, especially for complex
textual inputs. Drawing inspiration from state-of-the-art Text-to-Image (T2I)
diffusion models, we introduce Auffusion, a TTA system that adapts T2I model
frameworks to the TTA task, effectively leveraging their inherent generative
strengths and precise cross-modal alignment. Our objective and subjective
evaluations demonstrate that Auffusion surpasses previous TTA approaches while
using limited data and computational resources. Furthermore, previous studies
in T2I recognize the significant impact of encoder choice on cross-modal
alignment, such as fine-grained details and object binding, whereas similar
evaluations are lacking in prior TTA works. Through comprehensive ablation
studies and
innovative cross-attention map visualizations, we provide insightful
assessments of text-audio alignment in TTA. Our findings reveal Auffusion's
superior capability in generating audio that accurately matches textual
descriptions, which is further demonstrated in several related tasks, such as
audio style transfer, inpainting, and other manipulations. Our implementation
and demos are available at https://auffusion.github.io.
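To illustrate the cross-attention mechanism behind the alignment visualizations mentioned above, here is a minimal PyTorch sketch of text-conditioned attention over audio latents that also returns the attention map. The dimensions and head count are assumptions, not Auffusion's actual configuration.

    import torch
    import torch.nn as nn

    class CrossAttention(nn.Module):
        """Audio latents attend over text-encoder tokens; the returned
        attention map is what a cross-attention visualization would plot."""
        def __init__(self, latent_dim=320, text_dim=768, heads=8):
            super().__init__()
            self.heads = heads
            self.scale = (latent_dim // heads) ** -0.5
            self.to_q = nn.Linear(latent_dim, latent_dim, bias=False)
            self.to_k = nn.Linear(text_dim, latent_dim, bias=False)
            self.to_v = nn.Linear(text_dim, latent_dim, bias=False)
            self.to_out = nn.Linear(latent_dim, latent_dim)

        def forward(self, latents, text_emb):
            # latents: (B, N, latent_dim); text_emb: (B, T, text_dim)
            B, N, D = latents.shape
            h = self.heads
            q = self.to_q(latents).view(B, N, h, D // h).transpose(1, 2)
            k = self.to_k(text_emb).view(B, -1, h, D // h).transpose(1, 2)
            v = self.to_v(text_emb).view(B, -1, h, D // h).transpose(1, 2)
            attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(B, N, D)
            return self.to_out(out), attn  # attn: (B, heads, N_audio, N_text)

    layer = CrossAttention()
    out, attn = layer(torch.randn(1, 256, 320), torch.randn(1, 12, 768))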