Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
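As an illustrative sketch of the single-channel front-end techniques such a survey covers, the Python snippet below trains a small recurrent network to estimate a time-frequency mask that suppresses additive noise; the layer sizes, names, and mask-based formulation are assumptions for illustration, not details taken from the paper.

    import torch
    import torch.nn as nn

    # Sketch of a mask-based front-end denoiser (assumed formulation).
    class MaskEstimator(nn.Module):
        def __init__(self, n_bins=257, hidden=256):
            super().__init__()
            self.rnn = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, n_bins)

        def forward(self, noisy_mag):          # (batch, frames, freq bins)
            h, _ = self.rnn(noisy_mag)
            mask = torch.sigmoid(self.out(h))  # bounded mask in [0, 1]
            return mask * noisy_mag            # enhanced magnitude spectrogram

    model = MaskEstimator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    noisy = torch.rand(8, 100, 257)  # stand-in noisy magnitude batch
    clean = torch.rand(8, 100, 257)  # corresponding clean targets
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()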
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems
Speech emotion recognition is an important component of any human-centered
system. But the speech characteristics produced and perceived by a person can
be influenced by a multitude of factors, both desirable, such as emotion, and
undesirable, such as noise. To train robust emotion recognition models, we need
a large yet realistic data distribution, but emotion datasets are often small
and hence are augmented with noise. Noise augmentation often makes one
important assumption: that the prediction label should remain the same in the
presence or absence of noise, which is true for automatic speech recognition
but not necessarily true for perception-based tasks. In this paper we make
three novel contributions. We validate through crowdsourcing that the presence
of noise does change the annotation label and hence may alter the original
ground truth label. We then show how disregarding this knowledge and assuming
consistency in ground truth labels propagates to downstream evaluation of ML
models, both for performance evaluation and robustness testing. We end the
paper with a set of recommendations for noise augmentation in speech emotion
recognition datasets.
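Since the abstract above centers on additive noise augmentation, here is a minimal sketch of the standard recipe of mixing noise into clean speech at a target signal-to-noise ratio; the function name and interface are hypothetical, and, per the paper's warning, the original emotion label should not be assumed to survive the corruption.

    import numpy as np

    def add_noise_at_snr(speech, noise, snr_db):
        """Additively corrupt `speech` with `noise` at a target SNR in dB.
        Minimal sketch; real pipelines also vary noise type and segment."""
        speech = np.asarray(speech, dtype=float)
        noise = np.asarray(noise, dtype=float)
        # Tile or trim the noise to match the speech length.
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)[: len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12
        # Scale the noise so 10 * log10(p_speech / p_scaled) equals snr_db.
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
        return speech + scale * noise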
Stress and Emotion Classification Using Jitter and Shimmer Features
In this paper, we evaluate the use of appended jitter and shimmer speech features for the classification of human speaking styles and of animal vocalization arousal levels. Jitter and shimmer features are extracted from the fundamental frequency contour and added to baseline spectral features, specifically Mel-frequency cepstral coefficients (MFCCs) for human speech and Greenwood function cepstral coefficients (GFCCs) for animal vocalizations. Hidden Markov models (HMMs) with Gaussian mixture model (GMM) state distributions are used for classification. The appended jitter and shimmer features result in an increase in classification accuracy for several illustrative datasets, including the SUSAS dataset for human speaking styles as well as vocalizations labeled by arousal level for African elephant and Rhesus monkey species.
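For readers unfamiliar with the two perturbation measures, the sketch below implements the common "local" definitions of jitter (cycle-to-cycle period variation) and shimmer (cycle-to-cycle amplitude variation); the paper's exact variants are not specified in the abstract, so these definitions are an assumption.

    import numpy as np

    def local_jitter(periods):
        # Mean absolute difference between consecutive pitch periods,
        # normalized by the mean period (periods in seconds).
        periods = np.asarray(periods, dtype=float)
        return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    def local_shimmer(amplitudes):
        # Same measure applied to per-cycle peak amplitudes.
        amplitudes = np.asarray(amplitudes, dtype=float)
        return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

    # Hypothetical per-cycle values from an F0 tracker:
    print(local_jitter([0.0100, 0.0102, 0.0099, 0.0101]))
    print(local_shimmer([0.80, 0.78, 0.82, 0.79]))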
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
End-to-end Recurrent Denoising Autoencoder Embeddings for Speaker Identification
Speech 'in-the-wild' is a handicap for speaker recognition systems due to the
variability induced by real-life conditions, such as environmental noise and
emotions in the speaker. Taking advantage of representation learning, in this
paper we aim to design a recurrent denoising autoencoder that extracts robust
speaker embeddings from noisy spectrograms to perform speaker identification.
The end-to-end proposed architecture uses a feedback loop to encode information
regarding the speaker into low-dimensional representations extracted by a
spectrogram denoising autoencoder. We employ data augmentation techniques by
additively corrupting clean speech with real-life environmental noise and make
use of a database with real stressed speech. We show that joint optimization
of the denoiser and the speaker identification module outperforms independent
optimization of the two modules, as well as hand-crafted features, under
stress and noise distortions.
Comment: 8 pages + 2 of references + 5 of images. Submitted on Monday 20th of July to Elsevier Signal Processing Short Communication.
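To make the joint-optimization idea concrete, the sketch below couples a recurrent denoising autoencoder with a speaker classification head and sums the two losses; the layer sizes, and the simplification of the paper's feedback loop into a plain encoder, are assumptions for illustration.

    import torch
    import torch.nn as nn

    # Sketch: recurrent denoiser + speaker-ID head trained jointly.
    class DenoisingSpeakerNet(nn.Module):
        def __init__(self, n_bins=257, emb_dim=128, n_speakers=50):
            super().__init__()
            self.encoder = nn.LSTM(n_bins, emb_dim, batch_first=True)
            self.decoder = nn.Linear(emb_dim, n_bins)  # clean-frame reconstruction
            self.classifier = nn.Linear(emb_dim, n_speakers)

        def forward(self, noisy):                 # (batch, frames, freq bins)
            h, _ = self.encoder(noisy)
            denoised = self.decoder(h)            # per-frame denoised output
            embedding = h.mean(dim=1)             # utterance-level speaker embedding
            return denoised, self.classifier(embedding)

    model = DenoisingSpeakerNet()
    noisy, clean = torch.rand(4, 100, 257), torch.rand(4, 100, 257)
    speaker = torch.randint(0, 50, (4,))
    denoised, logits = model(noisy)
    # Joint objective: reconstruction loss + speaker identification loss.
    loss = nn.functional.mse_loss(denoised, clean) \
         + nn.functional.cross_entropy(logits, speaker)
    loss.backward()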
End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent Neural Models
Speech activity detection (SAD) plays an important role in current speech
processing systems, including automatic speech recognition (ASR). SAD is
particularly difficult in environments with acoustic noise. A practical
solution is to incorporate visual information, increasing the robustness of the
SAD approach. An audiovisual system has the advantage of being robust to
different speech modes (e.g., whisper speech) or background noise. Recent
advances in audiovisual speech processing using deep learning have opened
opportunities to capture in a principled way the temporal relationships between
acoustic and visual features. This study explores this idea, proposing a
bimodal recurrent neural network (BRNN) framework for SAD. The approach
models the temporal dynamics of the sequential audiovisual data, improving the
accuracy and robustness of the proposed SAD system. Instead of estimating
hand-crafted features, the study investigates an end-to-end training approach,
where acoustic and visual features are directly learned from the raw data
during training. The experimental evaluation considers a large audiovisual
corpus with over 60.8 hours of recordings, collected from 105 speakers. The
results demonstrate that the proposed framework leads to absolute improvements
of up to 1.2% under practical scenarios over an audio-only voice activity
detection (VAD) baseline implemented with a deep neural network (DNN). The
proposed approach achieves a 92.7% F1-score when evaluated using the sensors
of a portable tablet in a noisy acoustic environment, which is only 1.0% lower
than the performance obtained under ideal conditions (e.g., clean speech
captured with a high-definition camera and a close-talking microphone).
Comment: Submitted to Speech Communication.
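A minimal sketch of such a bimodal recurrent model is shown below: one recurrent branch per modality, fused frame-by-frame before a speech/non-speech decision; the feature dimensions and the concatenation-based fusion are assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    # Sketch of a bimodal recurrent SAD model (assumed configuration).
    class BimodalRNN(nn.Module):
        def __init__(self, audio_dim=40, visual_dim=68, hidden=64):
            super().__init__()
            self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
            self.visual_rnn = nn.GRU(visual_dim, hidden, batch_first=True)
            self.head = nn.Linear(2 * hidden, 1)   # speech vs. non-speech

        def forward(self, audio, visual):            # (batch, frames, dim)
            ha, _ = self.audio_rnn(audio)
            hv, _ = self.visual_rnn(visual)
            fused = torch.cat([ha, hv], dim=-1)      # frame-synchronous fusion
            return torch.sigmoid(self.head(fused))   # per-frame speech prob.

    model = BimodalRNN()
    probs = model(torch.rand(2, 50, 40), torch.rand(2, 50, 68))
    print(probs.shape)  # torch.Size([2, 50, 1])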