334 research outputs found
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
End-to-end Recurrent Denoising Autoencoder Embeddings for Speaker Identification
Speech 'in-the-wild' is a handicap for speaker recognition systems due to the
variability induced by real-life conditions, such as environmental noise and
emotions in the speaker. Taking advantage of representation learning, in this
paper we aim to design a recurrent denoising autoencoder that extracts robust
speaker embeddings from noisy spectrograms to perform speaker identification.
The end-to-end proposed architecture uses a feedback loop to encode information
regarding the speaker into low-dimensional representations extracted by a
spectrogram denoising autoencoder. We employ data augmentation techniques by
additively corrupting clean speech with real life environmental noise and make
use of a database with real stressed speech. We show that joint optimization
of the denoiser and the speaker identification module outperforms both
independent optimization of the two modules and hand-crafted features under
stress and noise distortions. Comment: 8 pages + 2 of references + 5 of
images. Submitted on Monday 20th of July to Elsevier Signal Processing Short
Communication
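The additive-corruption augmentation described above can be sketched as follows. This is a minimal illustration assuming NumPy arrays of raw samples; `mix_at_snr` is a hypothetical helper name, not the authors' code:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db, rng=None):
    """Additively corrupt `clean` with a random segment of `noise`,
    scaled so the mixture has the requested signal-to-noise ratio (dB).

    Illustrative helper only; a real pipeline would batch this and draw
    the noise from a recorded environmental-noise corpus."""
    rng = rng or np.random.default_rng()
    start = rng.integers(0, len(noise) - len(clean) + 1)
    segment = noise[start:start + len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(segment ** 2)
    # Solve 10*log10(clean_power / (scale**2 * noise_power)) == snr_db for scale.
    scale = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + scale * segment
```

Training pairs are then (noisy spectrogram, clean spectrogram) for the denoiser, with the speaker label attached for the identification branch.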
Deep neural network techniques for monaural speech enhancement: state of the art analysis
Deep neural network (DNN) techniques have become pervasive in domains such
as natural language processing and computer vision, where they have achieved
great success in tasks such as machine translation and image generation.
Owing to this success, these data-driven techniques have also been applied in
the audio domain. More specifically, DNN models have been applied to speech
enhancement to achieve denoising, dereverberation and multi-speaker
separation in monaural speech enhancement. In this paper, we review some of
the dominant DNN techniques employed for speech separation. The review covers
the whole speech enhancement pipeline: feature extraction, how DNN-based
tools model both global and local features of speech, and model training
(supervised and unsupervised). We also review the use of pre-trained
speech-enhancement models to boost the enhancement process. The review is
geared towards covering the dominant trends in applying DNNs to enhancement
of single-channel (monaural) speech. Comment: conference
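Mask-based training targets, one of the dominant techniques this kind of review covers, can be illustrated with a short sketch. This is generic NumPy for illustration; the function names are hypothetical and not taken from the paper:

```python
import numpy as np

def magnitude_ratio_mask(clean_mag, noise_mag, eps=1e-8):
    """Per time-frequency-bin ratio of clean to clean-plus-noise magnitude.

    A common regression target in mask-based enhancement: a DNN is trained
    to predict this mask from noisy features, and the predicted mask is then
    applied to the noisy spectrogram."""
    return clean_mag / (clean_mag + noise_mag + eps)

def apply_mask(mask, noisy_mag):
    """Element-wise masking of the noisy magnitude spectrogram."""
    return mask * noisy_mag
```

In the additive-noise case, applying the oracle mask to the noisy magnitude recovers (approximately) the clean magnitude, which is why the mask makes a well-posed supervised target.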
End-to-End Multi-Task Denoising for joint SDR and PESQ Optimization
Supervised learning based on deep neural networks has recently achieved
substantial improvements in speech enhancement. Denoising networks learn a
mapping from noisy speech directly to clean speech, or to a spectrum mask,
which is the ratio between the clean and noisy spectra. In either case, the
network is optimized by minimizing the mean square error (MSE) between
ground-truth labels and the time-domain or spectrum output. However, existing
schemes suffer from one of two critical issues: spectrum mismatch and metric
mismatch. The spectrum mismatch is a
well known issue that any spectrum modification after short-time Fourier
transform (STFT), in general, cannot be fully recovered after inverse
short-time Fourier transform (ISTFT). The metric mismatch is that the
conventional MSE metric is sub-optimal for maximizing the target metrics,
signal-to-distortion ratio (SDR) and perceptual evaluation of speech quality
(PESQ). This paper presents a new end-to-end denoising framework with the goal
of joint SDR and PESQ optimization. First, the network optimization is
performed on the time-domain signals after ISTFT to avoid spectrum mismatch.
Second, two loss functions which have improved correlations with SDR and PESQ
metrics are proposed to minimize the metric mismatch. Experimental results
show that the proposed denoising scheme significantly improves both SDR and
PESQ performance over existing methods.
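As a concrete illustration of closing the metric mismatch, the scale-sensitive SDR can be computed directly on time-domain signals and negated to serve as a training objective. This is a generic NumPy sketch under that simple SDR definition, not the paper's exact loss (which also pairs it with a PESQ-correlated term):

```python
import numpy as np

def sdr_db(reference, estimate, eps=1e-8):
    """Scale-sensitive signal-to-distortion ratio in dB: energy of the
    clean reference over the energy of the residual distortion."""
    distortion = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(distortion ** 2) + eps))

def negative_sdr_loss(reference, estimate):
    """Negating SDR turns the evaluation metric into a loss, so minimizing
    the loss directly maximizes SDR instead of a mismatched MSE proxy."""
    return -sdr_db(reference, estimate)
```

Evaluating this loss on time-domain signals after ISTFT, rather than on modified spectra, is exactly what avoids the spectrum mismatch described above.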
- …