Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
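To make the problem setting concrete, here is a minimal sketch (in Python/NumPy; the variable names and toy signals are illustrative, not taken from the overview) of the standard degradation model behind the terms above: the observed signal is the clean speech convolved with a room impulse response (convolutional distortion) plus environmental noise (additive distortion).

```python
import numpy as np

# Degradation model assumed by robust-ASR front-ends (illustrative notation):
#     y[t] = (x * h)[t] + n[t]
# where x is clean speech, h a room impulse response, n additive noise.

rng = np.random.default_rng(0)

fs = 16000                          # sampling rate in Hz
x = rng.standard_normal(fs)         # stand-in for 1 s of clean speech
h = np.exp(-np.arange(512) / 64.0)  # toy exponentially decaying "room" response
h /= np.linalg.norm(h)
n = 0.1 * rng.standard_normal(fs)   # toy noise floor; real noise is non-stationary

y = np.convolve(x, h)[: len(x)] + n  # degraded observation seen by the recognizer
```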
A Recurrent Encoder-Decoder Approach with Skip-filtering Connections for Monaural Singing Voice Separation
The objective of deep learning methods based on encoder-decoder architectures
for music source separation is to approximate either ideal time-frequency masks
or spectral representations of the target music source(s). The spectral
representations are then used to derive time-frequency masks. In this work we
introduce a method to directly learn time-frequency masks from an observed
mixture magnitude spectrum. We employ recurrent neural networks and train them
using, as prior knowledge, only the magnitude spectrum of the target source. To
assess the performance of the proposed method, we focus on the task of singing
voice separation. The results from an objective evaluation show that our
proposed method yields results comparable to those of deep learning based
methods that operate over more complicated signal representations. Compared
to previous methods that approximate time-frequency masks, our method improves
the signal-to-distortion ratio by an average of 3.8 dB.
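A minimal sketch of the core idea the abstract describes, written here in PyTorch (the class name, layer sizes, and training snippet are illustrative assumptions, not the authors' code): the network predicts a time-frequency mask and applies it to the input mixture magnitude inside the forward pass, the "skip-filtering" connection, so the loss needs only the target source's magnitude spectrum.

```python
import torch
import torch.nn as nn

class SkipFilteringSeparator(nn.Module):
    """Illustrative sketch: an RNN predicts a time-frequency mask that is
    multiplied with the input mixture magnitude inside the network (the
    skip-filtering connection)."""

    def __init__(self, n_freq: int = 1025, hidden: int = 512):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, batch_first=True, bidirectional=True)
        self.to_mask = nn.Linear(2 * hidden, n_freq)

    def forward(self, mix_mag: torch.Tensor) -> torch.Tensor:
        # mix_mag: (batch, time, freq) magnitude spectrogram of the mixture
        h, _ = self.rnn(mix_mag)
        mask = torch.sigmoid(self.to_mask(h))  # learned T-F mask in [0, 1]
        return mask * mix_mag                  # skip-filtering: mask the input

model = SkipFilteringSeparator()
mix = torch.rand(4, 100, 1025)        # toy batch of mixture magnitudes
voice_mag = torch.rand(4, 100, 1025)  # target (e.g. vocals) magnitudes
loss = nn.functional.mse_loss(model(mix), voice_mag)
```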
Deep Clustering and Conventional Networks for Music Separation: Stronger Together
Deep clustering is the first method to handle general audio separation
scenarios with multiple sources of the same type and an arbitrary number of
sources, performing impressively in speaker-independent speech separation
tasks. However, little is known about its effectiveness in other challenging
situations such as music source separation. Contrary to conventional networks
that directly estimate the source signals, deep clustering generates an
embedding for each time-frequency bin, and separates sources by clustering the
bins in the embedding space. We show that deep clustering outperforms
conventional networks on a singing voice separation task, in both matched and
mismatched conditions, even though conventional networks have the advantage of
end-to-end training for best signal approximation, presumably because deep
clustering's more flexible objective engenders better regularization. Since
the strengths of deep
clustering and conventional network architectures appear complementary, we
explore combining them in a single hybrid network trained via an approach akin
to multi-task learning. Remarkably, the combination significantly outperforms
either of its components.

Comment: Published in ICASSP 2017
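A minimal sketch of the deep-clustering separation step described above, assuming PyTorch and scikit-learn (the network shape and all sizes are illustrative assumptions): an RNN maps each time-frequency bin of the mixture to a unit-norm embedding, and k-means clustering of those embeddings yields one binary mask per source.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Toy embedding network: a BLSTM maps each T-F bin of the mixture magnitude
# to a D-dimensional unit-norm embedding.
n_freq, emb_dim = 129, 20
blstm = nn.LSTM(n_freq, 300, batch_first=True, bidirectional=True)
proj = nn.Linear(600, n_freq * emb_dim)

mix_mag = torch.rand(1, 50, n_freq)          # (batch, time, freq) toy input
h, _ = blstm(mix_mag)
v = proj(h).reshape(1, 50, n_freq, emb_dim)  # one embedding per T-F bin
v = nn.functional.normalize(v, dim=-1)

# Separation step: cluster the embeddings; each cluster gives a binary mask.
emb = v.reshape(-1, emb_dim).detach().numpy()  # (time*freq, emb_dim)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)
masks = [
    (labels == k).reshape(50, n_freq).astype(np.float32) for k in range(2)
]  # each mask is applied to the mixture spectrogram to extract one source
```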