Rank-1 Constrained Multichannel Wiener Filter for Speech Recognition in Noisy Environments
Multichannel linear filters, such as the Multichannel Wiener Filter (MWF) and
the Generalized Eigenvalue (GEV) beamformer, are popular signal processing
techniques that can improve speech recognition performance. In this paper, we
present an experimental study on these linear filters in a specific speech
recognition task, namely the CHiME-4 challenge, which features real recordings
in multiple noisy environments. Specifically, the rank-1 MWF is employed for
noise reduction and a new constant residual noise power constraint is derived
which enhances the recognition performance. To fulfill the underlying rank-1
assumption, the speech covariance matrix is reconstructed based on eigenvectors
or generalized eigenvectors. Then the rank-1 constrained MWF is evaluated
against alternative multichannel linear filters under the same framework, which
involves a Bidirectional Long Short-Term Memory (BLSTM) network for mask
estimation. The proposed filter outperforms alternative ones, leading to a 40%
relative Word Error Rate (WER) reduction compared with the baseline Weighted
Delay and Sum (WDAS) beamformer on the real test set, and a 15% relative WER
reduction compared with the GEV-BAN method. The results also suggest that the
speech recognition accuracy correlates more with the Mel-frequency cepstral
coefficients (MFCC) feature variance than with the noise reduction or the
speech distortion level.
Comment: for Computer Speech and Language
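The rank-1 MWF described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the speech covariance is rebuilt from its principal eigenpair (one of the two reconstruction options mentioned, the other using generalized eigenvectors), and a speech-distortion-weighted MWF with trade-off parameter `mu` is then applied toward a reference microphone. Function and variable names are my own.

```python
import numpy as np

def rank1_mwf(phi_s, phi_n, ref_mic=0, mu=1.0):
    """Illustrative rank-1 multichannel Wiener filter for one frequency bin.

    phi_s, phi_n : (M, M) Hermitian speech / noise spatial covariance matrices
    ref_mic      : index of the reference microphone
    mu           : speech-distortion / noise-reduction trade-off
    """
    # Rank-1 reconstruction of the speech covariance from its principal
    # eigenpair, enforcing the underlying rank-1 assumption.
    eigvals, eigvecs = np.linalg.eigh(phi_s)
    lam, u = eigvals[-1], eigvecs[:, -1]
    phi_s_r1 = lam * np.outer(u, u.conj())
    # Speech-distortion-weighted MWF: w = (Phi_s + mu * Phi_n)^{-1} Phi_s e_ref
    e_ref = np.zeros(phi_s.shape[0])
    e_ref[ref_mic] = 1.0
    return np.linalg.solve(phi_s_r1 + mu * phi_n, phi_s_r1 @ e_ref)
```

In practice the covariance matrices would be estimated per frequency bin from BLSTM-predicted speech and noise masks, as in the framework the paper evaluates.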
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
Similarity-and-Independence-Aware Beamformer: Method for Target Source Extraction using Magnitude Spectrogram as Reference
This study presents a novel method for source extraction, referred to as the
similarity-and-independence-aware beamformer (SIBF). The SIBF extracts the
target signal using a rough magnitude spectrogram as the reference signal. The
advantage of the SIBF is that it can obtain a more accurate target signal than
the spectrogram generated by target-enhancing methods such as DNN-based speech
enhancement. For the extraction, we extend
the framework of deflationary independent component analysis by
considering the similarity between the reference and extracted target, as well
as the mutual independence of all potential sources. To solve the extraction
problem by maximum-likelihood estimation, we introduce two source model types
that can reflect the similarity. The experimental results from the CHiME3
dataset show that the target signal extracted by the SIBF is more accurate than
the reference signal generated by the DNN.
Index Terms: semiblind source separation, similarity-and-independence-aware
beamformer, deflationary independent component analysis, source model
Comment: Accepted in INTERSPEECH 202
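A heavily simplified sketch of the idea can be given under one of the source-model assumptions: if the target's variance is assumed to track the squared reference magnitude (a time-varying Gaussian model), maximum-likelihood extraction amounts to minimizing the reference-weighted output power of a unit-norm filter in the whitened domain. The actual SIBF uses iterative deflationary ICA updates; everything below, including all names, is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def similarity_aware_filter(X, ref_mag, eps=1e-6):
    """Sketch of a similarity-aware extraction filter for one frequency bin.

    X       : (M, T) multichannel STFT frames
    ref_mag : (T,) rough reference magnitude spectrogram (e.g. from a DNN)
    """
    M, T = X.shape
    # Whiten the observations with their spatial covariance.
    R_x = (X @ X.conj().T) / T
    d, E = np.linalg.eigh(R_x)
    W_wh = E @ np.diag(1.0 / np.sqrt(np.maximum(d, eps))) @ E.conj().T
    Z = W_wh @ X
    # Weight each frame by the inverse squared reference magnitude:
    # frames where the reference is quiet are pushed toward zero output.
    weights = 1.0 / (ref_mag ** 2 + eps)
    C = (Z * weights) @ Z.conj().T / T
    # Minimizing the weighted power under ||w|| = 1 gives the eigenvector
    # of C with the smallest eigenvalue.
    _, Ec = np.linalg.eigh(C)
    w = Ec[:, 0]
    # Map the filter back to the unwhitened domain (W_wh is Hermitian).
    return W_wh.conj().T @ w
```

The extracted signal would then be `y = w.conj() @ X`; the paper additionally enforces mutual independence from the remaining sources via deflation, which this sketch omits.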
DNN-Based Multi-Frame MVDR Filtering for Single-Microphone Speech Enhancement
Multi-frame approaches for single-microphone speech enhancement, e.g., the
multi-frame minimum-variance-distortionless-response (MVDR) filter, are able to
exploit speech correlations across neighboring time frames. In contrast to
single-frame approaches such as the Wiener gain, it has been shown that
multi-frame approaches achieve a substantial noise reduction with hardly any
speech distortion, provided that an accurate estimate of the correlation
matrices and especially the speech interframe correlation vector is available.
Typical estimation procedures of the correlation matrices and the speech
interframe correlation (IFC) vector require an estimate of the speech presence
probability (SPP) in each time-frequency bin. In this paper, we propose to use
a bi-directional long short-term memory deep neural network (DNN) to estimate a
speech mask and a noise mask for each time-frequency bin, using which two
different SPP estimates are derived. Aiming at achieving a robust performance,
the DNN is trained for various noise types and signal-to-noise ratios.
Experimental results show that the multi-frame MVDR in combination with the
proposed data-driven SPP estimator yields an increased speech quality compared
to a state-of-the-art model-based estimator.
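Once the noise correlation matrix and the speech IFC vector have been estimated (here via the mask-derived SPP), the multi-frame MVDR filter itself has the standard closed form w = R_n^{-1} gamma / (gamma^H R_n^{-1} gamma), which is distortionless with respect to the current frame. A minimal sketch, with illustrative names:

```python
import numpy as np

def mf_mvdr(R_n, gamma):
    """Multi-frame MVDR filter for one time-frequency bin (sketch).

    R_n   : (N, N) noise correlation matrix over N consecutive frames
    gamma : (N,) speech interframe correlation (IFC) vector, normalized so
            that gamma[0] == 1 (distortionless w.r.t. the current frame)
    """
    num = np.linalg.solve(R_n, gamma)      # R_n^{-1} gamma
    return num / (gamma.conj() @ num)      # normalize: w^H gamma = 1
```

The difficult part, which the paper addresses with the BLSTM-based SPP estimates, is obtaining `R_n` and `gamma` accurately from noisy single-microphone observations; the filter formula itself is fixed.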