Block-Online Multi-Channel Speech Enhancement Using DNN-Supported Relative Transfer Function Estimates
This work addresses the problem of block-online processing for multi-channel
speech enhancement. Such processing is vital in scenarios with moving speakers
and/or when very short utterances are processed, e.g., in voice assistant
scenarios. We consider several variants of a system that performs beamforming
supported by DNN-based voice activity detection (VAD) followed by
post-filtering. The speaker is targeted through estimating relative transfer
functions between microphones. Each block of the input signals is processed
independently in order to make the method applicable in highly dynamic
environments. Owing to the short length of the processed block, the statistics
required by the beamformer are estimated less precisely. The influence of this
inaccuracy is studied and compared with the batch regime, in which the entire
recording is treated as a single block. The experimental evaluation of the
proposed method is performed on the large CHiME-4 datasets and on another
dataset featuring a moving target speaker. The experiments are evaluated in
terms of objective and perceptual criteria, such as the signal-to-interference
ratio (SIR) and the perceptual evaluation of speech quality (PESQ), respectively.
Moreover, word error rate (WER) achieved by a baseline automatic speech
recognition system is evaluated, for which the enhancement method serves as a
front-end solution. The results indicate that the proposed method is robust
with respect to the short length of the processed block. Significant
improvements in terms of the criteria and WER are observed even for a block
length of 250 ms.
Comment: 10 pages, 8 figures, 4 tables. Modified version of the article
accepted for publication in the IET Signal Processing journal. Original results
unchanged, additional experiments presented, refined discussion and conclusion.
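The abstract does not spell out the pipeline itself; as a minimal sketch under stated assumptions, the following NumPy fragment shows how one block of such a system could estimate the target's relative transfer function from speech-dominated frames (flagged by a DNN VAD) and apply an MVDR beamformer built from the noise-only frames of the same block. All names, and the covariance-based RTF estimate, are illustrative rather than the paper's exact method.

```python
import numpy as np

def rtf_mvdr_block(X, vad, ref_mic=0, eps=1e-6):
    """Enhance one signal block (illustrative sketch, not the paper's exact method).

    X   : (mics, freqs, frames) complex STFT of the current block
    vad : (frames,) boolean DNN-based VAD, True where target speech is active
    """
    M, F, T = X.shape
    Y = np.zeros((F, T), dtype=complex)
    for f in range(F):
        Xf = X[:, f, :]
        # Spatial covariances estimated from this block only (block-online regime);
        # short blocks make these statistics less precise, as the paper studies.
        Xn = Xf[:, ~vad]                                  # noise-only frames
        Xs = Xf[:, vad]                                   # speech-active frames
        Phi_n = Xn @ Xn.conj().T / max(Xn.shape[1], 1) + eps * np.eye(M)
        Phi_s = Xs @ Xs.conj().T / max(Xs.shape[1], 1)
        # RTF of the target: principal eigenvector of the speech covariance,
        # normalized so the reference microphone has unit gain.
        _, V = np.linalg.eigh(Phi_s)
        d = V[:, -1] / (V[ref_mic, -1] + eps)
        # MVDR weights: Phi_n^{-1} d / (d^H Phi_n^{-1} d)
        h = np.linalg.solve(Phi_n, d)
        w = h / (d.conj() @ h)
        Y[f] = w.conj() @ Xf                              # beamformer output for this bin
    return Y
```

Each block is processed independently by calling this routine per block, which is what makes the scheme applicable with moving speakers.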
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
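As one concrete instance of the front-end family such a review covers, the sketch below applies a network-estimated time-frequency mask to a noisy spectrogram before the signal reaches the ASR back-end. The masking formulation is a common approach in this literature, not a method prescribed by this particular overview, and the `mask_net` interface is hypothetical.

```python
import numpy as np

def masked_frontend(noisy_stft, mask_net):
    """Apply a network-estimated time-frequency mask before ASR (hypothetical interface).

    noisy_stft : (freqs, frames) complex STFT of the noisy signal
    mask_net   : callable mapping a magnitude spectrogram to a mask in [0, 1]
    """
    mask = np.clip(mask_net(np.abs(noisy_stft)), 0.0, 1.0)   # e.g. an ideal-ratio-mask estimate
    return mask * noisy_stft   # enhanced STFT handed to the recognizer's feature pipeline
```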
Mixture of beamformers for speech separation and extraction
In many audio applications, the signal of interest is corrupted by acoustic background noise,
interference, and reverberation. The presence of these contaminations can significantly degrade
the quality and intelligibility of the audio signal. This makes it important to develop signal
processing methods that can separate the competing sources and extract a source of interest.
The estimated signals may then be either directly listened to, transmitted, or further processed,
giving rise to a wide range of applications such as hearing aids, noise-cancelling headphones,
human-computer interaction, surveillance, and hands-free telephony.
Many of the existing approaches to speech separation/extraction rely on beamforming techniques.
These techniques approach the problem from a spatial point of view; a microphone
array is used to form a spatial filter which can extract a signal from a specific direction and
reduce the contamination of signals from other directions. However, when there are fewer
microphones than sources (the underdetermined case), perfect attenuation of all interferers
is impossible; only partial suppression of the interference can be achieved.
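To make the spatial-filtering idea concrete, here is a minimal delay-and-sum sketch in NumPy: the channels are phase-aligned for a known direction so that the target adds coherently while signals from other directions partially cancel. The array geometry, variable names, and far-field steering model are assumptions for illustration, not part of the thesis.

```python
import numpy as np

def delay_and_sum(X, mic_pos, doa, freqs, c=343.0):
    """Steer a simple delay-and-sum spatial filter toward a known direction.

    X       : (mics, freqs, frames) complex STFT of the array signals
    mic_pos : (mics, 3) microphone positions in metres
    doa     : (3,) unit vector pointing from the array toward the source
    freqs   : (freqs,) STFT bin centre frequencies in Hz
    """
    delays = mic_pos @ doa / c                              # relative propagation delays (s)
    steer = np.exp(-2j * np.pi * np.outer(freqs, delays))   # (freqs, mics) steering matrix
    # Phase-align the channels and average: the source from `doa` adds
    # coherently while signals from other directions partially cancel.
    return np.einsum('fm,mft->ft', steer.conj(), X) / X.shape[0]
```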
In this thesis, we present a framework which extends the use of beamforming techniques to
underdetermined speech mixtures. We describe frequency domain non-linear mixture of beamformers
that can extract a speech source from a known direction. Our approach models the
data in each frequency bin via Gaussian mixture distributions, which can be learned using the
expectation maximization algorithm. The model learning is performed using the observed mixture
signals only, and no prior training is required. The signal estimator comprises a set of
minimum mean square error (MMSE), minimum variance distortionless response (MVDR), or
minimum power distortionless response (MPDR) beamformers. In order to estimate the signal,
all beamformers are concurrently applied to the observed signal, and the weighted sum of
the beamformers’ outputs is used as the signal estimator, where the weights are the estimated
posterior probabilities of the Gaussian mixture states. These weights are specific to each
time-frequency point. The resulting non-linear beamformers do not need to know or estimate the
number of sources, and can be applied to microphone arrays with two or more microphones
with arbitrary array configuration. We test and evaluate the described methods on underdetermined
speech mixtures. Experimental results for the non-linear beamformers in underdetermined
mixtures with room reverberation confirm their capability to successfully extract speech
sources.
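The core combination step described above can be sketched compactly: in each frequency bin, every beamformer in the set is applied to the observation, and the outputs are mixed using the per-frame posterior probabilities of the GMM states. The interface below is hypothetical; the beamformer weights and posteriors would come from the EM-learned mixture model in each bin.

```python
import numpy as np

def mixture_of_beamformers_bin(X_f, W_f, posteriors):
    """Posterior-weighted combination of beamformer outputs in one frequency bin.

    X_f        : (mics, frames) observed mixture STFT in this bin
    W_f        : (states, mics) one beamformer (MMSE, MVDR, or MPDR) per GMM state
    posteriors : (states, frames) EM-estimated posterior probability of each state
    """
    outputs = W_f.conj() @ X_f                 # every beamformer applied to every frame
    return (posteriors * outputs).sum(axis=0)  # weighted sum per time-frequency point
```

Because the weighting is soft and per time-frequency point, the combined estimator needs no knowledge of the number of sources, consistent with the claim in the abstract.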