On Loss Functions for Supervised Monaural Time-Domain Speech Enhancement
Many deep learning-based speech enhancement algorithms are designed to
minimize the mean-square error (MSE) in some transform domain between a
predicted and a target speech signal. However, optimizing for MSE does not
necessarily guarantee high speech quality or intelligibility, which is the
ultimate goal of many speech enhancement algorithms. Additionally, little
is known about the impact of the loss function on the emerging class of
time-domain deep learning-based speech enhancement systems. We study how
popular loss functions influence the performance of deep learning-based speech
enhancement systems. First, we demonstrate that perceptually inspired loss
functions might be advantageous if the receiver is the human auditory system.
Furthermore, we show that the learning rate is a crucial design parameter even
for adaptive gradient-based optimizers, a fact that has generally been
overlooked in the literature. Also, we found that waveform-matching performance
metrics must be used with caution, as they can fail completely in certain
situations.
Finally, we show that a loss function based on scale-invariant
signal-to-distortion ratio (SI-SDR) achieves good general performance across a
range of popular speech enhancement evaluation metrics, which suggests that
SI-SDR is a good candidate as a general-purpose loss function for speech
enhancement systems. Comment: Published in the IEEE Transactions on Audio, Speech and Language
Processing.
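For reference, the SI-SDR objective highlighted above can be written down compactly. The following is a minimal NumPy sketch of a negative SI-SDR loss, assuming single-channel waveforms and a small eps constant for numerical stability; it illustrates the metric itself, not the authors' exact training implementation.

    import numpy as np

    def si_sdr_loss(estimate, target, eps=1e-8):
        # Negative scale-invariant signal-to-distortion ratio (SI-SDR)
        # between a predicted and a target waveform (1-D arrays).
        estimate = estimate - estimate.mean()  # remove DC so the measure is shift-invariant
        target = target - target.mean()

        # Optimal scaling of the target that best explains the estimate.
        alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
        s_target = alpha * target
        e_noise = estimate - s_target

        si_sdr = 10.0 * np.log10(
            (np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps)
        )
        return -si_sdr  # training minimizes the negative SI-SDR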
Direction of Arrival with One Microphone, a few LEGOs, and Non-Negative Matrix Factorization
Conventional approaches to sound source localization require at least two
microphones. It is known, however, that people with unilateral hearing loss can
also localize sounds. Monaural localization is possible thanks to the
scattering by the head, though it hinges on learning the spectra of the various
sources. We take inspiration from this human ability to propose algorithms for
accurate sound source localization using a single microphone embedded in an
arbitrary scattering structure. The structure modifies the frequency response
of the microphone in a direction-dependent way, giving each direction a
signature. While knowing those signatures is sufficient to localize sources of
white noise, localizing speech is much more challenging: it is an ill-posed
inverse problem which we regularize by prior knowledge in the form of learned
non-negative dictionaries. We demonstrate a monaural speech localization
algorithm based on non-negative matrix factorization that does not depend on
sophisticated, designed scatterers. In fact, we show experimental results with
ad hoc scatterers made of LEGO bricks. Even with these rudimentary structures
we can accurately localize arbitrary speakers; that is, we do not need to learn
the dictionary for the particular speaker to be localized. Finally, we discuss
multi-source localization and the related limitations of our approach. Comment: This article has been accepted for publication in IEEE/ACM
Transactions on Audio, Speech, and Language Processing (TASLP).
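To make the localization idea concrete, here is a simplified Python sketch of how a single source could be localized from one microphone: each candidate direction has a known magnitude frequency signature, speech is modeled with a fixed non-negative dictionary, and the direction whose filtered dictionary best reconstructs the observed spectrogram is selected. The variable names, the fixed-dictionary assumption, and the plain multiplicative updates are illustrative simplifications, not the paper's actual algorithm.

    import numpy as np

    def localize_single_source(X, signatures, W, n_iter=100, eps=1e-8):
        # X          : (F, T) non-negative magnitude spectrogram of the recording
        # signatures : (D, F) direction-dependent magnitude responses of the scatterer
        # W          : (F, K) learned non-negative speech dictionary
        errors = []
        for h in signatures:
            Wd = h[:, None] * W                        # direction-filtered dictionary
            H = np.random.rand(Wd.shape[1], X.shape[1])
            for _ in range(n_iter):                    # multiplicative NMF updates, W fixed
                H *= (Wd.T @ X) / (Wd.T @ (Wd @ H) + eps)
            errors.append(np.linalg.norm(X - Wd @ H))  # reconstruction error for this direction
        return int(np.argmin(errors))                  # index of the best-matching direction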
Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks
In this paper we propose the utterance-level Permutation Invariant Training
(uPIT) technique. uPIT is a practically applicable, end-to-end, deep
learning-based solution for speaker-independent multi-talker speech separation.
Specifically, uPIT extends the recently proposed Permutation Invariant Training
(PIT) technique with an utterance-level cost function, hence eliminating the
need for solving an additional permutation problem during inference, which is
otherwise required by frame-level PIT. We achieve this using Recurrent Neural
Networks (RNNs) that, during training, minimize the utterance-level separation
error, hence forcing separated frames belonging to the same speaker to be
aligned to the same output stream. In practice, this allows RNNs, trained with
uPIT, to separate multi-talker mixed speech without any prior knowledge of
signal duration, number of speakers, speaker identity or gender. We evaluated
uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks
and found that uPIT outperforms techniques based on Non-negative Matrix
Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and
compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network
(DANet). Furthermore, we found that models trained with uPIT generalize well to
unseen speakers and languages. Finally, we found that a single model, trained
with uPIT, can handle both two-speaker and three-speaker speech mixtures
- …
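As an illustration of the utterance-level criterion, the sketch below computes a permutation-invariant MSE over whole utterances: one output-to-speaker assignment is chosen per utterance by minimizing the total error over all permutations. The paper applies this idea to masked STFT outputs inside an RNN training loop; the array shapes and the plain MSE here are assumptions made for clarity.

    import numpy as np
    from itertools import permutations

    def upit_mse_loss(estimates, targets):
        # estimates, targets : arrays of shape (S, T), one row per speaker
        n_speakers = estimates.shape[0]
        best = np.inf
        for perm in permutations(range(n_speakers)):
            # Error accumulated over the whole utterance for this output-to-speaker assignment.
            err = np.mean((estimates[list(perm)] - targets) ** 2)
            best = min(best, err)
        return best  # the chosen permutation is the one attaining this minimum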