Mixture of beamformers for speech separation and extraction
In many audio applications, the signal of interest is corrupted by acoustic background noise,
interference, and reverberation. The presence of these contaminations can significantly degrade
the quality and intelligibility of the audio signal. This makes it important to develop signal
processing methods that can separate the competing sources and extract a source of interest.
The estimated signals may then be either directly listened to, transmitted, or further processed,
giving rise to a wide range of applications such as hearing aids, noise-cancelling headphones,
human-computer interaction, surveillance, and hands-free telephony.
Many of the existing approaches to speech separation/extraction rely on beamforming techniques.
These techniques approach the problem from a spatial point of view; a microphone
array is used to form a spatial filter which can extract a signal from a specific direction and
reduce the contamination of signals from other directions. However, when there are fewer
microphones than sources (the underdetermined case), perfect attenuation of all interferers is
impossible, and only partial interference suppression can be achieved.
In this thesis, we present a framework which extends the use of beamforming techniques to
underdetermined speech mixtures. We describe a frequency-domain non-linear mixture of beamformers
that can extract a speech source from a known direction. Our approach models the
data in each frequency bin via Gaussian mixture distributions, which can be learned using the
expectation maximization algorithm. The model learning is performed using the observed mixture
signals only, and no prior training is required. The signal estimator comprises a set of
minimum mean square error (MMSE), minimum variance distortionless response (MVDR), or
minimum power distortionless response (MPDR) beamformers. In order to estimate the signal,
all beamformers are concurrently applied to the observed signal, and the weighted sum of
the beamformers' outputs is used as the signal estimate, where the weights are the estimated
posterior probabilities of the Gaussian mixture states. These weights are specific to each
time-frequency point. The resulting non-linear beamformers do not need to know or estimate the
number of sources, and can be applied to microphone arrays with two or more microphones
with arbitrary array configuration. We test and evaluate the described methods on underdetermined
speech mixtures. Experimental results for the non-linear beamformers in underdetermined
mixtures with room reverberation confirm their capability to successfully extract speech
sources.
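As an illustration of the estimator described above, the following minimal Python sketch combines per-state MVDR beamformer outputs in one frequency bin, using the Gaussian-state posteriors as time-frequency-specific weights. The state covariances, steering vector, and posteriors are assumed to come from an EM fit to the observed mixtures; all names are illustrative, not the thesis's code.

    import numpy as np

    def mvdr_weights(R, d):
        # MVDR beamformer w = R^{-1} d / (d^H R^{-1} d) for one mixture state
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d.conj() @ Rinv_d)

    def mixture_of_beamformers(X, covs, d, post):
        # X: (T, M) observations in one frequency bin; covs: K state
        # covariances of shape (M, M); post: (T, K) Gaussian-state posteriors
        W = np.stack([mvdr_weights(R, d) for R in covs])  # (K, M) beamformers
        Y = X @ W.conj().T                                # (T, K) per-state outputs
        return np.sum(post * Y, axis=1)                   # posterior-weighted sum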
Speech Separation Using Partially Asynchronous Microphone Arrays Without Resampling
We consider the problem of separating speech sources captured by multiple
spatially separated devices, each of which has multiple microphones and samples
its signals at a slightly different rate. Most asynchronous array processing
methods rely on sample rate offset estimation and resampling, but these offsets
can be difficult to estimate if the sources or microphones are moving. We
propose a source separation method that does not require offset estimation or
signal resampling. Instead, we divide the distributed array into several
synchronous subarrays. All arrays are used jointly to estimate the time-varying
signal statistics, and those statistics are used to design separate
time-varying spatial filters in each array. We demonstrate the method for
speech mixtures recorded on both stationary and moving microphone arrays.
To appear at the International Workshop on Acoustic Signal Enhancement (IWAENC 2018).
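A hedged sketch of the subarray idea follows: shared per-source activity estimates drive separate Wiener-style filters inside each synchronous subarray, so no cross-device resampling is needed. The activity weights and all names are placeholders rather than the paper's notation, and the time-invariant covariances here stand in for the paper's time-varying statistics.

    import numpy as np

    def subarray_separation(X, act, eps=1e-6):
        # X: (T, F, M) STFT of one synchronous subarray; act: list of (T, F)
        # per-source activity weights shared across all devices
        F, M = X.shape[1], X.shape[2]
        Rx = np.einsum('tfm,tfn->fmn', X, X.conj())            # mixture covariance
        outs = []
        for a in act:
            Rs = np.einsum('tf,tfm,tfn->fmn', a, X, X.conj())  # source covariance
            # multichannel Wiener filter per frequency, reference microphone 0
            W = np.stack([np.linalg.solve(Rx[f] + eps * np.eye(M), Rs[f])[:, 0]
                          for f in range(F)])                  # (F, M)
            outs.append(np.einsum('fm,tfm->tf', W.conj(), X))
        return outs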
Audio source separation into the wild
This review chapter is dedicated to multichannel audio source separation in real-life environments. We explore some of the major achievements in the field and discuss some of the remaining challenges. We examine several important practical scenarios, e.g., moving sources and/or microphones, varying numbers of sources and sensors, high reverberation levels, spatially diffuse sources, and synchronization problems. Several applications, such as smart assistants, cellular phones, hearing aids, and robots, will be discussed. Our perspectives on the future of the field are given as concluding remarks of this chapter.
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem through modeling the
acoustics of the reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustics from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through a convex optimization exploiting a joint
sparsity model formulated on the spatio-spectral sparsity of concurrent speech
representation. The acoustic parameters are then incorporated for separating
individual speech signals through either structured sparse recovery or inverse
filtering the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
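To make the joint-sparsity step concrete, here is a small sketch (in my own notation, not the paper's) of locating candidate source images by sparse approximation of a spatial spectrum over a grid of free-space positions, with a group penalty enforcing joint sparsity across frames; it is plain proximal-gradient descent for a group lasso.

    import numpy as np

    def group_soft(C, tau):
        # row-wise group soft-thresholding: keeps or kills whole grid points
        norms = np.linalg.norm(C, axis=1, keepdims=True)
        return C * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

    def joint_sparse_spectrum(X, D, lam=0.1, n_iter=200):
        # X: (M, T) microphone measurements over frames; D: (M, G) free-space
        # dictionary of candidate image positions; returns row-sparse (G, T)
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        C = np.zeros((D.shape[1], X.shape[1]), dtype=X.dtype)
        for _ in range(n_iter):
            grad = D.conj().T @ (D @ C - X)  # gradient of 0.5 * ||DC - X||^2
            C = group_soft(C - grad / L, lam / L)
        return C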
End-to-end Speech Separation with Neural Networks
Speech separation has long been an active research topic in the signal processing community owing to its importance in a wide range of applications such as hearable devices and telecommunication systems. It not only serves as a fundamental problem for higher-level speech processing tasks such as automatic speech recognition, natural language understanding, and smart personal assistants, but also plays an important role in smart earphones and augmented and virtual reality devices.
With the recent progress in deep neural networks, separation performance has been significantly advanced by various new problem definitions and model architectures. The most widely used approach of recent years performs separation in the time-frequency domain: a spectrogram or another time-frequency representation is first calculated from the mixture signal, and multiple time-frequency masks are then estimated for the target sources. The masks are applied to the mixture's time-frequency representation to extract the target representations, and an operation such as the inverse short-time Fourier transform is used to convert them back to waveforms. However, such frequency-domain methods may have difficulty modeling the phase spectrogram, as conventional time-frequency masks often consider only the magnitude spectrogram. Moreover, the training objectives for frequency-domain methods are typically also defined in the frequency domain, which may not be in line with widely used time-domain evaluation metrics such as signal-to-noise ratio and signal-to-distortion ratio.
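The masking pipeline described above can be sketched in a few lines; here `model` is a placeholder for any mask estimator returning one (F, T) mask per source, and reusing the mixture phase in the synthesis step is exactly the phase-modeling limitation the text points out.

    import numpy as np
    from scipy.signal import stft, istft

    def mask_based_separation(mixture, model, fs=16000, nperseg=512):
        _, _, X = stft(mixture, fs=fs, nperseg=nperseg)  # mixture STFT (F, T)
        masks = model(np.abs(X))          # masks estimated from magnitude only
        sources = []
        for m in masks:
            _, s = istft(m * X, fs=fs, nperseg=nperseg)  # mixture phase reused
            sources.append(s)
        return sources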
The formulation of time-domain, end-to-end speech separation naturally arises to tackle these disadvantages of frequency-domain systems. End-to-end speech separation networks take the mixture waveform as input and directly estimate the waveforms of the target sources. Following the general pipeline of conventional frequency-domain systems, which contains a waveform encoder, a separator, and a waveform decoder, time-domain systems can be designed in a similar way while significantly improving separation performance.
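The encoder/separator/decoder structure can be sketched as below; the layer sizes and the trivial one-layer separator are placeholders standing in for the dissertation's actual architectures, not a reproduction of them.

    import torch
    import torch.nn as nn

    class TinyTimeDomainNet(nn.Module):
        def __init__(self, n_src=2, n_filters=256, kernel=20, stride=10):
            super().__init__()
            self.n_src = n_src
            self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride)
            self.separator = nn.Sequential(   # stand-in for a deep separator
                nn.Conv1d(n_filters, n_filters * n_src, 1), nn.Sigmoid())
            self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)

        def forward(self, wav):               # wav: (batch, 1, samples)
            feats = torch.relu(self.encoder(wav))          # learned basis
            masks = self.separator(feats).chunk(self.n_src, dim=1)
            # returns (batch, n_src, 1, samples) estimated waveforms
            return torch.stack([self.decoder(m * feats) for m in masks], dim=1)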
In this dissertation, I focus on multiple aspects of the general problem formulation of end-to-end separation networks, including system design, model architectures, and training objectives. I start with a single-channel pipeline, which we refer to as the time-domain audio separation network (TasNet), to validate the advantage of end-to-end separation compared with conventional time-frequency domain pipelines. I then move to the multi-channel scenario and introduce the filter-and-sum network (FaSNet) for both fixed-geometry and ad-hoc-geometry microphone arrays.
Next, I introduce methods for lightweight network architecture design that allow the models to maintain separation performance with as little as 2.5% of the model size and 17.6% of the model complexity. After that, I look into training objective functions for end-to-end speech separation and describe two objectives, one for separating varying numbers of sources and one for improving robustness in reverberant environments. Finally, I take a step back, revisit several problem formulations in the end-to-end separation pipeline, and raise further questions in this framework to be analyzed and investigated in future work.
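As one concrete example of the time-domain objectives discussed here, the scale-invariant SNR (SI-SNR) loss commonly used with such networks can be written as follows; this is the standard definition, not code from the dissertation.

    import torch

    def si_snr_loss(est, ref, eps=1e-8):
        # est, ref: (batch, samples) time-domain signals; returns negative SI-SNR
        est = est - est.mean(dim=-1, keepdim=True)
        ref = ref - ref.mean(dim=-1, keepdim=True)
        # project the estimate onto the reference: the scale-invariant target
        dot = (est * ref).sum(dim=-1, keepdim=True)
        s_target = dot * ref / (ref.pow(2).sum(dim=-1, keepdim=True) + eps)
        e_noise = est - s_target
        ratio = s_target.pow(2).sum(-1) / (e_noise.pow(2).sum(-1) + eps)
        return -(10 * torch.log10(ratio + eps)).mean()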