Speech Separation Using Partially Asynchronous Microphone Arrays Without Resampling
We consider the problem of separating speech sources captured by multiple
spatially separated devices, each of which has multiple microphones and samples
its signals at a slightly different rate. Most asynchronous array processing
methods rely on sample rate offset estimation and resampling, but these offsets
can be difficult to estimate if the sources or microphones are moving. We
propose a source separation method that does not require offset estimation or
signal resampling. Instead, we divide the distributed array into several
synchronous subarrays. All arrays are used jointly to estimate the time-varying
signal statistics, and those statistics are used to design separate
time-varying spatial filters in each array. We demonstrate the method for
speech mixtures recorded on both stationary and moving microphone arrays.
Comment: To appear at the International Workshop on Acoustic Signal Enhancement (IWAENC 2018).
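The core idea, shared time-varying statistics feeding separate per-subarray filters, can be sketched in numpy. The recursive-averaging covariance estimator and the multichannel Wiener filter below are standard building blocks rather than the paper's exact algorithm, and the function names are invented here:

```python
import numpy as np

def recursive_covariance(frames, alpha=0.9):
    """Track time-varying spatial statistics by recursive averaging.

    frames: (T, M) complex array, one row per frame for an M-microphone
    subarray (a single frequency bin). Returns (T, M, M) covariances.
    """
    T, M = frames.shape
    R = np.zeros((T, M, M), dtype=complex)
    acc = np.zeros((M, M), dtype=complex)
    for t in range(T):
        x = frames[t][:, None]
        acc = alpha * acc + (1.0 - alpha) * (x @ x.conj().T)
        R[t] = acc
    return R

def wiener_filter(R_speech, R_noise, ref=0):
    """Multichannel Wiener filter for one subarray and one frame:
    w = (R_s + R_n)^{-1} R_s e_ref."""
    return np.linalg.solve(R_speech + R_noise, R_speech[:, ref])
```

Each subarray would apply its own `wiener_filter` per frame, while the statistics could draw on all devices jointly.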
Application of generative models in speech processing tasks
Generative probabilistic and neural models of the speech signal are shown to be effective in speech synthesis and speech enhancement, where generating natural and clean speech is the goal. This thesis develops two probabilistic signal processing algorithms based on the source-filter model of speech production, and two based on neural generative models of the speech signal. They are: a model-based speech enhancement algorithm for ad-hoc microphone arrays, called GRAB; a probabilistic generative model of speech, called PAT; a neural generative F0 model, called TEReTA; and a Bayesian enhancement network, called BaWN, that incorporates a neural generative model of speech called WaveNet. PAT and TEReTA aim to develop better generative models for speech synthesis. BaWN and GRAB aim to improve the naturalness and noise robustness of speech enhancement algorithms.
Probabilistic Acoustic Tube (PAT) is a probabilistic generative model for speech, whose basis is the source-filter model. The highlights of the model are threefold. First, it is among the very first works to build a complete probabilistic model for speech. Second, it has a well-designed model for the phase spectrum of speech, which has been hard to model and often neglected. Third, it models the AM-FM effects in speech, which are perceptually significant but often ignored in frame-based speech processing algorithms. Experiments show that the proposed model has good potential for a number of speech processing tasks.
TEReTA generates pitch contours by incorporating a theoretical model of pitch planning, the piece-wise linear target approximation (TA) model, as the output layer of a deep recurrent neural network. It aims to model semantic variations in the F0 contour, which are challenging for existing networks. By incorporating the TA model, TEReTA is able to memorize semantic context and capture semantic variations. Experiments on contrastive focus verify TEReTA's ability in semantics modeling.
BaWN is a neural-network-based algorithm for single-channel enhancement. The biggest challenges for neural-network-based speech enhancement algorithms are poor generalizability to unseen noises and the unnaturalness of the output speech. By incorporating a neural generative model, WaveNet, in a Bayesian framework, where WaveNet predicts the prior for speech and a separate enhancement network incorporates the likelihood function, BaWN is able to achieve satisfactory generalizability and a good intelligibility score for its output, even when the noisy training set is small.
GRAB is a beamforming algorithm for ad-hoc microphone arrays. Enhancing speech with an ad-hoc microphone array is challenging because of inaccuracies in position and interference calibration. Inspired by the source-filter model, GRAB does not rely on any position or interference calibration. Instead, it incorporates a source-filter speech model and minimizes the energy that cannot be accounted for by the model. Objective and subjective evaluations on both simulated and real-world data show that GRAB is able to suppress noise effectively while keeping the speech natural and dry.
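GRAB's principle of minimizing the energy unexplained by a source-filter model can be illustrated with a toy channel selector: fit an all-pole (autoregressive) predictor to each channel and prefer the channel with the smallest prediction-residual energy. This is a hedged sketch of the idea, not the published algorithm; the function names are invented here.

```python
import numpy as np

def lpc_residual_energy(x, order=12):
    """Fit an all-pole (source-filter) predictor by least squares and
    return the energy of the prediction residual."""
    n = len(x)
    X = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ a) ** 2))

def pick_channel(channels, order=12):
    """Prefer the channel best explained by the speech (AR) model."""
    return int(np.argmin([lpc_residual_energy(c, order) for c in channels]))
```

A strongly autoregressive signal leaves a much smaller residual than white noise of the same power, so the selector favors the channel that looks most like modeled speech.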
Final chapters discuss the implications of this work for future research in speech processing.
Ad Hoc Microphone Array Calibration: Euclidean Distance Matrix Completion Algorithm and Theoretical Guarantees
This paper addresses the problem of ad hoc microphone array calibration where
only partial information about the distances between microphones is available.
We construct a matrix consisting of the pairwise distances and propose to
estimate the missing entries based on a novel Euclidean distance matrix
completion algorithm that alternates low-rank matrix completion with projection
onto the Euclidean distance space. This approach confines the recovered matrix
to the EDM cone at each iteration of the matrix completion algorithm. The
theoretical guarantees of the calibration performance are obtained considering
the random and locally structured missing entries as well as the measurement
noise on the known distances. This study elucidates the links between the
calibration error and the number of microphones along with the noise level and
the ratio of missing distances. Thorough experiments on real data recordings
and simulated setups are conducted to demonstrate these theoretical insights. A
significant improvement is achieved by the proposed Euclidean distance matrix
completion algorithm over the state-of-the-art techniques for ad hoc microphone
array calibration.
Comment: In Press, available online, August 1, 2014. http://www.sciencedirect.com/science/article/pii/S0165168414003508, Signal Processing, 201
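A minimal numpy sketch of the alternating scheme: low-rank truncation (an EDM of points in d dimensions has rank at most d + 2) followed by a simplified projection toward the EDM cone, with the known distances re-imposed each iteration. The paper's actual projection onto the EDM cone is more involved; the clipping step and the function name here are assumptions for illustration.

```python
import numpy as np

def complete_edm(D_obs, mask, d=2, iters=200):
    """Alternate rank-(d + 2) truncation with a crude projection toward
    the EDM cone (symmetric, zero diagonal, nonnegative), keeping the
    observed pairwise distances fixed."""
    D = np.where(mask, D_obs, D_obs[mask].mean())  # init missing entries
    r = d + 2
    for _ in range(iters):
        # low-rank completion step
        U, s, Vt = np.linalg.svd(D)
        D = (U[:, :r] * s[:r]) @ Vt[:r]
        # simplified projection toward the EDM cone
        D = 0.5 * (D + D.T)
        np.fill_diagonal(D, 0.0)
        D = np.maximum(D, 0.0)
        # restore the known distances
        D[mask] = D_obs[mask]
    return D
```

Here `D_obs` holds squared pairwise distances and `mask` marks the known entries; the returned matrix fills in the missing ones while respecting the basic EDM structure.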
Spatial-temporal Graph Based Multi-channel Speaker Verification With Ad-hoc Microphone Arrays
The performance of speaker verification degrades significantly in adverse
acoustic environments with strong reverberation and noise. To address this
issue, this paper proposes a spatial-temporal graph convolutional network (GCN)
method for multi-channel speaker verification with ad-hoc microphone
arrays. It includes a feature aggregation block and a channel selection block,
both of which are built on graphs. The feature aggregation block fuses speaker features across different times and channels with a spatial-temporal GCN. The
graph-based channel selection block discards the noisy channels that may
contribute negatively to the system. The proposed method is flexible in
incorporating various kinds of graphs and prior knowledge. We compared the
proposed method with six representative methods in both real-world and
simulated environments.
Experimental results show that the proposed method achieves a lower equal error rate (EER) than the strongest referenced method in both the simulated and real datasets. Moreover, its performance is robust across different signal-to-noise ratios and reverberation times.
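The spatial-temporal graph idea, with nodes indexed by (time, channel), temporal edges within a channel, and spatial edges across channels at the same frame, can be sketched with a plain GCN propagation rule. This is an illustrative toy rather than the paper's architecture; the adjacency design and all names are assumptions.

```python
import numpy as np

def spatial_temporal_graph(T, C):
    """Adjacency over T*C (time, channel) nodes: temporal edges link
    consecutive frames within a channel; spatial edges link all channels
    at the same frame."""
    n = T * C
    A = np.zeros((n, n))

    def idx(t, c):
        return t * C + c

    for t in range(T):
        for c in range(C):
            if t + 1 < T:  # temporal neighbor
                A[idx(t, c), idx(t + 1, c)] = A[idx(t + 1, c), idx(t, c)] = 1.0
            for c2 in range(c + 1, C):  # spatial neighbors
                A[idx(t, c), idx(t, c2)] = A[idx(t, c2), idx(t, c)] = 1.0
    return A

def gcn_layer(H, A, W):
    """One GCN propagation step: self-loops, symmetric normalization,
    linear transform, ReLU."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)
```

Channel selection could then be realized by scoring or pruning the channel nodes of the same graph.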
Time-domain Ad-hoc Array Speech Enhancement Using a Triple-path Network
Deep neural networks (DNNs) are very effective for multichannel speech
enhancement with fixed array geometries. However, it is not trivial to use DNNs
for ad-hoc arrays with unknown order and placement of microphones. We propose a
novel triple-path network for ad-hoc array processing in the time domain. The
key idea in the network design is to divide the overall processing into spatial
processing and temporal processing and use self-attention for spatial
processing. Using self-attention for spatial processing makes the network
invariant to the order and the number of microphones. The temporal processing
is done independently for all channels using a recently proposed dual-path
attentive recurrent network. The proposed network is a multiple-input
multiple-output architecture that can simultaneously enhance signals at all
microphones. Experimental results demonstrate the excellent performance of the
proposed approach. Further, we present analysis to demonstrate the
effectiveness of the proposed network in utilizing multichannel information even from microphones at far locations.
Comment: Accepted for publication in INTERSPEECH 202
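Why self-attention makes the spatial path invariant to microphone order: the same projection weights are applied to every channel, so permuting the channel axis only permutes the output rows. A minimal numpy sketch with hypothetical names, not the paper's network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(X, Wq, Wk, Wv):
    """Self-attention across the microphone axis. X: (M, F), one row of
    features per channel. Shared weights make the layer equivariant to
    channel order and independent of the number of channels M."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return A @ V
```

Because nothing in the layer depends on which row is which, reordering or adding microphones requires no retraining of the spatial path.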
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem by modeling the acoustics of the reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources by localizing the early images of the speakers via sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through convex optimization, exploiting a joint sparsity model formulated on the spatio-spectral sparsity of the concurrent speech representation. The acoustic parameters are then incorporated for separating
individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
Comment: 31 pages
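Localizing early images by sparse approximation of spatial spectra is commonly done with greedy or convex sparse solvers. As an illustration of the building block (not the paper's full pipeline), here is orthogonal matching pursuit over a dictionary of candidate steering vectors; the dictionary and all names are hypothetical.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms
    (candidate source/image locations) that best explain y, refitting
    the coefficients by least squares after each selection."""
    residual = y.astype(complex)
    support = []
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0          # do not re-select chosen atoms
        support.append(int(np.argmax(corr)))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef    # project y off the chosen atoms
    return sorted(support)
```

In the room-modeling setting, each atom would be a free-space steering vector for one candidate image-source position, and the recovered support localizes the early reflections.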