Robust equalization of multichannel acoustic systems
In most real-world acoustical scenarios, speech signals captured by microphones placed at a distance from the source are reverberated due to multipath propagation, and this reverberation may impair speech intelligibility. Speech dereverberation can be achieved by equalizing the channels from the source to the microphones. Equalization systems can be computed using estimates of the multichannel acoustic impulse responses. However, the estimates obtained from system identification always include errors; the fact that an equalization system can equalize the estimated multichannel acoustic system does not mean that it can equalize the true system. The objective of this thesis is to propose and investigate robust equalization methods for multichannel acoustic systems in the presence of system identification errors.
Equalization systems can be computed using the multiple-input/output inverse theorem or the multichannel least-squares method. However, equalization systems obtained from these methods are very sensitive to system identification errors. A study of the multichannel least-squares method with respect to two classes of characteristic channel zeros is conducted, and accordingly a relaxed multichannel least-squares method is proposed. Channel shortening in connection with the multiple-input/output inverse theorem and the relaxed multichannel least-squares method is also discussed.
Two algorithms that take the system identification errors into account are developed. First, an optimally-stopped weighted conjugate gradient algorithm is proposed: a conjugate gradient iterative method is employed to compute the equalization system, and the iteration process is stopped optimally with respect to the system identification errors. Second, a system-identification-error-robust equalization method exploring the use of error models is presented, which incorporates system identification error models into the weighted multichannel least-squares formulation.
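The multichannel least-squares formulation above can be illustrated with a minimal numpy sketch (function names here are ours, not the thesis's). It computes equalizers for a two-channel system from exact impulse responses; in practice only error-contaminated estimates are available, which is what motivates the robust variants proposed in the thesis.

```python
import numpy as np

def conv_matrix(h, n_cols):
    """Tall convolution (Sylvester) matrix: conv_matrix(h, n) @ g == np.convolve(h, g)."""
    H = np.zeros((len(h) + n_cols - 1, n_cols))
    for j in range(n_cols):
        H[j:j + len(h), j] = h
    return H

def mcls_equalizer(channels, g_len, delay=0):
    """Multichannel least-squares equalizer: sum_m h_m * g_m ~ a delayed impulse."""
    H = np.hstack([conv_matrix(h, g_len) for h in channels])
    d = np.zeros(H.shape[0]); d[delay] = 1.0
    g = np.linalg.lstsq(H, d, rcond=None)[0]
    return g.reshape(len(channels), g_len)

rng = np.random.default_rng(0)
h1, h2 = rng.standard_normal(8), rng.standard_normal(8)   # two random 8-tap channels
g1, g2 = mcls_equalizer([h1, h2], g_len=7)
eq = np.convolve(h1, g1) + np.convolve(h2, g2)
print(np.round(eq, 6))  # ~ [1, 0, 0, ...] when the channels share no common zeros
```

With two coprime 8-tap channels and 7-tap equalizers, the stacked convolution matrix is square and invertible, so the combined response is an exact impulse; replacing `h1` and `h2` by perturbed estimates degrades the result, reflecting the sensitivity discussed above.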
System Identification with Applications in Speech Enhancement
With the increasing popularity of hands-free telephony on portable mobile devices and the rapid development of voice over internet protocol, identification of acoustic systems has become desirable for compensating the distortions introduced to speech signals during transmission, and hence for enhancing speech quality. The objective of this research is to develop system identification algorithms for speech enhancement applications, including network echo cancellation and speech dereverberation.
A supervised adaptive algorithm for sparse system identification is developed for network echo cancellation. Within the framework of the selective-tap updating scheme for the normalized least-mean-squares (NLMS) algorithm, the MMax and sparse partial-update tap-selection strategies are exploited in the frequency domain to achieve fast convergence with low computational complexity. By demonstrating how the sparseness of the network impulse response varies in the transformed domain, the multidelay filtering structure is incorporated to reduce the algorithmic delay.
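The MMax tap-selection idea can be sketched in a few lines. The thesis applies it in the frequency domain with a multidelay structure; the sketch below is a simplified time-domain illustration with assumed parameter values, updating only the taps associated with the largest-magnitude input samples.

```python
import numpy as np

def mmax_nlms(x, d, filt_len, n_update, mu=0.5, eps=1e-8):
    """NLMS that updates only the n_update taps with the largest |x| (MMax selection)."""
    h = np.zeros(filt_len)
    x_buf = np.zeros(filt_len)
    err = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        e = d[n] - h @ x_buf
        err[n] = e
        sel = np.argsort(np.abs(x_buf))[-n_update:]      # MMax tap selection
        h[sel] += mu * e * x_buf[sel] / (x_buf @ x_buf + eps)
    return h, err

rng = np.random.default_rng(1)
h_true = np.zeros(64); h_true[[3, 10, 40]] = [1.0, -0.5, 0.25]   # sparse echo path
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)]
h_est, err = mmax_nlms(x, d, filt_len=64, n_update=16)
```

On this noiseless synthetic echo path, updating 16 of 64 taps per iteration still drives the misalignment close to zero, at a fraction of the full update cost.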
Blind identification of single-input multiple-output (SIMO) acoustic systems for speech dereverberation in the presence of common zeros is then investigated. First, the problem of common zeros is defined and extended to include the presence of near-common zeros. Two clustering algorithms are developed to quantify the number of these zeros and thereby facilitate the study of their effect on blind system identification and speech dereverberation. To mitigate this effect, two algorithms are developed: a two-stage algorithm based on channel decomposition identifies common and non-common zeros sequentially, while the forced spectral diversity approach combines spectral shaping filters with channel undermodelling to derive a modified system with improved dereverberation performance.
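A minimal sketch of counting near-common zeros by nearest-neighbour matching of channel roots follows (a simple stand-in for the clustering algorithms described above; the threshold `delta` is an assumed parameter):

```python
import numpy as np

def near_common_zeros(h1, h2, delta=0.05):
    """Count zero pairs of two channels that lie within delta of each other."""
    z1, z2 = np.roots(h1), np.roots(h2)
    pairs = []
    for a in z1:
        j = np.argmin(np.abs(z2 - a))
        if np.abs(z2[j] - a) < delta:
            pairs.append((a, z2[j]))
    return pairs

# Channels built to share one exact zero at z = 0.9 plus independent zeros:
shared = np.array([1.0, -0.9])
h1 = np.convolve(shared, [1.0, 0.4, 0.3])
h2 = np.convolve(shared, [1.0, -0.2, 0.5])
print(len(near_common_zeros(h1, h2)))   # 1 common zero detected
```

Such common or near-common zeros are exactly what break the channel diversity that blind SIMO identification relies on.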
Additionally, a solution to the scale-factor ambiguity problem in subband-based blind system identification is developed, which motivates further research on subband-based dereverberation techniques. Comprehensive simulations and discussions demonstrate the effectiveness of the aforementioned algorithms. A discussion of possible directions for prospective research on system identification techniques concludes this thesis.
Sparseness-controlled adaptive algorithms for supervised and unsupervised system identification
In single-channel hands-free telephony, the acoustic coupling between the loudspeaker and the microphone can be strong, generating echoes that degrade the user experience. Effective acoustic echo cancellation (AEC) is therefore necessary to maintain a stable system and hence improve the perceived voice quality of a call. Traditionally, adaptive filters have been deployed in acoustic echo cancellers to estimate the acoustic impulse responses (AIRs). The performance of a range of well-known adaptive algorithms is studied in the context of both AEC and network echo cancellation (NEC), providing insights into their tracking behaviour under both time-invariant and time-varying system conditions.
In the context of AEC, the level of sparseness in AIRs can vary greatly in a mobile environment. When the response is strongly sparse, the convergence of conventional approaches is poor. Drawing on techniques originally developed for NEC, a class of time-domain and frequency-domain AEC algorithms is proposed that not only works well in both sparse and dispersive circumstances, but also adapts dynamically to the level of sparseness using a new sparseness-controlled approach.
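The sparseness-controlled idea can be sketched with the widely used measure based on the ratio of the l1 and l2 norms of the estimated response, here combined with the standard IPNLMS gain rule; the mapping from the measure to the mixing parameter `alpha` below is an illustrative assumption, not the exact rule proposed in the thesis.

```python
import numpy as np

def sparseness(h, eps=1e-12):
    """Sparseness measure: 0 for a uniform vector, approaching 1 for a single spike."""
    n = len(h)
    l1, l2 = np.sum(np.abs(h)), np.sqrt(np.sum(h**2)) + eps
    return n / (n - np.sqrt(n)) * (1.0 - l1 / (np.sqrt(n) * l2))

def ipnlms_gains(h_est, alpha):
    """IPNLMS per-tap gains: alpha=-1 gives plain NLMS, alpha near 1 is fully proportionate."""
    n = len(h_est)
    return (1 - alpha) / (2 * n) + (1 + alpha) * np.abs(h_est) / (2 * np.sum(np.abs(h_est)) + 1e-12)

sparse_h = np.zeros(256); sparse_h[5] = 1.0
dispersive_h = np.ones(256)
xi_s, xi_d = sparseness(sparse_h), sparseness(dispersive_h)
# an illustrative sparseness-controlled mapping to the mixing parameter:
alpha_s, alpha_d = 2 * xi_s - 1, 2 * xi_d - 1
print(round(xi_s, 3), round(xi_d, 3))   # 1.0 0.0
```

Driving `alpha` from the measured sparseness lets one adaptive filter behave proportionately on sparse paths and like plain NLMS on dispersive ones, which is the behaviour the sparseness-controlled approach targets.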
Since the early part of an acoustic echo path is sparse while the late reverberant part is dispersive, a novel adaptive filter structure consisting of two time-domain partitioned blocks is proposed, such that a different adaptive algorithm can be used for each part. By properly controlling the mixing parameter for each partitioned block separately, with the block lengths controlled adaptively, the proposed partitioned-block algorithm works well in both sparse and dispersive time-varying circumstances.
New insight into the tracking performance of the improved proportionate NLMS (IPNLMS) algorithm is presented by deriving an expression for its mean-square error. Employing this framework for both sparse and dispersive time-varying echo paths, the analytic results are validated in practical AEC simulations.
Time-domain second-order-statistics-based blind SIMO identification algorithms, which exploit the cross-relation method, are investigated, and a technique with proportionate step-size control for both sparse and dispersive system identification is developed.
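The cross-relation method underlying these blind SIMO algorithms can be sketched for two noiseless channels: since x1 = h1 * s and x2 = h2 * s, we have x1 * h2 = x2 * h1, so the channels appear, up to an unknown scale, in the null space of a stacked convolution matrix. A minimal sketch under these assumptions:

```python
import numpy as np

def conv_matrix(x, n_cols):
    """Tall convolution matrix: conv_matrix(x, n) @ h == np.convolve(x, h)."""
    C = np.zeros((len(x) + n_cols - 1, n_cols))
    for j in range(n_cols):
        C[j:j + len(x), j] = x
    return C

def cross_relation_id(x1, x2, ch_len):
    """Blind two-channel SIMO identification via the cross relation x1*h2 = x2*h1."""
    A = np.hstack([conv_matrix(x1, ch_len), -conv_matrix(x2, ch_len)])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    v = vt[-1]                         # null vector holds [h2; h1] up to scale
    return v[ch_len:], v[:ch_len]      # (h1, h2)

rng = np.random.default_rng(2)
s = rng.standard_normal(400)
h1 = np.array([1.0, 0.5, -0.2]); h2 = np.array([0.7, -0.3, 0.4])
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)
e1, e2 = cross_relation_id(x1, x2, ch_len=3)
scale = (e1 @ h1) / (e1 @ e1)          # resolve the inherent scale ambiguity
print(np.round(scale * e1, 3), np.round(scale * e2, 3))
```

Here `scale` is fixed using the true `h1` only to display the result; blind methods recover the channels up to an unknown scale factor, and the recovery fails when the channels share common zeros, as discussed above.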
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem by modeling the acoustics of reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources, relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through convex optimization exploiting a joint sparsity model formulated upon the spatio-spectral sparsity of the concurrent speech representation. The acoustic parameters are then incorporated to separate the individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. Experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition.
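The localization-by-sparse-approximation step can be illustrated with a toy far-field example: orthogonal matching pursuit over a grid of directions for a uniform linear array in a free-space model. The array geometry, grid, and function names are assumptions for illustration; the paper localizes early images of speakers rather than far-field directions.

```python
import numpy as np

def steering(theta, n_mics, spacing_wavelengths=0.5):
    """Far-field ULA steering vector (free-space model) at a single frequency."""
    m = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing_wavelengths * m * np.sin(theta))

def omp_localize(y, thetas, n_mics, n_sources):
    """Orthogonal matching pursuit over a direction grid (sparse spatial spectrum)."""
    D = np.column_stack([steering(t, n_mics) for t in thetas])
    support, r = [], y.copy()
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(D.conj().T @ r))))   # best-matching direction
        Ds = D[:, support]
        coef = np.linalg.lstsq(Ds, y, rcond=None)[0]              # refit on the support
        r = y - Ds @ coef                                         # deflate the residual
    return sorted(support)

thetas = np.linspace(-np.pi / 2, np.pi / 2, 181)     # 1-degree grid
n_mics = 16
true_idx = [60, 120]                                 # two "image sources" on the grid
y = sum(steering(thetas[i], n_mics) for i in true_idx)
print(omp_localize(y, thetas, n_mics, n_sources=2))  # [60, 120]
```

Greedy sparse approximation of this kind recovers a small number of dominant virtual sources from a single spatial snapshot, which is the role it plays in characterizing the early room reflections.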
Block-Online Multi-Channel Speech Enhancement Using DNN-Supported Relative Transfer Function Estimates
This work addresses the problem of block-online processing for multi-channel
speech enhancement. Such processing is vital in scenarios with moving speakers
and/or when very short utterances are processed, e.g., in voice assistant
scenarios. We consider several variants of a system that performs beamforming
supported by DNN-based voice activity detection (VAD) followed by
post-filtering. The speaker is targeted through estimating relative transfer
functions between microphones. Each block of the input signals is processed
independently in order to make the method applicable in highly dynamic
environments. Owing to the short length of the processed block, the statistics
required by the beamformer are estimated less precisely. The influence of this
inaccuracy is studied and compared to the processing regime when recordings are
treated as one block (batch processing). The proposed method is experimentally evaluated on the large CHiME-4 datasets and on another dataset featuring a moving target speaker. The experiments are evaluated in terms of objective and perceptual criteria, such as the signal-to-interference ratio (SIR) and the perceptual evaluation of speech quality (PESQ), respectively. Moreover, the word error rate (WER) achieved by a baseline automatic speech recognition system is evaluated, for which the enhancement method serves as a front-end solution. The results indicate that the proposed method is robust with respect to the short length of the processed block; significant improvements in terms of these criteria and WER are observed even for a block length of 250 ms. (This is a modified version of the article accepted for publication in the IET Signal Processing journal: the original results are unchanged, with additional experiments and a refined discussion and conclusion.)
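The core beamforming step can be sketched for one frequency bin: an MVDR beamformer steered by a relative transfer function (RTF), using a noise covariance estimated from noise-only frames. The transfer functions and noise model below are synthetic assumptions; the paper estimates RTFs per block with DNN-supported VAD.

```python
import numpy as np

def mvdr_rtf(noise_cov, rtf):
    """MVDR weights for one frequency bin, steered by a relative transfer function."""
    phi_inv_h = np.linalg.solve(noise_cov, rtf)
    return phi_inv_h / (rtf.conj() @ phi_inv_h)   # distortionless toward the RTF

rng = np.random.default_rng(3)
n_mics, n_frames = 4, 2000
a = np.array([1.0, 0.8 - 0.2j, 0.5 + 0.4j, 0.3j])   # target transfer functions (assumed)
rtf = a / a[0]                                      # relative to reference microphone 0
s = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
noise = 0.5 * (rng.standard_normal((n_mics, n_frames))
               + 1j * rng.standard_normal((n_mics, n_frames)))
x = np.outer(a, s) + noise                          # microphone signals in this bin
phi_n = noise @ noise.conj().T / n_frames           # noise covariance (noise-only frames)
w = mvdr_rtf(phi_n, rtf)
out = w.conj() @ x
# distortionless response: the target component passes exactly as seen at microphone 0
```

The distortionless constraint w^H h = 1 means the target is preserved as observed at the reference microphone while the spatially correlated noise is attenuated; with short blocks, `phi_n` is estimated from few frames, which is precisely the inaccuracy the paper studies.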
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
A Noise-Robust Method with Smoothed \ell_1/\ell_2 Regularization for Sparse Moving-Source Mapping
The method described here performs blind deconvolution of the beamforming output in the frequency domain. To provide accurate blind deconvolution, sparsity priors are introduced through a smoothed \ell_1/\ell_2 regularization term. Since the mean of the noise in the power spectrum domain depends on its variance in the time domain, the proposed method includes a variance estimation step, which allows more robust blind deconvolution. The method is validated on both simulated and real data, and its performance is compared with two well-known methods from the literature: the deconvolution approach for the mapping of acoustic sources, and sound density modeling.
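A common smoothed form of the \ell_1/\ell_2 prior is sum_i sqrt(x_i^2 + eps) / sqrt(sum_i x_i^2 + eps), which is differentiable everywhere. The sketch below, with a toy linear operator standing in for the beamforming blur and plain gradient descent (both assumptions for illustration, not the paper's exact algorithm), shows the penalty, its gradient, and a small deconvolution run:

```python
import numpy as np

def smoothed_l1_l2(x, eps=1e-6):
    """Smoothed l1/l2 ratio: a differentiable sparsity prior."""
    l1 = np.sum(np.sqrt(x**2 + eps))
    l2 = np.sqrt(np.sum(x**2) + eps)
    return l1 / l2

def grad_smoothed_l1_l2(x, eps=1e-6):
    """Closed-form gradient of the smoothed l1/l2 ratio."""
    l1 = np.sum(np.sqrt(x**2 + eps))
    l2 = np.sqrt(np.sum(x**2) + eps)
    return (x / np.sqrt(x**2 + eps)) / l2 - l1 * x / l2**3

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 80))              # toy stand-in for the beamforming blur
x_true = np.zeros(80); x_true[[7, 33]] = [2.0, -1.5]   # sparse source map
y = A @ x_true
lam, step = 0.1, 1e-3
obj = lambda v: 0.5 * np.sum((y - A @ v)**2) + lam * smoothed_l1_l2(v)
x = np.zeros(80)
for _ in range(2000):                          # plain gradient descent on the objective
    g = A.T @ (A @ x - y) + lam * grad_smoothed_l1_l2(x)
    x -= step * g
```

Unlike a plain \ell_1 penalty, the \ell_1/\ell_2 ratio is scale-invariant, so it favours sparse supports without shrinking the amplitudes of the retained sources toward zero.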