The NLMS algorithm with time-variant optimum stepsize derived from a Bayesian network perspective
In this article, we derive a new stepsize adaptation for the normalized least
mean square algorithm (NLMS) by describing the task of linear acoustic echo
cancellation from a Bayesian network perspective. Similar to the well-known
Kalman filter equations, we model the acoustic wave propagation from the
loudspeaker to the microphone by a latent state vector and define a linear
observation equation (to model the relation between the state vector and the
observation) as well as a linear process equation (to model the temporal
progress of the state vector). Based on additional assumptions on the
statistics of the random variables in observation and process equation, we
apply the expectation-maximization (EM) algorithm to derive an NLMS-like filter
adaptation. By exploiting the conditional independence rules for Bayesian
networks, we reveal that the resulting EM-NLMS algorithm has a stepsize update
equivalent to the optimal-stepsize calculation proposed by Yamamoto and
Kitayama in 1982, which has been adopted in many textbooks. The main difference is that
the instantaneous stepsize value is estimated in the M step of the EM algorithm
(instead of being approximated by artificially extending the acoustic echo
path). The EM-NLMS algorithm is experimentally verified for synthesized
scenarios with both white noise and male speech as input signals. Comment: 4 pages, 1 page of references
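For reference, the conventional NLMS recursion that the proposed EM-NLMS algorithm builds on can be sketched as follows. This is a minimal NumPy illustration of the classical fixed-stepsize NLMS update for echo-path identification, not the EM-derived time-variant stepsize from the article; the synthetic echo path `h`, the filter length, and the stepsize value are illustrative assumptions.

```python
import numpy as np

def nlms(x, d, filter_len=8, step=0.5, eps=1e-8):
    """Classical NLMS adaptive filter (fixed stepsize).

    x: loudspeaker (input) signal, d: microphone (desired) signal.
    Returns the a-priori error signal e and the final filter weights w.
    """
    w = np.zeros(filter_len)
    e = np.zeros(len(x))
    for n in range(filter_len - 1, len(x)):
        # most recent samples first: [x[n], x[n-1], ..., x[n-L+1]]
        x_vec = x[n - filter_len + 1 : n + 1][::-1]
        y = w @ x_vec                    # filter output (echo estimate)
        e[n] = d[n] - y                  # a-priori error
        # normalized update: stepsize scaled by input power
        w += step * e[n] * x_vec / (x_vec @ x_vec + eps)
    return e, w

# Identify a hypothetical FIR echo path from a white-noise input signal
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])  # assumed echo path
d = np.convolve(x, h)[: len(x)]                            # noise-free microphone signal
e, w = nlms(x, d)
```

In this noise-free setup the weight vector converges toward the true echo path, and the residual error decays accordingly; the EM-NLMS algorithm of the article replaces the fixed `step` with a stepsize estimated in the M step.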
A Bayesian Network View on Acoustic Model-Based Techniques for Robust Speech Recognition
This article provides a unifying Bayesian network view on various approaches
to acoustic model adaptation, missing-feature compensation, and uncertainty decoding that
are well-known in the literature of robust automatic speech recognition. The
representatives of these classes can often be deduced from a Bayesian network
that extends the conventional hidden Markov models used in speech recognition.
These extensions, in turn, can in many cases be motivated from an underlying
observation model that relates clean and distorted feature vectors. By
converting the observation models into a Bayesian network representation, we
formulate the corresponding compensation rules leading to a unified view on
known derivations as well as to new formulations for certain approaches. The
generic Bayesian perspective provided in this contribution thus highlights
structural differences and similarities between the analyzed approaches
- …