
    Underdetermined-order recursive least-squares adaptive filtering: The concept and algorithms


    Low-complexity RLS algorithms using dichotomous coordinate descent iterations

    In this paper, we derive low-complexity recursive least squares (RLS) adaptive filtering algorithms. We express the RLS problem in terms of auxiliary normal equations with respect to increments of the filter weights, and apply this approach to the exponentially weighted and sliding-window cases to derive new RLS techniques. Line search methods are used to solve the auxiliary equations. We first consider conjugate gradient iterations with a complexity of O(N²) operations per sample, N being the number of filter weights. To reduce the complexity and make the algorithms more suitable for finite-precision implementation, we propose a new dichotomous coordinate descent (DCD) algorithm and apply it to the auxiliary equations. This results in a transversal RLS adaptive filter with a complexity as low as 3N multiplications per sample, only slightly higher than that of the least mean squares (LMS) algorithm (2N multiplications). Simulations are used to compare the performance of the proposed algorithms against the classical RLS and known advanced adaptive algorithms. A fixed-point FPGA implementation of the proposed DCD-based RLS algorithm is also discussed, and results of such an implementation are presented.
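    The abstract names the auxiliary normal equations and the DCD solver but gives no iteration details. Below is a minimal Python sketch of one plausible exponentially weighted DCD-RLS structure; the constants H (step amplitude), Mb (bit depth), and Nu (update budget), and all function and variable names, are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def dcd_solve(R, beta, dh, H=1.0, Mb=16, Nu=8):
    """Dichotomous coordinate descent: approximately solve R @ dh = beta.

    Updates dh in place using power-of-two steps only (which need no
    multiplications in a fixed-point realisation) and returns the residual.
    H, Mb, Nu are assumed tuning constants: step amplitude, number of bit
    levels, and a cap on coordinate updates per sample.
    """
    r = beta.copy()              # residual of the auxiliary equations
    d = H
    updates = 0
    for _ in range(Mb):          # halve the step size Mb times
        d *= 0.5
        settled = False
        while not settled and updates < Nu:
            settled = True
            for n in range(len(dh)):
                if abs(r[n]) > 0.5 * d * R[n, n]:
                    s = np.sign(r[n])
                    dh[n] += s * d           # power-of-two weight increment
                    r -= s * d * R[:, n]     # keep the residual consistent
                    updates += 1
                    settled = False
        if updates >= Nu:
            break
    return r

def dcd_rls(x, dsig, N=16, lam=0.99, delta=1e-2):
    """Sketch of the structure the abstract describes: per sample, update
    the correlation matrix, form a right-hand side from the a priori error,
    and solve the auxiliary equations R @ dh = beta approximately by DCD."""
    h = np.zeros(N)
    R = delta * np.eye(N)                  # assumed initial regularization
    r = np.zeros(N)                        # carried-over DCD residual
    err = np.zeros(len(x))
    for i in range(N, len(x)):
        u = x[i - N + 1:i + 1][::-1]       # regressor, most recent first
        R = lam * R + np.outer(u, u)       # exponentially weighted update
        e = dsig[i] - h @ u                # a priori error
        err[i] = e
        beta = lam * r + e * u             # right-hand side of R @ dh = beta
        dh = np.zeros(N)
        r = dcd_solve(R, beta, dh)
        h += dh                            # weight increment, not full solve
    return h, err
```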

    The NLMS algorithm with time-variant optimum stepsize derived from a Bayesian network perspective

    In this article, we derive a new stepsize adaptation for the normalized least mean square (NLMS) algorithm by describing the task of linear acoustic echo cancellation from a Bayesian network perspective. Similar to the well-known Kalman filter equations, we model the acoustic wave propagation from the loudspeaker to the microphone by a latent state vector and define a linear observation equation (to model the relation between the state vector and the observation) as well as a linear process equation (to model the temporal progress of the state vector). Based on additional assumptions on the statistics of the random variables in the observation and process equations, we apply the expectation-maximization (EM) algorithm to derive an NLMS-like filter adaptation. By exploiting the conditional independence rules for Bayesian networks, we reveal that the resulting EM-NLMS algorithm has a stepsize update equivalent to the optimal-stepsize calculation proposed by Yamamoto and Kitayama in 1982, which has been adopted in many textbooks. The main difference is that the instantaneous stepsize value is estimated in the M step of the EM algorithm (instead of being approximated by artificially extending the acoustic echo path). The EM-NLMS algorithm is experimentally verified for synthesized scenarios with both white noise and male speech as input signals.
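    For context, the Yamamoto/Kitayama optimal stepsize referenced above is commonly written as mu_opt(n) = E[e_u²(n)] / E[e²(n)], the ratio of undistorted-error (residual-echo) power to total-error power. The Python sketch below estimates that ratio by recursive smoothing under an assumed-known noise power; it illustrates the stepsize rule only, not the paper's EM derivation, and all names and constants are illustrative assumptions.

```python
import numpy as np

def nlms_optimal_stepsize(x, d, N=64, sigma_v2=1e-3, alpha=0.99, eps=1e-8):
    """NLMS with a time-variant stepsize in the spirit of the cited rule:

        mu_opt(n) = E[e_u^2(n)] / E[e^2(n)].

    The undistorted-error power E[e_u^2] is approximated as the smoothed
    total error power minus an assumed-known noise power sigma_v2; the
    smoothing factor alpha is an illustrative choice.
    """
    h = np.zeros(N)
    e_pow = eps                       # smoothed estimate of E[e^2]
    err = np.zeros(len(x))
    for n in range(N, len(x)):
        u = x[n - N + 1:n + 1][::-1]  # regressor, most recent sample first
        e = d[n] - h @ u              # a priori error
        e_pow = alpha * e_pow + (1 - alpha) * e * e
        # estimated residual-echo power = total error power - noise power
        mu = max(e_pow - sigma_v2, 0.0) / (e_pow + eps)
        h += mu * e * u / (u @ u + eps)   # normalized LMS update
        err[n] = e
    return h, err
```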

    A New Variable Regularized Transform Domain NLMS Adaptive Filtering Algorithm-Acoustic Applications and Performance Analysis


    Variable Regularized Fast Affine Projections

    This paper introduces a variable regularization method for the fast affine projection algorithm (VR-FAP). It is inspired by a recently introduced technique for variable regularization of the classical affine projection algorithm (VR-APA). In both algorithms, the regularization parameter varies as a function of the excitation, measurement noise, and residual error energies. Because of the dependence on the last quantity, VR-APA and VR-FAP exhibit the desirable property of fast convergence (via a small regularization value) when convergence is poor, and deep convergence with immunity to measurement noise (via a large regularization value) when convergence is good. While the regularization parameter of APA is explicitly available for on-line modification, FAP's regularization is only set at initialization. To overcome this problem, we use noise injection with the noise power proportional to the variable regularization parameter. As with their fixed-regularization versions, VR-FAP is considerably less complex than VR-APA, and simulations verify that the two have very similar convergence properties.
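    The abstract specifies the behaviour of the variable regularization (small while residual error dominates, large once the error floor is mostly measurement noise) and the noise-injection workaround, but not the formulas. The sketch below uses one plausible functional form and scaling; both are assumptions, not the paper's equations.

```python
import numpy as np

def variable_regularization(sigma_e2, sigma_v2, sigma_x2, N, eps=1e-12):
    """One plausible form of the rule described in the abstract: the
    regularization parameter is small while the excess (residual) error
    dominates the measurement noise, and grows once the error floor is
    mostly noise.  This exact formula is an assumption.

    sigma_e2: smoothed total error power      E[e^2]
    sigma_v2: measurement noise power         E[v^2]
    sigma_x2: excitation power                E[x^2]
    N:        filter length
    """
    sigma_eps2 = max(sigma_e2 - sigma_v2, eps)   # excess error power
    return N * sigma_x2 * sigma_v2 / sigma_eps2  # small early, large late

def inject_regularization_noise(x, delta, c=1.0, rng=None):
    """FAP accepts its regularization only at initialization, so the paper
    regularizes on-line by noise injection: add white noise whose power is
    proportional to the current delta, raising the input autocorrelation
    matrix by roughly c*delta*I.  The scaling constant c is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    return x + np.sqrt(c * delta) * rng.standard_normal(len(x))
```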