
    Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs

    This paper studies the behavior of the low-rank LMS adaptive algorithm for the general case in which the input transformation may not capture the exact input subspace. It is shown that the Independence Theory and the independent additive noise model are not applicable to this case. A new theoretical model for the weight mean and fluctuation behaviors is developed which incorporates the correlation between successive data vectors (as opposed to the Independence Theory model). The new theory is applied to a network echo cancellation scheme which uses partial-Haar input vector transformations. Comparison of the new model predictions with Monte Carlo simulations shows good-to-excellent agreement, certainly much better than predicted by the Independence Theory-based model available in the literature.
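
    As a rough illustration of the reduced-rank idea above, the Python sketch below runs LMS on tap vectors projected through a low-rank input transformation T (for example, a partial-Haar matrix). The function name, signal shapes, and the choice of T are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def low_rank_lms(x, d, T, mu=0.01):
    """Transform-domain (low-rank) LMS sketch.

    x  : input signal, shape (N,)
    d  : desired signal (e.g., echo-path output plus noise), shape (N,)
    T  : input transformation of shape (r, L) with r < L
         (e.g., a partial-Haar matrix)
    mu : step size
    """
    r, L = T.shape
    w = np.zeros(r)                      # reduced-rank adaptive weights
    e = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]     # length-L tap vector
        z = T @ u                        # projected (subspace) input
        e[n] = d[n] - w @ z              # a-priori error
        w += mu * e[n] * z               # LMS update in the subspace
    return w, e
```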

    Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization with Shrinkage

    This letter proposes a novel sparsity-aware adaptive filtering scheme and algorithms based on an alternating optimization strategy with shrinkage. The proposed scheme employs a two-stage structure consisting of an alternating optimization of a diagonally-structured matrix that speeds up convergence and an adaptive filter with a shrinkage function that forces coefficients with small magnitudes to zero. We devise alternating optimization least-mean-square (LMS) algorithms for the proposed scheme and analyze its mean-square error. Simulations for a system identification application show that the proposed scheme and algorithms outperform existing sparsity-aware algorithms in convergence and tracking. Comment: 10 pages, 3 figures. IEEE Signal Processing Letters, 201
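
    A minimal sketch of the two-stage idea: a diagonal gain vector is updated in alternation with the filter coefficients, and a shrinkage (soft-threshold) step pushes small coefficients to zero. The exact recursions, step sizes, and threshold are assumptions rather than the letter's algorithm.

```python
import numpy as np

def soft_threshold(w, tau):
    """Shrinkage: force coefficients with small magnitudes toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def alternating_lms_shrinkage(x, d, L, mu_w=0.01, mu_p=0.001, tau=1e-4):
    """Alternate LMS updates of a diagonal gain vector p and a filter w,
    followed by shrinkage of w (assumed form of the two-stage scheme)."""
    w = np.zeros(L)                       # sparse adaptive filter
    p = np.ones(L)                        # diagonal (per-tap) gains
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]      # tap vector
        e = d[n] - w @ (p * u)            # error with current estimates
        p += mu_p * e * (w * u)           # update the diagonal structure
        w += mu_w * e * (p * u)           # update the filter coefficients
        w = soft_threshold(w, tau)        # shrink small coefficients
    return w
```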

    Acoustic Echo and Noise Cancellation System for Hands-Free Telecommunication using Variable Step Size Algorithms

    In this paper, an acoustic echo cancellation system with double-talk detection is implemented for a hands-free telecommunication system using Matlab. An adaptive noise canceller with blind source separation (ANC-BSS) system is proposed to remove both background noise and the far-end speaker echo signal in the presence of double-talk. During the absence of double-talk, the far-end speaker echo signal is cancelled by the adaptive echo canceller. Both the adaptive noise canceller and the adaptive echo canceller are implemented using the LMS, NLMS, VSLMS and VSNLMS algorithms. The normalized cross-correlation method is used for double-talk detection. VSNLMS shows its superiority over all other algorithms both in the presence and in the absence of double-talk. In the absence of double-talk, it is superior in terms of increased ERLE and reduced misalignment. In the presence of double-talk, it shows an improvement in the SNR of the near-end speaker signal.
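
    For reference, a plain NLMS echo-canceller loop is sketched below in Python; the variable step-size (VSNLMS) rule and the normalized cross-correlation double-talk detector described in the paper are not reproduced, and all names are illustrative.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, L=256, mu=0.5, eps=1e-6):
    """Baseline NLMS echo canceller: adapts an FIR model of the echo path
    from the far-end reference and subtracts the echo estimate from the
    microphone signal."""
    w = np.zeros(L)
    e = np.zeros(len(mic))                     # residual (echo-cancelled) signal
    for n in range(L - 1, len(mic)):
        u = far_end[n - L + 1:n + 1][::-1]     # far-end reference vector
        y = w @ u                              # echo estimate
        e[n] = mic[n] - y                      # near-end speech + residual echo
        w += (mu / (eps + u @ u)) * e[n] * u   # normalized LMS update
    return e, w
```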

    Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm

    As one of the recently proposed algorithms for sparse system identification, the l_0-norm constraint Least Mean Square (l_0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l_0-LMS is quite attractive compared with its various precursors. However, there has been no detailed study of its performance. This paper presents an all-around and thorough theoretical performance analysis of l_0-LMS for white Gaussian input data based on some reasonable assumptions. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to the algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Approximated with a Taylor series, the instantaneous behavior is also derived. In addition, the relationship between l_0-LMS and some previous works, and sufficient conditions for l_0-LMS to accelerate convergence, are established. Finally, all of the theoretical results are compared with simulations and are shown to agree well over a large range of parameter settings. Comment: 31 pages, 8 figure
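
    The commonly cited form of l_0-LMS adds a zero-attraction term to the standard LMS update, obtained from a first-order approximation of the surrogate penalty sum_i (1 - exp(-beta |w_i|)). The sketch below assumes that form; the parameter values are illustrative, not the selection rule derived in the paper.

```python
import numpy as np

def zero_attraction(w, beta):
    """Approximate gradient of sum(1 - exp(-beta*|w_i|)): active only
    for taps with |w_i| <= 1/beta, pulling them toward zero."""
    g = beta * np.sign(w) - beta**2 * w
    g[np.abs(w) > 1.0 / beta] = 0.0
    return g

def l0_lms(x, d, L, mu=0.01, kappa=1e-5, beta=10.0):
    """l_0-LMS sketch: LMS update plus a zero-attraction penalty term."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]      # tap vector
        e = d[n] - w @ u                  # a-priori error
        w += mu * e * u - kappa * zero_attraction(w, beta)
    return w
```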

    Adaptive Mixture Methods Based on Bregman Divergences

    We investigate adaptive mixture methods that linearly combine the outputs of m constituent filters running in parallel to model a desired signal. We use "Bregman divergences" and obtain certain multiplicative updates to train the linear combination weights under an affine constraint or without any constraints. We use the unnormalized relative entropy and the relative entropy to define two different Bregman divergences that produce an unnormalized exponentiated gradient update and a normalized exponentiated gradient update on the mixture weights, respectively. We then carry out the mean and mean-square transient analysis of these adaptive algorithms when they are used to combine the outputs of m constituent filters. We illustrate the accuracy of our results and demonstrate the effectiveness of these updates for sparse mixture systems. Comment: Submitted to Digital Signal Processing, Elsevier; IEEE.or
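
    A short sketch of the exponentiated gradient mixture update described above: the normalized variant keeps the combination weights on the simplex, while the unnormalized variant simply skips the renormalization. The step size and initialization are illustrative assumptions.

```python
import numpy as np

def eg_mixture(y, d, mu=0.1, normalized=True):
    """Combine m constituent filter outputs with multiplicative
    (exponentiated gradient) weight updates.

    y : outputs of the m constituent filters, shape (N, m)
    d : desired signal, shape (N,)
    """
    N, m = y.shape
    lam = np.ones(m) / m                        # mixture weights
    out = np.zeros(N)
    for n in range(N):
        out[n] = lam @ y[n]                     # combined output
        e = d[n] - out[n]                       # combination error
        lam = lam * np.exp(mu * e * y[n])       # multiplicative update
        if normalized:
            lam = lam / lam.sum()               # normalized EG (simplex)
    return lam, out
```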