2,009 research outputs found

    Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm

    As one of the recently proposed algorithms for sparse system identification, the l_0 norm constraint Least Mean Square (l_0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l_0-LMS is quite attractive compared with its various precursors; however, no detailed study of its performance has been reported. This paper presents a comprehensive theoretical performance analysis of l_0-LMS for white Gaussian input data based on some reasonable assumptions. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to the algorithm parameters and the system sparsity. A parameter selection rule is established for achieving the best performance. Using a Taylor series approximation, the instantaneous behavior is also derived. In addition, the relationship between l_0-LMS and some of its predecessors is established, along with sufficient conditions for l_0-LMS to accelerate convergence. Finally, all of the theoretical results are compared with simulations and are shown to agree well over a wide range of parameter settings.
    Comment: 31 pages, 8 figures
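    A minimal numpy sketch of the l_0-LMS update may make the algorithm under analysis concrete. It uses the piecewise-linear zero-attractor approximation of the exponential l_0-norm surrogate from the original l_0-LMS proposal; the function name and the parameter values for mu, kappa and beta below are illustrative assumptions, not values taken from this paper.

        import numpy as np

        def l0_lms(x, d, num_taps, mu=0.01, kappa=1e-4, beta=10.0):
            # Illustrative l_0-LMS sketch: the standard LMS recursion plus a
            # zero-attraction term that nudges small tap weights toward zero.
            w = np.zeros(num_taps)
            e = np.zeros(len(x))
            for n in range(num_taps - 1, len(x)):
                u = x[n - num_taps + 1:n + 1][::-1]    # tap-input vector
                e[n] = d[n] - w @ u                    # a priori error
                # Piecewise-linear approximation of the gradient of the
                # surrogate penalty sum(1 - exp(-beta*|w_i|)); it acts only
                # on taps inside the attraction region |w_i| <= 1/beta.
                g = np.where(np.abs(w) <= 1.0 / beta,
                             2.0 * beta**2 * w - 2.0 * beta * np.sign(w),
                             0.0)
                w = w + mu * e[n] * u + kappa * g      # kappa = mu * gamma
            return w, e

    Setting kappa = 0 removes the zero attractor and recovers the conventional LMS recursion, the baseline against which the sparsity penalty is compared.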

    New sequential partial-update least mean M-estimate algorithms for robust adaptive system identification in impulsive noise

    The sequential partial-update least mean square (S-LMS)-based algorithms are efficient methods for reducing the arithmetic complexity of adaptive system identification and other industrial informatics applications. They are also attractive in acoustic applications where long impulse responses are encountered. A limitation of these algorithms is their degraded performance in impulsive noise environments. This paper proposes new robust counterparts for the S-LMS family based on M-estimation. The proposed sequential least mean M-estimate (S-LMM) family of algorithms employs an error nonlinearity to improve robustness to impulsive noise. Another contribution of this paper is a convergence performance analysis of the S-LMS/S-LMM family for Gaussian inputs and additive Gaussian or contaminated Gaussian noise. The analysis helps engineers understand the behavior of these algorithms and select appropriate parameters for practical realizations. The theoretical analyses reveal the advantages of input normalization and of M-estimation in combating impulsive noise. Computer simulations on system identification and on joint active noise and acoustic echo cancellation in automobiles with double-talk are conducted to verify the theoretical results and the effectiveness of the proposed algorithms. © 2010 IEEE.
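    As a rough illustration of the idea, the sketch below combines a sequential (round-robin) partial update with a simple Huber clipping nonlinearity. The paper's algorithms use a modified Huber function with adaptively estimated thresholds, so the block schedule, names and parameter values here are assumptions for illustration only.

        import numpy as np

        def huber_psi(e, xi=1.0):
            # Huber influence function: linear for small errors, clipped for
            # large (impulsive) ones. A fixed threshold xi is assumed here;
            # the paper estimates its thresholds adaptively.
            return np.clip(e, -xi, xi)

        def s_lmm(x, d, num_taps, mu=0.01, num_blocks=4, xi=1.0):
            # Sequential least mean M-estimate sketch: only one block of
            # num_taps/num_blocks coefficients is updated per iteration,
            # cycling through the blocks in sequence.
            w = np.zeros(num_taps)
            blocks = np.array_split(np.arange(num_taps), num_blocks)
            e = np.zeros(len(x))
            for n in range(num_taps - 1, len(x)):
                u = x[n - num_taps + 1:n + 1][::-1]
                e[n] = d[n] - w @ u
                b = blocks[n % num_blocks]             # block scheduled now
                w[b] += mu * huber_psi(e[n], xi) * u[b]
            return w, e

    Replacing huber_psi with the identity recovers the non-robust S-LMS baseline that the abstract compares against.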

    A New Variable Regularized Transform Domain NLMS Adaptive Filtering Algorithm: Acoustic Applications and Performance Analysis


    On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises

    This paper studies the convergence behavior of the least mean M-estimate (LMM) and normalized least mean M-estimate (NLMM) algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises. These algorithms are based on an M-estimate cost function and employ an error nonlinearity to achieve improved robustness in impulsive noise environments over their conventional LMS and NLMS counterparts. Using Price's theorem and an extension of the method proposed by Bershad (IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-34(4), 793-806, 1986, and 35(5), 636-644, 1987), we first derive new decoupled difference equations that describe the mean and mean square convergence behaviors of these algorithms for Gaussian inputs and additive Gaussian noise. These new expressions, written in terms of generalized Abelian integral functions, closely resemble those for the LMS algorithm and allow us to interpret the convergence performance and determine the step-size stability bound of the studied algorithms. Next, using an extension of Price's theorem to Gaussian mixtures, similar results are obtained for the additive contaminated Gaussian noise case. The theoretical analysis and the practical advantages of the LMM/NLMM algorithms are verified through computer simulations. © 2009 Springer Science+Business Media, LLC.
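    To make the NLMM recursion concrete, here is a minimal sketch assuming a hard-clipping Huber influence function; the actual algorithms use a modified Huber nonlinearity with adaptively estimated thresholds, and the parameter values are illustrative. Dropping the energy normalization gives the LMM variant, and letting xi grow without bound recovers the standard NLMS update.

        import numpy as np

        def nlmm(x, d, num_taps, mu=0.5, xi=1.0, eps=1e-8):
            # Normalized least mean M-estimate sketch: an NLMS-style update
            # driven by a robust, clipped version of the estimation error.
            w = np.zeros(num_taps)
            e = np.zeros(len(x))
            for n in range(num_taps - 1, len(x)):
                u = x[n - num_taps + 1:n + 1][::-1]
                e[n] = d[n] - w @ u
                psi_e = np.clip(e[n], -xi, xi)         # error nonlinearity
                w = w + mu * psi_e * u / (eps + u @ u)
            return w, e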

    Convergence behavior of NLMS algorithm for Gaussian inputs: Solutions using generalized Abelian integral functions and step size selection

    This paper studies the mean and mean square convergence behaviors of the normalized least mean square (NLMS) algorithm with Gaussian inputs and additive white Gaussian noise. Using Price's theorem and the framework proposed by Bershad in IEEE Transactions on Acoustics, Speech, and Signal Processing (1986, 1987), new expressions are derived, in terms of generalized Abelian integral functions, for the excess mean square error, the stability bound, and the decoupled difference equations describing the mean and mean square convergence behaviors of the NLMS algorithm. These new expressions, which closely resemble those of the LMS algorithm, allow us to interpret the convergence performance of the NLMS algorithm in a Gaussian environment. The theoretical analysis is in good agreement with computer simulation results and also gives new insight into step-size selection. © 2009 Springer Science+Business Media, LLC.
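    A small self-contained simulation of the NLMS recursion analyzed here may be useful: it identifies a random FIR system under white Gaussian input and noise. The filter length, step size and noise level below are arbitrary illustrative choices; the normalized step size mu is commonly kept inside (0, 2) for mean-square stability, while the paper derives refined bounds via the generalized Abelian integral functions.

        import numpy as np

        def nlms(x, d, num_taps, mu=0.5, eps=1e-8):
            # Standard NLMS: step size normalized by the tap-input energy.
            w = np.zeros(num_taps)
            for n in range(num_taps - 1, len(x)):
                u = x[n - num_taps + 1:n + 1][::-1]
                e = d[n] - w @ u
                w = w + mu * e * u / (eps + u @ u)
            return w

        rng = np.random.default_rng(0)
        M = 16
        h = rng.standard_normal(M)                     # unknown system
        x = rng.standard_normal(20000)                 # white Gaussian input
        d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w = nlms(x, d, M, mu=0.5)
        print("steady-state MSD:", np.sum((w - h) ** 2))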