
    Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm

    As one of the recently proposed algorithms for sparse system identification, the l_0 norm constraint Least Mean Square (l_0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l_0-LMS is quite attractive compared with its various precursors; however, there has been no detailed study of its performance. This paper presents a comprehensive and thorough theoretical performance analysis of l_0-LMS for white Gaussian input data, based on some reasonable assumptions. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to the algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Using a Taylor series approximation, the instantaneous behavior is also derived. In addition, the relationship between l_0-LMS and several earlier algorithms is established, along with sufficient conditions for l_0-LMS to accelerate convergence. Finally, all of the theoretical results are compared with simulations and shown to agree well over a wide range of parameter settings.
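
    A minimal sketch of the zero-attracting update implied by the modified cost function is given below, assuming the commonly used exponential approximation of the l_0 norm and its first-order Taylor expansion; the function name and the parameter values mu, kappa, and beta are illustrative rather than taken from the paper.

        import numpy as np

        def l0_lms(x, d, M, mu=0.01, kappa=5e-4, beta=10.0):
            # x, d: 1-D NumPy arrays (input and desired signals); M: filter length.
            w = np.zeros(M)                       # adaptive tap weights
            for n in range(M - 1, len(x)):
                u = x[n - M + 1:n + 1][::-1]      # regressor, most recent sample first
                e = d[n] - w @ u                  # instantaneous error
                # Zero attractor from the first-order Taylor expansion of the
                # exponential l_0 approximation; it acts only on small-magnitude taps.
                g = np.where(np.abs(w) <= 1.0 / beta,
                             beta**2 * w - beta * np.sign(w), 0.0)
                w = w + mu * e * u + kappa * g    # LMS term plus sparsity penalty term
            return w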

    Convergence behavior of NLMS algorithm for Gaussian inputs: Solutions using generalized Abelian integral functions and step size selection

    This paper studies the mean and mean square convergence behaviors of the normalized least mean square (NLMS) algorithm with Gaussian inputs and additive white Gaussian noise. Using Price's theorem and the framework proposed by Bershad in IEEE Transactions on Acoustics, Speech, and Signal Processing (1986, 1987), new expressions for the excess mean square error, the stability bound, and the decoupled difference equations describing the mean and mean square convergence behaviors of the NLMS algorithm are derived in terms of generalized Abelian integral functions. These new expressions, which closely resemble those of the LMS algorithm, allow us to interpret the convergence performance of the NLMS algorithm in a Gaussian environment. The theoretical analysis is in good agreement with computer simulation results and also gives new insight into step size selection. © 2009 Springer Science+Business Media, LLC.
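
    For reference, the recursion whose mean and mean square behavior is characterized above is the textbook NLMS update; the sketch below, with an illustrative step size mu and regularization eps, shows that form.

        import numpy as np

        def nlms(x, d, M, mu=0.5, eps=1e-6):
            # x, d: 1-D NumPy arrays; M: filter length. Normalizing the step by the
            # instantaneous input power gives the usual stability range 0 < mu < 2.
            w = np.zeros(M)
            for n in range(M - 1, len(x)):
                u = x[n - M + 1:n + 1][::-1]     # input regressor
                e = d[n] - w @ u                 # a priori error
                w = w + mu * e * u / (eps + u @ u)
            return w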

    New sequential partial-update least mean M-estimate algorithms for robust adaptive system identification in impulsive noise

    The sequential partial-update least mean square (S-LMS)-based algorithms are efficient methods for reducing the arithmetic complexity in adaptive system identification and other industrial informatics applications. They are also attractive in acoustic applications where long impulse responses are encountered. A limitation of these algorithms is their degraded performance in an impulsive noise environment. This paper proposes new robust counterparts for the S-LMS family based on M-estimation. The proposed sequential least mean M-estimate (S-LMM) family of algorithms employs an error nonlinearity to improve robustness to impulsive noise. Another contribution of this paper is a convergence performance analysis of the S-LMS/S-LMM family for Gaussian inputs and additive Gaussian or contaminated Gaussian noise. The analysis is important for engineers to understand the behavior of these algorithms and to select appropriate parameters for practical realizations. The theoretical analyses reveal the advantages of input normalization and of M-estimation in combating impulsive noise. Computer simulations on system identification and on joint active noise and acoustic echo cancellation in automobiles with double-talk are conducted to verify the theoretical results and the effectiveness of the proposed algorithms. © 2010 IEEE.
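
    A rough sketch of the idea, assuming a fixed Huber-type score function as the error nonlinearity and an interleaved choice of coefficient subsets, is shown below; the paper's algorithms estimate the threshold adaptively from the error statistics, which is not reproduced here, and all names and parameter values are illustrative.

        import numpy as np

        def huber_score(e, xi=1.0):
            # Influence function: linear for small errors, clipped for large ones,
            # which bounds the effect of impulsive noise on the update.
            return np.clip(e, -xi, xi)

        def s_lmm(x, d, M, mu=0.01, B=2, xi=1.0):
            # x, d: 1-D NumPy arrays; M: filter length; B: number of coefficient subsets.
            w = np.zeros(M)
            for n in range(M - 1, len(x)):
                u = x[n - M + 1:n + 1][::-1]
                e = d[n] - w @ u
                idx = np.arange(n % B, M, B)      # only one subset is updated per iteration
                w[idx] = w[idx] + mu * huber_score(e, xi) * u[idx]
            return w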

    On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises

    This paper studies the convergence analysis of the least mean M-estimate (LMM) and normalized least mean M-estimate (NLMM) algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises. These algorithms are based on the M-estimate cost function and employ an error nonlinearity to achieve improved robustness in impulsive noise environments over their conventional LMS and NLMS counterparts. Using Price's theorem and an extension of the method proposed in Bershad (IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-34(4), 793-806, 1986; 35(5), 636-644, 1987), we first derive new expressions for the decoupled difference equations which describe the mean and mean square convergence behaviors of these algorithms for Gaussian inputs and additive Gaussian noise. These new expressions, which are written in terms of the generalized Abelian integral functions, closely resemble those for the LMS algorithm and allow us to interpret the convergence performance and determine the step size stability bound of the studied algorithms. Next, using an extension of Price's theorem for Gaussian mixtures, similar results are obtained for the additive contaminated Gaussian noise case. The theoretical analysis and the practical advantages of the LMM/NLMM algorithms are verified through computer simulations. © 2009 Springer Science+Business Media, LLC.
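
    The contaminated Gaussian noise model referred to above is a two-component Gaussian mixture: nominal background noise plus rare, high-variance impulses. A minimal generator, with illustrative variances and impulse probability, might look like this:

        import numpy as np

        def contaminated_gaussian(n, sigma_g=1.0, sigma_im=10.0, p=0.01, seed=None):
            # Background Gaussian noise plus a Bernoulli-gated impulsive Gaussian
            # component; the sum follows a contaminated Gaussian (mixture) density.
            rng = np.random.default_rng(seed)
            g = rng.normal(0.0, sigma_g, n)      # nominal background noise
            b = rng.random(n) < p                # impulse occurrence indicator
            return g + b * rng.normal(0.0, sigma_im, n)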

    Robust adaptive filtering algorithms for system identification and array signal processing in non-Gaussian environment

    This dissertation proposes four new algorithms based on fractionally lower-order statistics for adaptive filtering in a non-Gaussian interference environment. The first is the affine projection sign algorithm (APSA), based on L₁-norm minimization, which combines the ability to decorrelate colored input with the ability to suppress divergence when an outlier occurs. The second is the variable-step-size normalized sign algorithm (VSS-NSA), which adjusts its step size automatically by matching the L₁ norm of the a posteriori error to that of the noise. The third adopts the same variable-step-size scheme but extends L₁ minimization to Lp minimization, generalizing to the variable-step-size normalized fractionally lower-order moment (VSS-NFLOM) algorithms. Instead of a variable step size, a variable order is another way to facilitate adaptation when no a priori statistics are available, which leads to the fourth algorithm, the variable-order least mean pth norm (VO-LMP) algorithm. These algorithms are applied to system identification for impulsive interference suppression, echo cancellation, and noise reduction. They are also applied to a phased array radar system with space-time adaptive processing (beamforming) to combat heavy-tailed non-Gaussian clutter. The proposed algorithms are tested by extensive computer simulations. The results demonstrate significant performance improvements in terms of convergence rate, steady-state error, computational simplicity, and robustness against impulsive noise and interference.
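
    Of the four algorithms, the APSA update is the easiest to sketch compactly; the version below follows the standard affine projection sign recursion (the projection order P, step size mu, and regularization delta are illustrative) and is only an outline of the L₁-norm-minimization idea described above.

        import numpy as np

        def apsa(x, d, M, P=4, mu=0.05, delta=1e-6):
            # x, d: 1-D NumPy arrays; M: filter length; P: projection order.
            w = np.zeros(M)
            for n in range(M + P - 1, len(x)):
                # Columns of U are the P most recent length-M input regressors.
                U = np.column_stack([x[n - k - M + 1:n - k + 1][::-1] for k in range(P)])
                e = d[n - np.arange(P)] - U.T @ w        # a priori error vector
                g = U @ np.sign(e)                       # sign (L1-norm) update direction
                w = w + mu * g / np.sqrt(g @ g + delta)  # normalized robust update
            return w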

    Analysis and Evaluation of the Family of Sign Adaptive Algorithms

    In this thesis, four novel sign adaptive algorithms proposed by the author were analyzed and evaluated for floating-point arithmetic operations. These four algorithms are the Sign Regressor Least Mean Fourth (SRLMF), Sign Regressor Least Mean Mixed-Norm (SRLMMN), Normalized Sign Regressor Least Mean Fourth (NSRLMF), and Normalized Sign Regressor Least Mean Mixed-Norm (NSRLMMN) algorithms. The performance of the latter three algorithms has been analyzed and evaluated for real-valued data only, while the performance of the SRLMF algorithm has been analyzed and evaluated for both real- and complex-valued data. Additionally, four sign adaptive algorithms proposed by other researchers were also analyzed and evaluated for floating-point arithmetic operations: the Sign Regressor Least Mean Square (SRLMS), Sign-Sign Least Mean Square (SSLMS), Normalized Sign-Error Least Mean Square (NSLMS), and Normalized Sign Regressor Least Mean Square (NSRLMS) algorithms. The performance of the latter three algorithms has been analyzed and evaluated for both real- and complex-valued data, while the performance of the SRLMS algorithm has been analyzed and evaluated for complex-valued data only. The framework employed in this thesis relies on the energy-conservation approach, which has been applied uniformly to evaluate the performance of all eight sign adaptive algorithms; in other words, energy conservation is the common theme that runs throughout the treatment of their performance. Some results from the performance evaluation of the four novel algorithms, SRLMF, SRLMMN, NSRLMF, and NSRLMMN, are as follows. It was shown that the convergence performance of the SRLMF and SRLMMN algorithms for real-valued data is similar to that of the Least Mean Fourth (LMF) and Least Mean Mixed-Norm (LMMN) algorithms, respectively. It was also shown that the NSRLMF and NSRLMMN algorithms exhibit a compromised convergence performance for real-valued data compared with the Normalized Least Mean Fourth (NLMF) and Normalized Least Mean Mixed-Norm (NLMMN) algorithms, respectively. Some misconceptions among biomedical signal processing researchers concerning the implementation of adaptive noise cancelers using the Sign-Error Least Mean Fourth (SLMF), Sign-Sign Least Mean Fourth (SSLMF), and their variant algorithms were also removed. Finally, three of the novel algorithms, SRLMF, SRLMMN, and NSRLMF, have been successfully employed by other researchers and the author in applications ranging from power quality improvement in distribution systems to the removal of multiple artifacts from physiological signals such as the electrocardiogram (ECG) and electroencephalogram (EEG).
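
    As a simple illustration of the sign-regressor idea treated in the thesis, a minimal SRLMF sketch for real-valued data is given below; the step size is illustrative, and the complex-valued case, the mixed-norm variants, and the energy-conservation analysis itself are not reproduced here.

        import numpy as np

        def srlmf(x, d, M, mu=1e-3):
            # Sign regressor least mean fourth: the regressor enters through its sign
            # only (cheap multiplications), while the error enters through the LMF
            # cubic term e**3. x, d: 1-D NumPy arrays; M: filter length.
            w = np.zeros(M)
            for n in range(M - 1, len(x)):
                u = x[n - M + 1:n + 1][::-1]
                e = d[n] - w @ u
                w = w + mu * (e ** 3) * np.sign(u)
            return w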