
    Stochastic Behavior of the Nonnegative Least Mean Fourth Algorithm for Stationary Gaussian Inputs and Slow Learning

    Some system identification problems impose nonnegativity constraints on the parameters to be estimated because of inherent physical characteristics of the unknown system. The nonnegative least-mean-square (NNLMS) algorithm and its variants make it possible to address this problem in an online manner. A nonnegative least mean fourth (NNLMF) algorithm was recently proposed to improve the performance of these algorithms in cases where the measurement noise is not Gaussian. This paper provides a first theoretical analysis of the stochastic behavior of the NNLMF algorithm for stationary Gaussian inputs and slow learning. Simulation results illustrate the accuracy of the proposed analysis. Comment: 11 pages, 8 figures, submitted for publication.
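    The flavor of the algorithm can be sketched in a few lines. The update below uses the NNLMS-style multiplicative-gradient form with the fourth-power (e³) error term of NNLMF; the filter length, step size, and toy data are illustrative assumptions, not the paper's experimental setup.

    ```python
    import numpy as np

    def nnlmf(x, d, num_taps, mu):
        """Sketch of a nonnegative least mean fourth (NNLMF) update.

        w <- w + mu * e^3 * (w * u) keeps the weights in the nonnegative
        orthant when initialized there: the elementwise factor w mirrors
        the diag(w) term of the NNLMS construction, with an e^3 (LMF-
        style) error term replacing the usual e.
        """
        w = np.full(num_taps, 0.1)                 # nonnegative initialization
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]    # regressor, most recent first
            e = d[n] - w @ u                       # a priori estimation error
            w = w + mu * e**3 * w * u              # elementwise diag(w) @ u
            w = np.maximum(w, 0.0)                 # guard against numerical undershoot
        return w

    # Toy identification of a nonnegative two-tap system.
    rng = np.random.default_rng(0)
    w_true = np.array([0.5, 0.3])
    x = rng.standard_normal(5000)
    d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat = nnlmf(x, d, num_taps=2, mu=0.05)
    ```

    Note the cubic error term: near convergence the e³ factor makes updates very small, which is one reason the slow-learning regime analyzed in the paper is the natural operating point.
    
    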

    A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines higher- and lower-order measures of the error into a single continuous update based on the error magnitude. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic and cover other well-known cost functions as described in the paper. The LMLS algorithm achieves convergence performance comparable to that of the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state, and tracking performance of the introduced algorithms and demonstrate the agreement between the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples. Comment: Submitted to IEEE Transactions on Signal Processing.
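    The behavior described above can be sketched concretely. The gradient of a relative logarithmic cost scales the base update by roughly F(e)/(1 + F(e)); with constants absorbed into the step size this gives the two update forms below. The exact scaling and the toy data are my illustrative reading of the abstract, not the paper's exact derivation.

    ```python
    import numpy as np

    def logcost_filter(x, d, num_taps, mu, variant="lmls"):
        """Sketch of the logarithmic-cost family.

        LMLS: mu * u * e^3 / (1 + e^2)          -- LMF-like for small e,
                                                    LMS-like for large e.
        LLAD: mu * u * sign(e) * |e| / (1 + |e|) -- LMS-like for small e,
                                                    sign-algorithm-like for
                                                    large (impulsive) e.
        """
        w = np.zeros(num_taps)
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]
            e = d[n] - w @ u
            if variant == "lmls":
                w = w + mu * u * e**3 / (1.0 + e**2)
            else:                                  # "llad"
                w = w + mu * u * np.sign(e) * abs(e) / (1.0 + abs(e))
        return w

    # Toy identification; LLAD is run with occasional large impulses added.
    rng = np.random.default_rng(1)
    w_true = np.array([0.5, -0.3])
    x = rng.standard_normal(4000)
    d_clean = np.convolve(x, w_true)[:len(x)]
    noise = 0.01 * rng.standard_normal(len(x))
    impulses = 10.0 * np.sign(rng.standard_normal(len(x))) * (rng.random(len(x)) < 0.01)
    w_lmls = logcost_filter(x, d_clean + noise, 2, 0.05, "lmls")
    w_llad = logcost_filter(x, d_clean + noise + impulses, 2, 0.1, "llad")
    ```

    The key design point is visible in the code: the LLAD update is bounded by mu*|u| no matter how large the error, which is what makes it robust to the impulsive interference mentioned in the abstract.
    
    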

    Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm

    The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite nonlinearity model order. A Gaussian KLMS has two design parameters: the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models of the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and steady state. This allows the explicit analytical determination of stability limits, and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
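    For readers unfamiliar with the recursion being analyzed, here is a minimal growing-dictionary Gaussian KLMS sketch (no sparsification): each step predicts with a kernel expansion over all past inputs, then appends the current input as a new center weighted by mu times the error. The step size, bandwidth, and toy regression task are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_klms(U, d, mu, bandwidth):
        """Gaussian kernel LMS: f(u) = sum_j alpha_j * k(u, c_j)."""
        centers, alphas, errors = [], [], []
        for u, dn in zip(U, d):
            if centers:
                C = np.asarray(centers)
                k = np.exp(-np.sum((C - u) ** 2, axis=1) / (2 * bandwidth**2))
                y = float(np.dot(alphas, k))      # current prediction
            else:
                y = 0.0
            e = dn - y                            # a priori error
            centers.append(u)                     # new center = current input
            alphas.append(mu * e)                 # its coefficient
            errors.append(e)
        return np.asarray(errors)

    # Toy nonlinear regression: d = sin(3u) + small noise.
    rng = np.random.default_rng(2)
    U = rng.uniform(-1, 1, size=(2000, 1))
    d = np.sin(3 * U[:, 0]) + 0.01 * rng.standard_normal(2000)
    err = gaussian_klms(U, d, mu=0.5, bandwidth=0.3)
    ```

    The two design parameters the paper's models target are exactly the `mu` and `bandwidth` arguments here; both the learning curve and the stability limit depend on their interplay.
    
    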

    Time-varying signal processing using multi-wavelet basis functions and a modified block least mean square algorithm

    This paper introduces a novel parametric modeling and identification method for linear time-varying systems using a modified block least mean square (LMS) approach in which the time-varying parameters are approximated using multi-wavelet basis functions. This approach can track rapidly or even sharply varying processes and is well suited to recursive estimation of process parameters because it combines wavelet approximation theory with a modified block LMS algorithm. Numerical examples show the effectiveness of the proposed method for dealing with severely nonstationary processes.
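    The core idea, expanding each time-varying tap over basis functions so that a fixed coefficient vector can be estimated by block LMS, can be sketched as follows. For brevity the multi-wavelet basis of the paper is replaced here by a trivial constant-plus-ramp basis; the data, step size, and block length are likewise illustrative.

    ```python
    import numpy as np

    def tv_block_lms(x, d, num_taps, basis_fns, mu, block):
        """Identify a linear filter whose taps vary in time.

        Each tap is modeled as a_i(t) = sum_j c[j, i] * basis_j(t); the
        Kronecker product basis(t) (x) regressor(t) turns the time-varying
        problem into a fixed-parameter one, updated once per block with the
        accumulated gradient (block LMS).
        """
        N, J = len(x), len(basis_fns)
        c = np.zeros(J * num_taps)                   # vectorized coefficients
        grad = np.zeros_like(c)
        for n in range(num_taps - 1, N):
            u = x[n - num_taps + 1:n + 1][::-1]      # ordinary regressor
            b = np.array([f(n / N) for f in basis_fns])
            phi = np.kron(b, u)                      # expanded regressor
            e = d[n] - c @ phi
            grad += e * phi
            if (n + 1) % block == 0:                 # one update per block
                c += (mu / block) * grad
                grad[:] = 0.0
        return c.reshape(J, num_taps)                # row j multiplies basis_j

    # Taps drift linearly: a(t) = a0 + t * a1, with t normalized to [0, 1].
    rng = np.random.default_rng(3)
    N, num_taps = 8000, 2
    a0, a1 = np.array([0.5, -0.2]), np.array([0.3, 0.0])
    x = rng.standard_normal(N)
    d = np.zeros(N)
    for n in range(num_taps - 1, N):
        u = x[n - num_taps + 1:n + 1][::-1]
        d[n] = (a0 + (n / N) * a1) @ u
    d += 0.01 * rng.standard_normal(N)
    basis = [lambda t: 1.0, lambda t: t]
    c_hat = tv_block_lms(x, d, num_taps, basis, mu=0.2, block=10)
    ```

    A richer (e.g. multi-wavelet) basis plays the same role as the ramp here: it lets the fixed coefficients `c` capture sharply varying tap trajectories.
    
    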

    Maximum Entropy Vector Kernels for MIMO system identification

    Recent contributions have framed linear system identification as a nonparametric regularized inverse problem. Relying on ℓ2-type regularization, which accounts for the stability and smoothness of the impulse response to be estimated, these approaches have been shown to be competitive w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy arguments, we derive a new ℓ2 penalty deriving from a vector-valued kernel; to do so we exploit the structure of the Hankel matrix, thus simultaneously controlling complexity (measured by the McMillan degree), stability, and smoothness of the identified models. As a special case we recover the nuclear norm penalty on the squared block Hankel matrix. In contrast with previous literature on reweighted nuclear norm penalties, our kernel is described by a small number of hyper-parameters, which are iteratively updated through marginal likelihood maximization; constraining the structure of the kernel acts as a (hyper)regularizer which helps control the effective degrees of freedom of our estimator. To optimize the marginal likelihood we adapt a Scaled Gradient Projection (SGP) algorithm which proves significantly cheaper computationally than other first- and second-order off-the-shelf optimization methods. The paper also contains an extensive comparison with many state-of-the-art methods on several Monte Carlo studies, which confirms the effectiveness of our procedure.
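    The link between the Hankel matrix, the McMillan degree, and the nuclear norm can be checked numerically: for a linear system, the Hankel matrix of Markov parameters has rank equal to the McMillan degree, and its nuclear norm (sum of singular values) is the standard convex surrogate for that rank. The second-order system below is an illustrative example, not one from the paper.

    ```python
    import numpy as np

    def hankel_from_markov(g, rows):
        """Hankel matrix H[i, j] = g[i + j] built from Markov parameters g."""
        cols = len(g) - rows + 1
        return np.array([[g[i + j] for j in range(cols)] for i in range(rows)])

    # Markov parameters of an illustrative 2nd-order system: the parallel
    # connection of two first-order modes with poles 0.8 and -0.5.
    k = np.arange(1, 13)
    g = 0.8**k + (-0.5)**k

    H = hankel_from_markov(g, rows=6)          # 6 x 7 Hankel matrix
    s = np.linalg.svd(H, compute_uv=False)     # singular values, descending
    nuclear_norm = s.sum()                     # convex surrogate for rank
    mcmillan_degree = int(np.sum(s > 1e-9 * s[0]))
    ```

    Only two singular values are (numerically) nonzero, matching the two modes; penalizing `nuclear_norm` therefore biases the identified impulse response toward low McMillan degree, which is the mechanism the kernel above encodes.
    
    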

    Analysis and Evaluation of the Family of Sign Adaptive Algorithms

    In this thesis, four novel sign adaptive algorithms proposed by the author were analyzed and evaluated for floating-point arithmetic operations: Sign Regressor Least Mean Fourth (SRLMF), Sign Regressor Least Mean Mixed-Norm (SRLMMN), Normalized Sign Regressor Least Mean Fourth (NSRLMF), and Normalized Sign Regressor Least Mean Mixed-Norm (NSRLMMN). The performance of the latter three algorithms was analyzed and evaluated for real-valued data only, while the performance of the SRLMF algorithm was analyzed and evaluated for both real- and complex-valued data. Additionally, four sign adaptive algorithms proposed by other researchers were also analyzed and evaluated for floating-point arithmetic operations: Sign Regressor Least Mean Square (SRLMS), Sign-Sign Least Mean Square (SSLMS), Normalized Sign-Error Least Mean Square (NSLMS), and Normalized Sign Regressor Least Mean Square (NSRLMS). The performance of the latter three algorithms was analyzed and evaluated for both real- and complex-valued data, while the performance of the SRLMS algorithm was analyzed and evaluated for complex-valued data only. The framework employed in this thesis relies on the energy-conservation approach, applied uniformly to all eight sign adaptive algorithms; it stands out as the common theme running through the treatment of their performance.
    Some results from the performance evaluation of the four novel algorithms (SRLMF, SRLMMN, NSRLMF, and NSRLMMN) are as follows. The convergence performance of the SRLMF and SRLMMN algorithms for real-valued data was shown to be similar to that of the Least Mean Fourth (LMF) and Least Mean Mixed-Norm (LMMN) algorithms, respectively. Moreover, the NSRLMF and NSRLMMN algorithms were shown to exhibit a compromised convergence performance for real-valued data compared to the Normalized Least Mean Fourth (NLMF) and Normalized Least Mean Mixed-Norm (NLMMN) algorithms, respectively. Some misconceptions among biomedical signal processing researchers concerning the implementation of adaptive noise cancelers using the Sign-Error Least Mean Fourth (SLMF), Sign-Sign Least Mean Fourth (SSLMF), and their variant algorithms were also cleared up. Finally, three of the novel algorithms (SRLMF, SRLMMN, and NSRLMF) have been successfully employed by other researchers and the author in applications ranging from power quality improvement in the distribution system to the removal of multiple artifacts from physiological signals such as the ElectroCardioGram (ECG) and ElectroEncephaloGram (EEG).
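    The sign-regressor idea common to these algorithms is simple: the LMF-style update mu * e³ * u has its regressor replaced by its sign, trading gradient precision for multiplier-free, hardware-friendly updates. A minimal SRLMF sketch, with illustrative data and step size (not the thesis's experiments):

    ```python
    import numpy as np

    def srlmf(x, d, num_taps, mu):
        """Sign regressor LMF sketch: LMF update with sign(u) in place of u."""
        w = np.zeros(num_taps)
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]
            e = d[n] - w @ u
            w = w + mu * e**3 * np.sign(u)   # only the sign of the regressor
        return w

    # Toy identification of a two-tap system with real-valued data.
    rng = np.random.default_rng(4)
    w_true = np.array([0.5, -0.3])
    x = rng.standard_normal(6000)
    d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat = srlmf(x, d, num_taps=2, mu=0.05)
    ```

    Replacing `np.sign(u)` with `u` recovers plain LMF, which is the comparison underlying the "similar convergence performance" result quoted above.
    
    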

    Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm

    As one of the recently proposed algorithms for sparse system identification, the l0-norm constraint Least Mean Square (l0-LMS) algorithm modifies the cost function of the traditional method with a penalty on tap-weight sparsity. The performance of l0-LMS is quite attractive compared with that of its various precursors. However, there has been no detailed study of its performance. This paper presents a comprehensive theoretical performance analysis of l0-LMS for white Gaussian input data based on some reasonable assumptions. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to the algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Using a Taylor series approximation, the instantaneous behavior is also derived. In addition, the relationship between l0-LMS and some prior work is established, together with sufficient conditions for l0-LMS to accelerate convergence. Finally, all of the theoretical results are compared with simulations and are shown to agree well over a wide range of parameter settings. Comment: 31 pages, 8 figures.
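    A sketch of the update being analyzed: plain LMS plus a zero-attraction term derived from the exponential approximation ||w||₀ ≈ Σ(1 − exp(−β|wᵢ|)). (The original algorithm uses a first-order Taylor approximation of this attractor; the exponential form is used here for clarity, and the parameter values are illustrative.)

    ```python
    import numpy as np

    def l0_lms(x, d, num_taps, mu, kappa, beta):
        """l0-LMS sketch: LMS plus an exponential zero-attraction term."""
        w = np.zeros(num_taps)
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]
            e = d[n] - w @ u
            # Gradient of sum(1 - exp(-beta*|w_i|)): pulls small taps to zero.
            attract = beta * np.sign(w) * np.exp(-beta * np.abs(w))
            w = w + mu * e * u - kappa * attract
        return w

    # Sparse toy system: only 2 of 8 taps are active.
    rng = np.random.default_rng(5)
    w_true = np.array([1.0, 0, 0, 0.5, 0, 0, 0, 0])
    x = rng.standard_normal(5000)
    d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat = l0_lms(x, d, num_taps=8, mu=0.02, kappa=1e-4, beta=10.0)
    ```

    The trade-off the paper's parameter selection rule addresses is visible here: the attraction strength `kappa` pins the inactive taps near zero (reducing MSD on a sparse system) but biases the active taps if chosen too large.
    
    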

    Robust adaptive filtering algorithms for system identification and array signal processing in non-Gaussian environment

    This dissertation proposes four new algorithms based on fractionally lower-order statistics for adaptive filtering in a non-Gaussian interference environment. The first is the affine projection sign algorithm (APSA), based on L₁-norm minimization, which combines the ability to decorrelate colored input with the ability to suppress divergence when an outlier occurs. The second is the variable-step-size normalized sign algorithm (VSS-NSA), which adjusts its step size automatically by matching the L₁ norm of the a posteriori error to that of the noise. The third adopts the same variable-step-size scheme but extends L₁ minimization to Lp minimization, yielding the generalized variable-step-size normalized fractionally lower-order moment (VSS-NFLOM) algorithms. Instead of a variable step size, a variable order is another way to facilitate adaptation when no a priori statistics are available, which leads to the fourth algorithm, the variable-order least mean pth norm (VO-LMP) algorithm. These algorithms are applied to system identification for impulsive interference suppression, echo cancellation, and noise reduction. They are also applied to a phased-array radar system with space-time adaptive processing (beamforming) to combat heavy-tailed non-Gaussian clutter. The proposed algorithms are tested by extensive computer simulations. The results demonstrate significant performance improvements in terms of convergence rate, steady-state error, computational simplicity, and robustness against impulsive noise and interference --Abstract, page iv
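    The first of the four algorithms can be sketched briefly. The APSA update below reuses the last few regressors (which decorrelates colored input, as in affine projection) and replaces the error vector by its sign, normalizing by the norm of the projected sign direction so every step has bounded size; the AR(1) input, impulse model, and step size are illustrative assumptions, not the dissertation's setup.

    ```python
    import numpy as np

    def apsa(x, d, num_taps, order, mu, delta=1e-4):
        """Affine projection sign algorithm sketch (L1-based, bounded steps)."""
        w = np.zeros(num_taps)
        for n in range(num_taps + order - 2, len(x)):
            # Columns are the `order` most recent regressors.
            X = np.column_stack([x[n - k - num_taps + 1:n - k + 1][::-1]
                                 for k in range(order)])
            e = d[n - order + 1:n + 1][::-1] - X.T @ w   # errors at n, n-1, ...
            direction = X @ np.sign(e)                   # sign of the error only
            w = w + mu * direction / (np.linalg.norm(direction) + delta)
        return w

    # Colored AR(1) input, small noise, plus occasional large impulses.
    rng = np.random.default_rng(6)
    N = 8000
    w_true = np.array([0.5, -0.3, 0.2])
    v = rng.standard_normal(N)
    x = np.zeros(N)
    for n in range(1, N):
        x[n] = 0.8 * x[n - 1] + v[n]
    noise = 0.01 * rng.standard_normal(N)
    impulses = 5.0 * np.sign(rng.standard_normal(N)) * (rng.random(N) < 0.005)
    d = np.convolve(x, w_true)[:N] + noise + impulses
    w_hat = apsa(x, d, num_taps=3, order=4, mu=0.01)
    ```

    Because the update norm is exactly mu (up to `delta`), an outlier in `d` can never produce a large jump in `w`, which is the divergence-suppression property claimed in the abstract.
    
    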