
    A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines higher- and lower-order measures of the error into a single continuous update based on the error magnitude. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic and cover other well-known cost functions, as described in the paper. The LMLS algorithm achieves convergence performance comparable to the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state, and tracking performance of the introduced algorithms and demonstrate the agreement between the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples. Comment: Submitted to IEEE Transactions on Signal Processing
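
    As a hedged illustration of the kind of update rule this abstract describes, the sketch below assumes the relative logarithmic cost in the simplified form J(e) = F(e) - ln(1 + F(e)), with F(e) = e^2 giving an LMLS-style update and F(e) = |e| giving an LLAD-style update; the filter setup, step sizes, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

def lmls_update(w, x, d, mu=0.01):
    # LMLS-style step: the factor e**2 / (1 + e**2) weights the update
    # like e**3 (LMF-like) for small errors and like e (LMS-like) for
    # large errors.
    e = d - w @ x
    return w + mu * (e**3 / (1.0 + e**2)) * x, e

def llad_update(w, x, d, mu=0.01):
    # LLAD-style step: e / (1 + |e|) behaves like e (LMS-like) for small
    # errors and saturates toward sign(e) (sign-algorithm-like) for
    # impulsive errors.
    e = d - w @ x
    return w + mu * (e / (1.0 + abs(e))) * x, e

# Toy system-identification loop (illustrative, not from the paper).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(5000):
    x = rng.standard_normal(8)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, _ = lmls_update(w, x, d)
print(np.linalg.norm(w - w_true))
```

    The ratio e^2 / (1 + e^2) makes the LMLS-style update behave like the LMF recursion for small errors and like LMS for large errors, which is consistent with the extended step-size stability bound mentioned in the abstract, while the LLAD-style update saturates toward the sign algorithm under impulsive errors.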

    The Krylov-proportionate normalized least mean fourth approach: Formulation and performance analysis

    We propose novel adaptive filtering algorithms based on the mean-fourth error objective while providing further improvements in convergence performance through proportionate updates. We exploit the sparsity of the system in the mean-fourth error framework through the proportionate normalized least mean fourth (PNLMF) algorithm. In order to broaden the applicability of the PNLMF algorithm to dispersive (non-sparse) systems, we introduce the Krylov-proportionate normalized least mean fourth (KPNLMF) algorithm using the Krylov subspace projection technique. We propose the Krylov-proportionate normalized least mean mixed norm (KPNLMMN) algorithm, combining the mean-square and mean-fourth error objectives in order to enhance the performance of the constituent filters. Additionally, we propose the stable-PNLMF and stable-KPNLMF algorithms, which overcome the stability issues induced by the use of the mean-fourth error framework. Finally, we provide a complete performance analysis, i.e., the transient and steady-state analyses, for the proportionate-update-based algorithms, e.g., the PNLMF and KPNLMF algorithms and their variants, and analyze their tracking performance in a non-stationary environment. Through numerical examples, we demonstrate the agreement between the theoretical and ensemble-averaged results and show the superior performance of the introduced algorithms in different scenarios.
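
    The following is a minimal sketch of a proportionate, normalized mean-fourth-style update, combining PNLMS-style gains with the cubed-error term of the mean-fourth cost; the specific gain rule and normalization used here are common choices from the proportionate-filtering literature and are assumptions rather than the exact recursions analyzed in the article.

```python
import numpy as np

def pnlmf_style_update(w, x, d, mu=0.1, rho=0.01, delta=1e-2, eps=1e-6):
    # Proportionate, normalized mean-fourth-style step.
    # The PNLMS-style gains g_i emphasize the large (active) coefficients
    # of a sparse system; the cubed error term comes from the mean-fourth
    # cost. Normalizing by x^T diag(g) x is an illustrative choice.
    e = d - w @ x
    gamma = np.maximum(rho * max(delta, np.max(np.abs(w))), np.abs(w))
    g = gamma / np.mean(gamma)        # proportionate gains, mean one
    gx = g * x                        # diag(g) @ x
    return w + mu * (e**3) * gx / (eps + x @ gx), e
```

    Because the cubed error grows quickly, updates of this raw form can diverge for large errors; that is the kind of stability issue the stable-PNLMF and stable-KPNLMF variants mentioned in the abstract are designed to overcome.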

    Stochastic Behavior of the Nonnegative Least Mean Fourth Algorithm for Stationary Gaussian Inputs and Slow Learning

    Some system identification problems impose nonnegativity constraints on the parameters to estimate due to inherent physical characteristics of the unknown system. The nonnegative least-mean-square (NNLMS) algorithm and its variants make it possible to address this problem in an online manner. A nonnegative least mean fourth (NNLMF) algorithm has recently been proposed to improve the performance of these algorithms in cases where the measurement noise is not Gaussian. This paper provides a first theoretical analysis of the stochastic behavior of the NNLMF algorithm for stationary Gaussian inputs and slow learning. Simulation results illustrate the accuracy of the proposed analysis. Comment: 11 pages, 8 figures, submitted for publication
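
    As a rough sketch of the kind of update involved, the code below uses the multiplicative, fixed-point-style rule common to the NNLMS family, in which each coefficient's correction is scaled by the coefficient itself so that nonnegative weights stay nonnegative for a small enough step size, with the error cubed as in the mean-fourth cost; this exact form is an assumption reconstructed from the NNLMS/NNLMF literature, not quoted from the paper.

```python
import numpy as np

def nnlmf_style_update(w, x, d, mu=0.001):
    # Nonnegativity-constrained mean-fourth-style step: each coefficient
    # is updated as w_i * (1 + mu * e**3 * x_i), so coefficients that
    # start nonnegative remain nonnegative as long as the bracketed
    # factor stays nonnegative (small enough step size).
    e = d - w @ x
    return w + mu * (e**3) * w * x, e
```

    In this form, nonnegativity of the iterates is preserved whenever mu * |e**3 * x_i| <= 1 for every coefficient, which is why a sufficiently small step size is assumed.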

    Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm

    The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite nonlinearity model order. A Gaussian KLMS filter has two design parameters, the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models for the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and steady state. This allows the explicit analytical determination of stability limits and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
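
    A minimal sketch of the Gaussian KLMS recursion in its usual form (a growing kernel expansion whose newest center is the current input, weighted by the step size times the a-priori error) helps make the two design parameters concrete; the class name, bandwidth convention, and the absence of any sparsification or dictionary budget are illustrative choices, not details taken from the paper.

```python
import numpy as np

class GaussianKLMS:
    # Gaussian kernel LMS: a growing radial-basis expansion whose centers
    # are the past inputs and whose coefficients are mu * e. A minimal
    # sketch without the sparsification that practical implementations add.

    def __init__(self, step_size=0.2, bandwidth=1.0):
        self.mu = step_size
        self.sigma = bandwidth
        self.centers = []   # stored input vectors
        self.alphas = []    # corresponding coefficients

    def _kernel(self, u, v):
        diff = u - v
        return np.exp(-(diff @ diff) / (2.0 * self.sigma**2))

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # a-priori error
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.mu * e)  # new center weighted by mu * e
        return e
```

    The step size and the kernel bandwidth sigma are exactly the two design parameters whose joint effect on the transient and steady-state behavior the analytical models discussed in the abstract are meant to predict.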