A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost
We introduce a novel family of adaptive filtering algorithms based on a
relative logarithmic cost. The new family intrinsically combines the higher and
lower order measures of the error into a single continuous update based on the
error amount. We introduce important members of this family of algorithms such
as the least mean logarithmic square (LMLS) and least logarithmic absolute
difference (LLAD) algorithms that improve the convergence performance of the
conventional algorithms. However, our approach and analysis are generic, covering other well-known cost functions as described in the paper. The
LMLS algorithm achieves comparable convergence performance with the least mean
fourth (LMF) algorithm and extends the stability bound on the step size. The
LLAD and least mean square (LMS) algorithms demonstrate similar convergence
performance in impulse-free noise environments while the LLAD algorithm is
robust against impulsive interferences and outperforms the sign algorithm (SA).
We analyze the transient, steady-state and tracking performance of the
introduced algorithms and demonstrate the match between the theoretical
analyses and simulation results. We show the extended stability bound of the
LMLS algorithm and analyze the robustness of the LLAD algorithm against
impulsive interferences. Finally, we demonstrate the performance of our
algorithms in different scenarios through numerical examples.

Comment: Submitted to IEEE Transactions on Signal Processing
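The logarithmic-cost idea above can be illustrated with a short sketch. The updates below follow the usual form of the LMLS and LLAD recursions, in which the LMS and sign-algorithm terms are scaled by alpha*e^2/(1+alpha*e^2) and alpha*|e|/(1+alpha*|e|) respectively; the function name, parameter choices, and toy identification setup are illustrative, not the paper's exact formulation.

```python
import numpy as np

def log_cost_identify(x, d, n_taps, mu=0.1, alpha=1.0, mode="lmls"):
    """Sketch of LMLS / LLAD adaptive system identification.

    LMLS scales the LMS update by alpha*e^2 / (1 + alpha*e^2);
    LLAD scales the sign-algorithm update by alpha*|e| / (1 + alpha*|e|).
    Small errors thus engage the higher-order behavior, while large
    errors fall back to the more robust lower-order behavior.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                     # a-priori estimation error
        if mode == "lmls":
            w = w + mu * u * e * (alpha * e**2) / (1 + alpha * e**2)
        else:                                # "llad"
            w = w + mu * u * np.sign(e) * (alpha * abs(e)) / (1 + alpha * abs(e))
    return w

# toy identification of a short FIR channel
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(20000)
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = log_cost_identify(x, d, 4)
```

Note how both variants reduce to a single continuous update whose effective step size depends on the error amount, rather than switching between separate low- and high-order algorithms.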
A Robust Zero-point Attraction LMS Algorithm on Near Sparse System Identification
The newly proposed norm-constraint zero-point attraction Least Mean Square
(ZA-LMS) algorithm demonstrates excellent performance on exactly sparse
system identification. However, ZA-LMS offers less advantage over the standard
LMS when the system is only near-sparse. Thus, in this paper, near-sparse
system modeling by the Generalized Gaussian Distribution is first recommended,
with sparsity defined accordingly. Second, two modifications to the ZA-LMS
algorithm are made. The norm penalty is replaced by a partial
norm in the cost function, enhancing robustness without increasing the
computational complexity. Moreover, the zero-point attraction term is weighted
by the magnitude of the estimation error, which adjusts the zero-point
attraction force dynamically. By combining the two improvements, the Dynamic
Windowing ZA-LMS (DWZA-LMS) algorithm is proposed, which shows better
performance on near-sparse system identification. In addition, the mean square
performance of the DWZA-LMS algorithm is analyzed. Finally, computer
simulations demonstrate the effectiveness of the proposed algorithm and verify
the theoretical analysis.

Comment: 20 pages, 11 figures
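A minimal sketch of the two modifications described above may help. Plain ZA-LMS adds a zero attractor -rho*sign(w) derived from a norm penalty; here the attractor acts only on taps inside a small window around zero (a stand-in for the partial-norm penalty) and is scaled by |e| so the attraction force adapts to the estimation error. Both rules, and all parameter values, are simplified assumptions rather than the paper's exact scheme.

```python
import numpy as np

def dwza_lms_step(w, u, d, mu=0.01, rho=5e-4, window=0.05):
    """One sketched update of a DWZA-LMS-style filter."""
    e = d - w @ u                              # a-priori error
    w = w + mu * e * u                         # standard LMS step
    near_zero = np.abs(w) < window             # taps treated as near-sparse zeros
    w = w - rho * abs(e) * np.sign(w) * near_zero   # error-weighted zero attractor
    return w, e

# near-sparse demo system: two large taps, the rest (exactly) zero here
rng = np.random.default_rng(1)
w_true = np.zeros(8)
w_true[1], w_true[5] = 0.8, -0.5
w = np.zeros(8)
x = rng.standard_normal(6000)
for n in range(7, len(x)):
    u = x[n - 7:n + 1][::-1]                   # regressor at time n
    w, e = dwza_lms_step(w, u, w_true @ u + 0.01 * rng.standard_normal())
```

Because the attractor is gated by the window and by |e|, large taps are left untouched and the attraction force fades as the filter converges.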
Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm
As one of the recently proposed algorithms for sparse system identification,
the l_0 norm constraint Least Mean Square (l_0-LMS) algorithm modifies the cost
function of the traditional method with a penalty on tap-weight sparsity. The
performance of l_0-LMS is quite attractive compared with its various
precursors. However, there has been no detailed study of its performance. This
paper presents a comprehensive theoretical performance analysis of
l_0-LMS for white Gaussian input data, based on some reasonable assumptions.
Expressions for the steady-state mean square deviation (MSD) are derived and
discussed with respect to the algorithm parameters and system sparsity. A
parameter selection rule is established for achieving the best performance.
The instantaneous behavior is also derived using a Taylor series approximation.
In addition, the relationship between l_0-LMS and several previous algorithms
is established, along with sufficient conditions for l_0-LMS to accelerate
convergence. Finally, all of the theoretical results are compared with
simulations and are shown to agree well over a wide range of parameter settings.

Comment: 31 pages, 8 figures
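For concreteness, a sketch of an l_0-LMS-style update follows. The l_0 pseudo-norm is commonly approximated by sum_i (1 - exp(-beta*|w_i|)), and its gradient by the piecewise-linear term below, which is nonzero only for |w_i| <= 1/beta, so the zero attraction acts just on small taps; kappa trades estimation error against the sparsity penalty. Parameter values and the demo setup are illustrative assumptions.

```python
import numpy as np

def l0_lms_step(w, u, d, mu=0.01, kappa=2e-5, beta=10.0):
    """Sketched l_0-LMS update with the exponential l_0 approximation."""
    e = d - w @ u                              # a-priori error
    # piecewise-linear surrogate for the gradient of sum(1 - exp(-beta*|w|))
    g = np.where(np.abs(w) <= 1.0 / beta,
                 beta * np.sign(w) - beta**2 * w, 0.0)
    w = w + mu * e * u - kappa * g             # LMS step plus zero attraction
    return w, e

# sparse identification demo (illustrative setup)
rng = np.random.default_rng(2)
w_true = np.zeros(16)
w_true[2], w_true[9] = 1.0, -0.6
w = np.zeros(16)
x = rng.standard_normal(8000)
for n in range(15, len(x)):
    u = x[n - 15:n + 1][::-1]
    w, e = l0_lms_step(w, u, w_true @ u + 0.01 * rng.standard_normal())
```

The role of beta is visible here: it sets the width 1/beta of the attraction region, which is one of the parameters the steady-state MSD expressions are discussed against.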
Stochastic Behavior Analysis of the Gaussian Kernel Least-Mean-Square Algorithm
The kernel least-mean-square (KLMS) algorithm is a popular algorithm in nonlinear adaptive filtering due to its
simplicity and robustness. In kernel adaptive filters, the statistics of the input to the linear filter depend on the parameters of the kernel employed. Moreover, practical implementations require a finite-order nonlinearity model. A Gaussian KLMS has two design parameters: the step size and the Gaussian kernel bandwidth. Thus, its design requires analytical models for the algorithm behavior as a function of these two parameters. This paper studies the steady-state and transient behavior of the
Gaussian KLMS algorithm for Gaussian inputs and a finite-order nonlinearity model. In particular, we derive recursive expressions for the mean weight-error vector and the mean-square error. The model predictions show excellent agreement with Monte Carlo simulations in both transient and steady state. This allows the explicit analytical determination of stability limits and makes it possible to choose the algorithm parameters a priori in order to achieve a prescribed convergence speed and quality of the estimate. Design examples are presented which validate the theoretical analysis and illustrate its application.
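The two design parameters discussed above are easy to see in a minimal KLMS sketch: every training sample becomes a kernel center whose coefficient is mu times the a-priori error, and sigma sets the Gaussian bandwidth. This is a bare-bones illustration (a practical implementation would bound the dictionary size), and the demo target is an arbitrary smooth function, not one from the paper.

```python
import numpy as np

class GaussianKLMS:
    """Minimal Gaussian-kernel LMS sketch with a growing dictionary."""

    def __init__(self, mu=0.5, sigma=0.7):
        self.mu, self.sigma = mu, sigma
        self.centers, self.coeffs = [], []

    def _kernel(self, u, c):
        diff = np.asarray(u, dtype=float) - np.asarray(c, dtype=float)
        return np.exp(-np.dot(diff, diff) / (2.0 * self.sigma**2))

    def predict(self, u):
        return sum(a * self._kernel(u, c)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)              # a-priori error
        self.centers.append(u)               # new sample becomes a center
        self.coeffs.append(self.mu * e)      # coefficient = mu * error
        return e

# learn a smooth nonlinearity online
rng = np.random.default_rng(3)
f = GaussianKLMS()
errors = [f.update([x], np.sin(x)) for x in rng.uniform(-3, 3, 400)]
```

Too small a sigma fragments the input space into isolated bumps, while too large a sigma oversmooths; this coupling between bandwidth, input statistics, and step size is exactly why an analytical behavior model is useful for a-priori design.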
Distributed Diffusion-Based LMS for Node-Specific Adaptive Parameter Estimation
A distributed adaptive algorithm is proposed to solve a node-specific
parameter estimation problem where nodes are interested in estimating
parameters of local interest, parameters of common interest to a subset of
nodes and parameters of global interest to the whole network. To address the
different node-specific parameter estimation problems, this novel algorithm
relies on a diffusion-based implementation of different Least Mean Squares
(LMS) algorithms, each associated with the estimation of a specific set of
local, common or global parameters. In addition, each LMS algorithm is
implemented only by the nodes of the network interested in the corresponding
set of local, common or global parameters. The study of convergence in the
mean sense reveals
that the proposed algorithm is asymptotically unbiased. Moreover, a
spatial-temporal energy conservation relation is provided to evaluate the
steady-state performance at each node in the mean-square sense. Finally, the
theoretical results and the effectiveness of the proposed technique are
validated through computer simulations in the context of cooperative spectrum
sensing in Cognitive Radio networks.

Comment: 13 pages, 6 figures
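The diffusion mechanism can be sketched for the simplest case of a parameter of global interest, using the standard adapt-then-combine form: each node takes a local LMS step, then averages its neighbors' intermediate estimates with combination weights. The paper's coupling with local and common parameter sets is omitted, and the network topology and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_taps, mu = 4, 3, 0.05
w_true = rng.standard_normal(n_taps)       # parameter of global interest

# ring network: each node combines itself and its two neighbours;
# rows sum to 1 (right-stochastic combination matrix)
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

W = np.zeros((n_nodes, n_taps))            # one running estimate per node
for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(n_nodes):
        u = rng.standard_normal(n_taps)                 # node-k regressor
        d = u @ w_true + 0.01 * rng.standard_normal()   # node-k measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u         # adapt (local LMS)
    W = A @ psi                            # combine neighbour estimates
```

Restricting each combination step to the nodes that actually share a given parameter set is what turns this global-interest sketch into the node-specific scheme the abstract describes.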