A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost
We introduce a novel family of adaptive filtering algorithms based on a
relative logarithmic cost. The new family intrinsically combines the higher and
lower order measures of the error into a single continuous update based on the
error magnitude. We introduce important members of this family of algorithms such
as the least mean logarithmic square (LMLS) and least logarithmic absolute
difference (LLAD) algorithms that improve the convergence performance of the
conventional algorithms. However, our approach and analysis are generic such
that they cover other well-known cost functions as described in the paper. The
LMLS algorithm achieves comparable convergence performance with the least mean
fourth (LMF) algorithm and extends the stability bound on the step size. The
LLAD and least mean square (LMS) algorithms demonstrate similar convergence
performance in impulse-free noise environments while the LLAD algorithm is
robust against impulsive interferences and outperforms the sign algorithm (SA).
We analyze the transient, steady state and tracking performance of the
introduced algorithms and demonstrate the agreement between the theoretical analyses and
simulation results. We show the extended stability bound of the LMLS algorithm
and analyze the robustness of the LLAD algorithm against impulsive
interferences. Finally, we demonstrate the performance of our algorithms in
different scenarios through numerical examples.
Comment: Submitted to IEEE Transactions on Signal Processing
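The LMLS and LLAD updates described above can be sketched as follows. This is a minimal illustration assuming the relative logarithmic cost applied to the squared error (LMLS) and the absolute error (LLAD); the alpha-weighted forms, step sizes, and function names are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def lmls_step(w, x, d, mu=0.05, alpha=1.0):
    """One LMLS update (sketch). The logarithmic cost over the squared
    error gives an update that behaves like least mean fourth (LMF) for
    small errors and like LMS for large errors, which is what extends
    the stability bound on the step size."""
    e = d - w @ x
    w = w + mu * x * (alpha * e**3) / (1.0 + alpha * e**2)
    return w, e

def llad_step(w, x, d, mu=0.05, alpha=1.0):
    """One LLAD update (sketch). The logarithmic cost over the absolute
    error gives an update that behaves like LMS for small errors and
    like the sign algorithm (SA) for large ones, hence the robustness
    against impulsive interference."""
    e = d - w @ x
    w = w + mu * x * (alpha * e) / (1.0 + alpha * abs(e))
    return w, e
```

In both cases the error nonlinearity interpolates continuously between a lower- and higher-order update depending on the error magnitude, matching the blended behavior the abstract describes.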
An affine combination of two LMS adaptive filters - Transient mean-square analysis
This paper studies the statistical behavior of an affine combination of the outputs of two LMS adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor is restricted to the interval [0, 1]. The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the MSE. First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
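The combination scheme can be sketched as below. Adapting the mixing parameter by a stochastic gradient on the combined output error is an illustrative choice standing in for the paper's two practical schemes, and the step sizes and class name are assumptions of this sketch.

```python
import numpy as np

class AffineLMSCombination:
    """Affine combination of a fast and a slow LMS filter (a sketch).

    The mixing parameter lam weighs the two outputs,
    y = lam*y1 + (1 - lam)*y2, and unlike the convex combination it is
    not restricted to [0, 1].
    """
    def __init__(self, n, mu_fast=0.05, mu_slow=0.005, mu_lam=0.01):
        self.w1 = np.zeros(n)   # fast filter: quick convergence, larger MSD
        self.w2 = np.zeros(n)   # slow filter: slow convergence, smaller MSD
        self.lam = 1.0          # start by trusting the fast filter
        self.mu1, self.mu2, self.mu_lam = mu_fast, mu_slow, mu_lam

    def step(self, x, d):
        y1, y2 = self.w1 @ x, self.w2 @ x
        y = self.lam * y1 + (1.0 - self.lam) * y2   # affine combination
        # each component filter adapts on its own error
        self.w1 += self.mu1 * (d - y1) * x
        self.w2 += self.mu2 * (d - y2) * x
        # the combiner adapts on the overall output error
        e = d - y
        self.lam += self.mu_lam * e * (y1 - y2)
        return y, e
```

The equivalent combined weight vector is lam*w1 + (1 - lam)*w2, so early in adaptation the fast filter dominates while at steady state the combination can lean on the low-MSD slow filter.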
A Unifying Approach to Quaternion Adaptive Filtering: Addressing the Gradient and Convergence
A novel framework for a unifying treatment of quaternion valued adaptive
filtering algorithms is introduced. This is achieved based on a rigorous
account of quaternion differentiability, the proposed I-gradient, and the use
of augmented quaternion statistics to account for real world data with
noncircular probability distributions. We first provide an elegant solution for
the calculation of the gradient of real functions of quaternion variables
(typical cost function), an issue that has so far prevented systematic
development of quaternion adaptive filters. This makes it possible to unify the
class of existing and proposed quaternion least mean square (QLMS) algorithms,
and to illuminate their structural similarity. Next, in order to cater for both
circular and noncircular data, the class of widely linear QLMS (WL-QLMS)
algorithms is introduced and the subsequent convergence analysis unifies the
treatment of strictly linear and widely linear filters, for both proper and
improper sources. It is also shown that the proposed class of HR gradients
allows us to resolve the uncertainty owing to the noncommutativity of
quaternion products, while the involution gradient (I-gradient) provides
generic extensions of the corresponding real- and complex-valued adaptive
algorithms, at a reduced computational cost. Simulations in both the strictly
linear and widely linear settings support the approach.
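A strictly linear QLMS step for a single quaternion tap can be sketched with plain Hamilton-product arithmetic. Because quaternion products do not commute, QLMS variants differ in the ordering and conjugation of the update term; the form w <- w + mu * e * x^* used here is one such variant chosen for this sketch, not the paper's unified I-gradient derivation, and the helper names are assumptions.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z] arrays."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

def qconj(q):
    """Quaternion conjugate: negate the three imaginary parts."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qlms_step(w, x, d, mu=0.01):
    """One strictly linear QLMS update for a single quaternion tap."""
    e = d - qmul(w, x)                 # quaternion output error
    w = w + mu * qmul(e, qconj(x))     # note: x x^* = |x|^2 is real
    return w, e
```

Since x x^* reduces to the real scalar |x|^2, this particular ordering contracts the weight error deterministically for mu |x|^2 < 2; the widely linear (WL-QLMS) case additionally filters the quaternion involutions of the input to capture noncircular statistics.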