A Unifying Approach to Quaternion Adaptive Filtering: Addressing the Gradient and Convergence
A novel framework for a unifying treatment of quaternion-valued adaptive
filtering algorithms is introduced. This is achieved based on a rigorous
account of quaternion differentiability, the proposed I-gradient, and the use
of augmented quaternion statistics to account for real-world data with
noncircular probability distributions. We first provide an elegant solution for
the calculation of the gradient of real functions of quaternion variables
(a typical cost function), an issue that has so far prevented the systematic
development of quaternion adaptive filters. This makes it possible to unify the
class of existing and proposed quaternion least mean square (QLMS) algorithms,
and to illuminate their structural similarity. Next, in order to cater for both
circular and noncircular data, the class of widely linear QLMS (WL-QLMS)
algorithms is introduced and the subsequent convergence analysis unifies the
treatment of strictly linear and widely linear filters, for both proper and
improper sources. It is also shown that the proposed class of HR gradients
allows us to resolve the uncertainty owing to the noncommutativity of
quaternion products, while the involution gradient (I-gradient) provides
generic extensions of the corresponding real- and complex-valued adaptive
algorithms, at a reduced computational cost. Simulations in both the strictly
linear and widely linear settings support the approach.
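The gradient-descent form of QLMS described in the abstract can be illustrated with a toy sketch. The Python snippet below assumes the familiar update w ← w + μ e x* (a common form of a strictly linear QLMS; the paper's exact variants and gradient definitions may differ) and identifies a single quaternion-valued tap:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as [w, x, y, z]
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    # quaternion conjugate: negate the vector part
    return q * np.array([1.0, -1.0, -1.0, -1.0])

# Strictly linear single-tap QLMS sketch: w <- w + mu * e * conj(x)
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.2, 0.1, 0.3])   # unknown quaternion weight
w = np.zeros(4)
mu = 0.05
for _ in range(2000):
    x = rng.standard_normal(4)             # circular quaternion input
    d = qmul(w_true, x)                    # desired response d = w_true * x
    e = d - qmul(w, x)                     # a priori error
    w = w + mu * qmul(e, qconj(x))         # gradient-descent update

print(np.round(w, 3))                      # converges toward w_true
```

Because x x* = |x|² is real, the weight error here shrinks by a factor (1 − μ|x|²) per step, which is why the update converges for a small enough step size.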
Low-complexity RLS algorithms using dichotomous coordinate descent iterations
In this paper, we derive low-complexity recursive least squares (RLS) adaptive filtering algorithms. We express the RLS problem in terms of auxiliary normal equations with respect to increments of the filter weights and apply this approach to the exponentially weighted and sliding window cases to derive new RLS techniques. For solving the auxiliary equations, line search methods are used. We first consider conjugate gradient iterations with a complexity of O(N^2) operations per sample; N being the number of filter weights. To reduce the complexity and make the algorithms more suitable for finite precision implementation, we propose a new dichotomous coordinate descent (DCD) algorithm and apply it to the auxiliary equations. This results in a transversal RLS adaptive filter with complexity as low as 3N multiplications per sample, which is only slightly higher than the complexity of the least mean squares (LMS) algorithm (2N multiplications). Simulations are used to compare the performance of the proposed algorithms against the classical RLS and known advanced adaptive algorithms. Fixed-point FPGA implementation of the proposed DCD-based RLS algorithm is also discussed and results of such an implementation are presented.
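The dichotomous coordinate descent step can be sketched in isolation. The snippet below is a hypothetical Python rendering of a leading-element DCD solver for normal equations R x = b; the parameter names (H, Mb, Nu) and the toy problem are illustrative assumptions, and the paper's exact algorithm and its fixed-point FPGA form may differ:

```python
import numpy as np

def dcd_solve(R, b, H=1.0, Mb=16, Nu=1000):
    """Leading-element dichotomous coordinate descent for R x = b (R SPD).

    H: assumed amplitude range of the solution, Mb: number of bits
    (power-of-two step refinements), Nu: cap on successful updates.
    """
    N = len(b)
    x = np.zeros(N)
    r = b.copy()                           # residual r = b - R x
    d = H
    updates = 0
    for _ in range(Mb):
        d /= 2.0                           # halve the step: d = H / 2^m
        while updates < Nu:
            k = int(np.argmax(np.abs(r)))  # leading (largest-residual) coord
            if abs(r[k]) <= (d / 2.0) * R[k, k]:
                break                      # step too coarse to help: refine d
            s = np.sign(r[k])
            x[k] += s * d                  # multiplication-free +/- d step
            r -= s * d * R[:, k]           # keep the residual consistent
            updates += 1
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
R = A @ A.T + 8 * np.eye(8)                # symmetric positive definite
x_true = rng.uniform(-0.5, 0.5, 8)
b = R @ x_true
x = dcd_solve(R, b)
print(np.max(np.abs(x - x_true)))          # small: x is close to x_true
```

Since every coordinate update adds ±d with d a power of two, in fixed-point hardware the step reduces to shift-and-add operations, which is where the low multiplication count comes from.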
Extension of Wirtinger's Calculus to Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS
Over the last decade, kernel methods for nonlinear processing have
successfully been used in the machine learning community. The primary
mathematical tool employed in these methods is the notion of the Reproducing
Kernel Hilbert Space. However, so far, the emphasis has been on batch
techniques. It is only recently that online techniques have been considered in
the context of adaptive signal processing tasks. Moreover, these efforts have
only focused on real-valued data sequences. To the best of our knowledge,
no adaptive kernel-based strategy has been developed so far for complex-valued
signals. Furthermore, although real reproducing kernels are used in
an increasing number of machine learning problems, complex kernels have not
yet been used, despite their potential interest in applications that deal
with complex signals, communications being a typical example. In this
paper, we present a general framework to attack the problem of adaptive
filtering of complex signals, using either real reproducing kernels, taking
advantage of a technique called \textit{complexification} of real RKHSs, or
complex reproducing kernels, highlighting the use of the complex Gaussian
kernel. In order to derive gradients of operators that need to be defined on
the associated complex RKHSs, we employ the powerful tool of Wirtinger's
Calculus, which has recently attracted attention in the signal processing
community. To this end, the notion of Wirtinger's calculus is extended, for
the first time, to complex RKHSs and used to derive several realizations of
the Complex Kernel Least-Mean-Square (CKLMS) algorithm.
Experiments verify that the CKLMS offers significant performance improvements
over several linear and nonlinear algorithms when dealing with nonlinearities.
Comment: 15 pages (double column), preprint of article accepted in IEEE Trans.
Sig. Proc.
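As a rough illustration of the complexification route mentioned above, the snippet below runs a naive kernel LMS with complex coefficients and a real Gaussian kernel evaluated on complex samples; the growing-dictionary update, step size, and toy nonlinearity are illustrative assumptions rather than the paper's derived CKLMS:

```python
import numpy as np

def gauss_kernel(z, w, sigma=1.0):
    # real Gaussian kernel on complex inputs via their real representation
    return np.exp(-np.abs(z - w) ** 2 / sigma ** 2)

def cklms(x, d, mu=0.5, sigma=1.0):
    """Naive complex kernel LMS sketch: one complex coefficient per sample."""
    centers, alphas, y_hat = [], [], []
    for xn, dn in zip(x, d):
        # predict with the current (growing) kernel expansion
        y = sum(a * gauss_kernel(xn, c, sigma) for a, c in zip(alphas, centers))
        e = dn - y                        # complex a priori error
        centers.append(xn)                # store the new sample as a center
        alphas.append(mu * e)             # complex LMS-style coefficient
        y_hat.append(y)
    return np.array(y_hat)

# toy memoryless nonlinearity: d = x + 0.2 * x**2 (complex-valued)
rng = np.random.default_rng(1)
x = rng.standard_normal(300) + 1j * rng.standard_normal(300)
d = x + 0.2 * x ** 2
y = cklms(x, d)
mse_head = np.mean(np.abs(d[:50] - y[:50]) ** 2)
mse_tail = np.mean(np.abs(d[-50:] - y[-50:]) ** 2)
print(f"MSE, first 50 samples: {mse_head:.3f}; last 50: {mse_tail:.3f}")
```

The real kernel keeps the expansion coefficients as the only complex quantities, which is essentially what the complexification trick buys; a genuinely complex kernel (e.g. the complex Gaussian) changes the kernel values themselves and requires the extended Wirtinger calculus to differentiate.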