7 research outputs found
Chandrasekhar-based maximum correntropy Kalman filtering with the adaptive kernel size selection
This technical note derives a Chandrasekhar-type recursion for
maximum correntropy criterion (MCC) Kalman filtering (KF). For the
classical KF, the first Chandrasekhar difference equation was proposed in
the early 1970s. It is an alternative to the traditionally used Riccati
recursion and yields the so-called fast implementations known as the
Morf-Sidhu-Kailath-Sayed KF algorithms. These are computationally
cheap because they propagate matrices of smaller size than the
error covariance matrix in the Riccati recursion. The problem of deriving a
Chandrasekhar-type recursion within the MCC estimation methodology has not
yet been addressed in the engineering literature. In this technical note, we
take the first step and derive the Chandrasekhar MCC-KF estimators for the
case of an adaptive kernel size selection strategy, which implies a constant
scalar adjusting weight. Numerical examples substantiate the practical
feasibility of the newly suggested MCC-KF implementations and the correctness
of the presented theoretical derivations.
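As a concrete illustration of the correntropy-weighted update underlying MCC-KF estimators, here is a minimal sketch of the weighted-gain form that is common in the MCC-KF literature (our own simplified variant and variable names, not the Chandrasekhar recursion derived in the note):

```python
import numpy as np

def mcc_kf_update(x_prior, P_prior, y, H, R, sigma):
    """One simplified MCC-KF measurement update (a sketch of the
    correntropy-weighted gain form, not the note's Chandrasekhar recursion)."""
    innov = y - H @ x_prior
    # Scalar Gaussian-kernel weight: large innovations (outliers) shrink it,
    # so bad measurements are down-weighted; sigma is the kernel size.
    lam = np.exp(-float(innov @ innov) / (2.0 * sigma**2))
    P_inv = np.linalg.inv(P_prior)
    R_inv = np.linalg.inv(R)
    # Weighted gain: K = (P^-1 + lam H^T R^-1 H)^-1 lam H^T R^-1
    K = np.linalg.inv(P_inv + lam * H.T @ R_inv @ H) @ (lam * H.T @ R_inv)
    x_post = x_prior + K @ innov
    I_KH = np.eye(len(x_prior)) - K @ H
    # Joseph-form covariance update for numerical symmetry.
    P_post = I_KH @ P_prior @ I_KH.T + K @ R @ K.T
    return x_post, P_post
```

As the kernel size grows, the weight tends to one and the update reduces to the classical KF; a small kernel size suppresses the measurement almost entirely.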
One-step condensed forms for square-root maximum correntropy criterion Kalman filtering
This paper suggests a few novel Cholesky-based square-root algorithms for the
maximum correntropy criterion Kalman filtering. In contrast to the previously
obtained results, new algorithms are developed in the so-called {\it condensed}
form that corresponds to the {\it a priori} filtering. Square-root filter
implementations are known to possess better conditioning and improved
numerical robustness when solving ill-conditioned estimation problems.
Additionally, the new algorithms permit easier propagation of the state
estimate and do not require a back-substitution for computing the estimate.
Performance of the novel filtering methods is examined using a fourth-order
benchmark navigation system example.
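The square-root idea itself can be illustrated compactly: propagate a Cholesky factor of the covariance through one QR factorization instead of forming the covariance explicitly. A minimal time-update sketch in our own notation (not the paper's condensed a priori algorithms):

```python
import numpy as np

def sqrt_time_update(S, F, Q_sqrt):
    """Propagate a Cholesky factor S (P = S S^T) through the time update
    without ever forming P: one QR factorization of a stacked array yields
    a factor of F P F^T + Q, where Q = Q_sqrt Q_sqrt^T."""
    # Stack [ (F S)^T ; Q_sqrt^T ]; for A = Q_fac R, we have
    # A^T A = R^T R = F S S^T F^T + Q_sqrt Q_sqrt^T = F P F^T + Q.
    A = np.vstack([(F @ S).T, Q_sqrt.T])
    _, Rfac = np.linalg.qr(A)
    return Rfac.T  # triangular factor of the predicted covariance
```

Working with the factor roughly squares the range of condition numbers the filter can tolerate, which is the robustness benefit the abstract refers to.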
Generalized Multi-kernel Maximum Correntropy Kalman Filter for Disturbance Estimation
Disturbance observers have been attracting continuing research efforts and
are widely used in many applications. Among them, the Kalman filter-based
disturbance observer is an attractive one since it estimates both the state and
the disturbance simultaneously, and is optimal for a linear system with
Gaussian noises. Unfortunately, the noise in the disturbance channel typically
exhibits a heavy-tailed distribution because the nominal disturbance dynamics
usually do not align with the practical ones. To handle this issue, we propose
a generalized multi-kernel maximum correntropy Kalman filter for disturbance
estimation, which is less conservative by adopting different kernel bandwidths
for different channels and exhibits excellent performance both with and without
external disturbance. The convergence of the fixed point iteration and the
complexity of the proposed algorithm are given. Simulations on a robotic
manipulator reveal that the proposed algorithm is very efficient in disturbance
estimation with moderate algorithm complexity.
Comment: in IEEE Transactions on Automatic Control (2023)
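The per-channel kernel weighting that makes the multi-kernel approach less conservative can be sketched as follows (a hypothetical helper of ours, not the paper's algorithm; each measurement channel gets its own bandwidth):

```python
import numpy as np

def multikernel_weights(residual, sigmas):
    """Per-channel Gaussian-kernel weights (illustrative): channel i with
    residual r_i and its own bandwidth sigma_i receives the weight
    exp(-r_i^2 / (2 sigma_i^2)). Channels with heavy-tailed noise can thus
    be assigned a bandwidth that suppresses their large residuals without
    over-penalizing well-behaved channels."""
    r = np.asarray(residual, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    return np.exp(-(r ** 2) / (2.0 * s ** 2))
```

In a filter, such weights would enter a fixed-point iteration over the state estimate, which is where the convergence analysis mentioned in the abstract applies.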
Cubature Kalman Filter Based on Generalized Minimum Error Entropy with Fiducial Point
In real applications, non-Gaussian distributions are frequently caused by
outliers and impulsive disturbances, and these will impair the performance of
the classical cubature Kalman filter (CKF) algorithm. In this letter, a
modified generalized minimum error entropy criterion with fiducial point
(GMEEFP) is studied to ensure that the error converges to around zero, and
a new CKF algorithm based on the GMEEFP criterion, called GMEEFP-CKF algorithm,
is developed. To demonstrate the practicality of the GMEEFP-CKF algorithm,
several simulations are performed, and it is demonstrated that the proposed
GMEEFP-CKF algorithm outperforms the existing CKF algorithms under impulsive
noise.
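For context, the spherical-radial cubature points at the heart of any CKF variant can be generated as follows (the standard CKF construction; a sketch, not the GMEEFP-CKF itself):

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n spherical-radial cubature points of the standard CKF:
    x +/- sqrt(n) * S e_i, where P = S S^T and e_i are the unit vectors.
    The points have equal weight 1/(2n) and match the mean and covariance."""
    n = len(x)
    S = np.linalg.cholesky(P)
    scaled = np.sqrt(n) * S                       # columns are sqrt(n) S e_i
    return np.hstack([x[:, None] + scaled, x[:, None] - scaled])  # (n, 2n)
```

GMEEFP-CKF replaces the least-squares-like update applied to these points with the entropy-based criterion, which is what provides the robustness to impulsive noise.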
Multi-kernel Correntropy Regression: Robustness, Optimality, and Application on Magnetometer Calibration
This paper investigates the robustness and optimality of the multi-kernel
correntropy (MKC) on linear regression. We first derive an upper error bound
for a scalar regression problem in the presence of arbitrarily large outliers
and reveal that the kernel bandwidth should be neither too small nor too big in
the sense of the lowest upper error bound. Meanwhile, we find that the proposed
MKC is related to a specific heavy-tailed distribution, and the level of the
heavy tail is controlled solely by the kernel bandwidth. Interestingly, this
distribution becomes the Gaussian distribution when the bandwidth is set to be
infinite, which allows one to tackle both Gaussian and non-Gaussian problems.
We propose an expectation-maximization (EM) algorithm to estimate the parameter
vectors and explore the kernel bandwidths alternately. The results show that
our algorithm is equivalent to the traditional linear regression under Gaussian
noise and outperforms the conventional method under heavy-tailed noise. Both
numerical simulations and experiments on a magnetometer calibration application
verify the effectiveness of the proposed method.
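A simplified single-kernel analogue of this idea can be sketched as iteratively reweighted least squares, where a Gaussian-kernel weight suppresses outliers (our own minimal stand-in for the paper's multi-kernel EM algorithm):

```python
import numpy as np

def correntropy_irls(X, y, sigma, iters=50):
    """Robust linear regression under a single-kernel correntropy loss,
    solved by iteratively reweighted least squares (a simplified stand-in
    for the multi-kernel EM scheme; all names are ours)."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary LS start
    for _ in range(iters):
        r = y - X @ theta
        w = np.exp(-r ** 2 / (2.0 * sigma ** 2))     # Gaussian-kernel weights
        # Weighted normal equations; gross outliers get near-zero weight.
        theta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return theta
```

Consistent with the abstract's observation, letting the bandwidth grow drives all weights to one, recovering ordinary least squares, while a finite bandwidth handles heavy-tailed noise.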
Maximum Correntropy Filtering for Complex Networks With Uncertain Dynamical Bias: Enabling Componentwise Event-Triggered Transmission
Funding: National Natural Science Foundation of China (Grant Numbers 62203016, U2241214, T2121002, 61933007); China Postdoctoral Science Foundation (Grant Number 2021TQ0009); Royal Society, UK; Alexander von Humboldt Foundation of Germany.
A Kogbetliantz-type algorithm for the hyperbolic SVD
In this paper a two-sided, parallel Kogbetliantz-type algorithm for the
hyperbolic singular value decomposition (HSVD) of real and complex square
matrices is developed, with a single assumption that the input matrix, of order
n, admits such a decomposition into the product of a unitary, a non-negative
diagonal, and a J-unitary matrix, where J is a given diagonal matrix of
positive and negative signs. When J = I, the proposed algorithm computes
the ordinary SVD. The paper's most important contribution -- a derivation of
formulas for the HSVD of 2x2 matrices -- is presented first, followed
by the details of their implementation in floating-point arithmetic. Next, the
effects of the hyperbolic transformations on the columns of the iteration
matrix are discussed. These effects then guide a redesign of the dynamic pivot
ordering, already a well-established pivot strategy for the ordinary
Kogbetliantz algorithm, for the general HSVD. A heuristic but
sound convergence criterion is then proposed, which contributes to the high
accuracy demonstrated in the numerical testing results. Such a J-Kogbetliantz
algorithm as presented here is intrinsically slow, but is nevertheless usable
for matrices of small orders.
Comment: a heavily revised version with 32 pages and 4 figures
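For context, the J-unitarity that the hyperbolic transformations must preserve can be illustrated with a 2x2 hyperbolic rotation (a minimal sketch, assuming J = diag(1, -1); illustrative only, not the paper's algorithm):

```python
import numpy as np

def hyperbolic_rotation(t):
    """A 2x2 hyperbolic rotation V = [[cosh t, sinh t], [sinh t, cosh t]].
    It is J-orthogonal for J = diag(1, -1), i.e. V^T J V = J, because
    cosh^2 t - sinh^2 t = 1. Such transformations preserve the indefinite
    inner product induced by J, just as ordinary rotations preserve the
    Euclidean one."""
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, s], [s, c]])
```

Unlike trigonometric rotations, hyperbolic ones can greatly stretch column norms, which is why the abstract emphasizes their effect on the columns of the iteration matrix and the resulting pivot-ordering redesign.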