38 research outputs found
Quantized generalized minimum error entropy for kernel recursive least squares adaptive filtering
The robustness of the kernel recursive least squares (KRLS) algorithm has
recently been improved by combining it with more robust information-theoretic
learning criteria, such as minimum error entropy (MEE) and generalized MEE
(GMEE), although this also increases the computational complexity of the
KRLS-type algorithms to a certain extent. To reduce the computational load of
the KRLS-type algorithms, this paper combines the quantized GMEE (QGMEE)
criterion with the KRLS algorithm, yielding two KRLS-type algorithms called
quantized kernel recursive MEE (QKRMEE) and quantized kernel recursive GMEE
(QKRGMEE). The mean error behavior, mean square error behavior, and
computational complexity of the proposed algorithms are also investigated. In
addition, simulations and real experimental data are used to verify the
feasibility of the proposed algorithms.
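The online quantization idea behind such quantized kernel algorithms can be sketched as follows: a new input is merged into the nearest dictionary codeword when one lies within a quantization radius, which bounds dictionary growth. This is an illustrative sketch under assumed names and a Euclidean metric, not the paper's exact procedure:

```python
import numpy as np

def quantize(dictionary, x, eps):
    # Merge x into the nearest codeword if it lies within radius eps;
    # otherwise append x as a new codeword. Returns the codeword index.
    x = np.asarray(x, dtype=float)
    if not dictionary:
        dictionary.append(x)
        return 0
    dists = [np.linalg.norm(x - c) for c in dictionary]
    j = int(np.argmin(dists))
    if dists[j] <= eps:
        return j                      # quantized onto existing codeword
    dictionary.append(x)              # dictionary grows only when needed
    return len(dictionary) - 1
```

Because the dictionary stops growing once the input space is covered at scale `eps`, the per-step cost of the recursive update stays bounded, which is the source of the computational savings.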
Generalized Minimum Error Entropy for Adaptive Filtering
Error entropy is an important nonlinear similarity measure that has received
increasing attention in many practical applications. The default kernel
function of the error entropy criterion is the Gaussian kernel, which,
however, is not always the best choice. In this study, a novel concept called
generalized error entropy, which uses the generalized Gaussian density (GGD)
function as the kernel function, is proposed. We further derive the
generalized minimum error entropy (GMEE) criterion, and a novel adaptive
filtering algorithm, the GMEE algorithm, is derived from it. The stability,
steady-state performance, and computational complexity of the proposed
algorithm are investigated. Simulations indicate that the GMEE algorithm
performs well in Gaussian, sub-Gaussian, and super-Gaussian noise
environments. Finally, the GMEE algorithm is applied to acoustic echo
cancellation and performs well. (9 pages, 8 figures)
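As a rough sketch of the ingredients, the GGD kernel and the Parzen-type information potential (whose maximization corresponds to minimizing error entropy) can be written as follows; the normalization follows the standard GGD form, and the parameter names `alpha` (scale) and `beta` (shape) are illustrative:

```python
import numpy as np
from math import gamma

def ggd_kernel(e, alpha=2.0, beta=2.0):
    # Generalized Gaussian density: beta = 2 recovers the Gaussian shape,
    # beta < 2 gives heavier tails, beta > 2 lighter tails.
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * np.exp(-(np.abs(e) / alpha) ** beta)

def information_potential(errors, alpha=2.0, beta=2.0):
    # Parzen estimate of the information potential over all error pairs;
    # maximizing it corresponds to minimizing the (generalized) error entropy.
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    return float(ggd_kernel(diff, alpha, beta).mean())
```

The shape parameter is what lets the criterion adapt to sub-Gaussian or super-Gaussian noise instead of being locked to the Gaussian case.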
Generalized Minimum Error with Fiducial Points Criterion for Robust Learning
The conventional minimum error entropy (MEE) criterion has limitations:
reduced sensitivity to the error mean and uncertainty regarding the location
of the error probability density function. To overcome these, the MEE with
fiducial points criterion (MEEF) was proposed. However, the efficacy of the
MEEF is inconsistent due to its reliance on a fixed Gaussian kernel. In this
paper, a generalized minimum error with fiducial points criterion (GMEEF) is
presented by adopting the generalized Gaussian density (GGD) function as the
kernel. The GGD extends the Gaussian distribution by introducing a shape
parameter that provides more control over tail behavior and peakedness. In
addition, because of the high computational complexity of the GMEEF
criterion, a quantization idea is introduced to notably lower the
computational load of GMEEF-type algorithms. Finally, the proposed criteria
are applied to adaptive filtering, kernel recursive algorithms, and
multilayer perceptrons. Several numerical simulations, covering system
identification, acoustic echo cancellation, time series prediction, and
supervised classification, indicate that the novel algorithms perform
excellently. (12 pages, 9 figures)
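A hedged sketch of a fiducial-point-style cost: a convex mix of the location-blind MEE information potential over error pairs with a correntropy-like term that anchors the errors at zero. The mixing weight `lam` and the exact combination are assumptions for illustration, not the paper's definition:

```python
import numpy as np
from math import gamma

def ggd_kernel(e, alpha=2.0, beta=2.0):
    # Generalized Gaussian density kernel (beta = 2 is the Gaussian case).
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * np.exp(-(np.abs(e) / alpha) ** beta)

def gmeef_cost(errors, lam=0.5, alpha=2.0, beta=2.0):
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    v_mee = ggd_kernel(diff, alpha, beta).mean()  # blind to the error mean
    v_fid = ggd_kernel(e, alpha, beta).mean()     # anchors errors at zero
    return float(lam * v_fid + (1.0 - lam) * v_mee)  # to be maximized
```

The anchoring term is what restores sensitivity to the error mean: two error sets with identical pairwise differences but different offsets score differently.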
State Estimation of Wireless Sensor Networks in the Presence of Data Packet Drops and Non-Gaussian Noise
Distributed Kalman filter approaches based on the maximum correntropy
criterion have recently demonstrated superior state estimation performance to
that of conventional distributed Kalman filters for wireless sensor networks in
the presence of non-Gaussian impulsive noise. However, these algorithms
currently fail to take account of data packet drops. The present work addresses
this issue by proposing a distributed maximum correntropy Kalman filter that
accounts for data packet drops (i.e., the DMCKF-DPD algorithm). The
effectiveness and feasibility of the algorithm are verified by simulations
conducted in a wireless sensor network with intermittent observations due to
data packet drops under a non-Gaussian noise environment. Moreover, the
computational complexity of the DMCKF-DPD algorithm is demonstrated to be
moderate compared with that of a conventional distributed Kalman filter, and we
provide a sufficient condition to ensure the convergence of the proposed
algorithm.
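The fixed-point flavor of correntropy-based Kalman updates can be sketched as follows: the measurement noise covariance is inflated by the inverse of a Gaussian kernel of the current residual, so impulsive outliers receive a small gain and barely move the estimate. This is a generic maximum-correntropy Kalman sketch, not the DMCKF-DPD algorithm itself; names and shapes are illustrative:

```python
import numpy as np

def mckf_update(x_prior, P, H, R, y, sigma=2.0, iters=30, tol=1e-8):
    # Fixed-point measurement update: the effective measurement noise R / w
    # grows as the correntropy weight w shrinks, down-weighting outliers.
    x_prior = np.asarray(x_prior, dtype=float)
    x = x_prior.copy()
    K = np.zeros((len(x), len(y)))
    for _ in range(iters):
        r = y - H @ x
        w = float(np.exp(-(r @ r) / (2.0 * sigma ** 2)))  # correntropy weight
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R / max(w, 1e-12))
        x_new = x_prior + K @ (y - H @ x_prior)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x, P_post
```

For a well-behaved measurement this behaves much like a standard Kalman update, while a grossly outlying measurement is effectively ignored.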
Distributed fusion filter over lossy wireless sensor networks with the presence of non-Gaussian noise
The information transmission between nodes in wireless sensor networks
(WSNs) often suffers packet loss due to denial-of-service (DoS) attacks,
energy limitations, and environmental factors, and the information that is
successfully transmitted can also be contaminated by non-Gaussian noise. The
presence of these two factors poses a challenge for distributed state
estimation (DSE) over WSNs. In this paper, a generalized packet drop model is
proposed to describe the packet loss phenomenon caused by DoS attacks and
other factors. Moreover, a modified maximum correntropy Kalman filter is
given and extended to a distributed form (DM-MCKF). In addition, a
distributed modified maximum correntropy Kalman filter incorporating the
generalized data packet drop model (the DM-MCKF-DPD algorithm) is provided to
implement DSE in the presence of both non-Gaussian noise pollution and packet
drops. A sufficient condition to ensure the convergence of the fixed-point
iterative process of the DM-MCKF-DPD algorithm is presented, and the
computational complexity of the DM-MCKF-DPD algorithm is analyzed. Finally,
the effectiveness and feasibility of the proposed algorithms are verified by
simulations.
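The packet-drop side of such setups is often simulated with a Bernoulli arrival process; the generalized drop model in the paper may differ, so the following is only an illustrative sketch:

```python
import numpy as np

def simulate_measurements(H, x_seq, R, p_drop, rng):
    # Each measurement is delivered with probability 1 - p_drop;
    # a lost packet is reported as None (the filter skips that update).
    ys = []
    for x in x_seq:
        if rng.random() < p_drop:
            ys.append(None)  # packet lost (DoS attack, fading, energy, ...)
        else:
            noise = rng.multivariate_normal(np.zeros(R.shape[0]), R)
            ys.append(H @ x + noise)  # delivered but noise-corrupted
    return ys
```

A filter evaluated on such a stream must handle both the missing updates and the heavy-tailed noise on the packets that do arrive, which is exactly the combination the abstract targets.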
Robust Sensor Fusion for Indoor Wireless Localization
Location knowledge in indoor environments using Indoor Positioning Systems
(IPS) has become very useful and popular in recent years. Indoor wireless
localization suffers from severe multipath fading and non-line-of-sight
conditions. This paper presents a novel indoor localization framework based
on sensor fusion in Zigbee Wireless Sensor Networks (WSN) using Received
Signal Strength (RSS). The target at the unknown position is equipped with
two or more mobile nodes, and the range between the mobile nodes is fixed and
known a priori. The attitude (roll, pitch, and yaw) of each mobile node is
measured by inertial sensors (ISs). The angle and the range between any two
nodes can then be obtained, so the path between the two nodes can be modeled
as a curve. Through efficient cooperation between two or more mobile nodes,
this framework effectively exploits RSS techniques, and this constraint helps
improve the positioning accuracy. Theoretical analysis of the localization
distortion and Monte Carlo simulations show that the proposed cooperative
strategy of multiple nodes with an extended Kalman filter (EKF) achieves
significantly higher positioning accuracy than existing systems, especially
in heavily obstructed scenarios.
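The RSS-to-range step that such frameworks rely on is commonly the log-distance path-loss model; the reference power and path-loss exponent below are illustrative values that must be calibrated per environment:

```python
def rss_to_range(rss_dbm, rss0_dbm=-40.0, n=2.5, d0=1.0):
    # Log-distance path-loss model: RSS(d) = RSS(d0) - 10 * n * log10(d / d0),
    # inverted here to recover the range d from a measured RSS value.
    # rss0_dbm is the (assumed) power at reference distance d0 meters,
    # and n is the path-loss exponent (larger in obstructed environments).
    return d0 * 10.0 ** ((rss0_dbm - rss_dbm) / (10.0 * n))
```

Because RSS ranging is noisy under multipath and non-line-of-sight conditions, fusing it with inertial attitude and known inter-node ranges, as the abstract describes, constrains the solution and improves accuracy.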
A kernel-based embedding framework for high-dimensional data analysis
The world is essentially multidimensional, e.g., neurons, computer networks, Internet traffic, and financial markets. The challenge is to discover and extract information that lies hidden in these high-dimensional datasets to support classification, regression, clustering, and visualization tasks. As a result, dimensionality reduction aims to provide a faithful representation of data in a low-dimensional space. This removes noise and redundant features, which is useful for understanding and visualizing the structure of complex datasets. The focus of this work is the analysis of high-dimensional data to support regression tasks and exploratory data analysis in real-world scenarios. Firstly, we propose an online framework to predict the long-term future behavior of time series. Secondly, we propose a new dimensionality reduction method to preserve the significant structure of high-dimensional data in a low-dimensional space. Lastly, we propose a sparsification strategy based on dimensionality reduction to avoid overfitting and reduce computational complexity in online applications.