132 research outputs found
Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing
In hyperspectral images, some spectral bands suffer from low signal-to-noise
ratio due to noisy acquisition and atmospheric effects, thus requiring robust
techniques for the unmixing problem. This paper presents a robust supervised
spectral unmixing approach for hyperspectral images. The robustness is achieved
by writing the unmixing problem as the maximization of the correntropy
criterion subject to the most commonly used constraints. Two unmixing problems
are derived: the first considers fully-constrained unmixing, with both the
non-negativity and sum-to-one constraints, while the second combines
non-negativity with a sparsity-promoting penalty on the abundances. The
corresponding optimization problems are solved efficiently using an alternating
direction method of multipliers (ADMM) approach. Experiments on synthetic and
real hyperspectral images validate the performance of the proposed algorithms
for different scenarios, demonstrating that the correntropy-based unmixing is
robust to outlier bands.
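As a rough illustration of the criterion (not the paper's ADMM solver), the sketch below evaluates the Gaussian-kernel correntropy between one pixel and its linear-mixing reconstruction, and maximises it in the fully-constrained setting by projected gradient ascent; all function names and parameter values are illustrative assumptions.

    import numpy as np

    def correntropy(M, a, y, sigma=0.1):
        # Correntropy between the observed spectrum y (one pixel, L bands)
        # and its reconstruction M @ a under the linear mixing model.
        # Bands with large residuals are exponentially down-weighted,
        # which is what makes the criterion robust to outlier bands.
        r = y - M @ a
        return np.exp(-r**2 / (2 * sigma**2)).sum()

    def project_simplex(a):
        # Euclidean projection onto {a >= 0, sum(a) = 1}, i.e. the
        # non-negativity and sum-to-one constraints of the
        # fully-constrained problem (standard sort-based algorithm).
        u = np.sort(a)[::-1]
        css = np.cumsum(u)
        idx = np.arange(1, len(a) + 1)
        rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1.0)
        return np.maximum(a + theta, 0.0)

    def unmix(M, y, sigma=0.1, lr=0.01, n_iter=1000):
        # Projected gradient ascent on the correntropy criterion; a
        # simple stand-in for the ADMM solver derived in the paper.
        a = np.full(M.shape[1], 1.0 / M.shape[1])  # simplex centre
        for _ in range(n_iter):
            r = y - M @ a
            w = np.exp(-r**2 / (2 * sigma**2))     # per-band weights
            a = project_simplex(a + lr / sigma**2 * (M.T @ (w * r)))
        return a

The per-band weights w make the update behave like a weighted least-squares step in which noisy bands contribute almost nothing, which is the intuition behind the robustness claim.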
Robust Adaptive Generalized Correntropy-based Smoothed Graph Signal Recovery with a Kernel Width Learning
This paper proposes a robust adaptive algorithm for smooth graph signal
recovery based on generalized correntropy, for which a suitable cost function
is defined. The algorithm is derived, and a kernel-width-learning variant is
also proposed; simulation results show that it outperforms the fixed-kernel
version. Theoretical analyses of the proposed algorithm are provided as well:
first, the convexity of the cost function is discussed; second, the uniform
stability of the algorithm is investigated; third, a mean convergence analysis
is given; finally, the computational complexity of the algorithm is analysed.
In addition, synthetic and real-world experiments demonstrate the advantage of
the proposed algorithm over other adaptive graph signal recovery algorithms in
the literature.
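For reference, generalized correntropy is commonly defined with a generalized Gaussian kernel (the exact cost used in the paper may differ in details such as weighting or smoothness terms):

    V_{\alpha,\beta}(X,Y) = \mathbb{E}\big[\,G_{\alpha,\beta}(X-Y)\,\big],
    \qquad
    G_{\alpha,\beta}(e) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\,
    \exp\!\left(-\,\bigl|e/\beta\bigr|^{\alpha}\right),

where \alpha > 0 is the shape parameter, \beta > 0 is the kernel width (the quantity the kernel-width-learning variant adapts online), and \Gamma is the gamma function; \alpha = 2 recovers the standard Gaussian-kernel correntropy.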
Graph Regularized Non-negative Matrix Factorization By Maximizing Correntropy
Non-negative matrix factorization (NMF) has proved effective in many
clustering and classification tasks. The classic measures of error between the
original and the reconstructed matrix are Euclidean distance and
Kullback-Leibler (KL) divergence, but these measures do not handle nonlinear
cases properly. Alternative measures based on nonlinear kernels, such as
correntropy, have therefore been proposed. However, existing correntropy-based
NMF targets only low-level features, without considering the intrinsic
geometrical distribution of the data. In this paper, we propose a new NMF
algorithm that preserves local invariance by adding graph regularization to
maximum-correntropy matrix factorization. In addition, each feature can learn
its own kernel from the data. Experimental results on Caltech101 and
Caltech256 show the benefits of this combination over other NMF algorithms for
unsupervised image clustering.
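A minimal sketch of how such an objective can be scored, assuming an entry-wise Gaussian-kernel correntropy fidelity term plus the usual Tr(V L V^T) graph regulariser; the function name, the affinity matrix W, and the trade-off lam are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def graph_correntropy_nmf_objective(X, U, V, W, sigma=1.0, lam=0.1):
        # X: d x n data, U: d x r basis, V: r x n codes,
        # W: n x n affinity graph over the samples (columns of X).
        E = X - U @ V                          # reconstruction residuals
        fidelity = np.exp(-E**2 / (2 * sigma**2)).sum()
        L = np.diag(W.sum(axis=1)) - W         # graph Laplacian
        smoothness = np.trace(V @ L @ V.T)     # penalises codes that
                                               # differ across graph edges
        return fidelity - lam * smoothness

Since Tr(V L V^T) equals half the affinity-weighted sum of squared distances between codes of neighbouring samples, maximising this objective favours factorizations that are both robust to outlier entries and smooth along the data manifold.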
Physically inspired methods and development of data-driven predictive systems.
Traditionally, building predictive models is perceived as a combination of science and art. Although the designer of a predictive system effectively follows a prescribed procedure, their domain knowledge as well as expertise and intuition in the field of machine learning are often irreplaceable. However, in many practical situations it is possible to build well-performing predictive systems by following a rigorous methodology and offsetting the lack of domain knowledge, and partially the lack of expertise and intuition, with computational power. The generalised predictive model development cycle discussed in this thesis is an example of such a methodology which, despite being computationally expensive, has been successfully applied to real-world problems.

The proposed predictive system design cycle is a purely data-driven approach, so the quality of the data used to build the system is of crucial importance. In practice, however, the data is rarely perfect: common problems include missing values, high dimensionality and a very limited number of labelled exemplars. To address these issues, this work investigated and exploited inspirations coming from physics. The novel use of well-established physical models in the form of potential fields has resulted in the derivation of a comprehensive Electrostatic Field Classification Framework for supervised and semi-supervised learning from incomplete data.

Although computational power constantly becomes cheaper and more accessible, it is not infinite. Efficient techniques that exploit the finite predictive information content of the data and limit the computational requirements of the resource-hungry predictive system design procedure are therefore very desirable. In designing such techniques, this work once again drew on physics: using an analogy with a set of interacting particles and the resulting Information Theoretic Learning framework, the Density Preserving Sampling technique has been derived. This technique acts as a computationally efficient alternative to cross-validation, and fits well within the proposed methodology (see the sketch below).

All methods derived in this thesis have been thoroughly tested on a number of benchmark datasets. The proposed generalised predictive model design cycle has been successfully applied to two real-world environmental problems, in which a comparative study of Density Preserving Sampling and cross-validation was also performed, confirming the great potential of the proposed methods.
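The thesis derives Density Preserving Sampling from an Information Theoretic Learning analogy; as a loose illustration of the goal only (each fold should mirror the density structure of the full dataset, unlike a purely random split), one can rank points by a kernel density estimate and deal them round-robin into folds. The helper below is a crude approximation under that assumption, not the thesis's algorithm, and the bandwidth value is arbitrary.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def density_balanced_folds(X, n_folds=5, bandwidth=1.0):
        # Rank samples from low- to high-density regions, then deal them
        # round-robin so every fold spans the same density range as X.
        log_density = KernelDensity(bandwidth=bandwidth).fit(X).score_samples(X)
        order = np.argsort(log_density)
        return [order[i::n_folds] for i in range(n_folds)]

Each returned fold can then play the role of a validation set, as a cheaper stand-in for repeated cross-validation runs.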