Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing
In hyperspectral images, some spectral bands suffer from low signal-to-noise
ratio due to noisy acquisition and atmospheric effects, thus requiring robust
techniques for the unmixing problem. This paper presents a robust supervised
spectral unmixing approach for hyperspectral images. The robustness is achieved
by writing the unmixing problem as the maximization of the correntropy
criterion subject to the most commonly used constraints. Two unmixing problems
are derived: the first problem considers the fully-constrained unmixing, with
both the non-negativity and sum-to-one constraints, while the second one combines
the non-negativity constraint with a sparsity-promoting term on the abundances. The
corresponding optimization problems are solved efficiently using an alternating
direction method of multipliers (ADMM) approach. Experiments on synthetic and
real hyperspectral images validate the performance of the proposed algorithms
for different scenarios, demonstrating that the correntropy-based unmixing is
robust to outlier bands.
Comment: 23 pages
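The robustness claim can be illustrated with a small numerical sketch. This does not run the paper's ADMM solver; it only evaluates the correntropy criterion on a synthetic mixed pixel, and the kernel bandwidth, band count, endmember matrix, and abundance vector are all illustrative assumptions. The point it shows: a single corrupted band barely moves the correntropy, whereas it dominates a squared-error criterion.

```python
import numpy as np

def correntropy(r, sigma=0.1):
    """Correntropy of a residual vector r with a Gaussian kernel of
    bandwidth sigma (sigma = 0.1 is an illustrative choice)."""
    return np.mean(np.exp(-r**2 / (2 * sigma**2)))

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((50, 3)))  # hypothetical endmember signatures, 50 bands
x = np.array([0.5, 0.3, 0.2])             # abundances: non-negative, sum to one
y = A @ x                                 # noiseless mixed pixel

r_clean = y - A @ x                       # zero residual for the true abundances
y_out = y.copy()
y_out[0] += 5.0                           # one corrupted (outlier) band
r_out = y_out - A @ x

c_clean = correntropy(r_clean)            # = 1.0 (perfect fit)
c_out = correntropy(r_out)                # = 49/50: the outlier band is nearly ignored
sse_out = np.sum(r_out**2)                # = 25.0: dominates a least-squares fit
```

Because the Gaussian kernel saturates for large residuals, maximizing correntropy caps each band's influence, which is exactly why outlier bands do not dominate the fit.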
Robust computation of linear models by convex relaxation
Consider a dataset of vector-valued observations that consists of noisy
inliers, which are explained well by a low-dimensional subspace, along with
some number of outliers. This work describes a convex optimization problem,
called REAPER, that can reliably fit a low-dimensional model to this type of
data. This approach parameterizes linear subspaces using orthogonal projectors,
and it uses a relaxation of the set of orthogonal projectors to reach the
convex formulation. The paper provides an efficient algorithm for solving the
REAPER problem, and it documents numerical experiments which confirm that
REAPER can dependably find linear structure in synthetic and natural data. In
addition, when the inliers lie near a low-dimensional subspace, there is a
rigorous theory that describes when REAPER can approximate this subspace.
Comment: Formerly titled "Robust computation of linear models, or How to find
a needle in a haystack"
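As a rough illustration of the idea, here is a minimal IRLS-flavoured sketch of the REAPER objective min_P Σ_i ||(I − P)x_i||. One loud simplification: each weighted subproblem is answered with a plain top-d eigenprojector rather than the paper's solution over the relaxed projector set {0 ⪯ P ⪯ I, tr P = d}, and the data, iteration count, and damping constant are assumptions.

```python
import numpy as np

def reaper_irls(X, d, n_iter=50, delta=1e-8):
    """IRLS-flavoured sketch for min_P sum_i ||(I - P) x_i|| over a
    relaxation of the rank-d orthogonal projectors. Simplifying
    assumption: each weighted subproblem is solved by a top-d
    eigenprojector instead of the paper's relaxed-set solution."""
    P = np.eye(X.shape[1]) * (d / X.shape[1])    # feasible start: tr P = d
    for _ in range(n_iter):
        res = np.linalg.norm(X - X @ P, axis=1)  # distance to current model
        beta = 1.0 / np.maximum(res, delta)      # large weight for well-explained points
        C = (X * beta[:, None]).T @ X            # weighted second-moment matrix
        _, V = np.linalg.eigh(C)
        U = V[:, -d:]                            # top-d eigenvectors of C
        P = U @ U.T                              # projector minimizing tr((I - P) C)
    return P

# inliers near the first coordinate axis plus random outliers
rng = np.random.default_rng(1)
inliers = np.outer(rng.standard_normal(100), np.eye(5)[0])
inliers += 0.01 * rng.standard_normal((100, 5))
outliers = rng.standard_normal((20, 5))
P = reaper_irls(np.vstack([inliers, outliers]), d=1)
```

The reweighting by 1/residual is what makes the fit robust: outliers far from the current subspace are progressively downweighted, so the recovered projector concentrates on the inlier direction.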
An Efficient Alternating Riemannian/Projected Gradient Descent Ascent Algorithm for Fair Principal Component Analysis
Fair principal component analysis (FPCA), a ubiquitous dimensionality
reduction technique in signal processing and machine learning, aims to find a
low-dimensional representation for a high-dimensional dataset in view of
fairness. The FPCA problem involves optimizing a non-convex and non-smooth
function over the Stiefel manifold. The state-of-the-art methods for solving
the problem are subgradient methods and semidefinite relaxation-based methods.
However, both types of methods have clear limitations, and each solves the FPCA
problem efficiently only in special scenarios.
This paper aims at developing efficient algorithms for solving the FPCA problem
in general, especially large-scale, settings. We first transform
FPCA into a smooth non-convex linear minimax optimization problem over the
Stiefel manifold. To solve the above general problem, we propose an efficient
alternating Riemannian/projected gradient descent ascent (ARPGDA) algorithm,
which performs a Riemannian gradient descent step and an ordinary projected
gradient ascent step at each iteration. We prove that ARPGDA can find an
ε-stationary point of the above problem within O(ε^{-3}) iterations.
Simulation results show that,
compared with the state-of-the-art methods, our proposed ARPGDA algorithm can
achieve better performance in terms of both solution quality and speed for the
FPCA problem.
Comment: 5 pages, 8 figures, submitted for possible publication
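The alternating scheme described above can be sketched on a toy instance. Everything below is an illustrative assumption rather than the paper's algorithm: the step sizes, the QR retraction, the iteration count, and the specific max-min-fairness form min over the Stiefel manifold, max over the simplex, of −Σ_g y_g tr(XᵀA_gX), where each A_g is a group covariance matrix.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {y >= 0, sum y = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1 - css) / idx > 0)[0][-1]
    return np.maximum(v + (1 - css[rho]) / (rho + 1), 0.0)

def arpgda_sketch(As, r, eta_x=0.05, eta_y=0.5, n_iter=300, seed=0):
    """Toy alternating Riemannian/projected gradient descent-ascent loop for
    min_{X on Stiefel} max_{y in simplex} -sum_g y_g tr(X^T A_g X).
    Step sizes and retraction are illustrative choices, not the paper's."""
    D = As[0].shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((D, r)))  # start on the Stiefel manifold
    y = np.full(len(As), 1.0 / len(As))               # uniform group weights
    for _ in range(n_iter):
        # Euclidean gradient in X of f(X, y) = -sum_g y_g tr(X^T A_g X)
        G = -2 * sum(w * (A @ X) for w, A in zip(y, As))
        RG = G - X @ ((X.T @ G + G.T @ X) / 2)        # Riemannian (tangent) gradient
        X, _ = np.linalg.qr(X - eta_x * RG)           # descent step + QR retraction
        g = np.array([np.trace(X.T @ A @ X) for A in As])
        y = project_simplex(y - eta_y * g)            # ascent in y: weight the worst group
    return X, y

# two groups whose principal directions disagree; a fair r = 1 solution balances them
A1 = np.diag([3.0, 1.0, 0.0, 0.0])
A2 = np.diag([1.0, 3.0, 0.0, 0.0])
X, y = arpgda_sketch([A1, A2], r=1)
g = np.array([np.trace(X.T @ A @ X) for A in (A1, A2)])
```

The y-update concentrates weight on the group with the smallest explained variance, so the X-update is pulled toward directions that balance the groups instead of favouring a single dominant one.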
Energy preserving model order reduction of the nonlinear Schr\"odinger equation
An energy preserving reduced order model is developed for the two-dimensional
nonlinear Schr\"odinger equation (NLSE) with plane wave solutions and with an
external potential. The NLSE is discretized in space by the symmetric interior
penalty discontinuous Galerkin (SIPG) method. The resulting system of
Hamiltonian ordinary differential equations is integrated in time by the
energy preserving average vector field (AVF) method. The mass and energy
preserving reduced order model (ROM) is constructed by proper orthogonal
decomposition (POD) Galerkin projection. The nonlinearities are computed for
the ROM efficiently by discrete empirical interpolation method (DEIM) and
dynamic mode decomposition (DMD). Preservation of the semi-discrete energy and
mass is shown for the full order model (FOM) and for the ROM, which ensures the
long term stability of the solutions. Numerical simulations illustrate the
preservation of the energy and mass in the reduced order model for the two
dimensional NLSE with and without the external potential. The POD-DMD yields a
remarkable computational speed-up over the POD-DEIM. Both methods approximate the
FOM accurately, with the POD-DEIM being more accurate than the POD-DMD.
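The AVF-plus-POD pipeline can be sketched on a toy linear Hamiltonian system standing in for the SIPG-discretized NLSE (the oscillator chain, step size, and mode count below are all assumptions). Two facts the sketch relies on: for a linear system the AVF method coincides with the implicit midpoint rule, which preserves quadratic energies exactly, and the POD basis consists of the leading left singular vectors of the snapshot matrix.

```python
import numpy as np

def avf_step_linear(z, L, dt):
    """One AVF step for a linear system z' = L z; for linear systems the
    average vector field method reduces to the implicit midpoint rule,
    which preserves quadratic invariants such as the energy."""
    M = np.eye(len(z)) - 0.5 * dt * L
    return np.linalg.solve(M, z + 0.5 * dt * (L @ z))

def pod_basis(S, r):
    """POD basis: the r leading left singular vectors of the snapshot matrix S."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r]

# toy Hamiltonian FOM: chain of 5 coupled oscillators, z = (q, p),
# q' = p, p' = -K q, energy H = (p.p + q.K.q)/2 -- a linear stand-in
# for the semi-discrete NLSE system
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = np.block([[np.zeros((n, n)), np.eye(n)], [-K, np.zeros((n, n))]])

_, E = np.linalg.eigh(K)
q0 = E[:, 0] + E[:, 1]                 # excite only two eigenmodes
z = np.concatenate([q0, np.zeros(n)])

def energy(z):
    q, p = z[:n], z[n:]
    return 0.5 * (p @ p + q @ (K @ q))

snaps, energies = [z], [energy(z)]
for _ in range(500):
    z = avf_step_linear(z, L, dt=0.05)
    snaps.append(z)
    energies.append(energy(z))
S = np.array(snaps).T                  # snapshot matrix, one state per column

V = pod_basis(S, r=4)                  # trajectory lives in a 4-dim subspace
rec_err = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
drift = max(abs(e - energies[0]) for e in energies)
```

Because the initial condition excites only two eigenmodes, the trajectory stays in a four-dimensional subspace and r = 4 POD modes reconstruct it to machine precision; the energy drift over 500 steps is likewise at roundoff level, mirroring the long-term stability argument in the abstract.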