Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to the low
spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing.
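The linear mixing model underlying this overview can be sketched numerically: each pixel is a non-negative, sum-to-one combination of endmember spectra, and abundances can be recovered by constrained least squares. The endmember matrix and abundances below are synthetic illustrations, not data from the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical endmember library: 3 materials, 50 spectral bands.
E = rng.uniform(0.1, 1.0, size=(50, 3))

# True abundances for one pixel: non-negative, summing to one.
a_true = np.array([0.6, 0.3, 0.1])
pixel = E @ a_true + 1e-3 * rng.standard_normal(50)  # mildly noisy observation

# Unmix with non-negative least squares, then renormalise to
# impose the sum-to-one (fully constrained) condition.
a_hat, _ = nnls(E, pixel)
a_hat /= a_hat.sum()
print(np.round(a_hat, 2))
```

With low noise and a well-conditioned library, the recovered abundances match the true ones closely; real hyperspectral scenes are far harder for the reasons the abstract lists (endmember variability, model error, noise).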
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities such as the
rectified Laplacian and rectified Student-t distributions with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate
the hyper-parameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation. These methods include the Markov
chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate
message passing and a diagonal approximation. Using numerical experiments, we
show that the proposed R-SBL method outperforms existing S-NNLS solvers in
terms of both signal and support recovery performance, and is also very robust
against the structure of the design matrix.
Comment: Under Review by IEEE Transactions on Signal Processing.
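As a point of reference for the problem R-SBL addresses, a basic S-NNLS instance can be set up with an underdetermined dictionary and solved with a plain active-set NNLS baseline; the names and dimensions below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Underdetermined design matrix: 20 measurements, 50 unknowns.
A = rng.standard_normal((20, 50))

# Sparse non-negative ground truth with 3 active entries.
x_true = np.zeros(50)
x_true[[5, 17, 42]] = [1.0, 2.0, 0.5]
y = A @ x_true

# Plain NNLS baseline (active set): the R-SBL method targets exactly
# this problem, replacing the implicit prior with an R-GSM prior.
x_hat, residual = nnls(A, y)
print(residual, np.count_nonzero(x_hat))
```

Active-set NNLS returns a solution with at most 20 nonzeros here; the Bayesian evidence-maximization approach in the abstract aims at better support recovery than such baselines.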
Support matrix machine: A review
Support vector machine (SVM) is one of the most studied paradigms in the
realm of machine learning for classification and regression problems. It relies
on vectorized input data. However, a significant portion of the real-world data
exists in matrix format, which is given as input to SVM by reshaping the
matrices into vectors. The process of reshaping disrupts the spatial
correlations inherent in the matrix data. Also, converting matrices into
vectors results in input data with a high dimensionality, which introduces
significant computational complexity. To overcome these issues in classifying
matrix input data, the support matrix machine (SMM) was proposed. It represents one
of the emerging methodologies tailored for handling matrix input data. The SMM
method preserves the structural information of the matrix data by using the
spectral elastic net property which is a combination of the nuclear norm and
Frobenius norm. This article provides the first in-depth analysis of the
development of the SMM model, which can be used as a thorough summary by both
novices and experts. We discuss numerous SMM variants, such as robust, sparse,
class imbalance, and multi-class classification models. We also analyze the
applications of the SMM model and conclude the article by outlining potential
future research avenues and possibilities that may motivate academics to
advance the SMM algorithm.
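The spectral elastic net mentioned above is simply a nuclear norm plus a squared Frobenius norm on the coefficient matrix; a minimal sketch, with an assumed trade-off weight `lam` (not a value from any specific SMM paper):

```python
import numpy as np

def spectral_elastic_net(W, lam=0.5):
    # Nuclear norm: sum of singular values (encourages low rank,
    # preserving the matrix structure of the coefficients).
    nuclear = np.linalg.svd(W, compute_uv=False).sum()
    # Squared Frobenius norm: sum of squared entries (ridge-style term).
    frobenius_sq = float(np.sum(W ** 2))
    return nuclear + lam * frobenius_sq

# Diagonal example: singular values are 3 and 4.
W = np.diag([3.0, 4.0])
print(spectral_elastic_net(W))  # 7 + 0.5 * 25 = 19.5
```

Penalising the nuclear norm couples the rows and columns of the coefficient matrix, which is what lets SMM retain the spatial correlations that vectorisation destroys.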
Adaptive Locality Preserving Regression
This paper proposes a novel discriminative regression method, called adaptive
locality preserving regression (ALPR) for classification. In particular, ALPR
aims to learn a more flexible and discriminative projection that not only
preserves the intrinsic structure of data, but also possesses the properties of
feature selection and interpretability. To this end, we introduce a target
learning technique to adaptively learn a more discriminative and flexible
target matrix rather than the pre-defined strict zero-one label matrix for
regression. Then a locality preserving constraint regularized by the adaptively
learned weights is further introduced to guide the projection learning, which
is beneficial to learn a more discriminative projection and avoid overfitting.
Moreover, we replace the conventional Frobenius norm with the l21
norm to constrain the projection, which enables the method to adaptively select
the most important features from the original high-dimensional data for feature
extraction. In this way, the negative influence of the redundant features and
noises residing in the original data can be greatly eliminated. Besides, the
proposed method has good interpretability for features owing to the
row-sparsity property of the l21 norm. Extensive experiments conducted on the
synthetic database with manifold structure and many real-world databases prove
the effectiveness of the proposed method.
Comment: The paper has been accepted by IEEE Transactions on Circuits and
Systems for Video Technology (TCSVT), and the code is available at
https://drive.google.com/file/d/1iNzONkRByIaUhXwdEhOkkh_0d2AAXNE8/vie
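The row-sparsity behaviour of the l21 norm is easy to see in a toy example: it sums the l2 norms of the rows, so penalising it drives entire rows (features) of the projection to zero. The matrix below is illustrative only.

```python
import numpy as np

def l21_norm(W):
    # l2 norm of each row, then summed: zeroing a whole row
    # (i.e. discarding a feature) reduces the penalty directly,
    # which is the source of the interpretability claimed above.
    return float(np.linalg.norm(W, axis=1).sum())

W = np.array([[3.0, 4.0],   # retained feature: row norm 5
              [0.0, 0.0]])  # pruned feature: row norm 0
print(l21_norm(W))  # 5.0
```

By contrast, the squared Frobenius norm spreads energy over all entries and never produces exactly-zero rows, which is why ALPR swaps it out for feature selection.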
Adaptive Image Denoising by Targeted Databases
We propose a data-dependent denoising procedure to restore noisy images.
Different from existing denoising algorithms which search for patches from
either the noisy image or a generic database, the new algorithm finds patches
from a database that contains only relevant patches. We formulate the denoising
problem as an optimal filter design problem and make two contributions. First,
we determine the basis function of the denoising filter by solving a group
sparsity minimization problem. The optimization formulation generalizes
existing denoising algorithms and offers systematic analysis of the
performance. Improvement methods are proposed to enhance the patch search
process. Second, we determine the spectral coefficients of the denoising filter
by considering a localized Bayesian prior. The localized prior leverages the
similarity of the targeted database, alleviates the intensive Bayesian
computation, and links the new method to the classical linear minimum mean
squared error estimation. We demonstrate applications of the proposed method in
a variety of scenarios, including text images, multiview images and face
images. Experimental results show the superiority of the new algorithm over
existing methods.
Comment: 15 pages, 13 figures, 2 tables, journal
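The targeted-database idea can be caricatured in a few lines: instead of the optimal filter designed in the paper, the toy below simply averages the k database patches nearest to the noisy patch. All names and sizes are made up for illustration.

```python
import numpy as np

def denoise_patch(noisy, database, k=5):
    # Rank database patches by Euclidean distance to the noisy patch
    # and average the k most relevant ones: a crude stand-in for the
    # group-sparsity filter design described in the abstract.
    order = np.argsort(np.linalg.norm(database - noisy, axis=1))
    return database[order[:k]].mean(axis=0)

rng = np.random.default_rng(1)
clean = np.full(16, 0.5)  # flat 4x4 patch, flattened
database = clean + 0.01 * rng.standard_normal((100, 16))  # targeted patches
noisy = clean + 0.3 * rng.standard_normal(16)

restored = denoise_patch(noisy, database)
print(float(np.abs(restored - clean).max()))
```

Because every database patch is relevant (close to the clean signal), even this naive average removes most of the noise; the paper's contribution is doing this principledly via basis selection and a localized Bayesian prior.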