Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels, with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing
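The linear mixing model underlying this overview can be sketched in a few lines. Everything below (endmember matrix, abundances, noise level) is a synthetic illustration, not data or code from the paper; the simplex projection is a deliberately crude stand-in for the constrained estimators the survey covers.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_end = 50, 3
E = rng.random((bands, n_end))            # hypothetical endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])        # abundances: non-negative, sum to one
y = E @ a_true + 1e-3 * rng.standard_normal(bands)  # observed mixed-pixel spectrum

# Invert the linear model y = E a + n, then crudely project the estimate
# onto the probability simplex (non-negative, sum-to-one).
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
a_hat = np.clip(a_hat, 0.0, None)
a_hat /= a_hat.sum()
```

With low noise and well-conditioned endmembers this recovers the abundances closely; the ill-posedness discussed in the abstract appears when endmembers are highly correlated or variable.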
Bayesian separation of spectral sources under non-negativity and full additivity constraints
This paper addresses the problem of separating spectral sources which are
linearly mixed with unknown proportions. The main difficulty of the problem is
to ensure the full additivity (sum-to-one) of the mixing coefficients and
non-negativity of sources and mixing coefficients. A Bayesian estimation
approach based on Gamma priors was recently proposed to handle the
non-negativity constraints in a linear mixture model. However, incorporating
the full additivity constraint requires further developments. This paper
studies a new hierarchical Bayesian model appropriate to the non-negativity and
sum-to-one constraints associated with the regressors and regression coefficients
of linear mixtures. The estimation of the unknown parameters of this model is
performed using samples generated by an appropriate Gibbs sampler. The
performance of the proposed algorithm is evaluated through simulation results
conducted on synthetic mixture models. The proposed approach is also applied to
the processing of multicomponent chemical mixtures resulting from Raman
spectroscopy.
Comment: v4: minor grammatical changes; Signal Processing, 200
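The two constraints this abstract centers on (non-negativity and full additivity) can be illustrated with a distribution that enforces them by construction. This is only a toy sanity check of the constraint set, not the paper's hierarchical model or its Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
# Draw 1000 candidate mixing-coefficient vectors from a Dirichlet distribution:
# every draw is non-negative and sums to one by construction, which is exactly
# the non-negativity + full-additivity constraint pair on the coefficients.
coeffs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=1000)
print(coeffs.min() >= 0.0, np.allclose(coeffs.sum(axis=1), 1.0))
```

A Gibbs sampler like the paper's would draw each coefficient conditionally while preserving these same constraints at every step.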
Cramér-Rao Bounds for Complex-Valued Independent Component Extraction: Determined and Piecewise Determined Mixing Models
This paper presents the Cramér-Rao Lower Bound (CRLB) for the complex-valued
Blind Source Extraction (BSE) problem based on the assumption that the target
signal is independent of the other signals. Two instantaneous mixing models are
considered. First, we consider the standard determined mixing model used in
Independent Component Analysis (ICA) where the mixing matrix is square and
non-singular and the number of the latent sources is the same as that of the
observed signals. The CRLB for Independent Component Extraction (ICE) where the
mixing matrix is re-parameterized in order to extract only one independent
target source is computed. The target source is assumed to be non-Gaussian or
non-circular Gaussian while the other signals (background) are circular
Gaussian or non-Gaussian. The results confirm some previous observations known
for the real domain and bring new results for the complex domain. Also, the
CRLB for ICE is shown to coincide with that for ICA when the non-Gaussianity
of the background is taken into account. Second, we extend the CRLB analysis
to piecewise determined
mixing models. Here, the observed signals are assumed to obey the determined
mixing model within short blocks, where the mixing matrices may vary from
block to block. However, either the mixing vector or the separating vector
corresponding to the target source is assumed to be constant across the blocks.
The CRLBs for the parameters of these models bring new performance bounds for
the BSE problem.
Comment: 25 pages, 8 figures
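The role of a CRLB can be shown empirically on the simplest complex-valued estimation problem. The setup below (estimating the mean of circular complex Gaussian noise, where the CRLB is sigma^2/N) is an assumed textbook example, far simpler than the paper's BSE bounds, included only to show what "attaining the bound" looks like.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma2, trials = 100, 2.0, 4000
# Circular complex Gaussian noise: i.i.d. real/imag parts with variance sigma2/2 each.
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((trials, N))
                               + 1j * rng.standard_normal((trials, N)))
mu = 1.0 + 1.0j
est = (mu + noise).mean(axis=1)        # sample-mean estimator, one per trial
var = np.mean(np.abs(est - mu) ** 2)   # empirical mean-squared error
crlb = sigma2 / N                      # CRLB for the complex mean in this model
print(round(var / crlb, 2))           # efficient estimator: ratio hovers near 1.0
```

The sample mean is efficient here, so its MSE sits on the bound; the paper's contribution is deriving the analogous bounds for the much harder ICE/ICA mixing models.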
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed into sophisticated software algorithms, leaving only delicate,
finely tuned tasks to the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications of the past several years. The prime focus is
bridging theory and practice, that is, to pinpoint the
potential of sub-Nyquist strategies to emerge from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, promote the sub-Nyquist premise
into practical applications, and encourage further research into this exciting
new frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
Sunyaev-Zel'dovich clusters reconstruction in multiband bolometer camera surveys
We present a new method for the reconstruction of Sunyaev-Zel'dovich (SZ)
galaxy clusters in future SZ-survey experiments using multiband bolometer
cameras such as Olimpo, APEX, or Planck. Our goal is to optimise SZ-cluster
extraction from the observed noisy maps. We wish to emphasize that none of the
algorithms used in the detection chain relies on prior knowledge of the
SZ-cluster signal or of other astrophysical sources (optical spectrum, noise
covariance matrix, or covariance of SZ-cluster wavelet coefficients). First, a
blind separation of the different astrophysical components which contribute to
the observations is conducted using an Independent Component Analysis (ICA)
method. Then, a recent nonlinear filtering technique in the wavelet domain,
based on multiscale entropy and the False Discovery Rate (FDR) method, is used
to detect and reconstruct the galaxy clusters. Finally, we use the Source
Extractor software to identify the detected clusters. The proposed method was
applied to realistic simulations of observations. In terms of global detection
efficiency, the new method is impressive: it provides results comparable to
the Pierpaoli et al. method while remaining a blind algorithm. A preprint with
full-resolution figures is available at the URL:
w10-dapnia.saclay.cea.fr/Phocea/Vie_des_labos/Ast/ast_visu.php?id_ast=728
Comment: Submitted to A&A. 32 pages, text only
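The blind-separation step in this pipeline can be illustrated with a toy two-source ICA sketch. The sources, mixing matrix, and the brute-force kurtosis contrast below are all assumptions for illustration; the paper's component maps and its specific ICA method are far more involved.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
S = np.vstack([rng.uniform(-1, 1, n),       # sub-Gaussian source
               rng.laplace(0, 1, n)])        # super-Gaussian source
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown (here: made-up) mixing matrix
X = A @ S                                    # observed "multiband" mixtures

# Whiten the mixtures so only an orthogonal ambiguity remains.
X = X - X.mean(axis=1, keepdims=True)
d, V = np.linalg.eigh(np.cov(X))
Z = V @ np.diag(d ** -0.5) @ V.T @ X

def kurt(y):
    # Excess kurtosis of a (unit-variance) signal; zero for Gaussian data.
    return np.mean(y ** 4) - 3.0

def rot(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

# In 2-D, ICA reduces to one rotation angle: pick the angle maximizing
# total non-Gaussianity, measured as the sum of |excess kurtosis|.
angles = np.linspace(0.0, np.pi / 2, 500)
best = max(angles, key=lambda a: sum(abs(kurt(r)) for r in rot(a) @ Z))
Y = rot(best) @ Z                            # recovered sources (up to order/sign/scale)
```

Maximizing non-Gaussianity like this recovers the sources up to permutation and sign, which is the standard ICA ambiguity the paper's downstream filtering step must tolerate.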