Modeling Land-Cover Types Using Multiple Endmember Spectral Mixture Analysis in a Desert City
Spectral mixture analysis is probably the most widely used sub-pixel analysis technique. It models each pixel spectrum as a linear combination of the spectral signatures of two or more ground components. However, standard spectral mixture analysis uses a single, invariable set of endmembers, so it cannot account for the absence of a surface feature in a given pixel or for spectral variation within pure materials. Multiple endmember spectral mixture analysis (MESMA), which addresses these issues by allowing endmembers to vary on a per-pixel basis, was employed in this study to model Landsat ETM+ reflectance in the Phoenix metropolitan area. Image endmember spectra of vegetation, soils, and impervious surfaces were collected using a fine-resolution Quickbird image and the pixel purity index. This study employed 204 (=3x17x4) total four-endmember models for the urban subset and 96 (=6x6x2x4) total five-endmember models for the non-urban subset to identify fractions of soil, impervious surface, vegetation, and shade. The Pearson correlations between the fraction outputs from MESMA and reference data from 60 cm resolution Quickbird data for soil, impervious surface, and vegetation were 0.8030, 0.8632, and 0.8496, respectively. Results from this study suggest that the MESMA approach is effective for mapping urban land covers in desert cities at the sub-pixel level.
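The per-pixel model selection that MESMA performs can be sketched in a few lines: try several candidate endmember sets on each pixel and keep the one with the lowest reconstruction error. The spectra and model names below are made up for illustration (they are not the ETM+ endmembers used in the study), and non-negative least squares stands in for the study's fraction solver.

```python
import numpy as np
from scipy.optimize import nnls

def mesma_unmix(pixel, candidate_models):
    """Try each candidate endmember set on one pixel; keep the
    model with the lowest reconstruction RMSE (the MESMA idea)."""
    best = None
    for model_name, E in candidate_models:
        # E: (n_bands, n_endmembers); solve min ||E f - pixel||, f >= 0
        fractions, rnorm = nnls(E, pixel)
        rmse = rnorm / np.sqrt(len(pixel))
        if best is None or rmse < best[2]:
            best = (model_name, fractions, rmse)
    return best

# Toy 6-band spectra (invented for illustration, not real ETM+ signatures)
rng = np.random.default_rng(0)
veg, soil_a, soil_b, imperv = rng.uniform(0.05, 0.6, size=(4, 6))
models = [
    ("veg+soil_a", np.column_stack([veg, soil_a])),
    ("veg+soil_b", np.column_stack([veg, soil_b])),
    ("veg+imperv", np.column_stack([veg, imperv])),
]
pixel = 0.7 * veg + 0.3 * soil_b          # true mixture for this toy pixel
name, fractions, rmse = mesma_unmix(pixel, models)
print(name, fractions.round(2))
```

Because the second candidate set contains the true components, it reconstructs the pixel essentially exactly and wins the per-pixel model selection.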
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally. Comment: This work has been accepted for
publication in the IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing.
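As a minimal concrete instance of the linear mixing assumption described above, the sketch below simulates one mixed pixel and recovers its abundances under the standard non-negativity and sum-to-one constraints (fully constrained least squares). The endmember matrix and abundances are synthetic, and SciPy's general-purpose SLSQP solver stands in for the specialized unmixing algorithms the survey covers.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_bands, n_end = 20, 3
E = rng.uniform(0.0, 1.0, size=(n_bands, n_end))   # endmember signatures (columns)
a_true = np.array([0.5, 0.3, 0.2])                 # abundances: nonnegative, sum to one
y = E @ a_true + rng.normal(0, 1e-3, n_bands)      # observed mixed pixel + noise

# Fully constrained least squares: min ||E a - y||^2  s.t.  a >= 0, sum(a) = 1
res = minimize(
    lambda a: np.sum((E @ a - y) ** 2),
    x0=np.full(n_end, 1.0 / n_end),
    bounds=[(0, 1)] * n_end,
    constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0},
    method="SLSQP",
)
a_hat = res.x
print(a_hat.round(3))
```

With low noise the recovered abundances match the true ones closely; real scenes are harder because the endmembers themselves must also be estimated.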
A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications
Demixing problems in many areas such as hyperspectral imaging and
differential optical absorption spectroscopy (DOAS) often require finding
sparse nonnegative linear combinations of dictionary elements that match
observed data. We show how aspects of these problems, such as misalignment of
DOAS references and uncertainty in hyperspectral endmembers, can be modeled by
expanding the dictionary with grouped elements and imposing a structured
sparsity assumption that the combinations within each group should be sparse or
even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain
good solutions using convex or greedy methods, such as non-negative least
squares (NNLS) or orthogonal matching pursuit. We use penalties related to the
Hoyer measure, which is the ratio of the ℓ1 and ℓ2 norms, as sparsity
penalties to be added to the objective in NNLS-type models. For solving the
resulting nonconvex models, we propose a scaled gradient projection algorithm
that requires solving a sequence of strongly convex quadratic programs. We
discuss its close connections to convex splitting methods and difference of
convex programming. We also present promising numerical results for example
DOAS analysis and hyperspectral demixing problems. Comment: 38 pages, 14 figures.
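A stripped-down sketch of the penalized model: plain projected gradient descent on non-negative least squares plus an ℓ1/ℓ2 (Hoyer-type) ratio penalty. This is not the authors' scaled gradient projection algorithm (no quadratic-program subproblems); it only illustrates how the ratio penalty steers a coherent-dictionary demixing problem. The dictionary and parameters are invented.

```python
import numpy as np

def sparse_nnls(A, b, lam=0.05, iters=2000):
    """Projected gradient on 0.5*||Ax-b||^2 + lam*||x||_1/||x||_2, x >= 0.
    A plain (unscaled) sketch of the l1/l2-penalized NNLS idea."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the quadratic part
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        nrm = np.linalg.norm(x)
        if nrm < 1e-12:
            break
        # gradient of the ratio penalty (x >= 0, so ||x||_1 = sum(x))
        g_pen = lam * (1.0 / nrm - x.sum() * x / nrm**3)
        g = A.T @ (A @ x - b) + g_pen
        x = np.maximum(x - step * g, 0.0)        # project onto the nonnegative orthant
    return x

# Highly coherent dictionary: columns are near-duplicates of one spectrum,
# and the true solution uses a single column (1-sparse)
rng = np.random.default_rng(2)
base = rng.uniform(0, 1, 30)
A = np.column_stack([base + 0.01 * rng.normal(size=30) for _ in range(5)])
b = 0.8 * A[:, 2]                                # one active column
x_hat = sparse_nnls(A, b)
print(x_hat.round(3))
```

Note that at an exactly 1-sparse nonnegative point the ratio penalty's gradient on the active coordinate vanishes, which is one reason ℓ1/ℓ2 penalties behave better than plain ℓ1 when the dictionary columns are nearly collinear.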
Hyperspectral image unmixing using a multiresolution sticky HDP
This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior which encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
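The abundance prior can be illustrated without the SHDP machinery: abundances are Dirichlet-distributed with parameters selected by a latent label image, so spatially coherent label regions produce spatially coherent abundance statistics. The label image and Dirichlet parameters below are hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-label Dirichlet parameters for 3 endmembers:
# label 0 = vegetation-dominated region, label 1 = soil-dominated region
dd_params = {0: np.array([8.0, 1.0, 1.0]), 1: np.array([1.0, 8.0, 1.0])}

# A toy 4x4 label image with two spatially coherent regions
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1

# Sample an abundance vector per pixel from its label's Dirichlet
abund = np.empty(labels.shape + (3,))
for idx in np.ndindex(labels.shape):
    abund[idx] = rng.dirichlet(dd_params[labels[idx]])

# Dirichlet draws are nonnegative and sum to one, so the physical
# abundance constraints hold by construction
print(abund.sum(axis=-1).round(6))
```

The left half of the image has high first-endmember abundances and the right half low ones, mirroring how the latent label process induces spatial structure in the abundances.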
Relationships of soil, grass, and bedrock over the Kaweah serpentine melange through spectral mixture analysis of AVIRIS data
A linear mixing model is used to model the spectral variability of an AVIRIS scene from the western foothills of the Sierra Nevada and to calibrate these radiance data to reflectance. Five spectral endmembers from the AVIRIS data, plus an ideal 'shade' endmember, were required to model the continuum reflectance of each pixel in the image. Three of the endmembers were interpreted to model the surface constituents green vegetation, dry grass, and illumination. These are the main transient surface constituents that are expected to change with shifts in land use or climatic influences and viewing conditions ('shade' only). The spectral distinction between the other three endmembers is very small, yet their spatial distributions are coherent and interpretable. These distributions cross anthropogenic and vegetation boundaries and are best interpreted as different soil types. Comparison of the fraction images to the bedrock geology maps indicates that substrate composition must be a factor contributing to the spectral properties of these endmembers. Detailed examination of the reflectance spectra of the three soil endmembers reveals that differences in the amount of ferric and ferrous iron and/or organic constituents in the soils are largely responsible for the differences in their spectral properties.
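One detail worth making concrete is the role of the 'shade' endmember: since it absorbs illumination variation, fraction images are commonly shade-normalized so that the remaining physical fractions sum to one. A small sketch with invented fraction values:

```python
import numpy as np

# Hypothetical per-pixel fractions from a 3-endmember + shade model
fractions = np.array([
    # green_veg, dry_grass, soil, shade
    [0.30, 0.20, 0.10, 0.40],
    [0.15, 0.05, 0.20, 0.60],
])

# Shade-normalize: divide out (1 - shade) so the remaining physical
# fractions sum to one, removing the illumination component
physical = fractions[:, :3]
shade = fractions[:, 3:]
normalized = physical / (1.0 - shade)
print(normalized.round(3))
```

After normalization the two pixels can be compared as surface-cover proportions even though they were illuminated differently.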
Implementation strategies for hyperspectral unmixing using Bayesian source separation
Bayesian Positive Source Separation (BPSS) is a useful unsupervised approach
for hyperspectral data unmixing, where numerical non-negativity of spectra and
abundances has to be ensured, such as in remote sensing. Moreover, it is sensible
to impose a sum-to-one (full additivity) constraint to the estimated source
abundances in each pixel. Even though non-negativity and full additivity are
two necessary properties to get physically interpretable results, the use of
BPSS algorithms has been so far limited by high computation time and large
memory requirements due to the Markov chain Monte Carlo calculations. An
implementation strategy which allows one to apply these algorithms on a full
hyperspectral image, as typical in Earth and Planetary Science, is introduced.
Effects of pixel selection, the impact of such sampling on the relevance of the
estimated component spectra and abundance maps, as well as on the computation
times, are discussed. For that purpose, two different datasets have been used:
a synthetic one and a real hyperspectral image from Mars. Comment: 10 pages, 6
figures, submitted to IEEE Transactions on Geoscience and Remote Sensing in the
special issue on Hyperspectral Image and Signal Processing (WHISPERS).
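The two-stage strategy the abstract describes can be sketched as follows: estimate source spectra on a small pixel subset, then, with the spectra fixed, reduce the full-image abundance estimation to cheap per-pixel non-negative least squares. Everything here is synthetic, and the expensive subset-based BPSS/MCMC stage is replaced by a placeholder (the true spectra), since the point is the computational structure rather than the sampler itself.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Synthetic image: 2500 pixels, 16 bands, 3 sources
n_pix, n_bands, n_src = 2500, 16, 3
S_true = rng.uniform(0, 1, (n_bands, n_src))        # source spectra (columns)
A_true = rng.dirichlet(np.ones(n_src), size=n_pix)  # sum-to-one abundances
X = A_true @ S_true.T + rng.normal(0, 1e-3, (n_pix, n_bands))

# Stage 1: select a small random pixel subset and estimate the source
# spectra on it only. (Placeholder: a real run would do BPSS/MCMC on
# X[subset]; here we simply pretend the subset estimate is exact.)
subset = rng.choice(n_pix, size=100, replace=False)
S_hat = S_true

# Stage 2: with spectra fixed, abundances for *all* pixels reduce to
# cheap independent per-pixel non-negative least squares
A_hat = np.array([nnls(S_hat, x)[0] for x in X])
err = np.abs(A_hat - A_true).mean()
print(round(err, 4))
```

The cost structure is the point: the MCMC burden scales with the subset size, while the full-image pass is embarrassingly parallel per-pixel least squares.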
Nonlinear unmixing of hyperspectral images: Models and algorithms
When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may not be valid, and other nonlinear models need to be considered, for instance when there are multiple-scattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
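A minimal sketch of why the LMM can fall short: a generalized-bilinear-style model adds pairwise interaction terms for second-order scattering, so the predicted pixel departs from the purely linear prediction. The spectra, abundances, and interaction coefficient below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, n_end = 12, 3
M = rng.uniform(0, 1, (n_bands, n_end))     # endmember spectra (columns)
a = np.array([0.6, 0.3, 0.1])               # abundances

# Linear mixing model (LMM)
y_lmm = M @ a

# Generalized-bilinear-style model: add pairwise interaction terms
# gamma * a_i * a_j * (m_i * m_j) modeling second-order scattering
gamma = 0.5                                  # interaction strength (illustrative)
y_gbm = y_lmm.copy()
for i in range(n_end):
    for j in range(i + 1, n_end):
        y_gbm += gamma * a[i] * a[j] * (M[:, i] * M[:, j])

print((y_gbm - y_lmm).round(3))
```

With positive reflectances, every interaction term is nonnegative, so the nonlinear pixel is brighter than the LMM prediction in every band; fitting it with an LMM would bias the estimated abundances.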