Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to the low
spatial resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are discussed. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in the IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing.
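The linear mixing model at the heart of this literature can be illustrated in a few lines. The sketch below uses a synthetic endmember matrix and abundances (assumptions for illustration, not data from the paper), recovering per-pixel abundances with nonnegativity from NNLS and a sum-to-one constraint enforced approximately by a weighted all-ones row, a common trick:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical endmember library: 3 materials, 50 spectral bands.
M = rng.uniform(0.1, 1.0, size=(50, 3))

# True abundances for one pixel (nonnegative, summing to one).
a_true = np.array([0.5, 0.3, 0.2])

# Observed pixel spectrum under the linear mixing model y = M a + n.
y = M @ a_true + 0.001 * rng.standard_normal(50)

# Append a heavily weighted all-ones row so that least squares also
# pushes the abundances toward sum-to-one; NNLS supplies nonnegativity.
delta = 10.0
M_aug = np.vstack([M, delta * np.ones((1, 3))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)
```

This is only the fully constrained least-squares step; estimating the number of endmembers and their signatures is the harder part of the problem surveyed above.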
Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing
Hyperspectral imaging, also known as image spectrometry, is a landmark
technique in geoscience and remote sensing (RS). In the past decade, enormous
efforts have been made to process and analyze these hyperspectral (HS) products
mainly by seasoned experts. However, with the ever-growing volume of
data, the cost in manpower and material resources poses new challenges
for reducing the burden of manual labor and improving efficiency. It is
therefore urgent to develop more intelligent and automatic
approaches for various HS RS applications. Machine learning (ML) tools with
convex optimization have successfully undertaken the tasks of numerous
artificial intelligence (AI)-related applications. However, their ability to
handle complex practical problems remains limited, particularly for HS data,
due to the various spectral variabilities introduced in the process of HS
imaging and the complexity and redundancy of high-dimensional HS signals.
Compared to convex models, non-convex modeling, which can characterize
more complex real scenes and provide model interpretability both
technically and theoretically, has proven to be a
feasible way to narrow the gap between challenging HS vision tasks and
currently advanced intelligent data processing models.
A convex formulation for hyperspectral image superresolution via subspace-based regularization
Hyperspectral remote sensing images (HSIs) usually have high spectral
resolution and low spatial resolution. Conversely, multispectral images (MSIs)
usually have low spectral and high spatial resolutions. The problem of
inferring images which combine the high spectral and high spatial resolutions
of HSIs and MSIs, respectively, is a data fusion problem that has been the
focus of recent active research due to the increasing availability of HSIs and
MSIs retrieved from the same geographical area.
We formulate this problem as the minimization of a convex objective function
containing two quadratic data-fitting terms and an edge-preserving regularizer.
The data-fitting terms account for blur, different resolutions, and additive
noise. The regularizer, a form of vector Total Variation, promotes
piecewise-smooth solutions with discontinuities aligned across the
hyperspectral bands.
The downsampling operator accounting for the different spatial resolutions,
the non-quadratic and non-smooth nature of the regularizer, and the very large
size of the HSI to be estimated lead to a hard optimization problem. We deal
with these difficulties by exploiting the fact that HSIs generally "live" in a
low-dimensional subspace and by tailoring the Split Augmented Lagrangian
Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction
Method of Multipliers (ADMM), to this optimization problem, by means of a
convenient variable splitting. The spatial blur and the spectral linear
operators linked, respectively, with the HSI and MSI acquisition processes are
also estimated, and we obtain an effective algorithm that outperforms the
state-of-the-art, as illustrated in a series of experiments with simulated and
real-life data.
Comment: IEEE Trans. Geosci. Remote Sens., to be published.
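The subspace property this method exploits can be sketched on synthetic data (sizes and data below are assumptions): the HSI matrix is well approximated by a low-dimensional basis estimated from its top singular vectors, so the optimization can run over the small coefficient matrix instead of the full image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic HSI: 100 bands x 400 pixels, lying near a 5-dim subspace.
n_bands, n_pix, p = 100, 400, 5
E_true = rng.standard_normal((n_bands, p))
X_true = rng.random((p, n_pix))
Y = E_true @ X_true + 0.01 * rng.standard_normal((n_bands, n_pix))

# Estimate the subspace basis E from the top-p left singular vectors.
U, s, _ = np.linalg.svd(Y, full_matrices=False)
E = U[:, :p]

# The HSI is then represented as Z = E X; optimizing over the p-row X
# instead of the n_bands-row Z is what keeps the problem tractable.
X = E.T @ Y          # least-squares coefficients for this sketch
Z = E @ X            # reconstruction from the subspace

rel_err = np.linalg.norm(Z - Y) / np.linalg.norm(Y)
```

In the actual algorithm the coefficients X are not obtained by this simple projection but by the SALSA/ADMM iterations with the vector-TV regularizer; the sketch only shows why the subspace representation loses almost nothing.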
Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super-Resolution
Due to the limitations of hyperspectral imaging systems, hyperspectral
imagery (HSI) often suffers from poor spatial resolution, thus hampering many
applications of the imagery. Hyperspectral super-resolution refers to fusing
HSI and MSI to generate an image with both high spatial and high spectral
resolutions. Recently, several new methods have been proposed to solve this
fusion problem, and most of these methods assume that prior information on
the Point Spread Function (PSF) and Spectral Response Function (SRF) is known.
However, in practice, this information is often limited or unavailable. In this
work, an unsupervised deep learning-based fusion method - HyCoNet - that can
solve the problems in HSI-MSI fusion without the prior PSF and SRF information
is proposed. HyCoNet consists of three coupled autoencoder nets in which the
HSI and MSI are unmixed into endmembers and abundances based on the linear
unmixing model. Two special convolutional layers are designed to act as a
bridge that coordinates with the three autoencoder nets, and the PSF and SRF
parameters are learned adaptively in the two convolution layers during the
training process. Furthermore, driven by the joint loss function, the proposed
method is straightforward and easily implemented in an end-to-end training
manner. The experiments performed in the study demonstrate that the proposed
method performs well and produces robust results for different datasets and
arbitrary PSFs and SRFs.
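The idea of estimating an unknown SRF from the data itself can be illustrated with a linear stand-in for the convolutional layers HyCoNet trains (all sizes and data below are synthetic assumptions): given co-registered HS and MS pixels, the SRF matrix is recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 50-band HS spectra and a 4-band MS image of the
# same pixels, related by an unknown spectral response function R.
n_hs, n_ms, n_pix = 50, 4, 1000
R_true = rng.random((n_ms, n_hs))
R_true /= R_true.sum(axis=1, keepdims=True)   # each MS band averages HS bands

hs = rng.random((n_hs, n_pix))
ms = R_true @ hs + 0.001 * rng.standard_normal((n_ms, n_pix))

# Least-squares estimate of R from the HS-MS correspondence; in the
# network this role is played by an adaptively learned conv layer.
R_hat = np.linalg.lstsq(hs.T, ms.T, rcond=None)[0].T
```

The PSF is learned analogously in the spatial domain; the point is simply that the degradation operators are identifiable from the paired observations rather than required as prior knowledge.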
Fusing Multiple Multiband Images
We consider the problem of fusing an arbitrary number of multiband, i.e.,
panchromatic, multispectral, or hyperspectral, images belonging to the same
scene. We use the well-known forward observation and linear mixture models with
Gaussian perturbations to formulate the maximum-likelihood estimator of the
endmember abundance matrix of the fused image. We calculate the Fisher
information matrix for this estimator and examine the conditions for the
uniqueness of the estimator. We use a vector total-variation penalty term
together with nonnegativity and sum-to-one constraints on the endmember
abundances to regularize the derived maximum-likelihood estimation problem. The
regularization facilitates exploiting the prior knowledge that natural images
are mostly composed of piecewise smooth regions with limited abrupt changes,
i.e., edges, as well as coping with potential ill-posedness of the fusion
problem. We solve the resultant convex optimization problem using the
alternating direction method of multipliers. We utilize the circular
convolution theorem in conjunction with the fast Fourier transform to alleviate
the computational complexity of the proposed algorithm. Experiments with
multiband images constructed from real hyperspectral datasets reveal the
superior performance of the proposed algorithm in comparison with the
state-of-the-art algorithms, which need to be used in tandem to fuse more than
two multiband images.
Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution
The recent advancement of deep learning techniques has made great progress on
hyperspectral image super-resolution (HSI-SR). Yet the development of
unsupervised deep networks remains challenging for this task. To this end, we
propose a novel coupled unmixing network with a cross-attention mechanism,
CUCaNet for short, to enhance the spatial resolution of HSI by means of a
higher-spatial-resolution multispectral image (MSI). Inspired by coupled
spectral unmixing, a two-stream convolutional autoencoder framework is taken as
the backbone to jointly decompose MS and HS data into a spectrally meaningful basis
and corresponding coefficients. CUCaNet is capable of adaptively learning
spectral and spatial response functions from HS-MS correspondences by enforcing
reasonable consistency assumptions on the networks. Moreover, a cross-attention
module is devised to yield more effective spatial-spectral information transfer
in networks. Extensive experiments are conducted on three widely-used HS-MS
datasets in comparison with state-of-the-art HSI-SR models, demonstrating the
superiority of CUCaNet in the HSI-SR application. Furthermore, the codes
and datasets will be available at:
https://github.com/danfenghong/ECCV2020_CUCaNet
Hyperspectral and Multispectral Image Fusion using Optimized Twin Dictionaries
Spectral or spatial dictionaries have been widely used in fusing low-spatial-resolution hyperspectral (LH) images and high-spatial-resolution multispectral (HM) images. However, using only a spectral dictionary is insufficient for preserving spatial information, and vice versa. To address this problem, a new LH and HM image fusion method using optimized twin dictionaries, termed OTD, is proposed in this paper. The fusion problem of OTD is formulated analytically in the framework of sparse representation, as an optimization over twin spectral-spatial dictionaries and their corresponding sparse coefficients. More specifically, the spectral dictionary representing the generalized spectra and its spectral sparse coefficients are optimized by utilizing the observed LH and HM images in the spectral domain, while the spatial dictionary representing the spatial information and its spatial sparse coefficients are optimized by modeling the remaining high-frequency information in the spatial domain. In addition, without non-negativity constraints, the alternating direction method of multipliers (ADMM) is employed to implement the above optimization process. Comparisons with related state-of-the-art fusion methods on various datasets demonstrate that the proposed OTD method achieves better fusion performance in both the spatial and spectral domains.
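The sparse-coding subproblem that alternates with the dictionary update in such methods can be sketched with ISTA, a standard l1 solver (not necessarily the paper's exact scheme); the dictionary and sparse signal below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dictionary with unit-norm atoms and a 3-sparse signal.
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
alpha_true = np.zeros(60)
alpha_true[[3, 17, 42]] = [1.0, -0.8, 0.5]
y = D @ alpha_true

# ISTA for min_a ||y - D a||^2 + lam ||a||_1: a gradient step on the
# quadratic term followed by elementwise soft-thresholding.
lam = 0.01
L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
a = np.zeros(60)
for _ in range(2000):
    g = a + (D.T @ (y - D @ a)) / L
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
```

In OTD this kind of update would be interleaved with re-optimizing the twin dictionaries themselves, which is what ADMM coordinates.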
Image fusion for spatial enhancement of hyperspectral image via pixel group based non-local sparse representation
Restricted by technical and budget constraints, hyperspectral images (HSIs) are usually obtained with low spatial resolution. In order to improve the spatial resolution of a given hyperspectral image, a new spatial and spectral image fusion approach via pixel group based non-local sparse representation is proposed, which exploits the spectral sparsity and spectral non-local self-similarity of the hyperspectral image. The proposed approach fuses the hyperspectral image with a high-spatial-resolution multispectral image of the same scene to obtain a hyperspectral image with high spatial and spectral resolutions. The input hyperspectral image is used to train the spectral dictionary, while the sparse codes of the desired HSI are estimated by jointly encoding the similar pixels in each pixel group extracted from the high-spatial-resolution multispectral image. To improve the accuracy of the pixel group based non-local sparse representation, the similar pixels in a pixel group are selected by utilizing both spectral and spatial information. The performance of the proposed approach is tested on two remote sensing image datasets. Experimental results suggest that the proposed method outperforms a number of sparse-representation-based fusion techniques, and can preserve the spectral information while recovering spatial details under large magnification factors.
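The pixel-grouping step can be sketched in its simplest form (a synthetic MSI and a spectral-distance-only criterion; the paper's selection additionally uses spatial information):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 8x8 MSI with 4 bands, flattened to (pixels, bands).
msi = rng.random((64, 4))

# For a reference pixel, pick the k most spectrally similar pixels by
# Euclidean distance; these form the group that is jointly sparse-coded.
def spectral_neighbors(msi, ref_idx, k):
    d = np.linalg.norm(msi - msi[ref_idx], axis=1)
    return np.argsort(d)[:k]          # includes ref_idx itself (distance 0)

group = spectral_neighbors(msi, ref_idx=10, k=5)
```

Jointly encoding all pixels in such a group, rather than each pixel independently, is what gives the non-local sparse representation its robustness.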
Hyperspectral Image Super-Resolution Using Optimization and DCNN-Based Methods
Reconstructing a high-resolution (HR) hyperspectral (HS) image from an observed low-resolution (LR) hyperspectral image or a high-resolution multispectral (RGB) image obtained with existing imaging cameras is an important research topic for capturing comprehensive scene information in both the spatial and spectral domains. HR-HS image reconstruction mainly follows two research strategies: optimization-based methods and deep convolutional neural network (DCNN)-based learning methods. The optimization-based approaches estimate the HR-HS image by minimizing the reconstruction errors of the available low-resolution hyperspectral and high-resolution multispectral images under different prior constraints, such as representation sparsity, spectral physical properties, and spatial smoothness. Recently, deep convolutional neural networks have been applied to resolution enhancement of natural images and have been shown to achieve promising performance. This chapter provides a comprehensive description of not only the conventional optimization-based methods but also the recently investigated DCNN-based learning methods for HS image super-resolution, which mainly include spectral reconstruction CNNs and spatial-spectral fusion CNNs. Experimental results on benchmark datasets validate the effectiveness of HS image super-resolution in both quantitative measures and visual quality.
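The two reconstruction-error terms that the optimization-based strategy minimizes can be sketched with plain gradient descent on synthetic data (all sizes and degradation operators below are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical HR-HS image Z (20 bands x 64 pixels), observed as a
# spatially downsampled LR-HS image Yh and a spectrally degraded MS Ym.
Z_true = rng.random((20, 64))
S = np.kron(np.eye(16), np.full((4, 1), 0.25))   # 64 -> 16 pixel averaging
R = np.kron(np.eye(4), np.full((1, 5), 0.2))     # each MS band averages 5 HS bands
Yh = Z_true @ S
Ym = R @ Z_true

# Gradient descent on f(Z) = ||Yh - Z S||_F^2 + ||Ym - R Z||_F^2.
# Real methods add prior terms (sparsity, smoothness, ...) to resolve
# the ambiguity these two data terms leave.
Z = np.zeros_like(Z_true)
step = 0.5
for _ in range(2000):
    grad = -2 * (Yh - Z @ S) @ S.T - 2 * R.T @ (Ym - R @ Z)
    Z -= step * grad
```

Both observations are reproduced almost exactly, while Z itself is only determined up to the null space of the two degradations, which is precisely the gap the priors (or the DCNN-based alternatives) must fill.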