Hyperspectral and Multispectral Image Fusion using Optimized Twin Dictionaries
Spectral and spatial dictionaries have been widely used for fusing low-spatial-resolution hyperspectral (LH) images with high-spatial-resolution multispectral (HM) images. However, using only a spectral dictionary is insufficient for preserving spatial information, and vice versa. To address this problem, a new LH and HM image fusion method, termed OTD, using optimized twin dictionaries is proposed in this paper. The fusion problem of OTD is formulated analytically in the framework of sparse representation, as an optimization of twin spectral-spatial dictionaries and their corresponding sparse coefficients. More specifically, the spectral dictionary representing the generalized spectra and its spectral sparse coefficients are optimized by utilizing the observed LH and HM images in the spectral domain, while the spatial dictionary representing the spatial information and its spatial sparse coefficients are optimized by modeling the remaining high-frequency information in the spatial domain. In addition, without non-negativity constraints, the alternating direction method of multipliers (ADMM) is employed to implement the above optimization. Comparisons with related state-of-the-art fusion methods on various datasets demonstrate that the proposed OTD method achieves better fusion performance in both the spatial and spectral domains.
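The abstract does not give the ADMM updates, but the sparse-coding subproblem it describes has a standard form. As a minimal hedged sketch (the lasso objective, variable names, and parameter values below are illustrative assumptions, not the paper's exact formulation), here is ADMM solving min_x 0.5||Dx - y||^2 + lam||x||_1 for a fixed dictionary D:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, y, lam=0.1, rho=1.0, iters=300):
    """Solve min_x 0.5||Dx - y||^2 + lam||x||_1 via ADMM (x/z splitting)."""
    n = D.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the Cholesky factor of (D^T D + rho I) since it never changes
    L = np.linalg.cholesky(D.T @ D + rho * np.eye(n))
    Dty = D.T @ y
    for _ in range(iters):
        # x-update: ridge-like least squares with the current z - u
        x = np.linalg.solve(L.T, np.linalg.solve(L, Dty + rho * (z - u)))
        # z-update: soft thresholding enforces sparsity
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the consensus constraint x = z
        u += x - z
    return z
```

In a twin-dictionary setting, an analogous solve would run once per dictionary, alternating with dictionary updates.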
Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image Super-resolution
Hyperspectral images have become increasingly important due to their abundant spectral information. However, they have poor spatial resolution owing to limitations of current imaging mechanisms. Many convolutional neural networks (CNNs) have been proposed for the hyperspectral image super-resolution problem, but CNN-based methods capture only local information rather than global information, because of the limited receptive field of the convolution kernel. In this paper, we design a transformer-based network that fuses low-resolution hyperspectral images (LR-HSIs) and high-resolution multispectral images to obtain high-resolution hyperspectral images. Thanks to the representation ability of the transformer, our approach is able to explore the intrinsic relationships of features globally. Furthermore, since the LR-HSI holds the main spectral structure, the network focuses on estimating spatial details, relieving it of the burden of reconstructing the whole data cube. This reduces the mapping space of the proposed network, which enhances the final performance. Various experiments and quality indexes show our approach's superiority over other state-of-the-art methods.
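The contrast drawn above between a CNN's local receptive field and a transformer's global view comes down to the attention operation, where every token (here, a pixel's feature vector) attends to every other token. A minimal numpy sketch of scaled dot-product self-attention (identity Q/K/V projections for brevity; purely illustrative, not the Fusformer architecture):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over all tokens.
    X: (n_tokens, d) features. Every row attends to every other row,
    so the output mixes information globally, unlike a local conv kernel."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                # (n, n) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # rows are softmax weights
    return A @ X                                 # convex combination of all tokens
```

Because each output row is a weighted average over all input rows, the dependency range is the whole image in a single layer, whereas a stack of 3x3 convolutions grows the receptive field only gradually.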
Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution
The recent advancement of deep learning techniques has made great progress on
hyperspectral image super-resolution (HSI-SR). Yet the development of
unsupervised deep networks remains challenging for this task. To this end, we
propose a novel coupled unmixing network with a cross-attention mechanism,
CUCaNet for short, to enhance the spatial resolution of HSI by means of
higher-spatial-resolution multispectral image (MSI). Inspired by coupled
spectral unmixing, a two-stream convolutional autoencoder framework is taken as
the backbone to jointly decompose MS and HS data into a spectrally meaningful basis
and corresponding coefficients. CUCaNet is capable of adaptively learning
spectral and spatial response functions from HS-MS correspondences by enforcing
reasonable consistency assumptions on the networks. Moreover, a cross-attention
module is devised to yield more effective spatial-spectral information transfer
in networks. Extensive experiments are conducted on three widely-used HS-MS
datasets in comparison with state-of-the-art HSI-SR models, demonstrating the
superiority of CUCaNet in the HSI-SR application. Furthermore, the codes
and datasets will be available at:
https://github.com/danfenghong/ECCV2020_CUCaNet
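The consistency assumptions mentioned above can be made concrete with the classical coupled observation model behind unmixing-based fusion: both inputs are degraded views of one latent HR-HSI Z = E A, via a spectral response R and a spatial degradation B. A toy numpy check (all shapes and operators below are illustrative assumptions, not CUCaNet's learned components):

```python
import numpy as np

rng = np.random.default_rng(0)
p, L, l, N, n = 4, 100, 6, 64, 16      # endmembers, HS bands, MS bands, HR pixels, LR pixels
E = rng.random((L, p))                 # spectrally meaningful basis (endmembers)
A = rng.random((p, N)); A /= A.sum(0)  # corresponding coefficients (abundances), sum-to-one
R = rng.random((l, L))                 # spectral response function (MS bands from HS bands)
B = np.kron(np.eye(n), np.full((1, N // n), n / N))  # spatial averaging operator, (n, N)

Z  = E @ A       # latent high-resolution HSI, (L, N)
HS = Z @ B.T     # observed LR-HSI: spatially degraded, (L, n)
MS = R @ Z       # observed HR-MSI: spectrally degraded, (l, N)
```

The checkable consistency is that degrading the HS image spectrally and the MS image spatially must land on the same quantity, R Z B^T; CUCaNet exploits this kind of relation to learn R and B from the HS-MS pair itself.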
Multi-scale spatial fusion and regularization induced unsupervised auxiliary task CNN model for deep super-resolution of hyperspectral image.
Hyperspectral images (HSI) feature rich spectral information in many narrow bands, but at the cost of relatively low spatial resolution. As such, various methods have been developed for enhancing the spatial resolution of a low-resolution HSI (Lr-HSI) by fusing it with high-resolution multispectral images (Hr-MSI). The differences in spectral range and spatial dimensions between the Lr-HSI and Hr-MSI are fundamental yet challenging for multispectral/hyperspectral (MS/HS) fusion. In this paper, a multi-scale spatial fusion and regularization induced auxiliary task (MSAT) based CNN model is proposed for deep super-resolution of HSI, where a Lr-HSI is fused with a Hr-MSI to reconstruct a high-resolution HSI (Hr-HSI) counterpart. The multi-scale fusion efficiently addresses the discrepancy in spatial resolution between the two inputs. Based on the general assumption that the acquired Hr-MSI and the reconstructed Hr-HSI share similar underlying characteristics, the auxiliary task is proposed to learn a representation that improves the generality of the model and reduces overfitting. Experimental results on three public datasets have validated the effectiveness of our approach in comparison with several state-of-the-art methods.
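The abstract does not detail the multi-scale fusion, but its purpose, bridging the resolution gap between the two inputs, can be sketched with a toy stand-in: progressively upsample the Lr-HSI and inject the Hr-MSI at each matching scale (channel concatenation here replaces the learned convolutions; every function name and shape is an assumption for illustration):

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbor 2x spatial upsampling; x: (C, H, W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2(x):
    """2x2 average-pool spatial downsampling; x: (C, H, W) with even H, W."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def multiscale_fuse(lr_hsi, hr_msi, scales=2):
    """Progressively upsample the LR-HSI, fusing MSI detail at each scale.
    lr_hsi: (C_h, H/2**scales, W/2**scales); hr_msi: (C_m, H, W).
    Channel concatenation stands in for the learned fusion layers."""
    feats = lr_hsi
    for s in range(scales, 0, -1):
        msi_s = hr_msi
        for _ in range(s - 1):            # bring the MSI down to the current scale
            msi_s = downsample2(msi_s)
        feats = upsample2(feats)          # move the HSI features up one scale
        feats = np.concatenate([feats, msi_s], axis=0)  # a learned conv would follow
    return feats
```

Fusing at every intermediate scale, rather than only after a single large upsampling step, is what lets the network handle a large resolution ratio gradually.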
Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-based algorithms
We propose a novel approach for hyperspectral super-resolution that is based
on low-rank tensor approximation for a coupled low-rank multilinear (Tucker)
model. We show that correct recovery holds for a wide range of multilinear
ranks. For coupled tensor approximation, we propose two SVD-based algorithms
that are simple and fast, yet achieve performance comparable to
state-of-the-art methods. The approach is applicable to the case of unknown
spatial degradation and to the pansharpening problem.
Comment: IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, in press.
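The building block of such SVD-based algorithms is the truncated higher-order SVD, which computes each Tucker factor from the left singular vectors of a mode unfolding. A minimal numpy sketch (the uncoupled HOSVD only; the paper's coupled variants and rank-selection details are not reproduced here):

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: rows indexed by mode m, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor U_m = leading left singular vectors of unfolding m."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = T
    for m, Um in enumerate(U):  # core tensor: project T onto each factor subspace
        G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, U

def tucker_reconstruct(G, U):
    """Multiply the core by every factor to rebuild the (approximate) tensor."""
    T = G
    for m, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T
```

With full multilinear ranks the reconstruction is exact; truncating the ranks gives the low-multilinear-rank approximation the coupled model builds on.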
Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability
Image fusion combines data from different heterogeneous sources to obtain
more precise information about an underlying scene. Hyperspectral-multispectral
(HS-MS) image fusion is currently attracting great interest in remote sensing
since it allows the generation of high spatial resolution HS images,
circumventing the main limitation of this imaging modality. Existing HS-MS
fusion algorithms, however, neglect the spectral variability often present
between images acquired at different time instants. This time difference causes
variations in spectral signatures of the underlying constituent materials due
to different acquisition and seasonal conditions. This paper introduces a novel
HS-MS image fusion strategy that combines an unmixing-based formulation with an
explicit parametric model for typical spectral variability between the two
images. Simulations with synthetic and real data show that the proposed
strategy leads to a significant performance improvement under spectral
variability, and state-of-the-art performance otherwise.
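The abstract does not state the parametric form of the variability model; a common choice in unmixing (as in the extended linear mixing model) is a per-endmember scaling between acquisition times. A hedged numpy sketch of that assumption, not necessarily the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(1)
L, p, N = 50, 3, 100                    # bands, endmembers, pixels
M1 = rng.random((L, p))                 # endmember spectra at the HS acquisition time
psi = np.array([0.9, 1.2, 1.05])        # per-endmember scaling (seasonal variability)
M2 = M1 * psi                           # parametric model: scaled endmembers at MS time
A = rng.random((p, N)); A /= A.sum(0)   # shared abundances (same scene layout)

Y1 = M1 @ A                             # scene as seen at time 1 (HS acquisition)
Y2 = M2 @ A                             # same scene at time 2 (MS acquisition)
```

A fusion method that ignores the change from M1 to M2 implicitly treats Y1 and Y2 as views of one spectral scene; estimating a few parameters like psi jointly with the abundances is what makes the fusion robust to the seasonal shift.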