Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
Hyperspectral imaging can help better understand the characteristics of
different materials, compared with traditional image systems. However, only
high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS)
images can generally be captured at video rate in practice. In this paper, we
propose a model-based deep learning approach that merges an HrMS image and an
LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically,
we construct a novel MS/HS fusion model which takes the observation models of
low-resolution images and the low-rankness knowledge along the spectral mode of
HrHS image into consideration. Then we design an iterative algorithm to solve
the model using the proximal gradient method. Then, by unfolding the designed
algorithm, we construct a deep network, called MS/HS Fusion Net, in which the
proximal operators and model parameters are learned by convolutional
neural networks. Experimental results on simulated and real data substantiate
the superiority of our method both visually and quantitatively as compared with
state-of-the-art methods along this line of research.
Comment: 10 pages, 7 figures
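The iterative scheme the abstract describes, a gradient step on the two observation models followed by a proximal mapping, can be sketched as follows. This is a generic proximal-gradient step, not the learned network itself: in MS/HS Fusion Net the proximal operator is a CNN, whereas the soft threshold, the random operators R and B, and the step sizes below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Stand-in proximal operator (soft thresholding for an l1 prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_step(X, Y_ms, Y_hs, R, B, eta=1e-3, tau=1e-4):
    """One proximal-gradient step on
    f(X) = 0.5||X R - Y_ms||^2 + 0.5||B X - Y_hs||^2.

    X : (pixels, bands)      current HrHS estimate
    R : (bands, ms_bands)    spectral response (HrHS -> HrMS), assumed known
    B : (lr_pixels, pixels)  spatial blur/downsampling (HrHS -> LrHS)
    """
    grad = (X @ R - Y_ms) @ R.T + B.T @ (B @ X - Y_hs)
    return soft_threshold(X - eta * grad, tau)

# Toy problem: simulate observations from a random "true" HrHS image.
rng = np.random.default_rng(0)
pixels, bands, ms_bands, lr_pixels = 64, 31, 3, 16
X_true = np.abs(rng.standard_normal((pixels, bands)))
R = np.abs(rng.standard_normal((bands, ms_bands)))
B = np.abs(rng.standard_normal((lr_pixels, pixels))) / pixels
Y_ms, Y_hs = X_true @ R, B @ X_true

# Unfolding this loop, one layer per iteration, gives the network structure.
X = np.zeros_like(X_true)
for _ in range(200):
    X = prox_grad_step(X, Y_ms, Y_hs, R, B)
```

In the unfolded network, each loop iteration becomes a layer, and the hand-crafted `soft_threshold` and the scalars `eta`, `tau` are replaced by learned CNN modules and parameters.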
GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection
Change detection (CD) is an important application of remote sensing that
provides timely change information about the large-scale Earth surface. With
the emergence of hyperspectral imagery, CD technology has advanced greatly, as
hyperspectral data with high spectral resolution can detect finer changes than
traditional multispectral imagery. Nevertheless,
the high dimensionality of hyperspectral data makes it difficult to implement
traditional CD algorithms. Moreover, endmember abundance information at the
subpixel level is often not fully utilized. To better handle the
high-dimensionality problem and exploit abundance information, this paper presents a General
End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image
change detection (HSI-CD). The main contributions of this work are threefold:
1) A mixed-affinity matrix that integrates subpixel representation is introduced
to mine more cross-channel gradient features and fuse multi-source information;
2) A 2-D CNN is designed to learn discriminative features effectively from
multi-source data at a higher level and enhance the generalization ability of
the proposed CD algorithm; 3) A new HSI-CD data set is designed for the
objective comparison of different methods. Experimental results on real
hyperspectral data sets demonstrate that the proposed method outperforms most
state-of-the-art methods.
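The mixed-affinity idea in contribution 1) can be illustrated with a toy sketch: for one pixel, the spectrum and its subpixel abundance vector at each date are stacked into a mixed feature vector, and the outer product of the two dates' vectors yields a 2-D map of cross-channel interactions that a 2-D CNN can consume. The unmixing step that produces the abundances is assumed given here, and all dimensions and names are illustrative, not GETNET's actual configuration.

```python
import numpy as np

def mixed_affinity(spec_t1, spec_t2, abund_t1, abund_t2):
    """Return a 2-D affinity map for one pixel.

    spec_*  : (bands,)      spectral signature at each acquisition date
    abund_* : (endmembers,) subpixel abundance fractions at each date
    """
    v1 = np.concatenate([spec_t1, abund_t1])  # mixed feature, date 1
    v2 = np.concatenate([spec_t2, abund_t2])  # mixed feature, date 2
    # Every entry pairs one channel of date 1 with one channel of date 2,
    # exposing cross-channel interactions as a 2-D image for a CNN.
    return np.outer(v1, v2)

bands, endmembers = 5, 3
rng = np.random.default_rng(1)
A = mixed_affinity(rng.random(bands), rng.random(bands),
                   rng.random(endmembers), rng.random(endmembers))
# A has shape (bands + endmembers, bands + endmembers) = (8, 8)
```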
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough, deep learning has proven to be an extremely powerful tool
in many fields. Shall we embrace deep learning as the key to everything?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we encourage remote
sensing scientists to bring their expertise into deep learning and to use it as
an implicit general model to tackle unprecedented, large-scale, influential
challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image Super-resolution
Hyperspectral imagery has become increasingly important due to its abundant
spectral information. However, it suffers from poor spatial resolution owing to
limitations of the current imaging mechanism. Nowadays, many convolutional
neural networks have been proposed for the hyperspectral image super-resolution
problem. However, convolutional neural network (CNN) based methods consider
only local information rather than global information, because the receptive
field is limited by the kernel size of the convolution operation. In this paper, we
design a network based on the transformer for fusing the low-resolution
hyperspectral images and high-resolution multispectral images to obtain the
high-resolution hyperspectral images. Thanks to the representational ability of
the transformer, our approach can explore the intrinsic relationships of
features globally. Furthermore, considering that the LR-HSI retains the main
spectral structure, the network focuses on estimating spatial details,
relieving it of the burden of reconstructing the entire data cube. This reduces
the mapping space of the proposed network and enhances the final performance.
Various experiments and quality indices show our approach's superiority
compared with other state-of-the-art methods.
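The global modelling the abstract attributes to the transformer comes from self-attention, in which every pixel token attends to every other token regardless of spatial distance, unlike a convolution whose reach ends at its kernel. Below is a minimal single-head sketch; it is generic attention, not the Fusformer architecture, and all dimensions and weight matrices are illustrative.

```python
import numpy as np

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention.

    tokens : (n, d) pixel features (one token per pixel)
    W*     : (d, d) query/key/value projection weights
    """
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])         # (n, n): all pairs of tokens
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    return attn @ V                                # each output mixes ALL inputs

rng = np.random.default_rng(2)
n, d = 16, 8                                       # 16 pixel tokens of dim 8
x = rng.standard_normal((n, d))
out = self_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))
```

The (n, n) score matrix is what makes the receptive field global: every output token is a weighted combination of all input tokens, at quadratic cost in the number of tokens.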
Model Inspired Autoencoder for Unsupervised Hyperspectral Image Super-Resolution
This paper focuses on hyperspectral image (HSI) super-resolution that aims to
fuse a low-spatial-resolution HSI and a high-spatial-resolution multispectral
image to form a high-spatial-resolution HSI (HR-HSI). Existing deep
learning-based approaches are mostly supervised and rely on a large number of
labeled training samples, which is unrealistic in practice. The commonly used
model-based approaches are unsupervised and flexible but rely on hand-crafted
priors. Inspired by the specific properties of the model, we make the first
attempt to design a model-inspired deep network for HSI super-resolution in an
unsupervised manner. This approach consists of an implicit autoencoder network
built on the target HR-HSI that treats each pixel as an individual sample. The
nonnegative matrix factorization (NMF) of the target HR-HSI is integrated into
the autoencoder network, where the two NMF parts, spectral and spatial
matrices, are treated as decoder parameters and hidden outputs, respectively. In
the encoding stage, we present a pixel-wise fusion model to estimate hidden
outputs directly, and then reformulate and unfold the model's algorithm to form
the encoder network. With the specific architecture, the proposed network is
similar to a manifold prior-based model and can be trained patch by patch
rather than on the entire image. Moreover, we propose an additional unsupervised
network to estimate the point spread function and spectral response function.
Experimental results conducted on both synthetic and real datasets demonstrate
the effectiveness of the proposed approach.
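The NMF view described above, with abundances as the hidden outputs and the spectral matrix as the decoder parameters, can be sketched with plain multiplicative-update NMF standing in for the learned encoder/decoder. The synthetic data, rank, and iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def nmf(Y, rank, iters=200, eps=1e-9, seed=0):
    """Factor nonnegative Y (pixels x bands) as Y ~ A @ S.

    A : (pixels, rank) abundances  -> the autoencoder's hidden outputs
    S : (rank, bands)  spectra     -> the decoder's parameters
    Uses classic multiplicative updates, which keep both factors nonnegative.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((Y.shape[0], rank))
    S = rng.random((rank, Y.shape[1]))
    for _ in range(iters):
        A *= (Y @ S.T) / (A @ S @ S.T + eps)   # update abundances
        S *= (A.T @ Y) / (A.T @ A @ S + eps)   # update spectra
    return A, S

# Synthetic HR-HSI: 100 pixels, 20 bands, generated from 4 endmembers.
rng = np.random.default_rng(3)
A_true = rng.random((100, 4))
S_true = rng.random((4, 20))
Y = A_true @ S_true
A, S = nmf(Y, rank=4)
err = np.linalg.norm(Y - A @ S) / np.linalg.norm(Y)
```

Because each pixel's row of Y depends only on that pixel's row of A, the factorization treats each pixel as an individual sample, which is what lets the proposed network train patch by patch rather than on the whole image.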