Guided Deep Decoder: Unsupervised Image Pair Fusion
The fusion of input and guidance images that have a tradeoff in their
information (e.g., hyperspectral and RGB image fusion or pansharpening) can be
interpreted as one general problem. However, previous studies applied a
task-specific handcrafted prior and did not address the problems with a unified
approach. To address this limitation, in this study, we propose a guided deep
decoder network as a general prior. The proposed network is composed of an
encoder-decoder network that exploits multi-scale features of a guidance image
and a deep decoder network that generates an output image. The two networks are
connected by feature refinement units to embed the multi-scale features of the
guidance image into the deep decoder network. The proposed network allows the
network parameters to be optimized in an unsupervised way without training
data. Our results show that the proposed network can achieve state-of-the-art
performance in various image fusion problems. Comment: ECCV 202
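The abstract above describes fitting network parameters in an unsupervised way so that the output both explains the coarse observation and follows the structure of a guidance image. Below is a minimal illustrative sketch of that idea, not the guided-deep-decoder architecture itself: the learned network prior is replaced here by a simple guidance-weighted smoothness prior, and the degradation model is assumed to be plain average-pooling. All function names and parameters are hypothetical.

```python
import numpy as np

def downsample(x, f):
    """Average-pool a 2-D image by factor f (assumed stand-in for the degradation model)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def fuse(y_coarse, guide, f=4, lam=0.1, lr=0.5, iters=300):
    """Unsupervised fusion by gradient descent on
        ||downsample(x) - y_coarse||^2 + lam * guided smoothness(x),
    where the smoothness weights suppress penalties across guidance edges
    (a crude substitute for the paper's network prior)."""
    x = np.kron(y_coarse, np.ones((f, f)))  # nearest-neighbour upsampling as init
    # Guidance weights: large where the guide is flat, small across its edges.
    w_v = np.exp(-10.0 * np.abs(np.diff(guide, axis=0)))
    w_h = np.exp(-10.0 * np.abs(np.diff(guide, axis=1)))
    for _ in range(iters):
        # Gradient of the data term: spread the coarse residual back to fine pixels.
        r = downsample(x, f) - y_coarse
        g_data = np.kron(r, np.ones((f, f))) / (f * f)
        # Gradient of the guidance-weighted smoothness term.
        dv, dh = np.diff(x, axis=0), np.diff(x, axis=1)
        g_smooth = np.zeros_like(x)
        g_smooth[:-1, :] -= w_v * dv
        g_smooth[1:, :] += w_v * dv
        g_smooth[:, :-1] -= w_h * dh
        g_smooth[:, 1:] += w_h * dh
        x -= lr * (g_data + lam * g_smooth)
    return x
```

The same optimize-at-test-time pattern applies when the prior is a network: the loss stays the data-fit term, and only the parameterization of `x` changes.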
Information Loss-Guided Multi-Resolution Image Fusion
Spatial downscaling is an ill-posed, inverse problem, and information loss (IL) inevitably exists in the predictions produced by any downscaling technique. The recently popularized area-to-point kriging (ATPK)-based downscaling approach can account for the size of support and the point spread function (PSF) of the sensor, and moreover, it has the appealing advantage of the perfect coherence property. In this article, based on the advantages of ATPK and the conceptualization of IL, an IL-guided image fusion (ILGIF) approach is proposed. ILGIF uses the fine spatial resolution images acquired in other wavelengths to predict the IL in ATPK predictions based on the geographically weighted regression (GWR) model, which accounts for the spatial variation in land cover. ILGIF inherits all the advantages of ATPK, and its prediction has perfect coherence with the original coarse spatial resolution data, which can be demonstrated mathematically. ILGIF was validated using two data sets and was shown in each case to predict downscaled images more accurately than the compared benchmark methods.
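The "perfect coherence" property mentioned above means that re-aggregating the downscaled prediction reproduces the original coarse image exactly. The sketch below illustrates only that property with a simple mean-shift correction under an assumed ideal square PSF; it is not ATPK itself, which derives its weights from the semivariogram and the sensor PSF, nor the GWR-based IL prediction. Names and parameters are hypothetical.

```python
import numpy as np

def enforce_coherence(fine_pred, coarse, f):
    """Shift each f-by-f block of a fine-resolution prediction so that its
    block mean matches the corresponding coarse pixel exactly.
    Assumes the coarse pixel is the plain average of its fine pixels."""
    h, w = fine_pred.shape
    block_means = fine_pred.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    # Add the per-block residual back uniformly within each block.
    return fine_pred + np.kron(coarse - block_means, np.ones((f, f)))
```

After this correction, average-aggregating the output at the coarse scale returns the coarse input exactly, whatever the initial fine prediction was.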
Model Inspired Autoencoder for Unsupervised Hyperspectral Image Super-Resolution
This paper focuses on hyperspectral image (HSI) super-resolution that aims to
fuse a low-spatial-resolution HSI and a high-spatial-resolution multispectral
image to form a high-spatial-resolution HSI (HR-HSI). Existing deep
learning-based approaches are mostly supervised, relying on large numbers of
labeled training samples, which is unrealistic. The commonly used model-based
approaches are unsupervised and flexible but rely on handcrafted priors.
Inspired by the specific properties of the model, we make the first attempt to
design a model inspired deep network for HSI super-resolution in an
unsupervised manner. This approach consists of an implicit autoencoder network
built on the target HR-HSI that treats each pixel as an individual sample. The
nonnegative matrix factorization (NMF) of the target HR-HSI is integrated into
the autoencoder network, where the two NMF parts, spectral and spatial
matrices, are treated as decoder parameters and hidden outputs, respectively. In
the encoding stage, we present a pixel-wise fusion model to estimate hidden
outputs directly, and then reformulate and unfold the model's algorithm to form
the encoder network. With the specific architecture, the proposed network is
similar to a manifold prior-based model and can be trained patch by patch
rather than on the entire image. Moreover, we propose an additional unsupervised
network to estimate the point spread function and spectral response function.
Experimental results conducted on both synthetic and real datasets demonstrate
the effectiveness of the proposed approach.
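The abstract above builds on the nonnegative matrix factorization of the target HR-HSI, with the spectral matrix acting as decoder parameters and the abundance matrix as hidden outputs. The sketch below shows only that underlying NMF step with plain multiplicative updates; it is not the paper's autoencoder, encoder unfolding, or PSF/SRF estimation network, and all names are hypothetical.

```python
import numpy as np

def nmf(X, p, iters=500, eps=1e-9):
    """Factor a nonnegative matrix X (bands x pixels) as X ~= E @ A with
    E, A >= 0, using Lee-Seung multiplicative updates. In the model-inspired
    reading, E (p spectral signatures) plays the decoder weights and
    A (per-pixel abundances) the hidden outputs."""
    rng = np.random.default_rng(0)
    b, n = X.shape
    E = rng.random((b, p)) + eps
    A = rng.random((p, n)) + eps
    for _ in range(iters):
        # Multiplicative updates preserve nonnegativity by construction.
        A *= (E.T @ X) / (E.T @ E @ A + eps)
        E *= (X @ A.T) / (E @ A @ A.T + eps)
    return E, A
```

Because each pixel is a column of `A`, the factorization can be fit on subsets of pixels (patches) independently once `E` is fixed, which is consistent with the patch-by-patch training the abstract describes.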