94 research outputs found
Complex-valued neural network for hyperspectral single image super resolution
Remote sensing applications are nowadays widely spread across various industrial fields, such as mineral and water exploration, geo-structural mapping, and natural hazards analysis. These applications require that image processing tasks, such as segmentation, object detection, and classification, be highly accurate. This can be achieved with relative ease if the given image has high spatial resolution as well as high spectral resolution. However, due to sensor limitations, spatial and spectral resolutions have an inherently inverse relationship and cannot be achieved simultaneously. Hyperspectral Images (HSI) have high spectral resolution, but suffer from low spatial resolution, which hinders utilizing them to their full potential. One of the most widely used approaches to enhance spatial resolution is Single Image Super Resolution (SISR). In recent years, Deep Convolutional Neural Networks (DCNNs) have been widely used for HSI enhancement, as they have shown superiority over other traditional methods. Nonetheless, researchers still aspire to enhance HSI quality further while overcoming common challenges, such as spectral distortions. Research has shown that properties of natural images can be easily captured using complex numbers. However, this has not been thoroughly investigated from the perspective of HSI SISR. In this paper, we propose a variation of a Complex Valued Neural Network (CVNN) architecture for HSI spatial enhancement. The benefits of approaching the problem from a frequency-domain perspective are examined, and the proposed network is compared to its real-valued counterpart and other state-of-the-art approaches. The evaluation and comparison are recorded qualitatively by visual comparison, and quantitatively using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Spectral Angle Mapper (SAM).
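Complex-valued layers of the kind this abstract refers to are commonly implemented with four real-valued operations per complex product, following (A + iB)(x + iy) = (Ax - By) + i(Ay + Bx). A minimal NumPy sketch of one complex linear layer (the function name and layer shape are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def complex_linear(x_re, x_im, W_re, W_im):
    """Apply a complex weight matrix to a complex input using real arrays:
    (W_re + i W_im)(x_re + i x_im) = (W_re x_re - W_im x_im)
                                   + i (W_im x_re + W_re x_im)."""
    out_re = x_re @ W_re.T - x_im @ W_im.T
    out_im = x_re @ W_im.T + x_im @ W_re.T
    return out_re, out_im

# sanity check against NumPy's native complex arithmetic
rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
W = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
re, im = complex_linear(x.real, x.imag, W.real, W.imag)
assert np.allclose(re + 1j * im, W @ x)
```

The same decomposition extends to convolutions by replacing the matrix products with real-valued convolutions on each component.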
Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead
Panchromatic and multispectral image fusion, termed pan-sharpening, aims to merge the spatial and spectral information of the source images into a fused one, which has a higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images. It has been widely applied to image interpretation and pre-processing in various applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships among panchromatic and multispectral images. In recent years, the fast development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques. However, this field lacks a comprehensive overview of recent advances boosted by the rise of AI and DL. This paper provides a comprehensive review of a variety of pan-sharpening methods that adopt four different paradigms, i.e., component substitution, multiresolution analysis, degradation model, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined to present various assessment methods in terms of reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment. In addition, the survey summarizes the development trends in these areas, which provide useful methodological practices for researchers and professionals, before concluding with a summary of developments in pan-sharpening. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement around the research directions to be followed in this exciting area.
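Of the four paradigms listed, component substitution is the simplest to illustrate. A classical baseline in that family is the Brovey transform, which rescales each multispectral band by the ratio of the panchromatic image to a crude intensity component. A minimal NumPy sketch (the mean-intensity model and function name are simplifying assumptions):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-8):
    """Component-substitution pan-sharpening (Brovey transform).

    ms  -- (H, W, C) multispectral image, upsampled to the pan grid
    pan -- (H, W) panchromatic image
    """
    intensity = ms.mean(axis=2)        # crude intensity component of the MS image
    ratio = pan / (intensity + eps)    # per-pixel spatial detail to inject
    return ms * ratio[..., None]       # rescale every band by the ratio
```

When the panchromatic image equals the MS intensity, the output reduces to the input, so all injected detail comes from the pan/intensity mismatch.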
Deep Hyperspectral and Multispectral Image Fusion with Inter-image Variability
Hyperspectral and multispectral image fusion allows us to overcome the
hardware limitations of hyperspectral imaging systems inherent to their lower
spatial resolution. Nevertheless, existing algorithms usually fail to consider
realistic image acquisition conditions. This paper presents a general imaging
model that considers inter-image variability of data from heterogeneous sources
and flexible image priors. The fusion problem is stated as an optimization
problem in the maximum a posteriori framework. We introduce an original image
fusion method that, on the one hand, solves the optimization problem accounting
for inter-image variability with an iteratively reweighted scheme and, on the
other hand, that leverages light-weight CNN-based networks to learn realistic
image priors from data. In addition, we propose a zero-shot strategy to
directly learn the image-specific prior of the latent images in an unsupervised
manner. The performance of the algorithm is illustrated with real data subject
to inter-image variability.
Comment: IEEE Trans. Geosci. Remote Sens., to be published. Manuscript submitted August 23, 2022; revised Dec. 15, 2022, and Mar. 13, 2023; accepted Apr. 07, 202
A review of spatial enhancement of hyperspectral remote sensing imaging techniques
Remote sensing technology has undeniable importance in various industrial applications, such as mineral exploration, plant detection, defect detection in aerospace and shipbuilding, and optical gas imaging, to name a few. Remote sensing technology has been continuously evolving, offering a range of image modalities that can facilitate the aforementioned applications. One such modality is Hyperspectral Imaging (HSI). Unlike Multispectral Images (MSI) and natural images, HSI consist of hundreds of bands. Despite their high spectral resolution, HSI suffer from low spatial resolution in comparison to their MSI counterpart, which hinders the utilization of their full potential. Therefore, spatial enhancement, or Super Resolution (SR), of HSI is a classical problem that has been gaining rapid attention over the past two decades. The literature is rich with SR algorithms that enhance the spatial resolution of HSI while preserving their spectral fidelity. This paper reviews and discusses the most important algorithms relevant to this area of research between 2002 and 2022, along with the most frequently used datasets, HSI sensors, and quality metrics. Meta-analyses are drawn from the aforementioned information and used as a foundation that summarizes the state of the field in a way that bridges the past and the present, identifies the current gaps, and recommends possible future directions.
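Among the quality metrics these reviews rely on, the Spectral Angle Mapper (SAM) is specific to spectral imaging: it measures the angle between reference and estimated pixel spectra and is invariant to per-pixel intensity scaling. A small NumPy sketch of a common formulation (averaging the angle over all pixels; not tied to any one paper's code):

```python
import numpy as np

def spectral_angle_mapper(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between two (H, W, B) image cubes."""
    ref = ref.reshape(-1, ref.shape[-1])   # flatten to (pixels, bands)
    est = est.reshape(-1, est.shape[-1])
    num = (ref * est).sum(axis=1)
    den = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1) + eps
    # clip guards against arccos of values slightly outside [-1, 1]
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()
```

Because SAM compares directions rather than magnitudes, a uniformly brightened reconstruction scores the same angle as the original, which is why it is paired with PSNR/SSIM rather than used alone.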
Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution
Since the number of incident energies is limited, it is difficult to directly
acquire hyperspectral images (HSI) with high spatial resolution. Considering
the high dimensionality and correlation of HSI, super-resolution (SR) of HSI
remains a challenge in the absence of auxiliary high-resolution images.
Furthermore, it is very important to extract the spatial features effectively
and make full use of the spectral information. This paper proposes a novel HSI
super-resolution algorithm, termed dual-domain network based on hybrid
convolution (SRDNet). Specifically, a dual-domain network is designed to fully
exploit the spatial-spectral and frequency information among the hyper-spectral
data. To capture inter-spectral self-similarity, a self-attention learning
mechanism (HSL) is devised in the spatial domain. Meanwhile, a pyramid
structure is applied to increase the receptive field of the attention, which
further reinforces the feature representation ability of the network. Moreover,
to further improve the perceptual quality of HSI, a frequency loss (HFL) is
introduced to optimize the model in the frequency domain. A dynamic weighting
mechanism drives the network to gradually refine the generated frequency
components and to suppress the excessive smoothing caused by the spatial loss.
Finally, to better capture the mapping between the high-resolution and
low-resolution spaces, a hybrid module of 2D and 3D units with a progressive
upsampling strategy is utilized in our method. Experiments on a widely used
benchmark dataset illustrate that the proposed SRDNet enhances the texture
information of HSI and is superior to state-of-the-art methods.
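A frequency-domain loss of the kind this abstract describes can be sketched by comparing Fourier magnitude spectra of the prediction and the target; the exact formulation of the paper's HFL is not given here, so this L1-on-magnitudes form is an assumption:

```python
import numpy as np

def frequency_loss(pred, target):
    """Mean absolute difference between 2D Fourier magnitude spectra.

    Penalizing magnitude error emphasizes missing high-frequency content
    (texture) that a pure spatial L1/L2 loss tends to smooth away.
    """
    return np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))).mean()
```

Since the magnitude spectrum is translation-invariant, such a loss is typically combined with a spatial loss rather than used alone.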
Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency
Hyperspectral images (HSI), whose abundant spectral information reflects
material properties, usually have low spatial resolution due to hardware
limits. Meanwhile, multispectral images (MSI), e.g., RGB images, have a high
spatial resolution but deficient spectral signatures. Hyperspectral and
multispectral image fusion can be cost-effective and efficient for acquiring
both high spatial resolution and high spectral resolution images. Many of the
conventional HSI and MSI fusion algorithms rely on known spatial degradation
parameters, i.e., point spread function, spectral degradation parameters,
spectral response function, or both of them. Another class of deep
learning-based models relies on the ground truth of high spatial resolution HSI
and needs large amounts of paired training images when working in a supervised
manner. Both of these models are limited in practical fusion scenarios. In this
paper, we propose an unsupervised HSI and MSI fusion model based on the cycle
consistency, called CycFusion. The CycFusion learns the domain transformation
between low spatial resolution HSI (LrHSI) and high spatial resolution MSI
(HrMSI), and the desired high spatial resolution HSI (HrHSI) are considered to
be intermediate feature maps in the transformation networks. The CycFusion can
be trained with the objective functions of marginal matching in single
transform and cycle consistency in double transforms. Moreover, the estimated
PSF and SRF are embedded in the model as the pre-training weights, which
further enhances the practicality of our proposed model. Experiments conducted
on several datasets show that our proposed model outperforms all compared
unsupervised fusion methods. The code of this paper will be available at
https://github.com/shuaikaishi/CycFusion for reproducibility.
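The marginal-matching terms in such an unsupervised objective can be made concrete with toy degradation operators. In the sketch below (NumPy; the block-average downsampling, random SRF matrix, and function names are illustrative assumptions, not CycFusion's actual networks), the candidate HrHSI is checked for consistency against both observed images:

```python
import numpy as np

def srf_apply(hsi, R):
    """Spectral degradation: SRF matrix R maps (H, W, B) HSI to (H, W, b) MSI."""
    return hsi @ R.T

def downsample(img, s=2):
    """Spatial degradation: s x s block averaging, (H, W, C) -> (H/s, W/s, C)."""
    H, W, C = img.shape
    return img.reshape(H // s, s, W // s, s, C).mean(axis=(1, 3))

def marginal_matching_loss(lr_hsi, hr_msi, hr_hsi, R):
    """Consistency of a candidate HrHSI with both observations."""
    loss_spec = np.abs(srf_apply(hr_hsi, R) - hr_msi).mean()  # degrade spectrally
    loss_spat = np.abs(downsample(hr_hsi) - lr_hsi).mean()    # degrade spatially
    return loss_spec + loss_spat
```

A perfect HrHSI candidate drives both terms to zero; in CycFusion these constraints are complemented by cycle-consistency terms across the learned domain transforms.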
A Spectral Diffusion Prior for Hyperspectral Image Super-Resolution
Fusion-based hyperspectral image (HSI) super-resolution aims to produce a
high-spatial-resolution HSI by fusing a low-spatial-resolution HSI and a
high-spatial-resolution multispectral image. Such an HSI super-resolution
process can be modeled as an inverse problem, where the prior knowledge is
essential for obtaining the desired solution. Motivated by the success of
diffusion models, we propose a novel spectral diffusion prior for fusion-based
HSI super-resolution. Specifically, we first investigate the spectrum
generation problem and design a spectral diffusion model to model the spectral
data distribution. Then, in the framework of maximum a posteriori, we keep the
transition information between every two neighboring states during the reverse
generative process, and thereby embed the knowledge of the trained spectral
diffusion model into the fusion problem in the form of a regularization term.
At last, we treat each generation step of the final optimization problem as a
subproblem, and employ the Adam optimizer to solve these subproblems in a
reverse sequence. Experimental results conducted on both synthetic and real
datasets demonstrate the effectiveness of the proposed approach. The code of
the proposed approach will be available at https://github.com/liuofficial/SDP.
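The per-subproblem use of Adam can be illustrated with a self-contained implementation of the standard Adam update applied to a toy regularized subproblem (a generic sketch, not the authors' code; the quadratic objective is an illustrative stand-in for one fusion subproblem):

```python
import numpy as np

def adam_minimize(grad, x0, steps=2000, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """Plain Adam: biased first/second moment estimates with bias correction."""
    x = x0.astype(float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment (variance) estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# toy subproblem: min_x ||x - b||^2 + ||x||^2  (data term + prior term),
# whose closed-form minimizer is b / 2
b = np.array([1.0, 2.0])
x_star = adam_minimize(lambda x: 2.0 * (x - b) + 2.0 * x, np.zeros(2))
```

In the paper's setting, the prior term would come from the trained spectral diffusion model rather than a simple quadratic.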
A Theoretically Guaranteed Quaternion Weighted Schatten p-norm Minimization Method for Color Image Restoration
Inspired by the fact that the matrix formed by nonlocal similar patches
in a natural image is of low rank, the rank approximation issue has been
extensively investigated over the past decades, among which weighted nuclear
norm minimization (WNNM) and weighted Schatten p-norm minimization (WSNM) are
two prevailing methods that have shown great superiority in various image
restoration (IR) problems. Due to the physical characteristic of color images,
color image restoration (CIR) is often a much more difficult task than its
grayscale image counterpart. However, when applied to CIR, the traditional
WNNM/WSNM method only processes three color channels individually and fails to
consider their cross-channel correlations. Very recently, a quaternion-based
WNNM approach (QWNNM) has been developed to mitigate this issue, which is
capable of representing the color image as a whole in the quaternion domain and
preserving the inherent correlation among the three color channels. Despite its
empirical success, unfortunately, the convergence behavior of QWNNM has not
been strictly studied yet. In this paper, on the one side, we extend the WSNM
into quaternion domain and correspondingly propose a novel quaternion-based
WSNM model (QWSNM) for tackling the CIR problems. Extensive experiments on two
representative CIR tasks, including color image denoising and deblurring,
demonstrate that the proposed QWSNM method performs favorably against many
state-of-the-art alternatives, in both quantitative and qualitative
evaluations. On the other side, more importantly, we preliminarily provide a
theoretical convergence analysis, that is, by modifying the quaternion
alternating direction method of multipliers (QADMM) through a simple
continuation strategy, we theoretically prove that both the solution sequences
generated by the QWNNM and QWSNM have fixed-point convergence guarantees.
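The proximal step underlying WNNM-style methods soft-thresholds each singular value by its own weight. A real-valued NumPy sketch is below (exact when the weights are non-descending with respect to the descending singular values; QWNNM/QWSNM apply the same idea to quaternion matrices, which NumPy does not model):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Proximal operator of the weighted nuclear norm:
    shrink the i-th singular value of Y by weights[i], clamp at zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    # (U * s_shrunk) scales U's columns, equivalent to U @ diag(s_shrunk)
    return (U * s_shrunk) @ Vt
```

Larger weights on smaller singular values suppress noise-dominated components while better preserving the dominant structure, which is the key advantage over the uniform threshold of plain nuclear norm minimization.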
Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model
Hyperspectral images (HSI) have a large amount of spectral information
reflecting the characteristics of matter, while their spatial resolution is low
due to the limitations of imaging technology. Complementary to this are
multispectral images (MSI), e.g., RGB images, with high spatial resolution but
insufficient spectral bands. Hyperspectral and multispectral image fusion is a
technique for acquiring ideal images that have both high spatial and high
spectral resolution cost-effectively. Many existing HSI and MSI fusion
algorithms rely on known imaging degradation models, which are often not
available in practice. In this paper, we propose a deep fusion method based on
the conditional denoising diffusion probabilistic model, called DDPM-Fus.
Specifically, the DDPM-Fus contains the forward diffusion process which
gradually adds Gaussian noise to the high spatial resolution HSI (HrHSI) and
another reverse denoising process which learns to predict the desired HrHSI
from its noisy version conditioning on the corresponding high spatial
resolution MSI (HrMSI) and low spatial resolution HSI (LrHSI). Once the
training is complete, the proposed DDPM-Fus implements the reverse process on
the test HrMSI and LrHSI to generate the fused HrHSI. Experiments conducted on
one indoor and two remote sensing datasets show the superiority of the proposed
model when compared with other advanced deep learning-based fusion methods. The
code of this work will be open-sourced at
https://github.com/shuaikaishi/DDPMFus for reproducibility.
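The forward diffusion process described above admits a closed form for sampling the noisy state x_t directly from x_0, which is what makes DDPM training efficient. A minimal NumPy sketch (the linear beta schedule values are illustrative, not taken from the paper):

```python
import numpy as np

def q_sample(x0, t, alphas_cumprod, rng):
    """Closed-form forward diffusion:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise,  noise ~ N(0, I)."""
    abar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise

# illustrative linear beta schedule and its cumulative alpha products
betas = np.linspace(1e-4, 0.02, 100)
alphas_cumprod = np.cumprod(1.0 - betas)
```

As t grows, abar_t shrinks toward zero and x_t approaches pure Gaussian noise; the conditional reverse process then learns to undo this corruption given the HrMSI and LrHSI.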