Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability
Image fusion combines data from different heterogeneous sources to obtain
more precise information about an underlying scene. Hyperspectral-multispectral
(HS-MS) image fusion is currently attracting great interest in remote sensing
since it allows the generation of high spatial resolution HS images,
circumventing the main limitation of this imaging modality. Existing HS-MS
fusion algorithms, however, neglect the spectral variability often existing
between images acquired at different time instants. This time difference causes
variations in spectral signatures of the underlying constituent materials due
to different acquisition and seasonal conditions. This paper introduces a novel
HS-MS image fusion strategy that combines an unmixing-based formulation with an
explicit parametric model for typical spectral variability between the two
images. Simulations with synthetic and real data show that the proposed
strategy leads to a significant performance improvement under spectral
variability and state-of-the-art performance otherwise.
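As a rough illustration of the unmixing-based formulation, the sketch below builds a toy linear mixing model in NumPy with hypothetical per-endmember scaling factors standing in for the seasonal spectral variability between the two acquisitions. All dimensions, the diagonal scaling model, and the average-pooling downsampler are assumptions chosen for illustration, not the paper's actual parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: L_h hyperspectral bands, L_m multispectral bands,
# P endmembers, N high-resolution pixels, downsampling factor d.
L_h, L_m, P, N, d = 20, 4, 3, 16, 4

M = rng.random((L_h, P))            # endmember spectra (HS acquisition)
R = rng.random((L_m, L_h))          # spectral response of the MS sensor
R /= R.sum(axis=1, keepdims=True)   # each MS band averages the HS bands

# Seasonal variability: each endmember in the MS acquisition is a scaled
# version of its HS counterpart (hypothetical per-endmember scaling).
psi = 0.8 + 0.4 * rng.random(P)
A = rng.dirichlet(np.ones(P), size=N).T   # abundances, columns sum to 1

# Simulated observations from the two sensors.
Y_m = R @ (M * psi) @ A                    # high-res MS image
Y_h = M @ A.reshape(P, N // d, d).mean(2)  # low-res HS image (avg pooling)

# One abundance-estimation step: least squares on the MS data using the
# variability-corrected endmembers (the paper alternates such updates
# with estimation of the variability model; here psi is assumed known).
A_hat, *_ = np.linalg.lstsq(R @ (M * psi), Y_m, rcond=None)
print(np.allclose(A_hat, A, atol=1e-6))   # exact toy data is recovered
```

Accounting for `psi` in the abundance step is the key point: ignoring it would bias the abundances whenever the two images were acquired under different seasonal conditions.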
Deep Hyperspectral and Multispectral Image Fusion with Inter-image Variability
Hyperspectral and multispectral image fusion allows us to overcome the
hardware limitations of hyperspectral imaging systems inherent to their lower
spatial resolution. Nevertheless, existing algorithms usually fail to consider
realistic image acquisition conditions. This paper presents a general imaging
model that considers inter-image variability of data from heterogeneous sources
and flexible image priors. The fusion problem is stated as an optimization
problem in the maximum a posteriori framework. We introduce an original image
fusion method that, on the one hand, solves the optimization problem accounting
for inter-image variability with an iteratively reweighted scheme and, on the
other hand, that leverages light-weight CNN-based networks to learn realistic
image priors from data. In addition, we propose a zero-shot strategy to
directly learn the image-specific prior of the latent images in an unsupervised
manner. The performance of the algorithm is illustrated with real data subject
to inter-image variability.
Comment: IEEE Trans. Geosci. Remote Sens., to be published. Manuscript
submitted August 23, 2022; revised Dec. 15, 2022, and Mar. 13, 2023; and
accepted Apr. 07, 202
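The iteratively reweighted scheme mentioned above follows a classic algorithmic pattern. The sketch below applies it to a toy robust regression problem, where the reweighting down-weights observations that deviate strongly from the current fit, much as variability-affected pixels would be down-weighted; it is a generic IRLS illustration under assumed data, not the authors' fusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy robust fit via iteratively reweighted least squares (IRLS): a few
# observations are grossly perturbed, standing in for pixels subject to
# inter-image variability.
X = rng.standard_normal((100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true
y[:5] += 10.0                      # a handful of "variable" observations

beta = np.zeros(3)
for _ in range(50):
    r = y - X @ beta                          # residuals at current fit
    w = 1.0 / np.maximum(np.abs(r), 1e-6)     # L1-type reweighting
    Xw = w[:, None] * X                       # diag(w) @ X
    beta = np.linalg.solve(X.T @ Xw, X.T @ (w * y))

print(beta)   # close to beta_true despite the perturbed observations
```

Each pass solves a weighted least-squares problem whose weights shrink the influence of large residuals, so the final estimate is driven by the consistent majority of the data.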
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. There are a huge number of research works
dedicated to multisource and multitemporal data fusion, but the methods for the
fusion of different modalities have expanded in different paths according to
each research community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) willing to conduct novel investigations on this challenging topic
by supplying sufficient detail and references.
Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization
Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image has lower spatial but higher spectral resolution, in contrast to a panchromatic image, which usually has high spatial and low spectral resolution. Pan-sharpening fuses these two complementary images to produce an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new pan-sharpening method based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform all the existing methods.
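Case (a), spectral consistency, admits a simple closed form at the pixel level. The sketch below, which assumes a given band-weight vector `w` relating the multispectral bands to the panchromatic intensity, solves the constrained problem with a Lagrange multiplier; it is a minimal illustration of the idea, not the paper's full method.

```python
import numpy as np

def sharpen_pixel(x0, w, p):
    """Closest vector to the upsampled MS pixel x0 whose weighted band
    sum w @ x equals the panchromatic intensity p.

    Minimizing ||x - x0||^2 subject to w @ x = p gives the closed form
    x = x0 + w * (p - w @ x0) / (w @ w), where the scalar factor plays
    the role of the Lagrange multiplier."""
    lam = (p - w @ x0) / (w @ w)
    return x0 + lam * w

x0 = np.array([0.30, 0.25, 0.20, 0.40])   # upsampled MS pixel (4 bands)
w = np.array([0.25, 0.25, 0.25, 0.25])    # pan assumed ~ band average
p = 0.35                                   # observed panchromatic value

x = sharpen_pixel(x0, w, p)
print(w @ x)   # constraint satisfied: 0.35
```

The correction is the smallest spectral change (in the Euclidean sense) that makes the sharpened pixel exactly consistent with the panchromatic observation, which is the spectral-consistency criterion the abstract describes.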
Online Graph-Based Change Point Detection in Multiband Image Sequences
The automatic detection of changes or anomalies between multispectral and
hyperspectral images collected at different time instants is an active and
challenging research topic. To effectively perform change-point detection in
multitemporal images, it is important to devise techniques that are
computationally efficient for processing large datasets, and that do not
require knowledge about the nature of the changes. In this paper, we introduce
a novel online framework for detecting changes in multitemporal remote sensing
images. Modeling neighboring spectra as adjacent vertices in a graph, the
algorithm focuses on anomalies that concurrently activate groups of vertices
corresponding to compact, well-connected, and spectrally homogeneous image
regions. It fully benefits from recent advances in graph signal processing to
exploit the characteristics of the data that lie on irregular supports.
Moreover, the graph is estimated directly from the images using superpixel
decomposition algorithms. The learning algorithm is scalable in the sense that
it is efficient and spatially distributed. Experiments illustrate the detection
and localization performance of the method.
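A toy sketch of the graph-based idea: raw per-vertex change scores are diffused over an assumed superpixel adjacency graph so that compact, well-connected groups of changed vertices stand out over isolated noisy detections. The graph, the single diffusion step, and the synthetic scores are illustrative assumptions, not the paper's online framework.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed adjacency of 6 superpixel vertices (a small hand-made graph).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 0, 1, 0]], float)

n, bands = 6, 10
img_t0 = rng.random((n, bands))       # mean spectrum per superpixel, t0
img_t1 = img_t0.copy()
img_t1[[0, 1, 2]] += 0.5              # a connected group of changes
img_t1[5] += 0.5                      # one isolated changed vertex

raw = np.linalg.norm(img_t1 - img_t0, axis=1)   # per-vertex change score

# One normalized diffusion step: average each score with its neighbours'.
# Connected changes reinforce each other; the isolated change is pulled
# down by its unchanged neighbour.
deg = A.sum(1)
smoothed = 0.5 * raw + 0.5 * (A @ raw) / deg
print(smoothed)   # vertices 0-2 keep high scores; vertex 5 is attenuated
```

This captures the abstract's point that changes activating compact, well-connected regions of the graph are favored over spatially isolated anomalies.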