Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors, combined with their considerable heterogeneity, poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. A huge number of research works are
dedicated to multisource and multitemporal data fusion, but the methods for
fusing different modalities have developed along different paths within each
research community. This paper brings together the advances in multisource
and multitemporal data fusion across these different research communities
and, by supplying sufficient detail and references, provides a thorough,
discipline-specific starting point for researchers at different levels (i.e.,
students, researchers, and senior researchers) who wish to conduct novel
investigations of this challenging topic.
Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
Compared with traditional imaging systems, hyperspectral imaging can help
better understand the characteristics of different materials. In practice,
however, only high-resolution multispectral (HrMS) and low-resolution
hyperspectral (LrHS) images can generally be captured at video rate. In this
paper, we propose a model-based deep learning approach for merging HrMS and
LrHS images to generate a high-resolution hyperspectral (HrHS) image.
Specifically, we construct a novel MS/HS fusion model that takes into
consideration the observation models of the low-resolution images and the
low-rankness of the HrHS image along its spectral mode. We then design an
iterative algorithm to solve the model using the proximal gradient method.
By unfolding this algorithm, we construct a deep network, called MS/HS Fusion
Net, in which the proximal operators and model parameters are learned by
convolutional neural networks. Experimental results on simulated and real
data substantiate the superiority of our method, both visually and
quantitatively, over state-of-the-art methods along this line of research.
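To make the unfolding idea above concrete, the following is a hedged sketch of how such a model-based fusion problem is commonly posed; the symbols are illustrative and not necessarily the paper's exact notation. The target HrHS image X (pixels by bands) is linked to the HrMS observation Y by a spectral response matrix R and to the LrHS observation Z by a blur-and-downsample operator K, while the prior phi encodes the low-rankness along the spectral mode:

```latex
% Illustrative model-based MS/HS fusion formulation (sketch):
%   Y \approx X R   (HrMS: spectral degradation of the HrHS image X)
%   Z \approx K X   (LrHS: spatial blurring and downsampling of X)
\min_{X}\ \tfrac{1}{2}\|Y - XR\|_F^2
        + \tfrac{1}{2}\|Z - KX\|_F^2
        + \lambda\,\phi(X)

% One proximal gradient iteration with step size \eta:
X^{(k+1)} = \operatorname{prox}_{\eta\lambda\phi}\!\left(
  X^{(k)} - \eta\left[(X^{(k)}R - Y)R^{\top}
  + K^{\top}(KX^{(k)} - Z)\right]\right)
```

Unfolding then fixes a number of iterations and replaces each proximal operator with a small CNN whose weights, together with the model parameters, are learned end to end.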
Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing
Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR) and material content (multi- and hyperspectral) of the objects in the image. Once considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and giving insights into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation), and climate change (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And, from this, what will be the next challenges?
Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art
The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data.
Quantifying the Effect of Registration Error on Spatio-Temporal Fusion
It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat data have a coarse temporal resolution but fine spatial resolution, while moderate resolution imaging spectroradiometer (MODIS) data have fine temporal resolution but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion. Due to the potentially large differences between the spatial resolutions of the images to be fused, the geometric registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and the accuracy of fusion was investigated under different temporal distances between images, different spatial patterns within the images, and different methods (the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC, two typical spatio-temporal fusion methods). The results show that registration error has a significant impact on the accuracy of spatio-temporal fusion: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, the accuracy of fusion was not dependent on the temporal distance between the images to be fused, but rather on their statistical correlation. Finally, the Fit-FC method was found to be more accurate than the STARFM method under all registration error scenarios.
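As a rough illustration of the kind of experiment described above, here is a minimal sketch (not the authors' code) that simulates registration error by shifting a synthetic fine-resolution band against itself and reports how RMSE and correlation degrade as the shift grows. The synthetic data, shift magnitudes, and use of numpy/scipy are assumptions of this sketch, not details from the paper.

```python
# Minimal sketch: effect of simulated registration error on image
# agreement, as a proxy for its effect on spatio-temporal fusion.
import numpy as np
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(0)

# Hypothetical fine-resolution band: a coarse random field upsampled
# 10x (block structure mimics land-cover patches) plus sensor-like noise.
base = rng.random((20, 20))
fine = np.kron(base, np.ones((10, 10)))          # 200 x 200 reference
fine = fine + 0.05 * rng.standard_normal(fine.shape)

for dx in (0.0, 0.5, 1.0, 2.0, 4.0):             # shift in fine pixels
    # Misregister the band by dx pixels along one axis (bilinear).
    moved = nd_shift(fine, shift=(dx, 0.0), order=1, mode="nearest")
    rmse = float(np.sqrt(np.mean((moved - fine) ** 2)))
    r = float(np.corrcoef(moved.ravel(), fine.ravel())[0, 1])
    print(f"shift = {dx:4.1f} px   RMSE = {rmse:.4f}   r = {r:.4f}")
```

In a heterogeneous scene (smaller patches relative to the shift), the same offset moves more pixels across patch boundaries, which is consistent with the paper's finding that registration error hurts more in heterogeneous regions.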
OBSUM: An object-based spatial unmixing model for spatiotemporal fusion of remote sensing images
Spatiotemporal fusion aims to improve both the spatial and temporal
resolution of remote sensing images, thus facilitating time-series analysis at
a fine spatial scale. However, there are several important issues that limit
the application of current spatiotemporal fusion methods. First, most
spatiotemporal fusion methods are based on pixel-level computation, which
neglects the valuable object-level information of the land surface. Moreover,
many existing methods cannot accurately retrieve strong temporal changes
between the available high-resolution image at the base date and the predicted one.
This study proposes an Object-Based Spatial Unmixing Model (OBSUM), which
incorporates object-based image analysis and spatial unmixing, to overcome the
two abovementioned problems. OBSUM consists of one preprocessing step and three
fusion steps, i.e., object-level unmixing, object-level residual compensation,
and pixel-level residual compensation. OBSUM can be applied using only one
fine image at the base date and one coarse image at the prediction date,
without the need for a coarse image at the base date. The performance of
OBSUM was compared with that of five representative spatiotemporal fusion
methods. The experimental results demonstrated that OBSUM outperformed the
other methods in terms of both accuracy indices and visual quality over the
time series. Furthermore, OBSUM achieved satisfactory results in two typical
remote sensing applications. It therefore has great potential to generate
accurate, high-resolution time-series observations that support various
remote sensing applications.
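Since OBSUM builds on spatial unmixing, the following is a minimal, self-contained sketch of the underlying class-fraction unmixing idea for a single coarse band. It uses synthetic data and hypothetical names throughout, and it is not the OBSUM algorithm itself, which additionally operates at the object level and applies the residual-compensation steps described above.

```python
# Minimal sketch of spatial unmixing for one coarse band: each coarse
# pixel is modeled as the area-weighted mix of the classes inside it;
# least squares recovers a per-class signal, which is then assigned
# back to the fine-scale pixels (or objects) of each class.
import numpy as np

rng = np.random.default_rng(1)
RATIO, NCLASS = 10, 3                  # coarse/fine size ratio, classes
classes = rng.integers(0, NCLASS, size=(100, 100))  # fine class map
true_sig = np.array([0.1, 0.4, 0.8])   # hypothetical class reflectances

# Fraction matrix F: one row per coarse pixel, one column per class.
ncoarse = classes.shape[0] // RATIO
F = np.zeros((ncoarse * ncoarse, NCLASS))
for i in range(ncoarse):
    for j in range(ncoarse):
        block = classes[i*RATIO:(i+1)*RATIO, j*RATIO:(j+1)*RATIO]
        F[i*ncoarse + j] = np.bincount(block.ravel(),
                                       minlength=NCLASS) / RATIO**2

# Simulated coarse observation = mixed signal + noise; then unmix.
coarse = F @ true_sig + 0.01 * rng.standard_normal(F.shape[0])
sig_hat, *_ = np.linalg.lstsq(F, coarse, rcond=None)
fine_pred = sig_hat[classes]           # fine-scale prediction per class
print("recovered class signals:", np.round(sig_hat, 3))
```

Working per object rather than per pixel, as OBSUM does, lets the assignment step respect land-surface boundaries instead of the square coarse-pixel grid.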
Novel pattern recognition methods for classification and detection in remote sensing and power generation applications