    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D data representations to 4D data structures, in which the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the fusion methods for different modalities have developed along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various degrees of spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, while it was less computationally efficient than the other models except one-pair learning; (2) Fit-FC had the highest computing efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, while it was not suitable for applications involving land cover type change; and (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
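    To make the weight function-based category concrete, below is a minimal, illustrative sketch of a STARFM-style prediction: the fine-resolution image at the prediction date is estimated as the fine base image plus the coarse-resolution change between dates, with neighbouring pixels weighted by combined spectral, temporal, and spatial distances. The function name, window size, and simplified weighting are assumptions for illustration only, not the published STARFM algorithm (which additionally screens spectrally similar neighbours and can use multiple base pairs).

    ```python
    import numpy as np

    def starfm_like_fusion(fine_t1, coarse_t1, coarse_t2, window=7, eps=1e-6):
        """Simplified weight function-based spatio-temporal fusion sketch.

        Predicts a fine-resolution image at t2 from a fine/coarse pair at t1
        and a coarse image at t2, all co-registered on the fine grid.
        Illustrative only; not the full STARFM algorithm.
        """
        half = window // 2
        pad = ((half, half), (half, half))
        f1 = np.pad(np.asarray(fine_t1, dtype=float), pad, mode="reflect")
        c1 = np.pad(np.asarray(coarse_t1, dtype=float), pad, mode="reflect")
        c2 = np.pad(np.asarray(coarse_t2, dtype=float), pad, mode="reflect")

        # Spatial distance of each neighbour from the window centre.
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        spatial = 1.0 + np.hypot(yy, xx) / half

        rows, cols = np.asarray(fine_t1).shape
        pred = np.empty((rows, cols), dtype=float)
        for i in range(rows):
            for j in range(cols):
                fw = f1[i:i + window, j:j + window]
                cw1 = c1[i:i + window, j:j + window]
                cw2 = c2[i:i + window, j:j + window]
                # Small spectral (fine vs. coarse at t1) and temporal
                # (coarse change t1 -> t2) differences receive large weights.
                spectral = np.abs(fw - cw1) + eps
                temporal = np.abs(cw2 - cw1) + eps
                w = 1.0 / (spectral * temporal * spatial)
                w /= w.sum()
                # Fine base image plus the coarse temporal change.
                pred[i, j] = np.sum(w * (fw + cw2 - cw1))
        return pred
    ```

    Applied per band, this kind of prediction is what the weight function-based methods above refine with spectral similarity screening, conversion coefficients (Fit-FC), or learned priors.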

    A novel fusion framework embedded with zero-shot super-resolution and multivariate autoregression for precipitable water vapor across the continental Europe

    Precipitable water vapor (PWV), as the most abundant greenhouse gas, significantly impacts the evapotranspiration process and thus the global climate. However, the applicability of mainstream satellite PWV products is limited by the trade-off between spatial and temporal resolution, as well as by external factors such as cloud contamination. In this study, we proposed a novel PWV spatio-temporal fusion framework based on zero-shot super-resolution and multivariate autoregression models (ZSSR-ARF) to improve the accuracy and continuity of PWV. In the framework, satellite-derived observations (MOD05) are fused with reanalysis data (ERA5) to generate accurate and seamless PWV of high spatio-temporal resolution (0.01°, daily) across the European continent from 2001 to 2021. First, the ZSSR approach is used to enhance the spatial resolution of ERA5 PWV based on the internal recurrence of image information. Second, the optimal ERA5-MOD05 image pairs are selected based on image similarity as inputs to improve the fusion accuracy. Third, the framework develops a multivariate autoregressive fusion approach that adaptively allocates weights for the high-resolution image prediction, which effectively addresses the non-stationarity and autocorrelation of PWV. The results reveal that the fused PWV agrees closely with GPS retrievals (r = 0.82–0.95 and RMSE = 2.21–4.01 mm), showing an enhancement in accuracy and continuity compared to the original MODIS PWV. The ZSSR-ARF fusion framework outperforms the other methods, with R² improved by over 24% and RMSE reduced by over 0.61 mm. Furthermore, the fused PWV exhibits temporal consistency similar to that of the reliable ERA5 products (mean difference of 0.40 mm and DSTD of 3.22 mm), and substantial increasing trends (mean of 0.057 mm/year, and over 0.1 mm/year near the southern and western coasts) are observed over the European continent. As the accuracy and continuity of PWV are improved, the outcome of this paper has potential for climatic analyses of the land-atmosphere cycle.
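    The validation statistics quoted above (correlation r, RMSE, mean difference, difference STD, and linear trends in mm/year) are standard quantities; the sketch below shows how such metrics might be computed for a fused PWV series against a reference such as GPS retrievals. The function and variable names are illustrative assumptions, not part of the ZSSR-ARF framework.

    ```python
    import numpy as np

    def pwv_agreement(fused, reference):
        """Agreement statistics between fused PWV and a reference series
        (e.g., GPS retrievals): correlation r, RMSE, mean difference (bias),
        and standard deviation of the differences (DSTD). Illustrative only."""
        fused = np.asarray(fused, dtype=float)
        reference = np.asarray(reference, dtype=float)
        diff = fused - reference
        return {
            "r": float(np.corrcoef(fused, reference)[0, 1]),
            "rmse": float(np.sqrt(np.mean(diff ** 2))),
            "bias": float(diff.mean()),
            "dstd": float(diff.std(ddof=1)),
        }

    def linear_trend(annual_means):
        """Least-squares linear trend (units/year) of an annual series,
        e.g., yearly mean PWV for 2001-2021."""
        values = np.asarray(annual_means, dtype=float)
        years = np.arange(values.size, dtype=float)
        slope, _intercept = np.polyfit(years, values, 1)
        return float(slope)
    ```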

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications in Earth and planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Geographically Weighted Spatial Unmixing for Spatiotemporal Fusion

    Spatiotemporal fusion is a technique for creating images with both fine spatial and fine temporal resolution by blending images with different spatial and temporal resolutions. Spatial unmixing (SU) is a widely used approach to spatiotemporal fusion, requiring only the minimum number of input images. However, existing SU methods commonly ignore the spatial variation in land cover between pixels; for example, all coarse neighbors in a local window are treated equally in the unmixing model, which is inappropriate. Moreover, determining the appropriate number of clusters in the known fine spatial resolution image remains a challenge. In this article, a geographically weighted SU (SU-GW) method was proposed to account for the spatial variation in land cover and increase the accuracy of spatiotemporal fusion. SU-GW is a general model suitable for any SU method. Specifically, the existing regularized and soft classification-based versions were extended with the proposed geographically weighted scheme, producing 24 versions of SU (i.e., 12 existing versions were extended to 12 corresponding geographically weighted versions). Furthermore, the cluster validity index of Xie and Beni (XB) was introduced to automatically determine the number of clusters. A systematic comparison of the experimental results of the 24 versions indicated that SU-GW was effective in increasing prediction accuracy. Importantly, all 12 existing methods were enhanced by integrating the SU-GW scheme. Moreover, the most accurate SU-GW enhanced version was demonstrated to outperform two prevailing spatiotemporal fusion approaches in a benchmark comparison. It can therefore be concluded that SU-GW provides a general solution for enhancing spatiotemporal fusion, which can be used to update existing methods and potential future versions.
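    The geographically weighted scheme can be illustrated with a short sketch: within the local window, the coarse pixel values are unmixed against the class fractions by weighted least squares, with each coarse neighbour down-weighted by its distance from the target pixel (here via a Gaussian kernel), instead of all neighbours being treated equally. The function name, kernel choice, and bandwidth parameter are assumptions for illustration, not the authors' SU-GW implementation.

    ```python
    import numpy as np

    def gw_spatial_unmixing(coarse_values, fractions, centre_idx, coords, bandwidth):
        """Geographically weighted spatial unmixing for one target coarse pixel.

        coarse_values : (n,) reflectances of the n coarse neighbours in the window
        fractions     : (n, k) class fractions of each neighbour, derived from the
                        known (clustered) fine spatial resolution image
        centre_idx    : index of the target coarse pixel within the window
        coords        : (n, 2) coarse-pixel centre coordinates
        bandwidth     : Gaussian kernel bandwidth controlling the distance decay

        Returns the k class endmember values for the target pixel. Illustrative
        sketch of the geographically weighted scheme, not the full SU-GW method.
        """
        y = np.asarray(coarse_values, dtype=float)
        F = np.asarray(fractions, dtype=float)
        coords = np.asarray(coords, dtype=float)
        d = np.linalg.norm(coords - coords[centre_idx], axis=1)
        # Geographic weights: nearer coarse neighbours contribute more.
        w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
        W = np.diag(w)
        # Weighted least squares: endmembers = (F' W F)^-1 F' W y.
        A = F.T @ W @ F
        b = F.T @ W @ y
        endmembers, *_ = np.linalg.lstsq(A, b, rcond=None)
        return endmembers
    ```

    The fine pixels within the target coarse pixel would then be assigned the endmember value of their cluster label, as in conventional SU.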

    Spatiotemporal Fusion in Remote Sensing

    Remote sensing images and techniques are powerful tools for investigating the Earth's surface. Data quality is key to enhancing remote sensing applications, yet obtaining a clear, noise-free data set is difficult in most situations due to varying acquisition (e.g., atmosphere and season) and sensor and platform (e.g., satellite viewing angles and sensor characteristics) conditions. With the continuing development of satellites, terabytes of remote sensing images can now be acquired every day, so information and data fusion is particularly important in the remote sensing community. Fusion integrates data from various sources, acquired asynchronously, for information extraction, analysis, and quality improvement. In this chapter, we discuss the theory of spatiotemporal fusion by reviewing previous works, and describe its basic concepts and some of its applications by summarizing our prior and ongoing work.