
    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat provides fine spatial but coarse temporal resolution, while the moderate resolution imaging spectroradiometer (MODIS) provides fine temporal but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion, and because of the potentially large differences between the spatial resolutions of the images to be fused, the registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and fusion accuracy was investigated under different temporal distances between images, different spatial patterns within the images, and two typical spatio-temporal fusion methods: the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC. The results show that registration error has a significant impact on fusion accuracy: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, fusion accuracy depended not on the temporal distance between the images to be fused, but on their statistical correlation. Finally, the Fit-FC method was more accurate than STARFM under all registration error scenarios.
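The monotonic decline reported above can be reproduced in miniature with a purely synthetic numpy experiment; the Gaussian test image and the shift-as-misregistration model are illustrative assumptions, not the paper's data or method:

```python
import numpy as np

# Entirely synthetic: a smooth Gaussian "fine" image stands in for real
# imagery, and a circular pixel shift stands in for registration error.
y, x = np.mgrid[0:128, 0:128]
image = np.exp(-((x - 64.0) ** 2 + (y - 64.0) ** 2) / (2 * 10.0 ** 2))

def accuracy_under_shift(img, shift):
    """Pearson correlation between an image and a mis-registered copy."""
    shifted = np.roll(img, shift, axis=1)  # simulated registration error
    return np.corrcoef(img.ravel(), shifted.ravel())[0, 1]

# Correlation declines monotonically as the simulated registration error grows.
corrs = [accuracy_under_shift(image, s) for s in (0, 1, 2, 4, 8)]
```

Real fusion experiments add sensor and landscape effects, but the qualitative behaviour is the same: larger mis-registration, lower agreement.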

    Fusion of Sentinel-2 images

    Sentinel-2 is a recent programme of the European Space Agency (ESA) designed for fine spatial resolution global monitoring. Sentinel-2 images provide four 10 m bands and six 20 m bands. To provide more explicit spatial information, this paper aims to downscale the six 20 m bands to 10 m spatial resolution using the four directly observed 10 m bands. The outcome of this fusion task is the production of ten Sentinel-2 bands with 10 m spatial resolution. This new fusion problem involves four fine spatial resolution bands, which makes it different from, and more complex than, the common pan-sharpening problem, which involves only one fine band. To address this, we extend the two main existing families of image fusion approaches (i.e., component substitution, CS, and multiresolution analysis, MRA) with two different schemes, a band synthesis scheme and a band selection scheme. Moreover, the recently developed area-to-point regression kriging (ATPRK) approach was also adapted and applied for the Sentinel-2 fusion task. Using two Sentinel-2 datasets released online, the three types of approaches (eight CS- and MRA-based approaches, and ATPRK) were compared comprehensively in terms of their accuracies to provide recommendations for the fusion of Sentinel-2 images. The downscaled ten-band 10 m Sentinel-2 datasets represent important and promising products for a wide range of applications in remote sensing. They also have potential for blending with the upcoming Sentinel-3 data for fine spatio-temporal resolution monitoring at the global scale.
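As a rough illustration of the band-synthesis CS scheme the paper extends, a "PAN-like" band synthesized from the 10 m bands can donate its spatial detail to an upsampled 20 m band. Everything below is an assumption for the sketch (synthetic data, equal band weights, a 20 m truth deliberately constructed to be exactly recoverable), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
fine10 = rng.uniform(0.0, 1.0, size=(4, 8, 8))   # four synthetic 10 m bands

def upsample(a, f=2):
    """Nearest-neighbour upsampling by factor f."""
    return np.kron(a, np.ones((f, f)))

def degrade(a, f=2):
    """Block-mean degradation by factor f (a crude sensor model)."""
    h, w = a.shape
    return a.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# A "20 m" band observed only at coarse resolution. Its 10 m truth is built
# from the 10 m bands so this toy case is exactly recoverable, which real
# data are not.
truth = fine10.mean(axis=0)
band20 = degrade(truth)

# CS-style injection: add the detail of the synthesized intensity band,
# i.e. its difference from its own degraded-then-upsampled copy.
intensity = fine10.mean(axis=0)
fused = upsample(band20) + (intensity - upsample(degrade(intensity)))
```

In practice the synthesized band only approximates the missing detail, so the quality of the weights (and of the band selection alternative) drives the accuracy comparison the paper reports.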

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, supplying sufficient detail and references.

    Object-Based Area-to-Point Regression Kriging for Pansharpening

    Optical Earth observation satellite sensors often provide a coarse spatial resolution (CR) multispectral (MS) image together with a fine spatial resolution (FR) panchromatic (PAN) image. Pansharpening is a technique applied to such satellite sensor images to generate an FR MS image by injecting spatial detail taken from the FR PAN image while simultaneously preserving the spectral information of the MS image. Pansharpening methods are mostly applied on a per-pixel basis and use the PAN image to extract spatial detail. However, many land cover objects in FR satellite sensor images are represented not as independent pixels, but as many spatially aggregated pixels that carry important semantic information. In this article, an object-based pansharpening approach, termed object-based area-to-point regression kriging (OATPRK), is proposed. OATPRK fuses the MS and PAN images at the object scale and, thus, takes advantage of both the unified spectral information within the CR MS image and the spatial detail of the FR PAN image. OATPRK is composed of three stages: image segmentation, object-based regression, and residual downscaling. Three datasets acquired from IKONOS and WorldView-2 and 11 benchmark pansharpening algorithms were used to provide a comprehensive assessment of the proposed approach. In both the synthetic and real experiments, OATPRK produced the best pan-sharpened results in terms of both visual and quantitative assessment. OATPRK is a new conceptual method that advances pixel-level geostatistical pansharpening to the object level and provides more accurate pan-sharpened MS images.
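The three stages can be caricatured on synthetic data. The threshold "segmentation", per-object least squares, and uniform residual redistribution below are simplified stand-ins for the paper's segmentation, regression, and area-to-point kriging steps, not the OATPRK algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)
pan = rng.uniform(0.0, 1.0, size=(8, 8))                # fine PAN image
ms_coarse = pan.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 2x coarser MS band

# Stage 1: a trivial two-class "segmentation" by thresholding PAN.
objects = (pan > pan.mean()).astype(int)

# Stage 2: per-object least-squares fit of the upsampled MS band on PAN.
ms_up = np.kron(ms_coarse, np.ones((2, 2)))             # nearest upsampling
fused = np.empty_like(pan)
for obj in np.unique(objects):
    m = objects == obj
    slope, intercept = np.polyfit(pan[m], ms_up[m], 1)
    fused[m] = slope * pan[m] + intercept

# Stage 3: redistribute the coarse residual so block means are preserved
# (uniform redistribution stands in for kriging the residual).
resid = ms_coarse - fused.reshape(4, 2, 4, 2).mean(axis=(1, 3))
fused += np.kron(resid, np.ones((2, 2)))
```

The last stage makes each coarse block mean of the fused image match the original MS value, mirroring the coherence property of ATPRK-based methods.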

    Information Loss-Guided Multi-Resolution Image Fusion

    Spatial downscaling is an ill-posed inverse problem, and information loss (IL) inevitably exists in the predictions produced by any downscaling technique. The recently popularized area-to-point kriging (ATPK)-based downscaling approach can account for the size of support and the point spread function (PSF) of the sensor and, moreover, has the appealing advantage of the perfect coherence property. In this article, building on the advantages of ATPK and the conceptualization of IL, an IL-guided image fusion (ILGIF) approach is proposed. ILGIF uses fine spatial resolution images acquired in other wavelengths to predict the IL in ATPK predictions based on a geographically weighted regression (GWR) model, which accounts for the spatial variation in land cover. ILGIF inherits all the advantages of ATPK, and its prediction has perfect coherence with the original coarse spatial resolution data, a property that can be demonstrated mathematically. ILGIF was validated using two datasets and was shown in each case to predict downscaled images more accurately than the benchmark methods.

    Enhancing Spatio-Temporal Fusion of MODIS and Landsat Data by Incorporating 250 m MODIS Data

    Spatio-temporal fusion of MODIS and Landsat data aims to produce new data that have simultaneously the Landsat spatial resolution and the MODIS temporal resolution. It is an ill-posed problem involving large uncertainty, especially for the reproduction of abrupt changes and heterogeneous landscapes. In this paper, we propose to incorporate the freely available 250 m MODIS images into spatio-temporal fusion to increase prediction accuracy. The 250 m MODIS bands 1 and 2 are fused with the 500 m MODIS bands 3-7 using the advanced area-to-point regression kriging approach. Based on a standard spatio-temporal fusion approach, the interim 250 m fused MODIS data are then downscaled to 30 m with the aid of 30 m Landsat data available on temporally close days. The 250 m data provide more information on abrupt changes and heterogeneous landscapes than the original 500 m MODIS data, thus increasing the accuracy of the spatio-temporal fusion predictions. The effectiveness of the proposed scheme was demonstrated using two datasets.

    Spatiotemporal subpixel mapping of time-series images

    Land cover/land use (LCLU) information extraction from multitemporal sequences of remote sensing imagery is becoming increasingly important. Mixed pixels are a common problem in the Landsat and MODIS images that are used widely for LCLU monitoring. Recently developed subpixel mapping (SPM) techniques can extract LCLU information at the subpixel level by dividing mixed pixels into subpixels to which hard classes are then allocated. However, SPM has rarely been studied for time-series images (TSIs). In this paper, a spatiotemporal approach is proposed for SPM of TSIs. In contrast to conventional spatial dependence-based SPM methods, the proposed approach considers spatial and temporal dependences simultaneously, the former capturing the correlation of subpixel classes within each image and the latter the correlation of subpixel classes between images in the temporal sequence. The approach assumes that one fine spatial resolution map is available within the TSIs. The SPM of TSIs is formulated as a constrained optimization problem: under the coherence constraint imposed by the coarse LCLU proportions, the objective is to maximize the spatiotemporal dependence, defined by blending the spatial and temporal dependences. Experiments on three datasets showed that the proposed approach provides more accurate subpixel resolution TSIs than conventional SPM methods. The SPM results obtained from TSIs provide an excellent opportunity for LCLU dynamic monitoring and change detection at a finer spatial resolution than that of the available coarse spatial resolution TSIs.
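A minimal, hypothetical sketch of the scoring idea (not the authors' algorithm): candidate subpixel labelings of one mixed coarse pixel that satisfy the coarse-proportion constraint are ranked by a blend of spatial and temporal dependence. The 2x2 block, neighbour class, and blend weight are all assumed for illustration; with equal class counts the spatial term ties across candidates, so the temporal term decides:

```python
import numpy as np
from itertools import permutations

prev_fine = np.array([[1, 0], [1, 0]])  # fine-resolution map at an earlier date
neighbor_class = 1                      # dominant class around the mixed pixel
w = 0.5                                 # assumed spatial/temporal blend weight

def score(cand):
    spatial = (cand == neighbor_class).mean()   # spatial dependence
    temporal = (cand == prev_fine).mean()       # temporal dependence
    return w * spatial + (1 - w) * temporal

# Coherence constraint: coarse proportions allow two subpixels of each class.
candidates = {tuple(p) for p in permutations([1, 1, 0, 0])}
best = max(candidates, key=lambda p: score(np.array(p).reshape(2, 2)))
```

The paper optimizes this kind of blended objective over whole images rather than enumerating one pixel, but the trade-off between within-image and between-image agreement is the same.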

    Fusion of Landsat 8 OLI and Sentinel-2 MSI data

    Sentinel-2 is a wide-swath, fine spatial resolution satellite imaging mission designed for data continuity and enhancement of the Landsat and other missions. Sentinel-2 data are freely available at the global scale and have similar wavelengths and the same geographic coordinate system as Landsat data, which provides an excellent opportunity to fuse the two types of satellite sensor data. In this paper, a new approach is presented for the fusion of Landsat 8 Operational Land Imager and Sentinel-2 Multispectral Imager data to harmonize their spatial resolutions for continuous global monitoring. The 30 m spatial resolution Landsat 8 bands are downscaled to 10 m using the available 10 m Sentinel-2 bands. To account for the land-cover/land-use (LCLU) changes that may have occurred between the Landsat 8 and Sentinel-2 acquisitions, the Landsat 8 panchromatic (PAN) band is also incorporated in the fusion process. The experimental results showed that the proposed approach is effective for fusing Landsat 8 with Sentinel-2 data, and that use of the PAN band can decrease the errors introduced by LCLU changes. By fusing Landsat 8 and Sentinel-2 data, more frequent observations can be produced for continuous monitoring (particularly valuable for areas prone to cloud cover, which contaminates some Landsat or Sentinel-2 observations), and the observations are at a consistent fine spatial resolution of 10 m. The products have great potential for timely monitoring of rapid changes.

    Utilizing neural networks for image downscaling and water quality monitoring

    Remotely sensed images are increasingly in demand for various applications, especially those related to natural resource management. Moderate Resolution Imaging Spectroradiometer (MODIS) data have the advantage of high spectral and temporal resolution, but remain inadequate in providing the required high spatial resolution. Sentinel-2, on the other hand, is more advantageous in spatial and temporal resolution but lacks a solid historical database. In this study, four MODIS bands in the visible and near-infrared regions of the electromagnetic spectrum and their matching Sentinel-2 bands were used to monitor turbidity in Lake Nasser, Egypt. The MODIS data were downscaled to Sentinel-2 resolution, which enhanced their spatial resolution from 250 m and 500 m to 10 m. Furthermore, this provided a historical database that was used to monitor changes in lake turbidity. A spatial approach based on neural networks was presented to downscale the MODIS bands to the spatial resolution of the Sentinel-2 bands. The correlation coefficient between the predicted and actual images exceeded 0.70 for all four bands. Applying this approach, the downscaled MODIS images were developed, and neural networks were further applied to these images to develop a model for predicting turbidity in the lake. The correlation coefficient between the predicted and actual measurements reached 0.83. The study suggests neural networks as a comparatively simple and accurate method for image downscaling compared with other methods. It also demonstrated the possibility of utilizing neural networks to predict lake water quality parameters, such as turbidity, from remote sensing data more accurately than statistical methods.
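A toy stand-in for the downscaling network shows the kind of band-to-band regression involved. The one-hidden-layer architecture, the synthetic band relationship, and the hyperparameters are all assumptions for illustration (numpy only), not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: a "coarse band" predictor X and a "fine band" target y
# related by an assumed mildly nonlinear mapping plus noise.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 0.8 * X + 0.3 * X ** 2 + 0.05 * rng.normal(size=(200, 1))

# One hidden layer of 8 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    grad = 2.0 * (pred - y) / len(X)            # d(MSE)/d(pred)
    gW2 = h.T @ grad; gb2 = grad.sum(0, keepdims=True)
    gh = (grad @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]  # cf. the study's r > 0.70
```

The study's networks map several MODIS bands to Sentinel-2 resolution and then to turbidity, but the fit-then-correlate evaluation loop is the same shape as this sketch.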