    Updating Landsat-based forest cover maps with MODIS images using multiscale spectral-spatial-temporal superresolution mapping

    With the high deforestation rates of global forest cover during recent decades, there is an ever-increasing need to monitor forest cover at both fine spatial and temporal resolutions. Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat series images have been used commonly for satellite-derived forest cover mapping. However, the spatial resolution of MODIS images and the temporal resolution of Landsat images are too coarse to observe forest cover at both fine spatial and temporal resolutions. In this paper, a novel multiscale spectral-spatial-temporal superresolution mapping (MSSTSRM) approach is proposed to update Landsat-based forest maps by integrating current MODIS images with previous forest maps generated from Landsat images. Both the 240 m and 480 m MODIS bands were used as inputs to the spectral energy function of the MSSTSRM model. The principle of maximal spatial dependence was used as the spatial energy function to make the updated forest map spatially smooth. The temporal energy function was based on a multiscale spatial-temporal dependence model and considers the land cover changes between the previous and current times. The novel MSSTSRM model was able to update Landsat-based forest maps more accurately, in terms of both visual and quantitative evaluation, than traditional pixel-based classification and the latest sub-pixel-based super-resolution mapping methods. The results demonstrate the great efficiency and potential of MSSTSRM for updating Landsat-based forest maps at fine temporal resolution using MODIS images.
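
    The MSSTSRM objective combines three energy terms. As an illustration of that general structure, the sketch below scores a candidate fine-resolution label map with a linear-mixing spectral term, a neighbour-disagreement spatial term, and a previous-map temporal term; the function, array shapes, and weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def total_energy(labels, coarse_obs, prev_map, endmembers, scale,
                 w_spatial=1.0, w_temporal=1.0):
    """Weighted sum of spectral, spatial, and temporal energies for a
    candidate fine-resolution label map (hypothetical simplification).

    labels     : (H, W) int class labels on the fine grid
    coarse_obs : (H/scale, W/scale, n_bands) coarse MODIS-like spectra
    prev_map   : (H, W) int labels of the previous Landsat-based map
    endmembers : (n_classes, n_bands) class mean spectra
    """
    # Spectral energy: squared error between each coarse pixel's observed
    # spectrum and the spectrum synthesised from its fine sub-pixel labels.
    e_spec = 0.0
    n_rows, n_cols = coarse_obs.shape[:2]
    for i in range(n_rows):
        for j in range(n_cols):
            block = labels[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            fractions = np.bincount(block.ravel(),
                                    minlength=endmembers.shape[0]) / scale**2
            synthesised = fractions @ endmembers     # linear mixing model
            e_spec += np.sum((coarse_obs[i, j] - synthesised) ** 2)
    # Spatial energy: count label disagreements between 4-neighbours, so
    # maximal spatial dependence (smoothness) lowers the energy.
    e_spat = np.sum(labels[:, :-1] != labels[:, 1:]) \
           + np.sum(labels[:-1, :] != labels[1:, :])
    # Temporal energy: penalise deviation from the previous forest map.
    e_temp = np.sum(labels != prev_map)
    return e_spec + w_spatial * e_spat + w_temporal * e_temp
```

    In a full SRM workflow this score would be minimised over candidate label maps, for example by simulated annealing or pixel swapping.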

    Spatial-temporal super-resolution land cover mapping with a local spatial-temporal dependence model

    The mixed pixel problem is common in remote sensing. A soft classification can generate land cover class fraction images that illustrate the areal proportions of the various land cover classes within pixels. The spatial distribution of land cover classes within each mixed pixel is, however, not represented. Super-resolution land cover mapping (SRM) is a technique to predict the spatial distribution of land cover classes within the mixed pixel using fraction images as input. Spatial-temporal SRM (STSRM) extends the basic SRM to include a temporal dimension by using a finer-spatial-resolution land cover map that pre- or postdates the image acquisition time as ancillary data. Traditional STSRM methods often use one land cover map as the constraint, but neglect the majority of available land cover maps of the same scene acquired at different dates when reconstructing a full trajectory of land cover changes from time-series data. In addition, these methods define the temporal dependence globally and neglect the spatial variation of land cover temporal dependence intensity within images. A novel local STSRM (LSTSRM) is proposed in this paper. LSTSRM incorporates more than one available land cover map to constrain the solution and develops a local temporal dependence model in which the temporal dependence intensity may vary spatially. The results show that LSTSRM can eliminate speckle-like artifacts, reconstruct the spatial patterns of land cover patches in the resulting maps, and increase the overall accuracy compared with other STSRM methods.
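
    To make the local temporal dependence idea concrete, the sketch below weights each ancillary map's vote per pixel by how little change is detected in its neighbourhood, so the dependence intensity varies spatially. The inputs and the 7×7 averaging window are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_temporal_energy(candidate, ancillary_maps, change_magnitudes):
    """Temporal energy with spatially varying dependence intensity
    (an illustrative simplification of a local STSRM-style term).

    candidate         : (H, W) int candidate fine-resolution labels
    ancillary_maps    : list of (H, W) int maps from other dates
    change_magnitudes : list of (H, W) floats in [0, 1], per-map change
                        strength upsampled to the fine grid
    """
    energy = np.zeros(candidate.shape)
    for anc, change in zip(ancillary_maps, change_magnitudes):
        # Trust an ancillary map more where little change is detected in
        # its local neighbourhood; this weight varies across the image.
        weight = 1.0 - uniform_filter(change, size=7)
        energy += weight * (candidate != anc)
    return energy.sum()
```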

    Mapping annual forest cover by fusing PALSAR/PALSAR-2 and MODIS NDVI during 2007–2016

    Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) HH and HV polarization data were used previously to produce annual, global 25 m forest maps between 2007 and 2010, and the latest global forest maps of 2015 and 2016 were produced using ALOS-2 PALSAR-2 data. However, annual 25 m spatial resolution forest maps during 2011–2014 are missing because of the gap in operation between ALOS and ALOS-2, preventing the construction of a continuous, fine-resolution time-series dataset on the world's forests. In contrast, Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI images have been available globally since 2000. This research developed a novel method to produce annual 25 m forest maps during 2007–2016 by fusing the fine-spatial-resolution but asynchronous PALSAR/PALSAR-2 data with the coarse-spatial-resolution but synchronous MODIS NDVI data, thus filling the four-year gap in the ALOS and ALOS-2 time-series as well as enhancing the existing mapping activity. The method was developed with two key objectives: 1) producing more accurate 25 m forest maps by integrating PALSAR/PALSAR-2 and MODIS NDVI data during 2007–2010 and 2015–2016; 2) reconstructing annual 25 m forest maps from time-series MODIS NDVI images during 2011–2014. Specifically, a decision tree classification was developed for forest mapping based on both the PALSAR/PALSAR-2 and MODIS NDVI data, and a new spatial-temporal super-resolution mapping method was proposed to reconstruct the 25 m forest maps from time-series MODIS NDVI images. Three study sites, in Paraguay, the USA and Russia, were chosen, as they represent the world's three main forest types: tropical forest, temperate broadleaf and mixed forest, and boreal conifer forest, respectively. Compared with traditional methods, the proposed approach produced the most accurate continuous time-series of fine-spatial-resolution forest maps, both visually and quantitatively. For the forest maps during 2007–2010 and 2015–2016, the results had greater overall accuracy values (>98%) than the original JAXA forest product. For the reconstructed 25 m forest maps during 2011–2014, the increases in classification accuracy relative to three benchmark methods were statistically significant, and the overall accuracy values at the three study sites were almost universally >92%. The proposed approach, therefore, has great potential to support the production of annual 25 m forest maps by fusing PALSAR/PALSAR-2 and MODIS NDVI data during 2007–2016.
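
    The abstract mentions a decision tree classifier over PALSAR/PALSAR-2 backscatter and MODIS NDVI features. Below is a self-contained toy version with synthetic stand-in features; the feature choices and values are assumptions, not the paper's training data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in features per 25 m pixel: PALSAR HH/HV backscatter
# (dB) and an annual MODIS NDVI mean (illustrative values only).
n = 1000
hh = rng.normal(-8.0, 2.0, n)
hv = rng.normal(-14.0, 2.0, n)
ndvi_mean = rng.uniform(0.1, 0.9, n)
# Toy "forest" rule: stronger HV backscatter and greener NDVI.
labels = ((hv > -14.0) & (ndvi_mean > 0.5)).astype(int)

X = np.column_stack([hh, hv, ndvi_mean])
clf = DecisionTreeClassifier(max_depth=6, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```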

    A superresolution land-cover change detection method using remotely sensed images with different spatial resolutions

    The development of remote sensing has enabled the acquisition of information on land-cover change at different spatial scales. However, a trade-off between spatial and temporal resolutions normally exists. Fine-spatial-resolution images have low temporal resolutions, whereas coarse-spatial-resolution images have high temporal repetition rates. A novel super-resolution change detection method (SRCD) is proposed to detect land-cover changes at both fine spatial and temporal resolutions with the use of a coarse-resolution image and a fine-resolution land-cover map acquired at different times. SRCD is an iterative method that involves endmember estimation, spectral unmixing, land-cover fraction change detection, and super-resolution land-cover mapping. Both the land-cover change/no-change map and the from–to change map at fine spatial resolution can be generated by SRCD. In this study, SRCD was applied to a synthetic multispectral image, a Moderate Resolution Imaging Spectroradiometer (MODIS) multispectral image, and a Landsat-8 Operational Land Imager (OLI) multispectral image. The land-cover from–to change maps were found to have overall accuracies higher than 85% in all three experiments. Most of the changed land-cover patches larger than a coarse-resolution pixel were correctly detected.
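
    The spectral unmixing step inside such an iterative pipeline is typically a fully constrained least-squares problem. Here is a minimal sketch under the standard linear mixing model, enforcing non-negativity with NNLS and approximating the sum-to-one constraint with a heavily weighted extra row; this is an assumption, and the paper's solver may differ.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, sum_weight=100.0):
    """Fraction estimates for one coarse pixel under linear mixing.

    pixel      : (n_bands,) observed spectrum
    endmembers : (n_bands, n_classes) endmember spectra
    """
    # Append a weighted row pushing the fractions to sum to ~1.
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    fractions, _ = nnls(A, b)        # non-negative least squares
    return fractions

# Fraction change detection is then the difference of the fractions
# unmixed at the two dates, e.g. unmix(p_t2, E) - unmix(p_t1, E).
```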

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have expanded along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, supplying sufficient detail and references.

    Spatial-temporal fraction map fusion with multi-scale remotely sensed images

    Given the common trade-off between the spatial and temporal resolutions of current satellite sensors, spatial-temporal data fusion methods can be applied to produce fused remotely sensed data with synthetic fine spatial resolution (FR) and high repeat frequency. Such fused data are required to provide a comprehensive understanding of the dynamics of Earth's land cover. In this research, a novel Spatial-Temporal Fraction Map Fusion (STFMF) model is proposed to produce a series of fine-spatial-temporal-resolution land cover fraction maps by fusing coarse-spatial-fine-temporal and fine-spatial-coarse-temporal fraction maps, which may be generated from multi-scale remotely sensed images. STFMF has two main stages. First, FR fraction change maps are generated using kernel ridge regression. Second, a FR fraction map for the date of prediction is predicted using a temporal-weighted fusion model. In comparison to two established spatial-temporal fusion approaches, the spatial-temporal super-resolution land cover mapping model and the spatial-temporal image reflectance fusion model, STFMF has the following characteristics and advantages: (1) it takes account of the mixed pixel problem in FR remotely sensed images; (2) it directly uses fraction maps as input, which can be generated from a range of satellite images or other suitable data sources; (3) it focuses on the estimation of fraction changes occurring through time and can predict land cover change more accurately. Experiments using synthetic multi-scale fraction maps simulated from Google Earth images, as well as synthetic and real MODIS-Landsat images, were undertaken to test the performance of the proposed STFMF approach against two benchmark spatial-temporal reflectance fusion methods: the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal Data Fusion (FSDAF) model. In both visual and quantitative evaluations, STFMF was able to generate more accurate FR fraction maps and provide more spatial detail than ESTARFM and FSDAF, particularly in areas with substantial land cover changes. STFMF has great potential to produce accurate, fine-spatial-temporal-resolution time-series fraction maps that can support studies of land cover dynamics at the sub-pixel scale.
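
    The first STFMF stage regresses fine-resolution fraction changes from coarse-resolution ones. Below is a toy kernel ridge regression in that spirit, with synthetic data standing in for co-located coarse/fine fraction-change pairs; the patch size and kernel settings are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Predictors: 3x3 patches of coarse-resolution fraction change; target:
# the fine-resolution fraction change at the patch centre (synthetic).
X = rng.uniform(-1.0, 1.0, (500, 9))
y = 0.8 * X[:, 4] + 0.2 * X.mean(axis=1)

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
krr.fit(X, y)
fr_change = krr.predict(X[:5])       # predicted FR fraction changes
print(fr_change)
```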

    LINEAR SPECTRAL MIXING MODEL APPLIED IN IMAGES FROM PROBA-V SENSOR: A SPATIAL MULTIRESOLUTION APPROACH

    The complexity of the pixel composition in orbital images is commonly referred to as the spectral mixture problem. Acquiring endmembers (pure pixels) directly from the image under study is one of the most commonly employed approaches. However, this approach is limited at low or moderate spatial resolutions because of the lower probability of finding such pixels. This work therefore proposes the combined use of images with different spatial resolutions to estimate the spectral responses of the endmembers in a low-spatial-resolution image from proportions derived from higher-spatial-resolution images. The proposed methodology was applied to products provided by the PROBA-V satellite at spatial resolutions of 100 m and 1 km in the Pantanal region of Mato Grosso state. Initially, fraction (proportion) images were generated from the 100 m dataset using endmembers selected directly in the image, given the higher probability of finding pure pixels at this resolution. Subsequently, the spectral responses of the endmembers at 1 km were estimated by multiple linear regression, using the proportions of the endmembers in the pixels derived from the 100 m images. For the evaluation, the endmember fraction images were compared and field data were used. These analyses indicated that the estimated spectral responses improved the results with regard to error, variability, and the identification of endmember proportions, given that an inadequate choice of pixels considered pure in low-spatial-resolution images can affect the quality of the fraction images for operational use.
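
    The regression step can be written compactly: with a matrix of endmember proportions F (one row per 1 km pixel, aggregated from the 100 m fractions) and the 1 km pixel values y for one band, the per-endmember spectral responses are the least-squares solution of F·e = y. A self-contained toy version follows; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: endmember proportions per 1 km pixel (rows sum
# to 1) and the corresponding 1 km pixel values for a single band.
n_pixels, n_endmembers = 200, 3
F = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)
true_response = np.array([0.05, 0.25, 0.45])   # toy band reflectances
y = F @ true_response + rng.normal(0.0, 0.01, n_pixels)

# Multiple linear regression without intercept: each pixel value is the
# proportion-weighted sum of the unknown endmember responses.
est_response, *_ = np.linalg.lstsq(F, y, rcond=None)
print(est_response)    # ~ [0.05, 0.25, 0.45]
```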

    Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability

    Image fusion combines data from different heterogeneous sources to obtain more precise information about an underlying scene. Hyperspectral-multispectral (HS-MS) image fusion is currently attracting great interest in remote sensing since it allows the generation of high spatial resolution HS images, circumventing the main limitation of this imaging modality. Existing HS-MS fusion algorithms, however, neglect the spectral variability often existing between images acquired at different time instants. This time difference causes variations in the spectral signatures of the underlying constituent materials due to different acquisition and seasonal conditions. This paper introduces a novel HS-MS image fusion strategy that combines an unmixing-based formulation with an explicit parametric model for typical spectral variability between the two images. Simulations with synthetic and real data show that the proposed strategy leads to a significant performance improvement under spectral variability and state-of-the-art performance otherwise.
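
    The underlying factorisation can be illustrated in a few lines: the fused image is a product of endmember spectra (taken from the HS image) and fine-scale abundances (recovered from the MS image), with seasonal variability represented here by per-band scaling factors. This toy reconstruction shows the model structure only, not the paper's estimation algorithm; all quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
L, P, N = 50, 3, 400      # HS bands, endmembers, fine-resolution pixels

E = np.abs(rng.normal(0.3, 0.1, (L, P)))    # endmember spectra (HS date)
A = rng.dirichlet(np.ones(P), size=N).T     # fine abundances (MS date)
psi = rng.uniform(0.8, 1.2, L)              # per-band seasonal scaling

# Fused high-spatial-resolution HS estimate: scaled spectra x abundances.
Z_fused = (psi[:, None] * E) @ A
print(Z_fused.shape)                        # (50 bands, 400 fine pixels)
```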