
    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than the other models except one-pair learning; (2) Fit-FC had the highest computational efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change and was capable of preserving image structures, but it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, but it was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
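
    As a rough illustration of the weight-function idea behind STARFM and Fit-FC, the sketch below predicts fine-resolution reflectance at the target date from one fine/coarse image pair at a base date plus the coarse image at the prediction date. It is a minimal sketch only: the window size, the number of similar neighbours, and the combined spectral–temporal weighting (spatial distance is omitted) are illustrative assumptions, not the published STARFM parameters.

```python
import numpy as np

def weight_function_fusion(fine_t0, coarse_t0, coarse_tp, win=7, n_similar=20):
    """Toy weight-function fusion: all inputs are 2-D arrays on the fine grid.

    fine_t0   -- fine-resolution reflectance at the base date
    coarse_t0 -- coarse reflectance resampled to the fine grid, base date
    coarse_tp -- coarse reflectance resampled to the fine grid, prediction date
    """
    rows, cols = fine_t0.shape
    half = win // 2
    pred = np.empty_like(fine_t0, dtype=float)

    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            f = fine_t0[r0:r1, c0:c1].ravel()
            cb = coarse_t0[r0:r1, c0:c1].ravel()
            cp = coarse_tp[r0:r1, c0:c1].ravel()

            # spectrally similar neighbours: smallest |fine - centre| differences
            spec_diff = np.abs(f - fine_t0[i, j])
            idx = np.argsort(spec_diff)[:n_similar]

            # inverse of combined spectral and temporal distance as the weight
            dist = (spec_diff[idx] + 1e-6) * (np.abs(cb - cp)[idx] + 1e-6)
            w = 1.0 / dist
            w /= w.sum()

            # fine reflectance at tp = base fine value plus the coarse change
            pred[i, j] = np.sum(w * (f[idx] + cp[idx] - cb[idx]))
    return pred
```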

    Crop monitoring and yield estimation using polarimetric SAR and optical satellite data in southwestern Ontario

    Optical satellite data have proven to be an efficient source for extracting crop information and monitoring crop growth conditions over large areas. In local- to subfield-scale crop monitoring studies, both high spatial resolution and high temporal resolution of the image data are important. However, the acquisition of optical data is limited by persistent cloud contamination in cloudy regions. This thesis explores the potential of polarimetric Synthetic Aperture Radar (SAR) satellite data and a spatio-temporal data fusion approach for crop monitoring and yield estimation in southwestern Ontario. Firstly, the sensitivity of 16 parameters derived from C-band Radarsat-2 polarimetric SAR data to crop height and fractional vegetation cover (FVC) was investigated. The results show that the SAR backscatter is affected by many factors unrelated to the crop canopy, such as the incidence angle and the soil background, and that the degree of sensitivity varies with crop type, growth stage, and the polarimetric SAR parameter considered. Secondly, the Minimum Noise Fraction (MNF) transformation was, for the first time, applied to multitemporal Radarsat-2 polarimetric SAR data for cropland mapping with a random forest classifier. An overall classification accuracy of 95.89% was achieved using the MNF transformation of the multi-temporal coherency matrix acquired from July to November. Then, a spatio-temporal data fusion method was developed to generate Normalized Difference Vegetation Index (NDVI) time series with both high spatial and high temporal resolution in heterogeneous regions using Landsat and MODIS imagery. The proposed method outperforms two other widely used methods. Finally, an improved crop phenology detection method was proposed, and the phenology information was then forced into the Simple Algorithm for Yield Estimation (SAFY) model to estimate crop biomass and yield. Compared with the SAFY model without the remotely sensed phenology and with a simple light use efficiency (LUE) model, the SAFY model incorporating the remotely sensed phenology improved the accuracy of biomass estimation by about 4% in relative Root Mean Square Error (RRMSE). The studies in this thesis improve the ability to monitor crop growth status and production at the subfield scale.
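
    The roughly 4% gain is reported in relative RMSE. As a point of reference, a common definition of RRMSE (the RMSE normalized by the mean of the observations, expressed in percent) can be computed as in the short sketch below; whether the thesis uses exactly this normalization is an assumption, and the array names are illustrative.

```python
import numpy as np

def rrmse(observed, predicted):
    """Relative root mean square error, in percent of the observed mean."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# e.g. comparing two biomass estimators against field measurements:
# rrmse(field_biomass, safy_with_phenology) vs. rrmse(field_biomass, safy_baseline)
```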

    An improved image fusion approach based on the enhanced spatial and temporal adaptive reflectance fusion model

    High spatiotemporal resolution satellite imagery is useful for natural resource management and for monitoring land-use and land-cover change and ecosystem dynamics. However, acquisitions from a single satellite can be limited, due to trade-offs in either spatial or temporal resolution. The spatial and temporal adaptive reflectance fusion model (STARFM) and the enhanced STARFM (ESTARFM) were developed to produce new images with both high spatial and high temporal resolution using images from multiple sources. Nonetheless, these models have some shortcomings, especially in the procedure for searching spectrally similar neighbor pixels. In order to improve these models' capacity and accuracy, we developed a modified version of ESTARFM (mESTARFM) and tested the performance of the two approaches (ESTARFM and mESTARFM) in three study areas located in Canada and China at different time intervals. The results show that mESTARFM improved the accuracy of the simulated reflectance at fine resolution to some extent.
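
    A common form of the similar-pixel search discussed here follows the original STARFM rule: a neighbour inside the moving window counts as spectrally similar if its reflectance differs from the central pixel by less than the band standard deviation scaled by an assumed number of land cover classes. The sketch below illustrates that rule under those assumptions; the factor of 2 and the default class count are the commonly cited choices, not necessarily those used in mESTARFM.

```python
import numpy as np

def similar_pixel_mask(window, centre_value, band_std, n_classes=4):
    """STARFM-style spectral similarity test inside a moving window.

    window       -- 2-D array of fine-resolution reflectance around the centre
    centre_value -- reflectance of the central pixel
    band_std     -- standard deviation of the whole fine-resolution band
    n_classes    -- assumed number of land cover classes in the scene
    """
    threshold = band_std * 2.0 / n_classes
    return np.abs(window - centre_value) <= threshold
```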

    Mapping forests in monsoon Asia with ALOS PALSAR 50-m mosaic images and MODIS imagery in 2010.

    Extensive forest changes have occurred in monsoon Asia, substantially affecting climate, the carbon cycle, and biodiversity. Accurate forest cover maps at fine spatial resolutions are required to qualify and quantify these effects. In this study, an algorithm was developed to map forests in 2010, using structure and biomass information from the Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) mosaic dataset and phenological information from MODerate Resolution Imaging Spectroradiometer (MOD13Q1 and MOD09A1) products. Our forest map (PALSARMOD50 m F/NF) was assessed with randomly selected ground truth samples from high spatial resolution images and had an overall accuracy of 95%. The total area of forests in monsoon Asia in 2010 was estimated to be ~6.3 × 10⁶ km². The distribution of evergreen and deciduous forests agreed reasonably well with the median Normalized Difference Vegetation Index (NDVI) in winter. The PALSARMOD50 m F/NF map showed good spatial and areal agreement with selected forest maps generated by the Japan Aerospace Exploration Agency (JAXA F/NF), European Space Agency (ESA F/NF), Boston University (MCD12Q1 F/NF), Food and Agriculture Organization (FAO FRA), and University of Maryland (Landsat forests), but relatively large differences and uncertainties remained for tropical forests and for evergreen and deciduous forests.
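
    The underlying decision logic is rule based: L-band backscatter carries the structural and biomass signal (forests generally return more HV), while the MODIS products supply phenological screening. A toy version of such a rule is sketched below; the variable names and threshold values are placeholders for illustration, not those used to produce PALSARMOD50 m F/NF.

```python
import numpy as np

def classify_forest(hh_db, hv_db, ndvi_max,
                    hh_thresh=-8.0, hv_thresh=-15.0, ndvi_thresh=0.5):
    """Toy PALSAR + MODIS forest/non-forest rule on co-registered arrays.

    hh_db, hv_db -- PALSAR backscatter coefficients in dB
    ndvi_max     -- annual maximum MODIS NDVI, screening out sparse vegetation
    Thresholds are illustrative placeholders, not the published values.
    """
    structural = (hh_db > hh_thresh) & (hv_db > hv_thresh)  # woody-structure signal
    phenological = ndvi_max > ndvi_thresh                   # enough green vegetation
    return structural & phenological
```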

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and fine temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional image spatial–temporal fusion methods focus on blending pixel reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date that series. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Three experiments based on simulated and real remotely sensed images were undertaken to evaluate STIMFM for studies of land cover change. These experiments included a comparative assessment of methods based on a single-date image, such as super-resolution approaches (e.g., pixel swapping-based super-resolution mapping), and of the state-of-the-art spatial–temporal fusion approach, in which the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) predict the fine-resolution images and the maximum likelihood classifier and an automated land cover updating approach based on integrated change detection and classification are then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classifying the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, a case to which ESTARFM and FSDAF, which require correlated reflectance bands in the coarse and fine images, cannot be applied. Notably, STIMFM achieved higher accuracy for pixels of both changed and unchanged land cover than the other methods.
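
    One part of the evaluation described above, reporting accuracy separately for pixels whose reference land cover changed between the map dates and for pixels that did not, can be computed as in the following sketch (the array names are illustrative; all inputs are co-registered label maps).

```python
import numpy as np

def accuracy_by_change(reference_t0, reference_t1, predicted_t1):
    """Accuracy at t1 for change pixels and for no-change pixels, separately."""
    changed = reference_t0 != reference_t1
    correct = predicted_t1 == reference_t1
    acc_changed = correct[changed].mean() if changed.any() else float("nan")
    acc_unchanged = correct[~changed].mean() if (~changed).any() else float("nan")
    return acc_changed, acc_unchanged
```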

    Reconstruction of Daily 30 m Data from HJ CCD, GF-1 WFV, Landsat, and MODIS Data for Crop Monitoring

    With the recent launch of new satellites and the development of spatiotemporal data fusion methods, we are entering an era of high spatiotemporal resolution remote-sensing analysis. This study proposed a method to reconstruct daily 30 m remote-sensing data for monitoring crop types and phenology in two study areas located in Xinjiang Province, China. First, the Spatial and Temporal Data Fusion Approach (STDFA) was used to reconstruct time series of high spatiotemporal resolution data from Huanjing satellite charge coupled device (HJ CCD), Gaofen satellite no. 1 wide field-of-view camera (GF-1 WFV), Landsat, and Moderate Resolution Imaging Spectroradiometer (MODIS) data. Then, the reconstructed time series were used to extract crop phenology with a Hybrid Piecewise Logistic Model (HPLM). In addition, the onset dates of greenness increase (OGI) and greenness decrease (OGD) were calculated from the simulated phenology. Finally, crop types were mapped using the phenology information. The results show that the reconstructed high spatiotemporal data had high quality, with a proportion of good observations (PGQ) higher than 0.95, and that the HPLM approach can simulate time series of the Normalized Difference Vegetation Index (NDVI) very well, with R2 ranging from 0.635 to 0.952 in Luntai and from 0.719 to 0.991 in Bole. The reconstructed high spatiotemporal data were able to capture crop phenology in single crop fields, providing a much more detailed pattern than that from time series MODIS data. Moreover, crop types could be classified using the reconstructed time series of high spatiotemporal data, with overall accuracies of 0.91 in Luntai and 0.95 in Bole, which are 0.028 and 0.046 higher, respectively, than those obtained using multi-temporal Landsat NDVI data.
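
    The HPLM belongs to the logistic family of phenology models, in which a single green-up segment is commonly written as NDVI(t) = c / (1 + exp(a + b·t)) + d, with OGI and OGD derived from the curvature change rate of the fitted curve. The sketch below fits that simplified single-segment logistic to a synthetic NDVI series with SciPy; the piecewise, vegetation-stress form of the full HPLM is not reproduced here, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_ndvi(t, a, b, c, d):
    """Single logistic segment: NDVI(t) = c / (1 + exp(a + b*t)) + d."""
    return c / (1.0 + np.exp(a + b * t)) + d

# Synthetic 8-day NDVI series covering a green-up period (illustrative only).
t = np.arange(0, 180, 8, dtype=float)
rng = np.random.default_rng(0)
ndvi = 0.15 + 0.55 / (1.0 + np.exp(4.0 - 0.08 * t)) + rng.normal(0.0, 0.01, t.size)

# Fit the four logistic parameters; OGI/OGD would then be located from the
# curvature change rate of the fitted curve.
params, _ = curve_fit(logistic_ndvi, t, ndvi, p0=(4.0, -0.08, 0.5, 0.2))
print(params)
```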

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.

    Utilizing Satellite Fusion Methods to Assess Vegetation Phenology in a Semi-Arid Ecosystem

    Dryland ecosystems cover over 40% of the Earth's surface and are highly heterogeneous systems dependent upon rainfall and temperature. Climate change and anthropogenic activities have caused considerable shifts in vegetation and fire regimes, leading to desertification, habitat loss, and the spread of invasive species. Modern public satellite imagery is unable to detect the fine temporal and spatial changes that occur in drylands: these ecosystems can undergo rapid phenological changes, and the heterogeneity of the ground cover cannot be resolved at coarse pixel sizes (e.g., 250 m). We develop a system that uses data from multiple satellites to model finer-resolution data for detecting phenology in a semi-arid ecosystem, a type of dryland ecosystem. The first study in this thesis uses recent developments in readily available satellite imagery, coupled with new systems for large-scale data analysis. Google Earth Engine is used with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to create high-resolution imagery from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS). The 250 m daily MODIS data are downscaled using the 16-day, 30 m Landsat imagery, resulting in daily 30 m data. The downscaled images are used to observe vegetation phenology over the semi-arid region of the Morley Nelson Snake River Birds of Prey National Conservation Area in southwestern Idaho, USA. We found that the fused satellite imagery has high accuracy, with R2 ranging from 0.73 to 0.99 when comparing fusion products to the true Landsat imagery. From these data, we observed the phenology of native and invasive vegetation, which can help scientists develop models and classifications of this ecosystem. The second study in this thesis builds upon the fused satellite imagery to understand pre- and post-fire vegetation response in the same ecosystem. We investigate the phenology of five areas that burned in 2012 by using the daily fusion imagery to derive the normalized difference vegetation index (NDVI, a measure of vegetation greenness) in areas dominated by grass (n=4) and shrub (n=1). The five areas also had a range of historical burns before 2012, and overall we investigated the phenology of these areas over a decade. This proof of concept yielded observations of the relationship between the timing of fire and the recovery of vegetation greenness. For example, we found that early- and late-season fires take the longest for vegetation greenness to recover, and that the number of historical fires has little impact on the vegetation greenness response once a grass-dominated area has burned at least once. The greenness dynamics of the shrub-dominated study site provide insight into the potential to monitor post-fire invasion by nonnative grasses. Ultimately, the systems developed in this thesis can be used to monitor semi-arid ecosystems over long time periods at high spatial and temporal resolution.
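
    The R2 values quoted above come from comparing fused products with true Landsat observations. A minimal version of that check, computing NDVI from the red and near-infrared bands and then the squared Pearson correlation between the two images, is sketched below; whether the thesis uses squared correlation or the coefficient of determination is an assumption, and the variable names are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-10)

def r_squared(reference, estimate):
    """Squared Pearson correlation between two co-registered images."""
    r = np.corrcoef(reference.ravel(), estimate.ravel())[0, 1]
    return r ** 2

# e.g. r_squared(ndvi(landsat_nir, landsat_red), ndvi(fused_nir, fused_red))
```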