
    An improved image fusion approach based on the enhanced spatial and temporal adaptive reflectance fusion model

    High spatiotemporal resolution satellite imagery is useful for natural resource management and for monitoring land-use and land-cover change and ecosystem dynamics. However, acquisitions from a single satellite can be limited due to trade-offs between spatial and temporal resolution. The spatial and temporal adaptive reflectance fusion model (STARFM) and the enhanced STARFM (ESTARFM) were developed to produce new images with both high spatial and high temporal resolution using images from multiple sources. Nonetheless, these models have some shortcomings, especially in the procedure for searching spectrally similar neighbor pixels. In order to improve these models' capacity and accuracy, we developed a modified version of ESTARFM (mESTARFM) and tested the performance of two approaches (ESTARFM and mESTARFM) in three study areas located in Canada and China at different time intervals. The results show that mESTARFM improved the accuracy of the simulated reflectance at fine resolution to some extent
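    The neighbor-pixel search that the STARFM family performs is the step mESTARFM modifies. As a rough illustration, a minimal sketch of a STARFM-style similar-pixel search within a moving window follows; the window size, class count, and the 2*sigma/m similarity threshold are illustrative assumptions, not the exact published parameters.

```python
import numpy as np

def similar_pixels(fine_img, row, col, win=31, n_classes=4):
    """Find spectrally similar neighbors of a center pixel inside a moving
    window, following a STARFM-style rule |neighbor - center| <= 2*sigma/m,
    where sigma is the window standard deviation and m an assumed class count.

    fine_img : 2-D array of fine-resolution reflectance (single band).
    Returns (rows, cols) indices of similar pixels in image coordinates.
    """
    half = win // 2
    r0, r1 = max(0, row - half), min(fine_img.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(fine_img.shape[1], col + half + 1)
    window = fine_img[r0:r1, c0:c1]
    center = fine_img[row, col]
    # Threshold: pixels within 2*std/n_classes of the center count as similar.
    thresh = 2.0 * window.std() / n_classes
    mask = np.abs(window - center) <= thresh
    rows, cols = np.nonzero(mask)
    return rows + r0, cols + c0
```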

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than the other models except one-pair learning; (2) Fit-FC had the highest computing efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but was the least computationally efficient model; (4) STARFM was good at predicting phenological change but is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications
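    The paper's exact LHI and TVI formulations are not reproduced in this abstract, so the sketch below uses simple stand-ins to illustrate the idea: local reflectance standard deviation for spatial heterogeneity, and mean absolute change between two dates for temporal variation.

```python
import numpy as np
from scipy.ndimage import generic_filter

def heterogeneity_index(image, win=5):
    """Stand-in LHI: local standard deviation of reflectance in a win x win
    window, averaged over the scene (higher = more heterogeneous)."""
    local_std = generic_filter(image.astype(float), np.std, size=win)
    return float(local_std.mean())

def temporal_variation_index(image_t1, image_t2):
    """Stand-in TVI: mean absolute reflectance change between two dates."""
    return float(np.abs(image_t2.astype(float) - image_t1.astype(float)).mean())
```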

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and fine temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional spatial–temporal image fusion methods focus on blending pixel reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions, using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date the series. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Three experiments based on simulated and real remotely sensed images were undertaken to evaluate STIMFM for studies of land cover change. These experiments included comparative assessment against methods based on a single-date image, such as super-resolution approaches (e.g., pixel-swapping-based super-resolution mapping), and against the state-of-the-art spatial–temporal fusion approach that uses the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which a maximum likelihood classifier and an automated land cover updating approach based on integrated change detection and classification were then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classifying the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, a case to which ESTARFM and FSDAF, which require correlated reflectance bands in the coarse and fine images, cannot be applied. Notably, STIMFM achieved higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods
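    In the benchmark pipelines, fused reflectance is classified after fusion. Under a Gaussian class assumption, maximum likelihood classification with equal priors is equivalent to quadratic discriminant analysis, so a minimal sketch of that step (using scikit-learn, with illustrative variable names) might look like this:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def mlc_classify(fused_image, train_pixels, train_labels):
    """Gaussian maximum likelihood classification of a fused reflectance
    image of shape (bands, rows, cols). With equal priors, Gaussian MLC
    coincides with quadratic discriminant analysis.

    train_pixels : (n_samples, n_bands) training spectra.
    train_labels : (n_samples,) integer class labels.
    Returns a (rows, cols) land cover map.
    """
    n_classes = len(np.unique(train_labels))
    clf = QuadraticDiscriminantAnalysis(priors=np.full(n_classes, 1.0 / n_classes))
    clf.fit(train_pixels, train_labels)
    bands, rows, cols = fused_image.shape
    flat = fused_image.reshape(bands, -1).T  # (rows*cols, bands)
    return clf.predict(flat).reshape(rows, cols)
```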

    Characterizing Spatiotemporal Patterns of White Mold in Soybean across South Dakota Using Remote Sensing

    Soybean is among the most important crops, cultivated primarily for beans, which are used for food, feed, and biofuel. According to FAO, the United States was the biggest soybean producer in 2016. The main soybean producing regions in the United States are the Corn Belt and the lower Mississippi Valley. Despite its importance, soybean production is reduced by several diseases. Sclerotinia stem rot, also known as white mold, a fungal disease caused by the fungus Sclerotinia sclerotiorum, is among the top ten soybean diseases. The disease may attack several plant species and considerably reduce yield. According to previous reports, environmental conditions corresponding to high yield potential are most conducive to white mold development. These conditions include cool temperatures (12-24 °C) and prolonged wet, moist conditions (70-120 h), generally resulting from rain; disease development also requires the presence of a susceptible soybean variety. To better understand white mold development in the field, there is a need to investigate its spatiotemporal characteristics and provide accurate estimates of the damage that white mold may cause. Current, accurate data about white mold are scarce, especially at the county or larger scale. Studies that explored the characteristics of white mold were generally field oriented and local in scale, and when spectral characteristics were investigated, the authors used spectroradiometers that are not accessible to farmers or the general public and are mostly used for experimental modeling. This study employed freely available Landsat 8 images to quantify white mold in South Dakota. Images acquired in May and July were used to map the land cover and extract the soybean mask, while an image acquired in August was used to map and quantify white mold using the random forest algorithm. The land cover map was produced with an overall accuracy of 95%, while white mold was mapped with an overall accuracy of 99%. White mold area estimates were 132 km2, 88 km2, and 190 km2, representing 31%, 22%, and 29% of the total soybean area for Marshall, Codington, and Day counties, respectively. This study also explored the spatial characteristics of white mold in soybean fields and its impact on yield. The yield distribution exhibited a significant positive spatial autocorrelation (Moran's I = 0.38, p-value < 0.001 for the Moody field; Moran's I = 0.45, p-value < 0.001 for the Marshall field) as evidence of clustering. Significant clusters could be observed in white mold areas (low-low clusters) or in healthy soybeans (high-high clusters). The yield loss caused by the most severe white mold was estimated at 36% and 56% for the Moody and Marshall fields, respectively, with the most accurate loss estimation occurring between late August and early September. Finally, this study modeled the temporal evolution of white mold using a logistic regression analysis in which white mold was modeled as a function of NDVI. The model was successful, and was further improved by including the day of year (DOY). The areas under the curve (AUC) were 0.95 for the NDVI model and 0.99 for the NDVI+DOY model. A comparison of NDVI temporal change between different sites showed that white mold temporal development was affected by site location, which could be influenced by many local parameters such as soil properties, local elevation, management practices, or weather parameters.
This study showed the importance of freely available remotely sensed satellite images in the estimation of crop disease areas and in the characterization of the spatial and temporal patterns of crop disease; this could help in timely disease damage assessment
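    As a rough illustration of the final modeling step, the sketch below fits the two logistic models the study compares (white mold presence as a function of NDVI, and of NDVI plus DOY) and scores each by AUC; the function and variable names are illustrative, not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_white_mold_models(ndvi, doy, labels):
    """Fit logistic models of white mold presence (labels: 0/1) from NDVI
    alone and from NDVI plus day of year (DOY). Returns the AUC of each
    model evaluated on the training data."""
    y = np.asarray(labels)
    X_ndvi = np.asarray(ndvi, dtype=float).reshape(-1, 1)
    X_both = np.column_stack([ndvi, doy])
    m_ndvi = LogisticRegression().fit(X_ndvi, y)
    m_both = LogisticRegression().fit(X_both, y)
    auc_ndvi = roc_auc_score(y, m_ndvi.predict_proba(X_ndvi)[:, 1])
    auc_both = roc_auc_score(y, m_both.predict_proba(X_both)[:, 1])
    return auc_ndvi, auc_both
```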

    Scaling Effect of Fused ASTER-MODIS Land Surface Temperature in an Urban Environment

    There is limited research on land surface temperature (LST) simulation using image fusion techniques, especially studies addressing the downscaling effect of LST image fusion. LST simulation and the associated downscaling effect can potentially benefit thermal studies requiring both high spatial and high temporal resolutions. This study simulated LSTs based on observed Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) LST imagery with the Spatial and Temporal Adaptive Reflectance Fusion Model, and investigated the downscaling effect of LST image fusion at 15, 30, 60, 90, 120, 250, 500, and 1000 m spatial resolutions. The study area partially covered the City of Los Angeles, California, USA, and surrounding areas. The reference images (observed ASTER and MODIS LST imagery) were acquired on 04/03/2007 and 07/01/2007, with simulated LSTs produced for 04/28/2007. Three image resampling methods (Cubic Convolution, Bilinear Interpolation, and Nearest Neighbor) were used during the downscaling and upscaling processes, and the resulting LST simulations were compared. Results indicated that the observed and simulated ASTER LST images (date 04/28/2007, spatial resolution 90 m) had high agreement in terms of spatial variations and basic statistics. Developed urban lands showed higher LSTs (lighter tones), while mountainous areas showed lower LSTs (darker tones). The Cubic Convolution and Bilinear Interpolation resampling methods yielded better results than the Nearest Neighbor method across the scales from 15 to 1000 m. The simulated LSTs from image fusion can be used as valuable inputs in heat-related studies that require frequent LST measurements at fine spatial resolutions, e.g., seasonal movements of urban heat islands, monthly energy budget assessment, and temperature-driven epidemiology. The observed scale independence of the proposed image fusion method can facilitate image selection for LST studies at various locations
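    The resampling comparison reduces to regridding the same LST array at each target scale with each method. A minimal sketch, using scipy's spline interpolation as a stand-in (order 0 for Nearest Neighbor, 1 for Bilinear Interpolation, and 3 as an approximation of Cubic Convolution, which strictly uses a different kernel):

```python
import numpy as np
from scipy.ndimage import zoom

# Spline orders standing in for the three resampling methods compared.
RESAMPLERS = {"nearest": 0, "bilinear": 1, "cubic": 3}

def rescale_lst(lst, src_res, dst_res, method="cubic"):
    """Resample an LST grid from src_res to dst_res (meters per pixel).
    A factor > 1 downscales to finer pixels; < 1 upscales to coarser."""
    factor = src_res / dst_res
    return zoom(np.asarray(lst, dtype=float), factor,
                order=RESAMPLERS[method])

# Example: take a 90 m ASTER LST grid to each scale used in the study.
# for res in (15, 30, 60, 120, 250, 500, 1000):
#     rescaled = rescale_lst(aster_lst_90m, 90, res, method="bilinear")
```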

    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat provides fine spatial but coarse temporal resolution, while the moderate resolution imaging spectroradiometer (MODIS) provides fine temporal but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion. Due to the potentially large differences between the spatial resolutions of the images to be fused, the geometric registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and fusion accuracy was investigated under different temporal distances between images, different spatial patterns within the images, and different fusion methods (the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC, two typical spatio-temporal fusion methods). The results show that registration error has a significant impact on the accuracy of spatio-temporal fusion: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, the accuracy of fusion depended not on the temporal distance between the images to be fused, but on their statistical correlation. Finally, the Fit-FC method was found to be more accurate than STARFM under all registration error scenarios
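    Such an experiment can be emulated by injecting a known sub-pixel shift into the fine image before fusing and tracking the resulting error. A minimal sketch, assuming a fusion routine (e.g., a STARFM or Fit-FC implementation) is available as a callable; the signature is a simplifying assumption:

```python
import numpy as np
from scipy.ndimage import shift

def accuracy_vs_registration_error(fine_img, coarse_img, fuse, reference,
                                   max_shift=3.0, step=0.5):
    """Quantify how simulated misregistration degrades fusion accuracy.

    fuse      : callable(fine, coarse) -> predicted fine image (assumed).
    reference : observed fine image on the prediction date.
    Returns a list of (shift_in_pixels, RMSE) pairs.
    """
    results = []
    for dx in np.arange(0.0, max_shift + step, step):
        # Introduce a known diagonal offset into the fine image.
        shifted = shift(np.asarray(fine_img, dtype=float), (dx, dx), order=1)
        pred = fuse(shifted, coarse_img)
        rmse = float(np.sqrt(np.mean((pred - reference) ** 2)))
        results.append((dx, rmse))
    return results
```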

    Blending Landsat and MODIS Data to Generate Multispectral Indices: A Comparison of “Index-then-Blend” and “Blend-then-Index” Approaches

    The objective of this paper was to evaluate the accuracy of two advanced blending algorithms, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), in downscaling Moderate Resolution Imaging Spectroradiometer (MODIS) indices to the spatial resolution of Landsat. We tested two approaches, (i) "Index-then-Blend" (IB) and (ii) "Blend-then-Index" (BI), when simulating nine indices that are widely used for vegetation studies, environmental moisture assessment, and standing water identification. Landsat-like indices, generated using both IB and BI, were simulated on a total of 45 dates from three sites. The outputs were then compared with indices calculated from observed Landsat data, and the pixel-to-pixel accuracy of each simulation was assessed by calculating the (i) bias, (ii) R, and (iii) Root Mean Square Deviation (RMSD). The IB approach produced higher accuracies than the BI approach for both blending algorithms for all nine indices at all three sites. We also found that the relative performance of the STARFM and ESTARFM algorithms depended on the spatial and temporal variances of the Landsat-MODIS input indices. Our study suggests that the IB approach should be implemented for blending environmental indices, as it was: (i) less computationally expensive, blending a single index rather than multiple bands; (ii) more accurate, due to less error propagation; and (iii) less sensitive to the choice of algorithm
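    The IB/BI distinction is simply where the band arithmetic happens relative to the blending step. A minimal sketch for NDVI, treating the configured STARFM/ESTARFM run as an opaque `blend` callable (a simplifying assumption; real runs also take the Landsat-MODIS pair dates), together with the three accuracy metrics the paper reports:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-10)

def index_then_blend(blend, modis_red, modis_nir):
    """IB: compute the index at coarse resolution, then blend one band."""
    return blend(ndvi(modis_red, modis_nir))

def blend_then_index(blend, modis_red, modis_nir):
    """BI: blend each reflectance band, then compute the index."""
    return ndvi(blend(modis_red), blend(modis_nir))

# Pixel-to-pixel accuracy metrics against an index from observed Landsat.
def bias(pred, obs):
    return float(np.mean(pred - obs))

def rmsd(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r(pred, obs):
    return float(np.corrcoef(pred.ravel(), obs.ravel())[0, 1])
```

    Note how IB invokes the blending algorithm once per index while BI invokes it once per input band, which is the source of the computational saving the paper reports.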

    A Cross Comparison of Spatiotemporally Enhanced Springtime Phenological Measurements From Satellites and Ground in a Northern U.S. Mixed Forest

    Cross comparison of satellite-derived land surface phenology (LSP) and ground measurements is useful to ensure the relevance of detected seasonal vegetation change to the underlying biophysical processes. While standard 16-day, 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation index (VI)-based springtime LSP has been evaluated in previous studies, it remains unclear whether LSP with enhanced temporal and spatial resolutions can capture additional details of ground phenology. In this paper, we compared LSP derived from 500-m daily MODIS and 30-m MODIS-Landsat fused VI data with landscape phenology (LP) in a northern U.S. mixed forest. LP was previously developed from intensively observed deciduous and coniferous tree phenology using an upscaling approach. Results showed that daily MODIS-based LSP consistently estimated greenup onset dates at the study area (625 m × 625 m) level with a mean absolute error (MAE) of 4.48 days, slightly better than the 16-day standard VI (MAE of 4.63 days). For the observed study areas, the time series with an increased number of observations confirmed that post-bud-burst deciduous tree phenology contributes the most to vegetation reflectance change. Moreover, the fused VI time series demonstrated closer correspondence with LP at the community level (0.1-20 ha) than MODIS alone at the study area level (390 ha). The fused LSP captured greenup onset dates for forest communities of varied sizes and compositions with an overall MAE of four days. This study supports further use of spatiotemporally enhanced LSP for more precise phenological monitoring
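    Greenup onset extraction from a VI time series is commonly done by locating a crossing of the seasonal amplitude; the specific method used in this paper is not given in the abstract, so the sketch below uses a simple 50%-amplitude threshold with linear interpolation as an illustrative stand-in, plus the MAE used to score onset dates against ground observations.

```python
import numpy as np

def greenup_onset(doy, vi, threshold=0.5):
    """Estimate greenup onset as the day of year (DOY) where the VI first
    crosses a fixed fraction of its seasonal amplitude. The 50% threshold
    is an illustrative assumption, not this paper's method."""
    doy = np.asarray(doy, dtype=float)
    vi = np.asarray(vi, dtype=float)
    level = vi.min() + threshold * (vi.max() - vi.min())
    above = np.nonzero(vi >= level)[0]
    if len(above) == 0:
        return None                     # VI never reaches the threshold
    i = above[0]
    if i == 0:
        return float(doy[0])
    # Linearly interpolate between the bracketing observations.
    f = (level - vi[i - 1]) / (vi[i] - vi[i - 1])
    return float(doy[i - 1] + f * (doy[i] - doy[i - 1]))

def mean_absolute_error(pred_days, obs_days):
    """MAE in days between satellite-derived and ground onset dates."""
    return float(np.mean(np.abs(np.asarray(pred_days) - np.asarray(obs_days))))
```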