500 research outputs found
Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation
In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust regardless of variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computing efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, but it is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
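The weight function-based methods named above (STARFM, Fit-FC) all build on the same additive core: transfer the temporal change observed at coarse resolution onto the fine-resolution image. A minimal sketch of that core idea, with the similar-pixel search and distance/spectral weighting that make STARFM robust deliberately omitted (the function name and toy data here are illustrative, not from any of the compared implementations):

```python
import numpy as np

def temporal_difference_fusion(fine_t1, coarse_t1, coarse_t2):
    """Predict a fine-resolution image at t2 from a fine image at t1
    and coarse images at t1 and t2 (coarse images assumed already
    resampled to the fine grid). This is only the additive core shared
    by weight function-based STIF methods; the neighborhood weighting
    of STARFM/Fit-FC is not reproduced here."""
    return fine_t1 + (coarse_t2 - coarse_t1)

# Toy example: a uniform brightening of 0.1 observed at coarse scale
# is transferred to the fine-resolution prediction.
fine_t1 = np.array([[0.2, 0.4], [0.3, 0.5]])
coarse_t1 = np.full((2, 2), 0.35)
coarse_t2 = np.full((2, 2), 0.45)
pred = temporal_difference_fusion(fine_t1, coarse_t1, coarse_t2)
```

In practice the methods differ mainly in how they correct this raw difference: by weighting spectrally similar neighbors (STARFM), regression fitting plus residual compensation (Fit-FC), or spectral unmixing (UBDF).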
Blending Landsat and MODIS Data to Generate Multispectral Indices: A Comparison of “Index-then-Blend” and “Blend-then-Index” Approaches
The objective of this paper was to evaluate the accuracy of two advanced blending algorithms, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), in downscaling Moderate Resolution Imaging Spectroradiometer (MODIS) indices to the spatial resolution of Landsat. We tested two approaches: (i) "Index-then-Blend" (IB); and (ii) "Blend-then-Index" (BI) when simulating nine indices, which are widely used for vegetation studies, environmental moisture assessment and standing water identification. Landsat-like indices, generated using both IB and BI, were simulated on 45 dates in total from three sites. The outputs were then compared with indices calculated from observed Landsat data, and the pixel-to-pixel accuracy of each simulation was assessed by calculating the: (i) bias; (ii) R; and (iii) Root Mean Square Deviation (RMSD). The IB approach produced higher accuracies than the BI approach for both blending algorithms for all nine indices at all three sites. We also found that the relative performance of the STARFM and ESTARFM algorithms depended on the spatial and temporal variances of the Landsat-MODIS input indices. Our study suggests that the IB approach should be implemented for blending of environmental indices, as it was: (i) less computationally expensive, due to blending single indices rather than multiple bands; (ii) more accurate, due to less error propagation; and (iii) less sensitive to the choice of algorithm.
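The distinction between the two pipelines can be made concrete with NDVI. In this sketch the `blend` function is a simple temporal-difference placeholder standing in for STARFM/ESTARFM, and the single-pixel band values are synthetic; because NDVI is a nonlinear function of the bands, the two orderings generally do not agree:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def blend(fine_t1, coarse_t1, coarse_t2):
    # Placeholder blend: transfer the coarse-scale temporal change to
    # the fine image (a stand-in for STARFM/ESTARFM, not the real thing).
    return fine_t1 + (coarse_t2 - coarse_t1)

# Synthetic single-pixel bands (fine = Landsat-like, coarse = MODIS-like).
f_red1, f_nir1 = np.array([0.10]), np.array([0.40])   # fine, date t1
c_red1, c_nir1 = np.array([0.12]), np.array([0.38])   # coarse, date t1
c_red2, c_nir2 = np.array([0.08]), np.array([0.46])   # coarse, date t2

# Index-then-Blend: compute NDVI first, then blend one index layer.
ib = blend(ndvi(f_nir1, f_red1), ndvi(c_nir1, c_red1), ndvi(c_nir2, c_red2))

# Blend-then-Index: blend each band, then compute NDVI.
bi = ndvi(blend(f_nir1, c_nir1, c_nir2), blend(f_red1, c_red1, c_red2))
```

IB blends one index layer instead of two (or more) bands, which is the computational saving the abstract reports; the divergence between `ib` and `bi` illustrates why the ordering matters for error propagation.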
An improved image fusion approach based on the enhanced spatial and temporal adaptive reflectance fusion model
High spatiotemporal resolution satellite imagery is useful for natural resource management and for monitoring land-use and land-cover change and ecosystem dynamics. However, acquisitions from a single satellite can be limited, due to trade-offs in either spatial or temporal resolution. The spatial and temporal adaptive reflectance fusion model (STARFM) and the enhanced STARFM (ESTARFM) were developed to produce new images with high spatial and high temporal resolution using images from multiple sources. Nonetheless, these models have some shortcomings, especially in the procedure for searching for spectrally similar neighbor pixels. In order to improve these models' capacity and accuracy, we developed a modified version of ESTARFM (mESTARFM) and tested the performance of the two approaches (ESTARFM and mESTARFM) in three study areas located in Canada and China at different time intervals. The results show that mESTARFM improved the accuracy of the simulated reflectance at fine resolution to some extent.
Development of an algorithm for fusing remote sensing images
In this thesis, a novel approach for fusing images from the OLI (LANDSAT 8 satellite) and MODIS (Aqua/Terra satellites) sensors is presented. The proposed method uses the spectral differences between two MODIS images, at dates t1 and t2, and an OLI image at date t1 to predict, by means of a modified version of the pyramid algorithm, the OLI image at date t2. The predicted result showed good visual quality, and a quantitative error analysis using several metrics determined that this methodology is a good approach for fusing OLI and MODIS images.
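The core move (propagating the MODIS t1-to-t2 spectral difference up to the OLI grid) can be sketched loosely as follows. This is only an illustration of the pyramid-expansion idea under simplifying assumptions (nearest-neighbour expansion, power-of-two scale ratio); the thesis's modified pyramid algorithm is not reproduced here, and all names are hypothetical:

```python
import numpy as np

def upsample2(img):
    """One pyramid-expansion step: double the resolution by
    nearest-neighbour replication of each pixel."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid_predict(oli_t1, modis_t1, modis_t2, levels=2):
    """Predict an OLI-like image at t2 by expanding the MODIS spectral
    difference (t2 - t1) through `levels` pyramid steps up to the OLI
    grid and adding it to the OLI image at t1. A loose sketch only."""
    diff = modis_t2 - modis_t1
    for _ in range(levels):
        diff = upsample2(diff)  # bring the coarse difference to the fine grid
    return oli_t1 + diff

# Toy example: a 1x1 "MODIS" pixel covering a 4x4 "OLI" patch.
oli_t1 = np.linspace(0.1, 0.4, 16).reshape(4, 4)
pred = pyramid_predict(oli_t1, np.array([[0.3]]), np.array([[0.4]]), levels=2)
```

A real pyramid scheme would use Gaussian reduction/expansion filters rather than plain replication, so that high-frequency OLI detail and low-frequency MODIS change are merged per pyramid level.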
A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity
High-spatial-resolution satellites usually have the constraint of a low temporal frequency, which leads to long periods without information in cloudy areas. Conversely, low-spatial-resolution satellites have more frequent revisit cycles. Combining information from high- and low-spatial-resolution satellites is considered a key factor for studies that require dense time series of high-resolution images, e.g., crop monitoring. There are several fusion methods in the literature, but they are time-consuming and complicated to implement. Moreover, the local evaluation of the fused images is rarely analyzed. In this paper, we present a simple and fast fusion method based on a weighted average of two input images (H and L), which are weighted by their temporal validity with respect to the image to be fused. The method was applied to two years (2009-2010) of Landsat and MODIS (MODerate Resolution Imaging Spectroradiometer) images acquired over a cropped area in Brazil. The fusion method was evaluated at global and local scales. The results show that the fused images reproduced reliable crop temporal profiles and correctly delineated the boundaries between two neighboring fields. The greatest advantages of the proposed method are its execution time and ease of use, which allow a fused image to be obtained in less than five minutes.
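A weighted average of this kind is simple enough to sketch directly. The inverse-temporal-distance weighting below is an assumption standing in for the paper's temporal-validity estimate, which may be defined differently; dates and pixel values are synthetic:

```python
import numpy as np

def temporal_validity_fusion(img_h, date_h, img_l, date_l, date_pred):
    """Fuse two co-registered images by a weighted average in which each
    input is weighted by its temporal proximity ("validity") to the
    prediction date. Minimal sketch; the paper's actual validity measure
    may differ from this inverse-distance form."""
    d_h = abs(date_pred - date_h)
    d_l = abs(date_pred - date_l)
    w_h = d_l / (d_h + d_l)  # the image closer in time gets the larger weight
    w_l = d_h / (d_h + d_l)
    return w_h * img_h + w_l * img_l

img_h = np.array([[0.2, 0.4]])  # e.g. Landsat-like image, day 100
img_l = np.array([[0.3, 0.5]])  # e.g. MODIS-like image (resampled), day 104
fused = temporal_validity_fusion(img_h, 100, img_l, 104, 101)
```

Because the whole method is a per-pixel weighted average, it runs in a single vectorized pass, which is consistent with the execution-time advantage the abstract reports.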
Monitoring Land Surface Albedo and Vegetation Dynamics Using High Spatial and Temporal Resolution Synthetic Time Series from Landsat and the MODIS BRDF/NBAR/Albedo Product
Seasonal vegetation phenology can significantly alter surface albedo, which in turn affects the global energy balance and the albedo warming/cooling feedbacks that impact climate change. To monitor and quantify the surface dynamics of heterogeneous landscapes, high temporal and spatial resolution synthetic time series of albedo and the enhanced vegetation index (EVI) were generated from the 500-meter Moderate Resolution Imaging Spectroradiometer (MODIS) operational Collection V006 daily BRDF (Bidirectional Reflectance Distribution Function) / NBAR (Nadir BRDF-Adjusted Reflectance) / albedo products and 30-meter Landsat 5 albedo and near-nadir reflectance data through the use of the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The traditional Landsat albedo approach (Shuai et al., 2011) makes use of the MODIS BRDF/Albedo products (MCD43) by assigning appropriate BRDFs from coincident MODIS products to each Landsat image to generate a 30-meter Landsat albedo product for that acquisition date. The available cloud-free Landsat 5 albedos (generated every 16 days at best, due to clouds) were used in conjunction with the daily MODIS albedos to determine the appropriate 30-meter albedos for the intervening daily time steps in this study. These enhanced daily 30-meter spatial resolution synthetic time series were then used to track albedo and vegetation phenology dynamics over three AmeriFlux tower sites (Harvard Forest in 2007, Santa Rita in 2011 and Walker Branch in 2005). These AmeriFlux sites were chosen because they are all located near new towers coming online for the National Ecological Observatory Network (NEON), and thus represent locations that will be served by spatially paired albedo measures in the near future. The availability of data from the NEON towers will greatly expand the sources of tower albedometer data available for evaluation of satellite products.
At these three AmeriFlux tower sites, the synthetic time series of broadband shortwave albedos were evaluated against the tower albedo measurements, with a Root Mean Square Error (RMSE) of less than 0.013 and a bias within ±0.006. These synthetic time series provide much greater spatial detail than the 500-meter gridded MODIS data, especially over more heterogeneous surfaces, which improves efforts to characterize and monitor spatial variation across species and communities. The mean of the difference between the maximum and minimum synthetic albedo time series within the MODIS pixels over a subset of satellite data of Harvard Forest (16 kilometers by 14 kilometers) was as high as 0.2 during the snow-covered period and dropped to around 0.1 during the snow-free period. Similarly, we have used STARFM to couple MODIS Nadir BRDF-Adjusted Reflectance (NBAR) values with Landsat 5 reflectances to generate daily synthetic time series of NBAR, and thus of the Enhanced Vegetation Index (NBAR-EVI), at a 30-meter resolution. While STARFM is normally used with directional reflectances, the use of the view-angle-corrected daily MODIS NBAR values provides more consistent time series. These synthetic time series of EVI are shown to capture seasonal vegetation dynamics with finer spatial and temporal detail, especially over heterogeneous land surfaces.
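The RMSE and bias metrics used in this evaluation are standard and easily stated in code. The albedo series below are invented for illustration only; the 0.013 RMSE and ±0.006 bias figures above come from the study itself, not from this toy data:

```python
import numpy as np

def rmse(pred, obs):
    """Root Mean Square Error between predicted and observed values."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def bias(pred, obs):
    """Mean signed difference (prediction minus observation)."""
    return float(np.mean(pred - obs))

# Hypothetical synthetic vs. tower shortwave albedo series.
synthetic = np.array([0.14, 0.15, 0.16, 0.18])
tower     = np.array([0.15, 0.15, 0.15, 0.17])
err = rmse(synthetic, tower)
b = bias(synthetic, tower)
```

RMSE captures the typical magnitude of disagreement, while bias reveals any systematic offset that RMSE alone would hide, which is why the study reports both.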
Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps
Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional image spatial–temporal fusion methods focus on the blending of pixel spectra reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date the series of coarse spatial resolution images. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Here, three experiments based on simulated and real remotely sensed images were undertaken to evaluate the STIMFM for studies of land cover change. 
These experiments included a comparative assessment of methods based on a single-date image, such as super-resolution approaches (e.g., pixel swapping-based super-resolution mapping), and of the state-of-the-art spatial–temporal fusion approach that used the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which the maximum likelihood classifier and the automated land cover updating approach based on an integrated change detection and classification method were then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classification of the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, to which ESTARFM and FSDAF, which require correlation between the reflectance bands of the coarse and fine images, cannot be applied. Notably, STIMFM achieved higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods.
