
    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust regardless of variations in LHI and TVI at both the scene and local levels, while it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computing efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but was the least computationally efficient model; (4) STARFM was good at predicting phenological change but is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal or abrupt changes. These findings can provide guidelines for users to select an appropriate STIF method for their own applications.
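    The two scene descriptors used in this comparison can be sketched in a few lines of Python. The window size and the exact formulations below are illustrative assumptions rather than the study's definitions: LHI is approximated as the mean local standard deviation of reflectance, and TVI as the mean absolute reflectance change between two dates.

```python
import numpy as np

def landscape_heterogeneity(img, win=3):
    """Mean local standard deviation of reflectance -- a simple LHI proxy."""
    r = win // 2
    h, w = img.shape
    stds = [img[i - r:i + r + 1, j - r:j + r + 1].std()
            for i in range(r, h - r) for j in range(r, w - r)]
    return float(np.mean(stds))

def temporal_variation(img_t1, img_t2):
    """Mean absolute reflectance change between two dates -- a simple TVI proxy."""
    return float(np.abs(img_t2 - img_t1).mean())

rng = np.random.default_rng(0)
patchy = rng.uniform(0.1, 0.6, (10, 10))   # heterogeneous toy scene
smooth = np.full((10, 10), 0.3)            # homogeneous toy scene
```

    A patchy scene scores higher on the heterogeneity proxy than a uniform one, which is the distinction the scene-level analysis relies on.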

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional spatial–temporal image fusion methods focus on blending pixel spectral reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions, using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date that series. STIMFM integrates both the spatial and temporal dependencies of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Here, three experiments based on simulated and real remotely sensed images were undertaken to evaluate STIMFM for studies of land cover change.
These experiments included a comparative assessment against methods based on a single-date image, such as super-resolution approaches (e.g., pixel-swapping-based super-resolution mapping), and against state-of-the-art spatial–temporal fusion approaches that used the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which the maximum likelihood classifier and an automated land cover updating approach based on integrated change detection and classification were then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classifying the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, to which ESTARFM and FSDAF, which require correlated reflectance bands in the coarse and fine images, cannot be applied. Notably, STIMFM achieved higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods.

    Reconstruction of Daily 30 m Data from HJ CCD, GF-1 WFV, Landsat, and MODIS Data for Crop Monitoring

    With the recent launch of new satellites and the development of spatiotemporal data fusion methods, we are entering an era of high spatiotemporal resolution remote-sensing analysis. This study proposed a method to reconstruct daily 30 m remote-sensing data for monitoring crop types and phenology in two study areas located in Xinjiang Province, China. First, the Spatial and Temporal Data Fusion Approach (STDFA) was used to reconstruct time series of high spatiotemporal resolution data from Huanjing satellite charge-coupled device (HJ CCD), Gaofen satellite no. 1 wide field-of-view camera (GF-1 WFV), Landsat, and Moderate Resolution Imaging Spectroradiometer (MODIS) data. Then, the reconstructed time series were used to extract crop phenology with a Hybrid Piecewise Logistic Model (HPLM), and the onset dates of greenness increase (OGI) and greenness decrease (OGD) were calculated from the simulated phenology. Finally, crop types were mapped using the phenology information. The results show that the reconstructed high spatiotemporal data were of high quality, with a proportion of good-quality (PGQ) observations higher than 0.95, and that the HPLM approach simulated time series of the Normalized Difference Vegetation Index (NDVI) very well, with R2 ranging from 0.635 to 0.952 in Luntai and from 0.719 to 0.991 in Bole. The reconstructed data were able to capture crop phenology in single crop fields, providing a much more detailed pattern than that from time series MODIS data. Moreover, crop types could be classified from the reconstructed time series with overall accuracies of 0.91 in Luntai and 0.95 in Bole, which are 0.028 and 0.046 higher, respectively, than those obtained using multi-temporal Landsat NDVI data.
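    The greenup segment of a piecewise logistic NDVI model, and the derivation of an onset date from it, can be sketched as follows. The parameters are invented for illustration, and OGI is approximated here as the day of maximum NDVI acceleration, a simplification of the curvature-based definition typically used with HPLM.

```python
import numpy as np

def logistic_ndvi(t, a, b, c, d):
    """One greenup segment of a piecewise logistic NDVI model:
    NDVI(t) = c / (1 + exp(a + b*t)) + d, rising when b < 0."""
    return c / (1.0 + np.exp(a + b * t)) + d

t = np.arange(0.0, 200.0)                       # day of year
ndvi = logistic_ndvi(t, 12.0, -0.1, 0.5, 0.2)   # illustrative parameters

# OGI approximated as the day of maximum NDVI acceleration
# (finite-difference second derivative of the fitted curve).
accel = np.gradient(np.gradient(ndvi, t), t)
ogi = t[np.argmax(accel)]
```

    With these parameters the curve rises from a background of 0.2 to about 0.7, with the onset detected well before the mid-season inflection near day 120.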

    Classification of C3 and C4 Vegetation Types Using MODIS and ETM+ Blended High Spatio-Temporal Resolution Data

    The distribution of C3 and C4 vegetation plays an important role in the global carbon cycle and climate change. Knowledge of the distribution of C3 and C4 vegetation at a high spatial resolution over local or regional scales helps us to understand their ecological functions and climate dependencies. In this study, we classified C3 and C4 vegetation at a high resolution for spatially heterogeneous landscapes. First, we generated a high spatial and temporal resolution land surface reflectance dataset by blending MODIS (Moderate Resolution Imaging Spectroradiometer) and ETM+ (Enhanced Thematic Mapper Plus) data. The blended data exhibited a high correlation (R2 = 0.88) with the satellite-derived ETM+ data. Time-series NDVI (Normalized Difference Vegetation Index) data were then generated from the blended high spatio-temporal resolution data to capture the phenological differences between C3 and C4 vegetation. The time series revealed that C3 vegetation greens up earlier in spring and senesces later in autumn than C4 vegetation, while C4 vegetation has a higher NDVI than C3 vegetation during summer. Based on these distinguishing characteristics, the time-series NDVI was used to derive C3 and C4 classification features. Five of the 18 candidate features were selected according to the ground investigation data and subsequently used for the classification. The overall accuracy of the C3 and C4 vegetation classification was 85.75%, with a kappa of 0.725, in our study area.
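    The phenology-based separation described above can be caricatured with a toy classifier. The NDVI curves, thresholds, and summer window below are invented for illustration and are not the five features selected in the study.

```python
import numpy as np

def classify_c3_c4(doy, ndvi, summer=(172, 265)):
    """Toy rule: C3 greens up earlier in spring, while C4 reaches a higher
    summer NDVI peak. The thresholds (0.3, 0.7, DOY 120) and the summer
    window are illustrative assumptions only."""
    spring_onset = doy[np.argmax(ndvi > 0.3)]          # first DOY above 0.3
    in_summer = (doy >= summer[0]) & (doy <= summer[1])
    summer_peak = ndvi[in_summer].max()
    return "C4" if (summer_peak > 0.7 and spring_onset > 120) else "C3"

doy = np.arange(1, 366, 8, dtype=float)                    # 8-day composites
ndvi_c3 = 0.2 + 0.40 * np.exp(-(((doy - 180) / 80) ** 2))  # early, low peak
ndvi_c4 = 0.2 + 0.65 * np.exp(-(((doy - 200) / 40) ** 2))  # late, high peak
```

    Applied to these two synthetic curves, the rule separates the early-greening, lower-peak C3 profile from the late-greening, higher-peak C4 profile.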

    A Cross Comparison of Spatiotemporally Enhanced Springtime Phenological Measurements From Satellites and Ground in a Northern U.S. Mixed Forest

    Cross comparison of satellite-derived land surface phenology (LSP) and ground measurements is useful to ensure the relevance of detected seasonal vegetation change to the underlying biophysical processes. While standard 16-day, 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation index (VI)-based springtime LSP has been evaluated in previous studies, it remains unclear whether LSP with enhanced temporal and spatial resolution can capture additional details of ground phenology. In this paper, we compared LSP derived from 500-m daily MODIS and 30-m MODIS–Landsat fused VI data with landscape phenology (LP) in a northern U.S. mixed forest. LP was previously developed from intensively observed deciduous and coniferous tree phenology using an upscaling approach. Results showed that daily MODIS-based LSP consistently estimated greenup onset dates at the study area (625 m × 625 m) level with a mean absolute error (MAE) of 4.48 days, slightly better than that of the 16-day standard VI (4.63 days MAE). For the observed study areas, the time series with an increased number of observations confirmed that post-bud-burst deciduous tree phenology contributes the most to vegetation reflectance change. Moreover, the fused VI time series corresponded more closely with LP at the community level (0.1–20 ha) than MODIS alone did at the study area level (390 ha). The fused LSP captured greenup onset dates for forest communities of varied sizes and compositions with an overall MAE of four days. This study supports further use of spatiotemporally enhanced LSP for more precise phenological monitoring.

    Generation of 100 m, Hourly Land Surface Temperature Based on Spatio-Temporal Fusion

    Land surface temperature (LST) is an important physical quantity for global climate change monitoring. Over the past decades, several LST products have been produced from satellite thermal infrared (TIR) bands or land surface models (LSMs). Recent research has increased the spatio-temporal resolution of LST products to 2 km, hourly, based on Geostationary Operational Environmental Satellite (GOES)-R Advanced Baseline Imager (ABI) LST data. A spatial resolution of 2 km, however, is insufficient for monitoring at the regional scale. This paper investigates the feasibility of applying spatio-temporal fusion to generate reliable 100 m, hourly LST data by fusing the newly released 2 km, hourly GOES-16 ABI LST with 100 m Landsat LST data. The most accurate fusion method was identified through a comparison of several popular methods. Furthermore, a comprehensive comparison was performed between fusion (with Landsat LST) involving satellite-derived LST (i.e., GOES) and model-derived LST (i.e., European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5)-Land). The spatial and temporal adaptive reflectance fusion model (STARFM) proved an appropriate method for generating 100 m, hourly data, producing an average root mean square error (RMSE) of 2.640 K, a mean absolute error (MAE) of 2.159 K, and an average coefficient of determination (R2) of 0.982 with reference to the in situ time series. Furthermore, inheriting the advantages of direct observation, the fusion of Landsat and GOES LST produced greater accuracy than the fusion of Landsat and ERA5-Land LST in the experiments. The generated 100 m, hourly LST can provide important diurnal data at fine spatial resolution for various monitoring applications.
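    The three accuracy measures reported above (RMSE, MAE, R2) can be computed against an in situ series as follows; the observation and prediction values below are invented for illustration, not the study's data.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(pred - obs)))

def r2(obs, pred):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical in situ vs. fused LST samples (Kelvin).
obs = np.array([290.1, 295.4, 300.2, 305.6, 298.3])
pred = np.array([291.0, 294.8, 301.1, 304.9, 299.0])
```

    Note that RMSE is always at least as large as MAE, since squaring weights large errors more heavily.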

    Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation

    Spatiotemporal satellite image fusion (STIF) has been widely applied in land surface monitoring to generate high spatial and high temporal resolution reflectance images from satellite sensors. This paper proposed a new unmixing-based spatiotemporal fusion method composed of self-trained random forest machine learning regression (R), low-resolution (LR) endmember estimation (E), high-resolution (HR) surface reflectance image reconstruction (R), and residual compensation (C): RERC. RERC uses a self-trained random forest to train and predict the relationship between spectra and the corresponding class fractions. This process is flexible, requires no ancillary training dataset, and does not share the limitation of linear spectral unmixing, which requires the number of endmembers to be no more than the number of spectral bands. The running time of the random forest regression is ~1% of that of the linear mixture model. In addition, RERC adopts a spectral reflectance residual compensation approach to refine the fused image and make full use of the information in the LR image. RERC was assessed in fusing a prediction-time MODIS image with a Landsat image using two benchmark datasets, in fusing images with different numbers of spectral bands (a known-time Landsat image with seven bands and a known-time very-high-resolution PlanetScope image with four spectral bands), and in MODIS–Landsat fusion over large areas at the national scale for the Republic of Ireland and France. The code is available at https://www.researchgate.net/profile/Xiao_Li52
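    The endmember-count limitation of linear spectral unmixing that RERC's regression sidesteps can be seen in a least squares sketch: with B bands and K endmembers, the linear system is only well-posed when K is no larger than B (plus a sum-to-one constraint). The endmember spectra and fractions below are invented for illustration.

```python
import numpy as np

# Linear spectral unmixing of one coarse pixel: reflectance = E @ fractions.
# 4 bands x 3 endmembers, so K <= B and least squares is well-posed -- the
# constraint that a regression-based unmixing (as in RERC) does not need.
E = np.array([[0.05, 0.30, 0.25],   # band 1 reflectance of 3 endmembers
              [0.10, 0.45, 0.20],   # band 2
              [0.40, 0.50, 0.15],   # band 3
              [0.55, 0.60, 0.10]])  # band 4
true_frac = np.array([0.5, 0.3, 0.2])
pixel = E @ true_frac               # simulated mixed-pixel spectrum

# Append the sum-to-one constraint as an extra equation and solve.
A = np.vstack([E, np.ones(3)])
b = np.append(pixel, 1.0)
frac, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    Because the simulated system is exactly consistent, least squares recovers the true fractions; with more endmembers than bands, the system would be underdetermined and this direct inversion would fail.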

    SFSDAF: an enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion

    Spatio-temporal image fusion methods have become a popular means to produce remotely sensed data sets with both fine spatial and fine temporal resolution. Accurate prediction of reflectance change is difficult, especially when the change is caused by both phenological change and land cover class change. Although several spatio-temporal fusion methods such as the Flexible Spatiotemporal DAta Fusion (FSDAF) directly derive land cover phenological change information (such as endmember change) at different dates, the direct derivation of land cover class change information is challenging. In this paper, an enhanced FSDAF that incorporates sub-pixel class fraction change information (SFSDAF) is proposed. By directly deriving sub-pixel land cover class fraction change information, the proposed method allows accurate prediction even for heterogeneous regions that undergo a land cover class change. In particular, SFSDAF directly derives fine spatial resolution endmember change and class fraction change at the date of the observed image pair and the date of prediction, which helps identify image reflectance change arising from different sources. SFSDAF predicts a fine-resolution image at the acquisition time of the coarse-resolution image using only one prior coarse and fine resolution image pair, and accommodates variations in reflectance due to both natural fluctuations in class spectral response (e.g. due to phenology) and land cover class change. The method is illustrated using degraded and real images and compared against three established spatio-temporal fusion methods. The results show that SFSDAF produced the least blurred images and the most accurate predictions of fine-resolution reflectance values, especially for regions of heterogeneous landscape and regions that undergo some land cover class change. Consequently, SFSDAF has considerable potential for monitoring Earth surface dynamics.
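    The basic temporal-change-transfer idea that FSDAF-family methods build on can be sketched as follows. SFSDAF additionally unmixes the coarse change by sub-pixel class fractions, which this deliberately naive sketch omits; all values below are invented.

```python
import numpy as np

def naive_change_transfer(fine_t1, coarse_t1, coarse_t2, scale):
    """Baseline idea behind FSDAF-family fusion: add the coarse-resolution
    temporal change (replicated to fine resolution) to the known fine image.
    SFSDAF refines this by distributing the change according to sub-pixel
    class fraction change; that refinement is omitted here."""
    delta = np.kron(coarse_t2 - coarse_t1, np.ones((scale, scale)))
    return fine_t1 + delta

scale = 2
base = np.array([[0.2, 0.3], [0.4, 0.5]])
fine_t1 = np.kron(base, np.ones((scale, scale)))  # 4x4 fine image at t1
coarse_t1 = base                                  # coarse pixels = block means
coarse_t2 = coarse_t1 + 0.1                       # uniform greening by t2
fine_t2 = naive_change_transfer(fine_t1, coarse_t1, coarse_t2, scale)
```

    For a spatially uniform change the naive transfer is exact; it is precisely when change varies within a coarse pixel (e.g. a land cover class change in part of it) that the class-fraction information SFSDAF derives becomes necessary.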