Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation

Abstract

Spatiotemporal satellite image fusion (STIF) has been widely applied in land surface monitoring to generate high spatial and high temporal resolution reflectance images from satellite sensors. This paper proposes a new unmixing-based spatiotemporal fusion method composed of self-trained random forest machine learning regression (R), low resolution (LR) endmember estimation (E), high resolution (HR) surface reflectance image reconstruction (R), and residual compensation (C), that is, RERC. RERC uses a self-trained random forest to learn and predict the relationship between spectra and the corresponding class fractions. This process is flexible, requires no ancillary training dataset, and does not suffer from the limitation of linear spectral unmixing, which requires the number of endmembers to be no more than the number of spectral bands. The running time of the random forest regression is about 1% of that of the linear mixture model. In addition, RERC adopts a spectral reflectance residual compensation approach to refine the fused image and make full use of the information in the LR image. RERC was assessed in the fusion of a prediction-time MODIS image with a Landsat image using two benchmark datasets, and in fusing images with different numbers of spectral bands by fusing a known-time Landsat image (seven bands used) with a known-time very-high-resolution PlanetScope image (four spectral bands). RERC was also assessed in the fusion of MODIS-Landsat imagery over large areas at the national scale for the Republic of Ireland and France. The code is available at https://www.researchgate.net/profile/Xiao_Li52.
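The abstract describes the regression step as learning a mapping from LR spectra to class fractions using only the known-date image pair. Below is a minimal sketch of how such a self-trained random forest regression could look; the function name, variable names, and the clip-and-renormalise step are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical inputs (assumed for illustration, not from the paper's code):
#   coarse_spectra_t0 : (n_pixels, n_bands) LR reflectance at the known date
#   class_fractions_t0: (n_pixels, n_classes) class fractions inside each LR pixel,
#                       obtained by classifying the known-date HR image and
#                       aggregating the labels to the LR grid
#   coarse_spectra_tp : (n_pixels, n_bands) LR reflectance at the prediction date
def predict_class_fractions(coarse_spectra_t0, class_fractions_t0,
                            coarse_spectra_tp, n_trees=100):
    """Self-trained regression: training pairs come from the known-date images
    themselves, so no ancillary training dataset is required, and the number of
    classes is not limited by the number of spectral bands."""
    rf = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
    rf.fit(coarse_spectra_t0, class_fractions_t0)   # spectra -> class fractions
    fractions_tp = rf.predict(coarse_spectra_tp)    # apply at the prediction date
    # Keep the predicted fractions physically valid: non-negative and summing to 1.
    fractions_tp = np.clip(fractions_tp, 0.0, 1.0)
    fractions_tp /= fractions_tp.sum(axis=1, keepdims=True)
    return fractions_tp
```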
