
    Assessment of Multi-Temporal Image Fusion for Remote Sensing Application

    Image fusion and subsequent scene analysis are important for studying Earth surface conditions from remotely sensed imagery. The fusion of images of the same scene acquired with different sensors or at different acquisition times is known as multi-sensor or multi-temporal fusion, respectively. The purpose of this study is to investigate the effects of misalignments on the multi-sensor, multi-temporal fusion process when a pan-sharpened scene is produced from low spatial resolution multispectral (MS) images and a high spatial resolution panchromatic (PAN) image. It is found that the component substitution (CS) fusion method provides better performance than the multi-resolution analysis (MRA) scheme. Quantitative analysis shows that the CS-based method gives a better result in terms of spatial quality (sharpness), whereas the MRA-based method yields better spectral quality, i.e., better color fidelity to the original MS images.
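The CS family of pan-sharpening methods mentioned above replaces an intensity component of the upsampled MS image with the PAN band, injecting its spatial detail into every band. The following is a minimal numpy sketch of that idea; the function name and the choice of the band mean as the intensity component are illustrative, not the specific variant evaluated in the paper.

```python
import numpy as np

def cs_pansharpen(ms_up, pan):
    """Component-substitution pan-sharpening sketch.

    ms_up : (bands, H, W) multispectral image, upsampled to PAN resolution
    pan   : (H, W) panchromatic image, co-registered with ms_up
    The intensity component (here simply the band mean) is swapped for PAN,
    so the high-frequency spatial detail of PAN enters every band.
    """
    intensity = ms_up.mean(axis=0)   # simple intensity component
    detail = pan - intensity         # spatial detail to inject
    return ms_up + detail            # add the same detail to each band
```

Because the same detail image is added to all bands, CS methods tend to be sharp but can distort colors, which matches the sharpness-versus-fidelity trade-off the abstract reports.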

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional image spatial–temporal fusion methods focus on the blending of pixel spectra reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date the series of coarse spatial resolution images. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Here, three experiments based on simulated and real remotely sensed images were undertaken to evaluate the STIMFM for studies of land cover change. 
These experiments included a comparative assessment of single-date-image methods, such as super-resolution approaches (e.g., pixel swapping-based super-resolution mapping), and state-of-the-art spatial–temporal fusion approaches that used the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which the maximum likelihood classifier and an automated land cover updating approach based on integrated change detection and classification were then applied to generate the fine-resolution land cover maps. Results show that the single-date-image methods failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classification of the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, to which ESTARFM and FSDAF, which require correlation between the reflectance bands of the coarse and fine images, cannot be applied. Notably, STIMFM achieved higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods.

    ArithFusion: An Arithmetic Deep Model for Temporal Remote Sensing Image Fusion

    Different satellite images may consist of variable numbers of channels with different resolutions, and each satellite has a unique revisit period. For example, Landsat-8 images have 30 m resolution in their multispectral channels, Sentinel-2 images have 10 m resolution in their finest multispectral channels, and National Agriculture Imagery Program (NAIP) aerial images have 1 m resolution. In this study, we propose a simple yet effective arithmetic deep model for multimodal temporal remote sensing image fusion. The proposed model takes both low- and high-resolution remote sensing images at t1 together with low-resolution images at a future time t2 from the same location as inputs and fuses them to generate high-resolution images for the same location at t2. We propose an arithmetic operation applied to the low-resolution images at the two time points in feature space to account for temporal changes. We evaluated the proposed model on three modality pairs for multimodal temporal image fusion, including downsampled WorldView-2/original WorldView-2, Landsat-8/Sentinel-2, and Sentinel-2/NAIP. Experimental results show that our model outperforms traditional algorithms and recent deep learning-based models by large margins in most scenarios, achieving sharp fused images while appropriately addressing temporal changes.
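The arithmetic operation the abstract describes can be read as adding the temporal change observed between the two low-resolution images, in feature space, to the high-resolution image from t1. A minimal sketch of that reading follows; in the actual model the encode/decode maps are learned networks, while identity maps are used here purely for illustration, and all inputs are assumed resampled to a common grid.

```python
import numpy as np

def arithmetic_fusion(high_t1, low_t1, low_t2,
                      encode=lambda x: x, decode=lambda x: x):
    """Temporal-difference fusion sketch (a hedged reading of ArithFusion).

    Predicts high_t2 ≈ decode(encode(high_t1) + encode(low_t2) - encode(low_t1)):
    the low-resolution temporal change, expressed in feature space, is added
    to the high-resolution features from t1.
    """
    change = encode(low_t2) - encode(low_t1)   # temporal change in feature space
    return decode(encode(high_t1) + change)
```

With identity features this reduces to the classic reflectance-difference prediction used by earlier fusion models; the learned encoder is what lets the model bridge different sensors.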

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparison of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was most robust regardless of variations in LHI and TVI at both scene level and local level, while it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computing efficiency. It was accurate in predicting reflectance but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in prediction of large-area land cover change with the capability of preserving image structures. However, it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, while it was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
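The weight function-based category above is anchored by STARFM, whose core move is to add the coarse-resolution temporal change to the fine image at t1, averaging over a window of spectrally similar neighbours. The sketch below is a heavily simplified, single-band illustration of that weighting scheme (it omits STARFM's candidate-pixel selection and combined spectral/temporal/spatial distance), with all images assumed co-registered and resampled to one grid.

```python
import numpy as np

def starfm_like(fine_t1, coarse_t1, coarse_t2, win=1):
    """Single-pair, STARFM-style prediction sketch (simplified).

    fine_t1, coarse_t1, coarse_t2 : (H, W) arrays on the same grid.
    For each pixel, the coarse temporal change is added to fine_t1 and
    averaged over a (2*win+1) window, weighted by spectral closeness of
    each neighbour to the centre pixel.
    """
    h, w = fine_t1.shape
    delta = coarse_t2 - coarse_t1                    # coarse temporal change
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            # weight neighbours by spectral similarity to the centre pixel
            diff = np.abs(fine_t1[i0:i1, j0:j1] - fine_t1[i, j])
            wgt = 1.0 / (1.0 + diff)
            pred = fine_t1[i0:i1, j0:j1] + delta[i0:i1, j0:j1]
            out[i, j] = np.sum(wgt * pred) / np.sum(wgt)
    return out
```

Because the prediction is anchored to fine_t1, this formulation naturally tracks gradual (phenological) change but struggles when land cover itself changes between dates, consistent with finding (4) above.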

    Image Fusion: Testing the Fusion of Himawari-8 and SPOT Satellite Imagery for Sea Level Monitoring

    Abstract - High spatial and temporal resolutions of satellite imagery are necessary to monitor rapid environmental changes at finer scales. However, no single satellite can yet produce images with both high spatial and high temporal resolution. To address this issue, spatio-temporal image fusion algorithms have been proposed to synthesize high spatial and temporal resolution images. For example, Landsat 8, with a spatial resolution of 30 m, has been applied to water level detection, but it cannot capture dynamic events due to its low temporal resolution. On the other hand, the Advanced Himawari Imager (AHI) on Himawari-8 needs only 10 minutes to scan the full disk once, but its coarse spatial resolution hampers the accurate mapping of sea level change. While our previous study examined the feasibility of blending AHI and Landsat images, this study aims at blending SPOT imagery with AHI imagery to monitor the dynamic and local behavior of sea level changes. Specifically, images in the testing area are first calibrated to surface reflectance and co-registered. The Normalized Difference Water Index (NDWI) is then calculated from the SPOT and Himawari-8 images as an input for the image fusion process. This study applies the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) as the image fusion method. While the water level changes dynamically, traditional methods are largely affected by changes of land cover; hence, this study constructs a knowledge database to select proper land cover maps as an image fusion input. Finally, the evaluation shows that the proposed solution can retrieve accurate water coverage with high spatial and temporal resolutions. Keywords - Spatial-temporal image fusion, STARFM, Himawari-8, SPOT, sea level monitoring
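The NDWI used above as the fusion input is a standard band ratio; a minimal sketch (McFeeters' green/NIR form, which is the common choice for open-water mapping, though the paper does not state which variant it uses):

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index.

    NDWI = (Green - NIR) / (Green + NIR); values near +1 indicate open
    water, negative values vegetation or soil. eps guards against
    division by zero over dark pixels.
    """
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)
```

Computing NDWI on both the SPOT and Himawari-8 images first, then fusing the index rather than raw reflectance, keeps the fusion focused on the single quantity (water extent) being monitored.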

    SFSDAF: an enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion

    Spatio-temporal image fusion methods have become a popular means to produce remotely sensed data sets that have both fine spatial and temporal resolution. Accurate prediction of reflectance change is difficult, especially when the change is caused by both phenological change and land cover class change. Although several spatio-temporal fusion methods such as the Flexible Spatiotemporal DAta Fusion (FSDAF) directly derive land cover phenological change information (such as endmember change) at different dates, the direct derivation of land cover class change information is challenging. In this paper, an enhanced FSDAF that incorporates sub-pixel class fraction change information (SFSDAF) is proposed. By directly deriving the sub-pixel land cover class fraction change information, the proposed method allows accurate prediction even for heterogeneous regions that undergo a land cover class change. In particular, SFSDAF directly derives fine spatial resolution endmember change and class fraction change at the date of the observed image pair and the date of prediction, which can help identify image reflectance change resulting from different sources. SFSDAF predicts a fine resolution image at the time of acquisition of the coarse resolution images using only one prior coarse and fine resolution image pair, and accommodates variations in reflectance due to both natural fluctuations in class spectral response (e.g. due to phenology) and land cover class change. The method is illustrated using degraded and real images and compared against three established spatio-temporal fusion methods. The results show that SFSDAF produced the least blurred images and the most accurate predictions of fine resolution reflectance values, especially for regions of heterogeneous landscape and regions that undergo some land cover class change. Consequently, SFSDAF has considerable potential for monitoring Earth surface dynamics.
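The sub-pixel class fractions that SFSDAF tracks are conventionally estimated by linear spectral unmixing: each coarse pixel's spectrum is modeled as a mixture of class endmember spectra. The sketch below shows a generic constrained least-squares unmixing step; SFSDAF's actual solver and constraints differ, so this only illustrates how class fractions can be recovered from reflectance.

```python
import numpy as np

def unmix_fractions(pixel, endmembers):
    """Linear spectral unmixing sketch.

    pixel      : (bands,) spectrum of one coarse pixel
    endmembers : (bands, classes) class endmember spectra
    Solves pixel ≈ endmembers @ fractions by least squares, then clips and
    renormalises so the fractions are non-negative and sum to one.
    """
    frac, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    frac = np.clip(frac, 0.0, None)   # enforce non-negativity
    return frac / frac.sum()          # enforce the sum-to-one constraint
```

Comparing the fraction vectors estimated at the two dates is what separates genuine class change from mere phenological shifts in the endmember spectra.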

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.