    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods drawn from four categories over landscapes with varying spatial heterogeneity and temporal variation. The five STIF methods are the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight-function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) values were analyzed. Our results showed that (1) FSDAF was the most robust to variations in LHI and TVI at both the scene and local levels, although it was less computationally efficient than the other models except one-pair learning; (2) Fit-FC had the highest computational efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change and was capable of preserving image structures, but was the least computationally efficient model; (4) STARFM was good at predicting phenological change but is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong or abrupt temporal changes. These findings can guide users in selecting an appropriate STIF method for their own applications.
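
    For orientation, the weight-function-based category that STARFM and Fit-FC belong to predicts each fine pixel by combining the observed fine/coarse pair at a base date with the coarse change toward the prediction date, weighted by spectral and spatial similarity within a local window. Below is a heavily simplified, single-band numpy sketch of that idea; it is not any of the benchmarked implementations, and the window size, distance weighting, and array names are all illustrative assumptions.

```python
import numpy as np

def weight_fusion(fine_t0, coarse_t0, coarse_t1, win=5, eps=1e-6):
    """Heavily simplified single-band, weight-function-based fusion:
    predict the fine image at t1 from a fine/coarse pair at t0 and a
    coarse image at t1 (all assumed co-registered on the same grid)."""
    h, w = fine_t0.shape
    pad = win // 2
    f0 = np.pad(fine_t0, pad, mode="reflect")
    c0 = np.pad(coarse_t0, pad, mode="reflect")
    c1 = np.pad(coarse_t1, pad, mode="reflect")
    yy, xx = np.mgrid[0:win, 0:win]
    dist = 1.0 + np.hypot(yy - pad, xx - pad) / pad   # spatial distance term
    pred = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            F0 = f0[i:i + win, j:j + win]
            C0 = c0[i:i + win, j:j + win]
            C1 = c1[i:i + win, j:j + win]
            spec = np.abs(F0 - C0) + eps              # spectral similarity term
            wgt = 1.0 / (spec * dist)
            wgt /= wgt.sum()
            # fine base value plus similarity-weighted coarse temporal change
            pred[i, j] = np.sum(wgt * (F0 + (C1 - C0)))
    return pred
```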

    Efficient deep CNNs for cross-modal automated computer vision under time and space constraints

    We present an automated computer vision architecture that handles video and image data with the same backbone networks. We report empirical results that lead us to adopt MobileNetV2 as this backbone architecture. The paper demonstrates that neural architectures are transferable from images to videos through suitable preprocessing and temporal information fusion.
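
    As a rough illustration of the shared-backbone idea, here is a minimal PyTorch sketch in which a single MobileNetV2 feature extractor serves both images and videos, with videos handled by temporal average pooling of per-frame features. The class, classification head, and pooling choice are assumptions for illustration, not the paper's architecture.

```python
import torch.nn as nn
from torchvision.models import mobilenet_v2

class SharedBackboneClassifier(nn.Module):
    """One MobileNetV2 feature extractor for both images and videos;
    videos are handled by pooling per-frame features over time."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, num_classes)  # 1280 = MobileNetV2 output channels

    def forward(self, x):
        # images: (B, 3, H, W); videos: (B, T, 3, H, W)
        if x.dim() == 5:
            b, t = x.shape[:2]
            feats = self.pool(self.backbone(x.flatten(0, 1))).flatten(1)
            feats = feats.view(b, t, -1).mean(dim=1)  # temporal fusion by averaging
        else:
            feats = self.pool(self.backbone(x)).flatten(1)
        return self.head(feats)
```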

    Image Fusion: Testing the Fusion of Himawari-8 and SPOT Satellite Imagery for Sea Level Monitoring

    Abstract - High spatial and temporal resolutions of satellite imagery are necessary to monitor rapid environmental changes at fine scales. However, no single satellite can yet produce images with both high spatial and high temporal resolution. To address this issue, spatio-temporal image fusion algorithms have been proposed to synthesize images with high spatial and temporal resolution. For example, Landsat 8, with a spatial resolution of 30 m, has been applied to water level detection, but it cannot capture dynamic events due to its low temporal resolution. On the other hand, the Advanced Himawari Imager (AHI) 8 needs only 10 minutes to scan the hemisphere once, but its coarse spatial resolution hampers accurate mapping of sea level change. While our previous study examined the feasibility of blending AHI and Landsat images, this study aims at blending SPOT imagery with AHI imagery to monitor the dynamic, local behavior of sea level changes. Specifically, images in the test area are first calibrated to surface reflectance and co-registered. The Normalized Difference Water Index (NDWI) is then calculated from the SPOT and Himawari-8 images as an input to the image fusion process. This study applies the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) as the image fusion method. Because water levels change dynamically, traditional methods are strongly affected by changes in land cover; hence, this study constructs a knowledge database to select proper land cover maps as an image fusion input. Finally, the evaluation shows that the proposed solution can retrieve accurate water coverage with high spatial and temporal resolution.
    Keywords - Spatial-temporal image fusion, STARFM, Himawari-8, SPOT, sea level monitoring
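
    The NDWI input described above is conventionally computed from the green and near-infrared bands as (green - NIR) / (green + NIR), with positive values generally indicating open water. A minimal sketch, with the band array names assumed:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters' NDWI: (green - NIR) / (green + NIR).
    Positive values generally indicate open water."""
    green = np.asarray(green, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (green - nir) / (green + nir + eps)

# hypothetical band arrays from co-registered surface-reflectance images:
# spot_ndwi = ndwi(spot_green, spot_nir)
# ahi_ndwi  = ndwi(ahi_green, ahi_nir)   # both then fed to STARFM as inputs
```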

    Assessment of Multi-Temporal Image Fusion for Remote Sensing Application

    Image fusion and subsequent scene analysis are important for studying Earth surface conditions from remotely sensed imagery. The fusion of the same scene using satellite data taken with different sensors or at different acquisition times is known as multi-sensor or multi-temporal fusion, respectively. The purpose of this study is to investigate the effects of misalignments on the multi-sensor, multi-temporal fusion process when a pan-sharpened scene is produced from low-spatial-resolution multispectral (MS) images and a high-spatial-resolution panchromatic (PAN) image. It is found that the component substitution (CS) fusion method provides better performance than the multi-resolution analysis (MRA) scheme. Quantitative analysis shows that the CS-based method gives a better result in terms of spatial quality (sharpness), whereas the MRA-based method yields better spectral quality, i.e., better color fidelity to the original MS images.
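
    To make the CS/MRA distinction concrete: a generic component-substitution scheme estimates an intensity component from the upsampled MS bands and injects the PAN detail into every band, which is why CS tends to win on sharpness while risking spectral distortion. Below is a minimal numpy sketch of that generic scheme (equal-weight intensity, global gain matching), not the specific CS method evaluated in the study.

```python
import numpy as np

def cs_pansharpen(ms_up, pan, eps=1e-9):
    """Generic component substitution: build an intensity component
    from the upsampled MS bands, then add the PAN detail to each band.

    ms_up : (bands, H, W) multispectral image upsampled to the PAN grid
    pan   : (H, W) panchromatic image
    """
    intensity = ms_up.mean(axis=0)  # crude equal-weight intensity estimate
    # match PAN's global statistics to the intensity component
    pan_adj = (pan - pan.mean()) * intensity.std() / (pan.std() + eps) + intensity.mean()
    detail = pan_adj - intensity
    return ms_up + detail[None, :, :]  # inject the same detail into every band
```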

    Exploiting Image-trained CNN Architectures for Unconstrained Video Classification

    We conduct an in-depth exploration of different strategies for event detection in videos using convolutional neural networks (CNNs) trained for image classification. We study different ways of performing spatial and temporal pooling, feature normalization, the choice of CNN layers, and the choice of classifiers. Making judicious choices along these dimensions led to a very significant increase in performance over the more naive approaches used to date. We evaluate our approach on the challenging TRECVID MED'14 dataset with two popular CNN architectures pretrained on ImageNet. On this MED'14 dataset, our methods, based entirely on image-trained CNN features, outperform several state-of-the-art non-CNN models. Our proposed late fusion of CNN- and motion-based features further increases the mean average precision (mAP) on MED'14 from 34.95% to 38.74%. The fusion approach achieves state-of-the-art classification performance on the challenging UCF-101 dataset.
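
    Two of the design choices explored above, temporal pooling with feature normalization and late fusion of classifier scores, can be sketched in a few lines. The function names and fusion weight are illustrative assumptions, not the paper's tuned configuration.

```python
import numpy as np

def video_descriptor(frame_feats):
    """Temporal average pooling of per-frame CNN features, followed by
    L2 normalization -- one point in the pooling/normalization design
    space the paper explores."""
    v = np.asarray(frame_feats, dtype=float).mean(axis=0)  # (T, D) -> (D,)
    return v / (np.linalg.norm(v) + 1e-12)

def late_fusion(cnn_scores, motion_scores, alpha=0.5):
    """Weighted late fusion of per-class scores from the CNN pipeline
    and a motion-feature pipeline; alpha is an assumed mixing weight."""
    return alpha * np.asarray(cnn_scores) + (1 - alpha) * np.asarray(motion_scores)
```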

    Towards Real-Time Detection and Tracking of Spatio-Temporal Features: Blob-Filaments in Fusion Plasma

    A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data are presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through their overlap in space. Through extensive parallelization work, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
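
    The three steps map naturally onto thresholding, connected-component labeling, and overlap matching. A minimal serial numpy/scipy sketch of that pipeline follows; the threshold rule and names are assumptions, and the paper's contribution is the parallel, real-time implementation rather than this serial skeleton.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(frame, k=2.0):
    """Steps 1-2: flag cells above mean + k*std (local identification),
    then group flagged cells into connected components (extended features)."""
    mask = frame > frame.mean() + k * frame.std()
    labels, n = ndimage.label(mask)
    return labels, n

def track_by_overlap(labels_prev, labels_curr):
    """Step 3: match blobs across consecutive frames by spatial overlap.
    Returns {current_label: previous_label} for overlapping pairs."""
    matches = {}
    for lbl in range(1, labels_curr.max() + 1):
        overlap = labels_prev[labels_curr == lbl]
        overlap = overlap[overlap > 0]
        if overlap.size:
            matches[lbl] = int(np.bincount(overlap).argmax())
    return matches
```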

    OBSUM: An object-based spatial unmixing model for spatiotemporal fusion of remote sensing images

    Spatiotemporal fusion aims to improve both the spatial and temporal resolution of remote sensing images, thus facilitating time-series analysis at a fine spatial scale. However, several important issues limit the application of current spatiotemporal fusion methods. First, most spatiotemporal fusion methods are based on pixel-level computation, which neglects the valuable object-level information of the land surface. Moreover, many existing methods cannot accurately retrieve strong temporal changes between the available high-resolution image at the base date and the predicted one. This study proposes an Object-Based Spatial Unmixing Model (OBSUM), which incorporates object-based image analysis and spatial unmixing to overcome the two problems above. OBSUM consists of one preprocessing step and three fusion steps, i.e., object-level unmixing, object-level residual compensation, and pixel-level residual compensation. OBSUM can be applied using only one fine image at the base date and one coarse image at the prediction date, without the need for a coarse image at the base date. The performance of OBSUM was compared with five representative spatiotemporal fusion methods. The experimental results demonstrated that OBSUM outperformed the other methods in terms of both accuracy indices and visual effects over the time series. Furthermore, OBSUM also achieved satisfactory results in two typical remote sensing applications. Therefore, it has great potential to generate accurate, high-resolution time-series observations for supporting various remote sensing applications.
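
    At the heart of any unmixing-based fusion method is solving, over a neighborhood of coarse pixels, for per-class values that best explain the coarse observations given class-area fractions. Below is a toy ridge-regularized sketch of that generic building block, not the OBSUM algorithm itself, which adds object-level unmixing plus object- and pixel-level residual compensation on top.

```python
import numpy as np

def spatial_unmixing(coarse_vals, fractions, lam=1e-3):
    """Model each coarse pixel's value as a fraction-weighted mix of
    per-class values, then solve for those class values by
    ridge-regularized least squares.

    coarse_vals : (N,)   coarse-pixel observations at the prediction date
    fractions   : (N, C) per-class area fractions inside each coarse pixel
    returns     : (C,)   estimated per-class values
    """
    A = np.asarray(fractions, dtype=float)
    b = np.asarray(coarse_vals, dtype=float)
    # ridge term keeps the system well-posed when class fractions are correlated
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
```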