
    Multisource and Multitemporal Data Fusion in Remote Sensing

    The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D representations to 4D data structures in which the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the fusion methods for different modalities have evolved along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion across these communities and provides a thorough, discipline-specific starting point for researchers at all levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, supplying sufficient detail and references.
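
    As a concrete illustration of the 4D idea above, the short NumPy sketch below (all shapes are hypothetical) stacks a multitemporal series of multispectral scenes into a single array indexed as (time, band, row, column):

        # Minimal sketch: a multitemporal multispectral series as one 4D array.
        import numpy as np

        n_times, n_bands, height, width = 12, 6, 256, 256  # assumed dimensions

        # One 3D cube (band, row, column) per acquisition date.
        scenes = [np.random.rand(n_bands, height, width) for _ in range(n_times)]

        cube_4d = np.stack(scenes, axis=0)      # shape: (12, 6, 256, 256)
        pixel_series = cube_4d[:, :, 100, 100]  # spectral evolution of one pixel over time
        print(cube_4d.shape, pixel_series.shape)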

    Online Graph-Based Change Point Detection in Multiband Image Sequences

    The automatic detection of changes or anomalies between multispectral and hyperspectral images collected at different time instants is an active and challenging research topic. To perform change-point detection in multitemporal images effectively, it is important to devise techniques that are computationally efficient for processing large datasets and that do not require knowledge about the nature of the changes. In this paper, we introduce a novel online framework for detecting changes in multitemporal remote sensing images. By treating neighboring spectra as adjacent vertices in a graph, the algorithm focuses on anomalies that concurrently activate groups of vertices corresponding to compact, well-connected, and spectrally homogeneous image regions. It fully benefits from recent advances in graph signal processing to exploit the characteristics of data that lie on irregular supports. Moreover, the graph is estimated directly from the images using superpixel decomposition algorithms. The learning algorithm is scalable in the sense that it is efficient and spatially distributed. Experiments illustrate the detection and localization performance of the method.
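
    A minimal sketch of the superpixel-as-vertex idea described above (not the paper's online algorithm; the function name and parameters are illustrative), using scikit-image and NumPy:

        import numpy as np
        from skimage.segmentation import slic

        def node_change_scores(img_t1, img_t2, n_segments=500):
            # Vertices are superpixels of the date-1 image; each vertex is scored
            # by the distance between its mean spectra at the two dates.
            labels = slic(img_t1, n_segments=n_segments, start_label=0, channel_axis=-1)
            n_nodes = labels.max() + 1
            scores = np.zeros(n_nodes)
            for v in range(n_nodes):
                mask = labels == v
                mu1 = img_t1[mask].mean(axis=0)  # mean spectrum of vertex v at date 1
                mu2 = img_t2[mask].mean(axis=0)  # mean spectrum of vertex v at date 2
                scores[v] = np.linalg.norm(mu1 - mu2)
            return labels, scores

    A full implementation would also build the adjacency between superpixels and look for anomalies that activate connected groups of vertices, as the abstract describes.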

    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat data have a coarse temporal resolution but fine spatial resolution, while moderate resolution imaging spectroradiometer (MODIS) data have fine temporal resolution but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion. Due to the potentially large differences between the spatial resolutions of the images to be fused, the geometric registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and the accuracy of fusion was investigated under different temporal distances between images, different spatial patterns within the images, and different fusion methods (i.e., the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC, two typical spatio-temporal fusion methods). The results show that registration error has a significant impact on the accuracy of spatio-temporal fusion: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, the accuracy of fusion depended not on the temporal distance between the images to be fused, but rather on their statistical correlation. Finally, the Fit-FC method was found to be more accurate than the STARFM method under all registration error scenarios.
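
    A toy illustration of the experimental idea (not the article's protocol): inject a known sub-pixel registration error with SciPy and measure the error it introduces; a real experiment would instead feed the shifted input to a fusion method such as STARFM or Fit-FC and score the fused prediction against the reference:

        import numpy as np
        from scipy.ndimage import shift

        def rmse(a, b):
            return float(np.sqrt(np.mean((a - b) ** 2)))

        reference = np.random.rand(200, 200)  # stand-in for the true fine-resolution image
        for dx in (0.0, 0.5, 1.0, 2.0):       # registration error in pixels
            misregistered = shift(reference, (dx, dx), order=1, mode="nearest")
            print(f"shift = {dx} px -> RMSE = {rmse(reference, misregistered):.4f}")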

    Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super-Resolution

    Due to the limitations of hyperspectral imaging systems, hyperspectral imagery (HSI) often suffers from poor spatial resolution, hampering many applications of the imagery. Hyperspectral super-resolution refers to fusing an HSI with a multispectral image (MSI) to generate an image with both high spatial and high spectral resolution. Recently, several new methods have been proposed to solve this fusion problem, and most of them assume that prior information about the point spread function (PSF) and spectral response function (SRF) is known. In practice, however, this information is often limited or unavailable. This work proposes HyCoNet, an unsupervised deep learning-based fusion method that solves the HSI-MSI fusion problem without prior PSF and SRF information. HyCoNet consists of three coupled autoencoder networks in which the HSI and MSI are unmixed into endmembers and abundances based on the linear unmixing model. Two special convolutional layers are designed to act as a bridge coordinating the three autoencoder networks, and the PSF and SRF parameters are learned adaptively in these two convolutional layers during training. Furthermore, driven by a joint loss function, the proposed method is straightforward and easily implemented in an end-to-end training manner. The experiments performed in the study demonstrate that the proposed method performs well and produces robust results for different datasets and arbitrary PSFs and SRFs.
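
    A hedged PyTorch sketch of the core mechanism only (band counts and scale ratio are assumed, and this is not the full HyCoNet architecture): the PSF is modelled as a learnable per-band blur-and-downsample convolution and the SRF as a learnable 1x1 convolution across bands, so both degradations can be fitted during training rather than supplied as priors:

        import torch
        import torch.nn as nn

        n_hs_bands, n_ms_bands, ratio = 100, 4, 8  # assumed band counts and scale ratio

        # Spatial degradation (PSF): one learnable blur-and-downsample kernel per band.
        psf = nn.Conv2d(n_hs_bands, n_hs_bands, kernel_size=ratio, stride=ratio,
                        groups=n_hs_bands, bias=False)

        # Spectral degradation (SRF): learnable mapping from HS bands to MS bands.
        srf = nn.Conv2d(n_hs_bands, n_ms_bands, kernel_size=1, bias=False)

        x = torch.rand(1, n_hs_bands, 128, 128)  # hypothetical full-resolution scene
        lr_hsi = psf(x)  # simulated low-resolution HSI, shape (1, 100, 16, 16)
        msi = srf(x)     # simulated MSI, shape (1, 4, 128, 128)
        print(lr_hsi.shape, msi.shape)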

    Geographically Weighted Spatial Unmixing for Spatiotemporal Fusion

    Spatiotemporal fusion is a technique for creating images with both fine spatial and fine temporal resolution by blending images with different spatial and temporal resolutions. Spatial unmixing (SU) is a widely used approach for spatiotemporal fusion that requires only the minimum number of input images. However, a common issue in existing SU methods is that spatial variation in land cover between pixels is ignored: for example, all coarse neighbors in a local window are treated equally in the unmixing model, which is inappropriate. Moreover, determining the appropriate number of clusters in the known fine spatial resolution image remains a challenge. In this article, a geographically weighted SU (SU-GW) method is proposed to address the spatial variation in land cover and increase the accuracy of spatiotemporal fusion. SU-GW is a general model suitable for any SU method. Specifically, the existing regularized and soft classification-based versions were extended with the proposed geographically weighted scheme, producing 24 versions of SU (i.e., 12 existing versions were extended to 12 corresponding geographically weighted versions). Furthermore, the cluster validity index of Xie and Beni (XB) was introduced to determine the number of clusters automatically. A systematic comparison of the experimental results of the 24 versions indicated that SU-GW was effective in increasing prediction accuracy. Importantly, all 12 existing methods were enhanced by integrating the SU-GW scheme. Moreover, the most accurate SU-GW-enhanced version was demonstrated to outperform two prevailing spatiotemporal fusion approaches in a benchmark comparison. It can therefore be concluded that SU-GW provides a general solution for enhancing spatiotemporal fusion, one that can be used to update existing methods and potential future versions.
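
    A minimal sketch of the geographic-weighting idea (the notation, kernel, and bandwidth are illustrative, not the paper's exact formulation): coarse neighbors in a local window receive Gaussian distance weights and enter a weighted least-squares unmixing:

        import numpy as np

        def gw_unmix(fractions, coarse_vals, dists, bandwidth=1.5):
            # fractions  : (n_neighbors, n_classes) class proportions per coarse pixel
            # coarse_vals: (n_neighbors,) observed coarse reflectance
            # dists      : (n_neighbors,) distance of each neighbor to the window center
            # Returns per-class reflectance estimates for the central coarse pixel.
            w = np.exp(-0.5 * (dists / bandwidth) ** 2)  # geographic weights
            W = np.diag(w)
            A, c = fractions, coarse_vals
            # Weighted least squares via the normal equations: (A^T W A) f = A^T W c
            f, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ c, rcond=None)
            return f

    In a plain SU method, w would be all ones, i.e., every coarse neighbor in the window contributes equally.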

    Integrating multimodal Raman and photoluminescence microscopy with enhanced insights through multivariate analysis

    This paper introduces a novel multimodal optical microscope integrating Raman and laser-induced photoluminescence (PL) spectroscopy for the analysis of micro-samples relevant to Heritage Science. Micro-samples extracted from artworks, such as paintings, exhibit intricate material compositions characterized by high complexity and spatial heterogeneity, featuring multiple layers of paint that may also be affected by degradation phenomena. A multimodal strategy is therefore imperative for a comprehensive understanding of their material composition and condition. The effectiveness of the proposed setup derives from synergistically harnessing the distinct strengths of Raman and laser-induced PL spectroscopy. The capacity of the latter technique to identify various chemical species is enhanced by using multiple excitation wavelengths and two distinct excitation fluence regimes. The combination of the two complementary techniques allows the setup to achieve comprehensive chemical mapping of the sample through a raster scanning approach. To attain a competitive overall measurement time, we employ a short integration time for each measurement point. We further propose an analysis protocol rooted in a multivariate approach. Specifically, we employ non-negative matrix factorization as the spectral decomposition method. This enables the identification of spectral endmembers that correlate with specific chemical compounds present in the samples. To demonstrate its efficacy in Heritage Science, we present examples involving pigment powder dispersions and stratigraphic micro-samples from paintings. Through these examples, we show how the multimodal approach reinforces material identification and, more importantly, facilitates the extraction of complementary information, which is pivotal because the two optical techniques are sensitive to different materials. Looking ahead, our method holds potential for applications in diverse research fields, including materials science and biology.
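
    A short sketch of the multivariate step described above (map dimensions and component count are hypothetical): non-negative matrix factorization of a raster-scanned spectral map, reshaped to pixels x wavelengths, into endmember spectra and their abundance maps, using scikit-learn:

        import numpy as np
        from sklearn.decomposition import NMF

        h, w, n_channels, n_endmembers = 64, 64, 512, 5
        cube = np.random.rand(h, w, n_channels)  # stand-in for a Raman/PL map

        X = cube.reshape(-1, n_channels)  # (pixels, wavelengths), non-negative
        model = NMF(n_components=n_endmembers, init="nndsvd", max_iter=500)
        abundances = model.fit_transform(X)  # (pixels, endmembers)
        endmembers = model.components_       # (endmembers, wavelengths)
        abundance_maps = abundances.reshape(h, w, n_endmembers)  # one map per compound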