
    Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead

    Panchromatic and multispectral image fusion, termed pan-sharpening, merges the spatial and spectral information of the source images into a single fused image that has higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images alone. It is widely applied in image interpretation and as a pre-processing step for various applications. A large number of methods have been proposed to achieve better fusion results by exploiting the spatial and spectral relationships between panchromatic and multispectral images. In recent years, the rapid development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques; however, the field lacks a comprehensive overview of the recent advances driven by AI and DL. This paper provides a comprehensive review of pan-sharpening methods across four paradigms, i.e., component substitution, multiresolution analysis, degradation models, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined, covering assessment methods for both reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment, and summarize the development trends in these areas, which offer useful methodological guidance for researchers and professionals. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.
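Of the four paradigms the survey names, component substitution is the simplest to illustrate. The sketch below is a minimal, generic CS scheme (equal-weight intensity, histogram-matched PAN, additive detail injection); the function name and weighting are illustrative assumptions, not any specific method from the surveyed literature.

```python
import numpy as np

def cs_pansharpen(ms_up, pan):
    """Minimal component-substitution pan-sharpening sketch.

    ms_up : (H, W, B) multispectral bands, upsampled to the PAN grid
    pan   : (H, W) panchromatic band
    """
    # Intensity component: equal-weight linear combination of the MS bands.
    intensity = ms_up.mean(axis=2)
    # Histogram-match PAN to the intensity component (mean/std matching).
    pan_m = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    # Inject the spatial detail (matched PAN minus intensity) into every band.
    detail = (pan_m - intensity)[..., None]
    return ms_up + detail
```

When the PAN band already equals the intensity component, no detail is injected and the MS data passes through unchanged, which is a quick sanity check for any CS implementation.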

    Remote sensing image fusion via compressive sensing

    In this paper, we propose a compressive sensing-based method to pan-sharpen low-resolution multispectral (LRM) data with the help of high-resolution panchromatic (HRP) data. To successfully apply compressive sensing theory to pan-sharpening, two requirements must be satisfied: (i) a comprehensive dictionary in which the estimated coefficient vectors are sparse; and (ii) no correlation between the constructed dictionary and the measurement matrix. To fulfill these, we propose two novel strategies. The first is to construct a dictionary trained on patches across different image scales. These multiscale patches provide texture atoms without requiring any external database or prior atoms, and the redundancy of the dictionary is removed through K-singular value decomposition (K-SVD). Second, we design an iterative l1-l2 minimization algorithm based on the alternating direction method of multipliers (ADMM) to seek the sparse coefficient vectors. The proposed algorithm stacks the missing high-resolution multispectral (HRM) data with the captured LRM data, so that the latter serves as a constraint on the estimation of the former while the representation coefficients are sought. Three datasets are used to test the performance of the proposed method. A comparative study between the proposed method and several state-of-the-art ones shows its effectiveness in dealing with the complex structures of remote sensing imagery.
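The core computational step described here, an l1-l2 minimization solved with ADMM, can be sketched as a standard lasso solver over a fixed dictionary. This is a generic ADMM sparse-coding sketch under that assumption, not the paper's exact stacked-data algorithm; `lam` and `rho` are illustrative parameters.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, y, lam=0.1, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||D x - y||^2 + lam*||x||_1 over dictionary D."""
    n = D.shape[1]
    Dty = D.T @ y
    # Factor (D^T D + rho I) once; it is reused every iteration.
    L = np.linalg.cholesky(D.T @ D + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        # x-update: quadratic subproblem via the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Dty + rho * (z - u)))
        # z-update: elementwise shrinkage enforces sparsity.
        z = soft(x + u, lam / rho)
        # Dual ascent on the scaled multiplier.
        u = u + x - z
    return z
```

With a small `lam`, the solver recovers a sparse coefficient vector whose support matches the signal's true sparse representation, which is the behavior the dictionary-learning pipeline above relies on.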

    Assessment of Multi-Temporal Image Fusion for Remote Sensing Application

    Image fusion and subsequent scene analysis are important for studying Earth surface conditions from remotely sensed imagery. The fusion of the same scene using satellite data taken with different sensors or at different acquisition times is known as multi-sensor or multi-temporal fusion, respectively. The purpose of this study is to investigate the effects of misalignments on the multi-sensor, multi-temporal fusion process when a pan-sharpened scene is produced from low-spatial-resolution multispectral (MS) images and a high-spatial-resolution panchromatic (PAN) image. It is found that the component substitution (CS) fusion method provides better performance than the multi-resolution analysis (MRA) scheme. Quantitative analysis shows that the CS-based method gives a better result in terms of spatial quality (sharpness), whereas the MRA-based method yields better spectral quality, i.e., better color fidelity to the original MS images.
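The spectral-quality comparison described above is commonly quantified with the Spectral Angle Mapper (SAM), which measures the angle between reference and fused pixel spectra and is invariant to per-pixel intensity scaling. A minimal sketch of the metric (the function name is ours):

```python
import numpy as np

def sam_degrees(ref, fused, eps=1e-12):
    """Mean Spectral Angle Mapper between two (H, W, B) images, in degrees.

    0 means identical spectral directions; larger values mean more
    spectral distortion. Scale changes per pixel do not affect the angle.
    """
    dot = (ref * fused).sum(axis=2)
    denom = np.linalg.norm(ref, axis=2) * np.linalg.norm(fused, axis=2) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(ang).mean())
```

Because SAM ignores magnitude, a sharpening method that brightens pixels uniformly scores 0 distortion, while one that shifts band ratios (color fidelity) is penalized, which is exactly the spectral/spatial trade-off the study measures.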

    A method to better account for modulation transfer functions in ARSIS-based pansharpening methods

    Multispectral (MS) images provided by Earth observation satellites generally have a poor spatial resolution, while panchromatic (PAN) images exhibit a spatial resolution two to four times better. Data fusion is a means to synthesize MS images at a higher spatial resolution than the originals by exploiting the high spatial resolution of the PAN image; this process is often called pansharpening. The synthesis property states that the synthesized MS images should be as close as possible to those that the corresponding sensors would have acquired if they had this high resolution. Methods based on the concept Amélioration de la Résolution Spatiale par Injection de Structures (ARSIS) are able to deliver synthesized images with good spectral quality, but their geometrical quality can still be improved. We propose a more precise definition of the synthesis property in terms of geometry. We then present a method that explicitly takes into account the difference in modulation transfer function (MTF) between PAN and MS in the fusion process. This method is applied to an existing ARSIS-based fusion method, the à trous wavelet transform-model 3. Simulated images from the Pléiades and SPOT-5 sensors are used to illustrate the performance of the approach. Although this paper is limited in methods and data, we observe a better restitution of the geometry and an improvement in all indices classically used in pansharpening quality budgets. We also present a means to assess the respect of the synthesis property from an MTF point of view.
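The idea of matching the lowpass filter to the MS sensor's MTF is often approximated with a Gaussian whose frequency response hits a given gain at the MS Nyquist frequency. The sketch below is our illustrative reading of that idea, not the paper's ARSIS implementation; the function names, the default 0.3 Nyquist gain, and the 1:4 resolution ratio are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    radius = radius or int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def mtf_matched_details(pan, mtf_gain_at_nyquist=0.3, ratio=4):
    """Extract PAN spatial details with a Gaussian lowpass filter whose
    frequency response equals `mtf_gain_at_nyquist` at the MS Nyquist
    frequency (1/(2*ratio) cycles/pixel in PAN units)."""
    fc = 1.0 / (2 * ratio)
    # Fourier transform of exp(-x^2/(2*sigma^2)) is exp(-2*pi^2*sigma^2*f^2);
    # solve exp(-2*pi^2*sigma^2*fc^2) = gain for sigma.
    sigma = np.sqrt(-np.log(mtf_gain_at_nyquist) / (2 * np.pi**2 * fc**2))
    k = gaussian_kernel(sigma)
    # Separable lowpass: filter rows, then columns.
    lp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, pan)
    lp = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, lp)
    return pan - lp
```

In an MRA/ARSIS scheme these extracted details are what gets injected into the upsampled MS bands; a flat image yields zero detail away from the borders, which is a useful correctness check.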

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyperellipsoidal surface instead of a hyperspherical one. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered PAN. Regressing the squared MS bands, instead of using the Euclidean radius as HCS does, means the intensity component no longer lies on a hypersphere in the vector space of the MS samples but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze, in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to those of some of the most advanced methods.
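The regression step described, fitting the squared lowpass PAN against the squared MS bands and taking the square root of the fitted combination, can be sketched as follows. This is our simplified reading of the mechanism (no haze correction, plain least squares); the function name and shapes are illustrative assumptions.

```python
import numpy as np

def hyperellipsoidal_intensity(ms_up, pan_lp):
    """Nonlinear intensity sketch: regress the squared lowpass PAN on the
    squared MS bands, then take the square root of the fitted combination.

    ms_up  : (H, W, B) interpolated MS bands
    pan_lp : (H, W) lowpass-filtered PAN
    The fitted weights define a hyperellipsoid rather than a hypersphere.
    """
    H, W, B = ms_up.shape
    A = ms_up.reshape(-1, B) ** 2            # squared MS bands as regressors
    b = pan_lp.reshape(-1) ** 2              # squared lowpass PAN as target
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    i2 = np.clip(A @ w, 0.0, None)           # fitted squared intensity, >= 0
    return np.sqrt(i2).reshape(H, W)
```

If the squared PAN really is a linear combination of the squared MS bands, the regression recovers the weights exactly and the intensity reproduces the lowpass PAN, which is the consistency the substitution step depends on.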