61 research outputs found

    Guided Nonlocal Patch Regularization and Efficient Filtering-Based Inversion for Multiband Fusion

    In multiband fusion, an image with high spatial and low spectral resolution is combined with an image with low spatial but high spectral resolution to produce a single multiband image having high spatial and spectral resolutions. This comes up in remote sensing applications such as pansharpening (MS+PAN), hyperspectral sharpening (HS+PAN), and HS-MS fusion (HS+MS). Remote sensing images are textured and have repetitive structures. Motivated by nonlocal patch-based methods for image restoration, we propose a convex regularizer that (i) takes into account long-distance correlations, (ii) penalizes patch variation, which is more effective than pixel variation for capturing texture information, and (iii) uses the higher spatial resolution image as a guide image for weight computation. We develop an efficient ADMM algorithm for optimizing the regularizer along with a standard least-squares loss function derived from the imaging model. The novelty of our algorithm is that, by expressing patch variation as filtering operations and by judiciously splitting the original variables and introducing latent variables, we are able to solve the ADMM subproblems efficiently using FFT-based convolution and soft-thresholding. In terms of reconstruction quality, our method is shown to outperform state-of-the-art variational and deep learning techniques. Comment: Accepted in IEEE Transactions on Computational Imaging.
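    The abstract notes that the ADMM subproblems reduce to FFT-based convolution and soft-thresholding. Purely as an illustrative sketch (the paper's actual variable splitting, patch-variation filters, and update order are not reproduced here), the two generic building blocks look like this in Python/numpy:

        import numpy as np

        def soft_threshold(x, tau):
            """Proximal map of tau * ||.||_1: element-wise shrinkage."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def solve_quadratic_fft(rhs, kernel, rho):
            """Solve (K^T K + rho I) x = rhs, where K is a circular convolution
            with `kernel`; the system is diagonal in the 2-D Fourier domain
            (periodic boundary assumption)."""
            K = np.fft.fft2(kernel, s=rhs.shape)
            X = np.fft.fft2(rhs) / (np.abs(K) ** 2 + rho)
            return np.real(np.fft.ifft2(X))

    In a typical ADMM iteration of this kind, the quadratic (least-squares) subproblem is handled by the FFT solve and the L1-type patch penalty by the shrinkage step.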

    Generative Adversarial Network for Pansharpening With Spectral and Spatial Discriminators

    The pansharpening problem amounts to fusing a high-resolution panchromatic image with a low-resolution multispectral image so as to obtain a high-resolution multispectral image. The preservation of the spatial resolution of the panchromatic image and the spectral resolution of the multispectral image is therefore of key importance. To cope with this, we propose a new method based on a bidiscriminator in a generative adversarial network (GAN) framework. The first discriminator is optimized to preserve image textures by taking as input the luminance and the near-infrared band of the images, and the second discriminator preserves the color by comparing the chroma components Cb and Cr. The method thus trains two discriminators, each with a different and complementary task. Moreover, to enhance these aspects, the proposed bidiscriminator method, called MDSSC-GAN SAM, adds a spatial and a spectral constraint to the loss function of the generator. We show the advantages of this new method in experiments carried out on Pléiades and WorldView-3 satellite images.
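    To make the inputs of the two discriminators concrete, here is a minimal sketch (assuming a [0, 1]-normalized RGB composite and the standard ITU-R BT.601 conversion, which is an assumption and not necessarily the transform used in the paper) of extracting the luminance and the Cb/Cr chroma components:

        import numpy as np

        def ycbcr_components(rgb):
            """Split an RGB image of shape (H, W, 3), values in [0, 1], into
            luminance Y and chroma Cb, Cr (ITU-R BT.601 coefficients)."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y  =  0.299 * r + 0.587 * g + 0.114 * b
            cb = -0.168736 * r - 0.331264 * g + 0.5 * b
            cr =  0.5 * r - 0.418688 * g - 0.081312 * b
            return y, cb, cr

    In this hypothetical setup, the texture (spatial) discriminator would receive the pair (Y, NIR) and the color (spectral) discriminator the pair (Cb, Cr).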

    Robust Hyperspectral Image Fusion with Simultaneous Guide Image Denoising via Constrained Convex Optimization

    The paper proposes a new high spatial resolution hyperspectral (HR-HS) image estimation method based on convex optimization. The method assumes a low spatial resolution HS (LR-HS) image and a guide image as observations, where both observations are contaminated by noise. Our method simultaneously estimates an HR-HS image and a noiseless guide image, so it can exploit the spatial information in the guide image even when it is contaminated by heavy noise. The proposed estimation problem adopts hybrid spatio-spectral total variation as regularization and evaluates the edge similarity between the HR-HS and guide images to effectively use a priori knowledge of an HR-HS image and the spatial detail information in the guide image. To efficiently solve the problem, we apply a primal-dual splitting method. Experiments demonstrate the performance of our method and its advantage over several existing methods. Comment: Accepted to IEEE Transactions on Geoscience and Remote Sensing.
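    As a rough illustration of what a hybrid spatio-spectral total variation value on a hyperspectral cube could look like (the weighting and the exact difference operators below are assumptions; the paper's definition may differ), consider:

        import numpy as np

        def hybrid_spatio_spectral_tv(x, omega=0.5):
            """Sketch of a hybrid spatio-spectral TV for a cube x of shape
            (bands, H, W): an L1 mix of spatial differences and of spatial
            differences applied to spectral differences."""
            dh = np.diff(x, axis=2)   # horizontal spatial differences
            dv = np.diff(x, axis=1)   # vertical spatial differences
            ds = np.diff(x, axis=0)   # spectral differences
            spatial = np.abs(dh).sum() + np.abs(dv).sum()
            spatio_spectral = (np.abs(np.diff(ds, axis=2)).sum()
                               + np.abs(np.diff(ds, axis=1)).sum())
            return omega * spatial + (1.0 - omega) * spatio_spectral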

    An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may introduce spectral distortion that noticeably affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. A gradient descent method is adopted to find the optimal solution of the modified energy function, from which the pansharpened image is reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information of the HS images well and reduce the spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation and achieves a good trade-off between spatial and spectral information.
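    The spectral fidelity term described above asks that pairwise band differences of the pansharpened image stay in proportion to those of the HS image. A hypothetical numpy sketch of such a penalty, restricted to adjacent band pairs and with an assumed per-pixel proportionality factor (the paper's exact term is not reproduced here), is:

        import numpy as np

        def spectral_difference_penalty(fused, hs_up, eps=1e-8):
            """Penalize deviation of adjacent-band differences of `fused`
            (bands, H, W) from the proportionally scaled differences of the
            upsampled HS image `hs_up` (same shape)."""
            d_fused = np.diff(fused, axis=0)
            d_hs = np.diff(hs_up, axis=0)
            # assumed proportionality factor: per-pixel total-intensity ratio
            scale = (fused.sum(axis=0) + eps) / (hs_up.sum(axis=0) + eps)
            return 0.5 * np.sum((d_fused - scale * d_hs) ** 2)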

    Pansharpening via Frequency-Aware Fusion Network with Explicit Similarity Constraints

    The process of fusing a high spatial resolution (HR) panchromatic (PAN) image and a low spatial resolution (LR) multispectral (MS) image to obtain an HRMS image is known as pansharpening. With the development of convolutional neural networks, the performance of pansharpening methods has improved; however, blurry effects and spectral distortion still exist in their fusion results due to insufficient detail learning and the frequency mismatch between MS and PAN. Therefore, improving spatial details while reducing spectral distortion remains a challenge. In this paper, we propose a frequency-aware fusion network (FAFNet) together with a novel high-frequency feature similarity loss to address the above-mentioned problems. FAFNet is mainly composed of two kinds of blocks: the frequency-aware blocks extract features in the frequency domain with the help of discrete wavelet transform (DWT) layers, and the frequency fusion blocks reconstruct and transform the features from the frequency domain back to the spatial domain with the assistance of inverse DWT (IDWT) layers. Finally, the fusion results are obtained through a convolutional block. In order to learn the correspondence, we also propose a high-frequency feature similarity loss to constrain the HF features derived from the PAN and MS branches, so that the HF features of PAN can reasonably be used to supplement those of MS. Experimental results on three datasets at both reduced and full resolution demonstrate the superiority of the proposed method compared with several state-of-the-art pansharpening models. Comment: 14 pages.
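    The high-frequency feature similarity loss constrains the HF subbands of the PAN and MS branches to agree. A minimal single-channel sketch, using a one-level Haar DWT in place of the network's DWT layers (the actual loss and subband handling in FAFNet may differ), is:

        import numpy as np

        def haar_dwt2(x):
            """One level of the 2-D Haar DWT for an array with even height
            and width; returns the (LL, LH, HL, HH) subbands."""
            a, b = x[0::2, 0::2], x[0::2, 1::2]
            c, d = x[1::2, 0::2], x[1::2, 1::2]
            ll = (a + b + c + d) / 2.0
            lh = (a - b + c - d) / 2.0
            hl = (a + b - c - d) / 2.0
            hh = (a - b - c + d) / 2.0
            return ll, lh, hl, hh

        def hf_similarity_loss(feat_pan, feat_ms):
            """Mean absolute difference between the high-frequency Haar
            subbands of the PAN-branch and MS-branch feature maps."""
            _, lh_p, hl_p, hh_p = haar_dwt2(feat_pan)
            _, lh_m, hl_m, hh_m = haar_dwt2(feat_ms)
            return (np.mean(np.abs(lh_p - lh_m))
                    + np.mean(np.abs(hl_p - hl_m))
                    + np.mean(np.abs(hh_p - hh_m)))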

    Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead

    Panchromatic and multispectral image fusion, termed pan-sharpening, merges the spatial and spectral information of the source images into a fused image that has higher spatial and spectral resolution and is more reliable for downstream tasks than either source image. It has been widely applied to image interpretation and pre-processing in various applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships among panchromatic and multispectral images. In recent years, the fast development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques. However, this field lacks a comprehensive overview of the recent advances driven by the rise of AI and DL. This paper provides a comprehensive review of pan-sharpening methods that adopt four different paradigms, i.e., component substitution, multiresolution analysis, degradation model, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined, covering assessment methods for both reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment. In addition, the survey summarizes the development trends in these areas, which provide useful methodological practices for researchers and professionals. Finally, the developments in pan-sharpening are summarized in the conclusion. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.

    A Benchmarking Protocol for SAR Colorization: From Regression to Deep Learning Approaches

    Synthetic aperture radar (SAR) images are widely used in remote sensing. Interpreting SAR images can be challenging due to their intrinsic speckle noise and grayscale nature. To address this issue, SAR colorization has emerged as a research direction that colorizes grayscale SAR images while preserving the original spatial and radiometric information. However, this research field is still in its early stages, and many limitations can be highlighted. In this paper, we propose a full research line for supervised learning-based approaches to SAR colorization. Our approach includes a protocol for generating synthetic color SAR images, several baselines, and an effective method based on the conditional generative adversarial network (cGAN) for SAR colorization. We also propose numerical assessment metrics for the problem at hand. To our knowledge, this is the first attempt to propose a research line for SAR colorization that includes a protocol, a benchmark, and a complete performance evaluation. Our extensive tests demonstrate the effectiveness of the proposed cGAN-based network for SAR colorization. The code will be made publicly available. Comment: 16 pages, 16 figures, 6 tables.
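    The paper's own numerical assessment metrics are not reproduced here; as generic stand-ins, colorization quality against the synthetic color reference could be scored with PSNR and a mean per-pixel spectral angle:

        import numpy as np

        def psnr(ref, est, peak=1.0):
            """Peak signal-to-noise ratio between reference and estimate."""
            mse = np.mean((ref - est) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def mean_spectral_angle(ref, est, eps=1e-12):
            """Mean per-pixel angle (radians) between reference and estimated
            color vectors; ref and est have shape (H, W, C)."""
            num = np.sum(ref * est, axis=-1)
            den = (np.linalg.norm(ref, axis=-1)
                   * np.linalg.norm(est, axis=-1) + eps)
            return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))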

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyperellipsoidal surface instead of a hyperspherical surface. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered PAN. The regression of the squared MS bands, instead of the Euclidean radius used by HCS, makes the intensity component lie not on a hypersphere in the vector space of the MS samples but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to those of some of the most advanced methods.
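    A loose reading of the intensity construction (an interpretation, not the authors' exact algorithm) is: fit nonnegative weights so that the squared lowpass-filtered PAN is approximated by a weighted sum of squared MS bands, take the square root of that quadratic form as the intensity, and inject details multiplicatively after haze subtraction. A numpy sketch under these assumptions:

        import numpy as np

        def hyperellipsoidal_intensity(ms, pan_lp):
            """Fit weights w so that pan_lp**2 ~ sum_k w_k * ms[k]**2, then
            return I = sqrt(sum_k w_k * ms[k]**2).
            ms: (bands, H, W) interpolated MS; pan_lp: (H, W) lowpass PAN."""
            A = (ms ** 2).reshape(ms.shape[0], -1).T   # (pixels, bands)
            y = (pan_lp ** 2).ravel()
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            w = np.maximum(w, 0.0)                     # keep the quadratic form nonnegative
            intensity_sq = np.tensordot(w, ms ** 2, axes=1)
            return np.sqrt(np.maximum(intensity_sq, 1e-12))

        def multiplicative_injection(ms, pan, intensity, haze):
            """Assumed contrast-based multiplicative injection with de-hazed
            components; haze holds per-band constants of shape (bands, 1, 1)."""
            ratio = pan / np.maximum(intensity, 1e-12)
            return haze + (ms - haze) * ratio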