
    Hyperspectral and Multispectral Image Fusion using Optimized Twin Dictionaries

    Spectral or spatial dictionaries have been widely used for fusing low-spatial-resolution hyperspectral (LH) images and high-spatial-resolution multispectral (HM) images. However, using only a spectral dictionary is insufficient for preserving spatial information, and vice versa. To address this problem, a new LH and HM image fusion method using optimized twin dictionaries, termed OTD, is proposed in this paper. The fusion problem of OTD is formulated analytically in the framework of sparse representation as an optimization over twin spectral-spatial dictionaries and their corresponding sparse coefficients. More specifically, the spectral dictionary representing the generalized spectra and its spectral sparse coefficients are optimized using the observed LH and HM images in the spectral domain, while the spatial dictionary representing the spatial information and its spatial sparse coefficients are optimized by modeling the remaining high-frequency information in the spatial domain. In addition, without non-negative constraints, the alternating direction method of multipliers (ADMM) is employed to carry out the above optimization. Comparisons with related state-of-the-art fusion methods on various datasets demonstrate that the proposed OTD method achieves better fusion performance in both the spatial and spectral domains.
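    The abstract does not give the exact twin-dictionary updates, but the basic building block of this kind of sparse-representation fusion is an l1-regularized coding step solved with ADMM. The Python/NumPy sketch below shows that step for a single dictionary and signal; the function names, regularization weight, and fixed iteration count are illustrative assumptions, not the authors' formulation.

        import numpy as np

        def soft_threshold(v, t):
            # Elementwise soft-thresholding (proximal operator of the l1 norm).
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def sparse_code_admm(D, y, lam=0.1, rho=1.0, n_iter=100):
            """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 with ADMM.

            D : (m, k) dictionary, y : (m,) observed signal. Returns the sparse code.
            """
            m, k = D.shape
            x = np.zeros(k); z = np.zeros(k); u = np.zeros(k)
            A = D.T @ D + rho * np.eye(k)      # reused in every x-update
            Dty = D.T @ y
            for _ in range(n_iter):
                x = np.linalg.solve(A, Dty + rho * (z - u))   # quadratic subproblem
                z = soft_threshold(x + u, lam / rho)          # l1 proximal step
                u = u + x - z                                 # dual update
            return z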

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has long been an important topic in remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some common disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, although many learning-based methods have been proposed over decades of innovation, most of them require a large set of training pairs, which is impractical for many real problems. To address these problems, we first propose an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically accurate high-resolution HSI. First, given the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated as the difference between it and a CNN-generated blurry version. The final HR-HSI is obtained by injecting these details into the output of a generative CNN that takes the LR-HSI as input. LPFNet is designed for fusing an LR-HSI and an HR-MSI that cover the same Visible-Near-Infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are as important as VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI, which provides the spatial details in the fusion process, usually has no SWIR coverage or only lower-spatial-resolution SWIR. To this end, we design an unsupervised cascade fusion network (UCFNet) to sharpen the Vis-NIR-SWIR LR-HSI. First, a preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce the HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervising loss based on the cascade strategy is introduced to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared with the proposed methods using different quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
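    As a rough illustration of the first stage described above (the linear-regression estimate of the preliminary HR-HSI), the NumPy sketch below fits per-band regression coefficients from co-registered low-resolution MS/HS pairs and applies them to the HR-MSI. The array shapes, bias column, and least-squares solver are assumptions; the actual LPFNet regression step may differ.

        import numpy as np

        def preliminary_hr_hsi(lr_hsi, lr_msi, hr_msi):
            """Hypothetical linear-regression initialization.

            lr_hsi : (h, w, B)  low-resolution hyperspectral cube
            lr_msi : (h, w, b)  HR-MSI downsampled to the LR grid
            hr_msi : (H, W, b)  high-resolution multispectral image
            Returns a (H, W, B) preliminary HR-HSI.
            """
            h, w, B = lr_hsi.shape
            b = lr_msi.shape[2]
            # Regress each HS band on the MS bands (plus a bias column) at low resolution.
            X = np.concatenate([lr_msi.reshape(-1, b), np.ones((h * w, 1))], axis=1)
            Y = lr_hsi.reshape(-1, B)
            coef, *_ = np.linalg.lstsq(X, Y, rcond=None)          # (b + 1, B)
            # Apply the fitted coefficients to the high-resolution MS image.
            H, W, _ = hr_msi.shape
            Xh = np.concatenate([hr_msi.reshape(-1, b), np.ones((H * W, 1))], axis=1)
            return (Xh @ coef).reshape(H, W, B)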

    Spectral Super-Resolution of Satellite Imagery with Generative Adversarial Networks

    Hyperspectral (HS) data provide the most accurate interpretation of the surface, offering fine spectral information with hundreds of narrow contiguous bands, whereas multispectral (MS) bands cover broader portions of the electromagnetic spectrum. This difference is noticeable in applications such as agriculture, geosciences, and astronomy. However, HS sensors are scarce on Earth-observing spacecraft due to their high cost. In this study, we propose a novel loss function for generative adversarial networks as a spectrally oriented and general-purpose solution to spectral super-resolution of satellite imagery. The proposed architecture learns a mapping from MS to HS data, generating nearly 20x more bands than the given input. We show that we outperform state-of-the-art methods in visual interpretation and statistical metrics.
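    The abstract does not specify the spectral-oriented loss, so the PyTorch sketch below pairs a standard non-saturating adversarial term with a spectral-angle fidelity term as one plausible instance; the weighting factor and function names are hypothetical, not the authors' definition.

        import torch
        import torch.nn.functional as F

        def spectral_angle_loss(pred, target, eps=1e-8):
            """Mean spectral angle (radians) between predicted and reference spectra.

            pred, target : (N, C, H, W) tensors with C spectral bands.
            """
            dot = (pred * target).sum(dim=1)
            norm = pred.norm(dim=1) * target.norm(dim=1) + eps
            return torch.acos((dot / norm).clamp(-1 + eps, 1 - eps)).mean()

        def generator_loss(fake_hs, real_hs, disc_fake_logits, lambda_spec=10.0):
            # Adversarial term (fool the discriminator) plus a spectral fidelity term.
            adv = F.binary_cross_entropy_with_logits(
                disc_fake_logits, torch.ones_like(disc_fake_logits))
            return adv + lambda_spec * spectral_angle_loss(fake_hs, real_hs)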

    Pansharpening via Frequency-Aware Fusion Network with Explicit Similarity Constraints

    The process of fusing a high-spatial-resolution (HR) panchromatic (PAN) image and a low-spatial-resolution (LR) multispectral (MS) image to obtain an HRMS image is known as pansharpening. With the development of convolutional neural networks, the performance of pansharpening methods has improved; however, blurry effects and spectral distortion still exist in their fusion results due to insufficient detail learning and the frequency mismatch between MS and PAN. Therefore, improving spatial details while reducing spectral distortion remains a challenge. In this paper, we propose a frequency-aware fusion network (FAFNet) together with a novel high-frequency feature similarity loss to address the above problems. FAFNet is mainly composed of two kinds of blocks: frequency-aware blocks extract features in the frequency domain with the help of discrete wavelet transform (DWT) layers, and frequency fusion blocks reconstruct and transform the features from the frequency domain back to the spatial domain with the assistance of inverse DWT (IDWT) layers. Finally, the fusion result is obtained through a convolutional block. To learn the correspondence, we also propose a high-frequency feature similarity loss that constrains the high-frequency (HF) features derived from the PAN and MS branches, so that the HF features of PAN can reasonably supplement those of MS. Experimental results on three datasets at both reduced and full resolution demonstrate the superiority of the proposed method compared with several state-of-the-art pansharpening models.
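    As a hedged sketch of the high-frequency similarity idea (not FAFNet's actual blocks or loss), the PyTorch snippet below extracts one-level Haar high-frequency subbands with fixed depthwise convolutions and penalizes their per-pixel cosine dissimilarity between same-shaped PAN and MS feature maps; the Haar filters and cosine measure are assumptions.

        import torch
        import torch.nn.functional as F

        def haar_highpass(x):
            """One-level Haar decomposition; returns the LH, HL, HH subbands
            stacked along the channel axis. x : (N, C, H, W), H and W even.
            """
            lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
            hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
            hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
            k = torch.stack([lh, hl, hh]).unsqueeze(1).to(x)   # (3, 1, 2, 2)
            c = x.shape[1]
            k = k.repeat(c, 1, 1, 1)                           # depthwise filters
            return F.conv2d(x, k, stride=2, groups=c)          # (N, 3C, H/2, W/2)

        def hf_similarity_loss(feat_pan, feat_ms):
            # Penalize dissimilarity between the high-frequency parts of the PAN
            # and MS feature maps via per-location cosine similarity.
            hp, hm = haar_highpass(feat_pan), haar_highpass(feat_ms)
            return (1.0 - F.cosine_similarity(hp, hm, dim=1)).mean()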