
    Hyperspectral Image Super-Resolution Using Optimization and DCNN-Based Methods

    Reconstructing a high-resolution (HR) hyperspectral (HS) image from an observed low-resolution (LR) hyperspectral image or a high-resolution multispectral (RGB) image obtained with existing imaging cameras is an important research topic for capturing comprehensive scene information in both the spatial and spectral domains. HR-HS image reconstruction follows two main research strategies: optimization-based methods and deep convolutional neural network (DCNN)-based learning methods. The optimization-based approaches estimate the HR-HS image by minimizing the reconstruction errors of the available low-resolution hyperspectral and high-resolution multispectral images under different prior constraints such as representation sparsity, spectral physical properties, and spatial smoothness. Recently, DCNNs have been applied to resolution enhancement of natural images and have achieved promising performance. This chapter provides a comprehensive description of both the conventional optimization-based methods and the recently investigated DCNN-based learning methods for HS image super-resolution, mainly covering spectral reconstruction CNNs and spatial-spectral fusion CNNs. Experimental results on benchmark datasets validate the effectiveness of HS image super-resolution in both quantitative metrics and visual quality.
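    The optimization-based formulation described above can be sketched numerically: one data-fit term per observation plus a smoothness prior. A minimal NumPy illustration, assuming a block-averaging spatial downsampler, a hypothetical 3-band spectral response matrix `R`, and toy sizes (all assumptions, not any specific paper's model):

```python
import numpy as np

# Toy sizes: 31 spectral bands, 16x16 HR spatial grid, decimation factor 4.
bands, H, W, d = 31, 16, 16, 4
rng = np.random.default_rng(0)
Z = rng.random((bands, H, W))                              # candidate HR-HS image
R = rng.random((3, bands)); R /= R.sum(1, keepdims=True)   # toy spectral response

# Simulated observations: spatial block-average (LR-HS) and spectral projection (HR-MS).
Y_h = Z.reshape(bands, H // d, d, W // d, d).mean(axis=(2, 4))
Y_m = np.tensordot(R, Z, axes=1)

def objective(Z_est, lam=0.1):
    """||D(Z)-Y_h||^2 + ||R Z - Y_m||^2 + lam * quadratic smoothness prior."""
    Dz = Z_est.reshape(bands, H // d, d, W // d, d).mean(axis=(2, 4))
    data_h = np.sum((Dz - Y_h) ** 2)
    data_m = np.sum((np.tensordot(R, Z_est, axes=1) - Y_m) ** 2)
    smooth = np.sum(np.diff(Z_est, axis=1) ** 2) + np.sum(np.diff(Z_est, axis=2) ** 2)
    return data_h + data_m + lam * smooth

print(objective(Z))   # for the true Z, only the prior term remains
```

    An actual method would minimize this objective over `Z_est`; here the point is only the shape of the cost: two reconstruction-error terms tied together by one latent HR-HS image, plus a constraint term.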

    Super-resolution of hyperspectral images using local spectral unmixing

    For many remote sensing applications it is preferable to have images with both high spectral and high spatial resolution. In this regard, hyperspectral and multispectral images have complementary characteristics in terms of spectral and spatial resolution. In this paper we propose an approach for fusing low spatial resolution hyperspectral images with high spatial resolution multispectral images in order to obtain super-resolution (spatial and spectral) hyperspectral images. The proposed approach is based on the assumption that, since both the hyperspectral and multispectral images are acquired over the same scene, the corresponding endmembers should be the same. In a first step, the hyperspectral image is spectrally downsampled to match the multispectral one. An endmember extraction algorithm is then run on the downsampled hyperspectral image, and the subsequent abundance estimation is performed on the multispectral one. Finally, the extracted endmembers are upsampled back to the original hyperspectral space and used to reconstruct the super-resolution hyperspectral image according to the abundances obtained from the multispectral image.
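    The unmixing-based pipeline above (shared endmembers, abundances estimated from the MSI, reconstruction in the HS space) can be sketched on synthetic data. A toy illustration, assuming the endmembers `E` are already known rather than extracted, plain least squares in place of a constrained abundance estimator, and illustrative sizes:

```python
import numpy as np

# 31 HS bands, 4 MS bands, 3 endmembers, 100 high-resolution pixels (all toy values).
bands, msi_bands, p, N = 31, 4, 3, 100
rng = np.random.default_rng(1)
E = rng.random((bands, p))               # endmembers in the hyperspectral space
A = rng.dirichlet(np.ones(p), size=N).T  # true HR abundances (sum-to-one, p x N)
R = rng.random((msi_bands, bands))       # spectral response mapping HS -> MS

Y_m = R @ E @ A                          # observed HR multispectral image (pixels as columns)
E_msi = R @ E                            # endmembers projected into the MSI space

# Estimate HR abundances from the MSI using the projected endmembers.
A_hat, *_ = np.linalg.lstsq(E_msi, Y_m, rcond=None)

# Reconstruct the super-resolved HS image with the original HS endmembers.
Z_hat = E @ A_hat
```

    In the noiseless, well-conditioned toy case the abundances are recovered exactly and `Z_hat` matches `E @ A`; real pipelines add nonnegativity and sum-to-one constraints on the abundances.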

    Dual-Stage Approach Toward Hyperspectral Image Super-Resolution

    Hyperspectral imaging achieves high spectral resolution at the sacrifice of spatial resolution. Improving the resolution in the spatial domain without reducing the spectral resolution is a very challenging problem. Motivated by the observation that hyperspectral images exhibit high similarity between adjacent bands over a large spectral range, in this paper we explore a new structure for hyperspectral image super-resolution (DualSR), leading to a dual-stage design, i.e., a coarse stage and a fine stage. In the coarse stage, five bands with high similarity in a certain spectral range are divided into three groups, and the current band is guided to learn from the knowledge they carry. Under an alternating spectral fusion mechanism, the coarse SR image is super-resolved band by band. To build the model from a global perspective, an enhanced back-projection method with a spectral angle constraint is developed in the fine stage to enforce spatial-spectral consistency, dramatically improving the performance gain. Extensive experiments demonstrate the effectiveness of the proposed coarse and fine stages. Moreover, our network produces state-of-the-art results against existing works in terms of spatial reconstruction and spectral fidelity.
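    The quantity behind the spectral angle constraint mentioned above can be computed directly. A minimal sketch of the spectral angle between two spectra (the paper's back-projection scheme itself is not reproduced here):

```python
import numpy as np

def spectral_angle(x, y, eps=1e-12):
    """Angle in radians between two spectra, i.e. vectors over the band axis.
    Invariant to per-pixel intensity scaling, so it isolates spectral shape."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

s = np.array([0.2, 0.5, 0.9])
print(spectral_angle(s, 2.0 * s))   # scaled copy: angle ~ 0
print(spectral_angle(s, s[::-1]))   # different spectral shape: angle > 0
```

    Because the angle ignores magnitude, using it as a constraint penalizes spectral distortion without fighting the spatial reconstruction term over brightness.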

    A Spectral Diffusion Prior for Hyperspectral Image Super-Resolution

    Fusion-based hyperspectral image (HSI) super-resolution aims to produce a high-spatial-resolution HSI by fusing a low-spatial-resolution HSI and a high-spatial-resolution multispectral image. Such an HSI super-resolution process can be modeled as an inverse problem, in which prior knowledge is essential for obtaining the desired solution. Motivated by the success of diffusion models, we propose a novel spectral diffusion prior for fusion-based HSI super-resolution. Specifically, we first investigate the spectrum generation problem and design a spectral diffusion model to model the spectral data distribution. Then, in a maximum a posteriori framework, we keep the transition information between every two neighboring states during the reverse generative process, thereby embedding the knowledge of the trained spectral diffusion model into the fusion problem in the form of a regularization term. Finally, we treat each generation step of the final optimization problem as a subproblem and employ the Adam optimizer to solve these subproblems in reverse sequence. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed approach. The code will be available at https://github.com/liuofficial/SDP

    Fusformer: A Transformer-based Fusion Approach for Hyperspectral Image Super-resolution

    Hyperspectral images have become increasingly important due to their abundant spectral information. However, they have poor spatial resolution owing to the limitations of current imaging mechanisms. Many convolutional neural networks (CNNs) have been proposed for the hyperspectral image super-resolution problem, but CNN-based methods capture only local information rather than global information because of the limited receptive field of the convolution kernel. In this paper, we design a transformer-based network for fusing low-resolution hyperspectral images and high-resolution multispectral images to obtain high-resolution hyperspectral images. Thanks to the representational ability of the transformer, our approach can explore the intrinsic relationships of features globally. Furthermore, since the LR-HSIs already hold the main spectral structure, the network focuses on spatial detail estimation, relieved of the burden of reconstructing the whole data. This reduces the mapping space of the proposed network and enhances the final performance. Various experiments and quality indexes show the superiority of our approach compared with other state-of-the-art methods.
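    The residual design hinted at above, where the network only estimates spatial details on top of the spectrally faithful LR-HSI, can be sketched as follows. A zero placeholder stands in for the transformer's output, and all shapes are illustrative:

```python
import numpy as np

def upsample(lr, d):
    """Nearest-neighbour spatial upsampling of a (bands, h, w) cube by factor d."""
    return lr.repeat(d, axis=1).repeat(d, axis=2)

lr_hsi = np.random.default_rng(2).random((31, 8, 8))
details = np.zeros((31, 32, 32))          # placeholder for the learned spatial residual
hr_hsi = upsample(lr_hsi, 4) + details    # spectra carried over; only details are learned
```

    Because the network's target is the residual `details` rather than the full cube, its output space is much smaller than the full HR-HS image, which is the "reduced mapping space" the abstract refers to.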

    Hyperspectral and multispectral image fusion via tensor sparsity regularization

    Hyperspectral image (HSI) super-resolution based on fusing an HSI with a multispectral image (MSI) has been a prevalent research theme in remote sensing. However, most existing HSI-MSI fusion (HMF) methods apply sparsity priors across the spatial or spectral domain by vectorizing hyperspectral cubes along a certain dimension, which distorts the spatial or spectral information. Moreover, current HMF works rarely leverage the nonlocal similar structure over the spatial domain of the HSI. In this paper, we propose a new HSI-MSI fusion approach via tensor sparsity regularization, which can encode the essential spatial and spectral sparsity of an HSI. Specifically, we study how to reasonably utilize tensor sparsity to describe the spatial-spectral correlation hidden in an HSI. We then resort to an efficient optimization strategy based on the alternating direction method of multipliers (ADMM) to solve the resulting minimization problem. Experimental results on the Pavia University data verify the merits of the proposed HMF algorithm.
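    When ADMM is applied to sparsity-regularized problems of this kind, the recurring per-iteration building block is the l1 proximal operator, i.e. soft-thresholding. A minimal sketch of that operator alone; the paper's tensor machinery is not reproduced:

```python
import numpy as np

def soft_threshold(x, tau):
    """prox of tau*||.||_1: sign(x) * max(|x| - tau, 0), applied elementwise.
    Entries smaller than tau in magnitude are zeroed; the rest shrink toward 0."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

c = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(soft_threshold(c, 0.5))   # -> [-1.5  0.   0.   0.   1. ]
```

    In an ADMM loop this step enforces sparsity on the (tensor) coefficients, alternating with a quadratic data-fit update and a dual variable update.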

    Multi-source imagery fusion using deep learning in a cloud computing platform

    Given the high availability of data collected by different remote sensing instruments, the fusion of multispectral and hyperspectral images (HSI) is an important topic in remote sensing. In particular, super-resolution as a data fusion application over the spatial and spectral domains is heavily investigated, because the fused images are used to improve classification and object-tracking accuracy. On the other hand, the huge amount of data obtained by remote sensing instruments raises key concerns in terms of data storage, management, and pre-processing. This paper proposes a Big Data cloud platform using Hadoop and Spark to store, manage, and process remote sensing data. A study of the chunk size parameter is also presented to suggest an appropriate value for downloading imagery data from Hadoop into a Spark application, based on the format of our data. We also developed an alternative super-resolution approach based on a Long Short-Term Memory network trained with different patch sizes. This approach fuses hyperspectral and multispectral images, yielding images with both high spatial and high spectral resolution. The experimental results show that, for a chunk size of 64 KB, an average of 3.5 s was required to download data from Hadoop into a Spark application. The proposed super-resolution model provides structural similarity index values of 0.98 and 0.907 on the datasets used.
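    Training with different patch sizes, as mentioned above, implies extracting fixed-size patches from the image cube as a preprocessing step. A minimal sketch with illustrative shapes (not the paper's actual pipeline):

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a size x size spatial window over a (bands, H, W) cube with the
    given stride and stack the resulting patches along a new leading axis."""
    bands, H, W = img.shape
    return np.stack([img[:, i:i + size, j:j + size]
                     for i in range(0, H - size + 1, stride)
                     for j in range(0, W - size + 1, stride)])

cube = np.zeros((31, 32, 32))
patches = extract_patches(cube, size=8, stride=8)
print(patches.shape)   # (16, 31, 8, 8): a 4x4 grid of non-overlapping 8x8 patches
```

    Varying `size` and `stride` changes the number of training samples and the spatial context each one carries, which is the trade-off a patch-size study explores.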