2,060 research outputs found

    Image fusion for spatial enhancement of hyperspectral image via pixel group based non-local sparse representation

    Because of technical and budget constraints, hyperspectral images (HSIs) are usually acquired with low spatial resolution. To improve the spatial resolution of a given HSI, a new spatial-spectral image fusion approach based on pixel-group non-local sparse representation is proposed, which exploits the spectral sparsity and spectral non-local self-similarity of the hyperspectral image. The proposed approach fuses the HSI with a high-spatial-resolution multispectral image of the same scene to obtain an HSI with both high spatial and high spectral resolution. The input HSI is used to train the spectral dictionary, while the sparse codes of the desired HSI are estimated by jointly encoding the similar pixels in each pixel group extracted from the high-spatial-resolution multispectral image. To improve the accuracy of the pixel-group-based non-local sparse representation, the similar pixels in a pixel group are selected using both spectral and spatial information. The performance of the proposed approach is tested on two remote sensing image datasets. Experimental results suggest that the proposed method outperforms a number of sparse-representation-based fusion techniques and preserves the spectral information while recovering spatial details under large magnification factors.
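    To make the joint-coding step concrete, the following is a minimal NumPy sketch (our illustration, not the authors' code) of simultaneous orthogonal matching pursuit over one pixel group: the pixels in the group share a single support in a spectral dictionary `D` learned from the LR HSI, and `R` is an assumed spectral response mapping HSI bands to MSI bands.

```python
# Minimal sketch, assuming D (hsi_bands x atoms) is a learned spectral dictionary,
# R (msi_bands x hsi_bands) a known spectral response, and `group` holds
# spectrally/spatially similar MSI pixels as columns (msi_bands x n_pixels).
import numpy as np

def joint_sparse_code(group, D, R, n_nonzero=5):
    """Simultaneous OMP: all pixels in the group share one support."""
    D_msi = R @ D                         # project dictionary into the MSI domain
    residual = group.copy()
    support = []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the joint residual of the group
        scores = np.linalg.norm(D_msi.T @ residual, axis=1)
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        sub = D_msi[:, support]
        codes, *_ = np.linalg.lstsq(sub, group, rcond=None)
        residual = group - sub @ codes
    A = np.zeros((D.shape[1], group.shape[1]))
    A[support, :] = codes
    return A                              # shared-support sparse codes

# The HR HSI pixels of the group are then reconstructed as D @ A.
```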

    Fusion of multispectral and hyperspectral images based on sparse representation

    This paper presents an algorithm based on sparse representation for fusing hyperspectral and multispectral images. The observed images are assumed to be obtained by spectral or spatial degradations of the high-resolution hyperspectral image to be recovered. Based on this forward model, the fusion process is formulated as an inverse problem whose solution is determined by optimizing an appropriate criterion. To incorporate additional spatial information within the objective criterion, a regularization term is carefully designed, relying on a sparse decomposition of the scene over a set of dictionaries. The dictionaries and the corresponding supports of the active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved by iteratively optimizing with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed fusion method compared with the state of the art.
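    In the notation commonly used for this family of methods (ours, not necessarily the paper's), the forward model and the regularized criterion described above read roughly as follows, with B and S modeling spatial blurring and downsampling, R the spectral response, and the last term a sparse-coding prior built from the learned dictionaries and supports:

```latex
% Forward model: the observed HSI and MSI are degradations of the target image X
\mathbf{Y}_{\mathrm{H}} = \mathbf{X}\mathbf{B}\mathbf{S} + \mathbf{N}_{\mathrm{H}}, \qquad
\mathbf{Y}_{\mathrm{M}} = \mathbf{R}\mathbf{X} + \mathbf{N}_{\mathrm{M}}

% Fusion as a regularized inverse problem; the optimization alternates (ADMM on X,
% then the coding coefficients \bar{\mathbf{A}} on the learned dictionaries \bar{\mathbf{D}})
\min_{\mathbf{X},\,\bar{\mathbf{A}}}\;
\tfrac{1}{2}\bigl\|\mathbf{Y}_{\mathrm{H}} - \mathbf{X}\mathbf{B}\mathbf{S}\bigr\|_F^2
+ \tfrac{1}{2}\bigl\|\mathbf{Y}_{\mathrm{M}} - \mathbf{R}\mathbf{X}\bigr\\|_F^2
+ \lambda\,\bigl\|\mathbf{X} - \bar{\mathbf{D}}\bar{\mathbf{A}}\bigr\|_F^2
```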

    A convex formulation for hyperspectral image superresolution via subspace-based regularization

    Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolution. The problem of inferring images that combine the high spectral resolution of HSIs with the high spatial resolution of MSIs is a data fusion problem that has been the focus of recent active research, owing to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector total variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the non-quadratic and non-smooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally "live" in a low-dimensional subspace and by tailoring the Split Augmented Lagrangian Shrinkage Algorithm (SALSA), an instance of the alternating direction method of multipliers (ADMM), to this optimization problem by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data. Comment: IEEE Trans. Geosci. Remote Sens., to be published.
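    Written out, the subspace-regularized objective described above takes roughly the following form (our transcription), with E the spectral subspace basis, Z the subspace coefficients to estimate, B and M the blur and downsampling operators, R the spectral response, and VTV the vector total variation regularizer:

```latex
% Low-dimensional subspace representation of the target HR HSI
\mathbf{X} = \mathbf{E}\,\mathbf{Z}

% Convex objective: two quadratic data-fitting terms + vector total variation,
% minimized over Z with SALSA (an ADMM instance) via variable splitting
\min_{\mathbf{Z}}\;
\tfrac{1}{2}\bigl\|\mathbf{Y}_{\mathrm{h}} - \mathbf{E}\mathbf{Z}\mathbf{B}\mathbf{M}\bigr\|_F^2
+ \tfrac{\lambda_m}{2}\bigl\|\mathbf{Y}_{\mathrm{m}} - \mathbf{R}\mathbf{E}\mathbf{Z}\bigr\|_F^2
+ \lambda_{\varphi}\,\mathrm{VTV}(\mathbf{Z})
```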

    Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution

    In many computer vision applications, obtaining images with high resolution in both the spatial and spectral domains is equally important. However, due to hardware limitations, one can only expect to acquire images with high resolution in either the spatial or the spectral domain. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain an HR HSI. Existing deep learning-based solutions are all supervised, requiring a large training set and the availability of HR HSIs, which is unrealistic. Here, we make the first attempt to solve the HSI-SR problem with an unsupervised encoder-decoder architecture that has the following distinguishing features. First, it is composed of two encoder-decoder networks, coupled through a shared decoder, in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between the representations is minimized in order to reduce spectral distortion. We refer to the proposed architecture as unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN compared to the state of the art. Comment: Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018, Spotlight).
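    A minimal PyTorch sketch of the coupled architecture (our illustration, not the released uSDN code): two encoders feed a shared decoder, a softmax keeps the per-pixel representations non-negative and sum-to-one as a simple stand-in for the sparse Dirichlet constraint, and a cosine-based angular loss ties the two representations together. The spectral response `srf`, the layer sizes, and the per-pixel pairing in the toy usage are assumptions.

```python
# Sketch under stated assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_bands, n_abund=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_bands, 64), nn.ReLU(),
            nn.Linear(64, n_abund),
        )
    def forward(self, x):
        # softmax -> non-negative, sum-to-one codes (Dirichlet-like simplex)
        return F.softmax(self.net(x), dim=-1)

class SharedDecoder(nn.Module):
    def __init__(self, hsi_bands, n_abund=30):
        super().__init__()
        self.lin = nn.Linear(n_abund, hsi_bands, bias=False)  # shared spectral basis
    def forward(self, a):
        return self.lin(a)

def angular_loss(a_hsi, a_msi):
    # penalize the angle between the two modalities' representations
    return (1 - F.cosine_similarity(a_hsi, a_msi, dim=-1)).mean()

# toy usage on per-pixel spectra: 31-band HSI pixels and 3-band MSI pixels
enc_h, enc_m, dec = Encoder(31), Encoder(3), SharedDecoder(31)
hsi_px, msi_px = torch.rand(128, 31), torch.rand(128, 3)
srf = torch.rand(31, 3)                    # assumed HSI-to-MSI spectral response
rec_h = dec(enc_h(hsi_px))                 # reconstruct LR HSI spectra
rec_m = dec(enc_m(msi_px)) @ srf           # project HR-branch output to MSI bands
loss = F.mse_loss(rec_h, hsi_px) + F.mse_loss(rec_m, msi_px) \
       + 0.1 * angular_loss(enc_h(hsi_px), enc_m(msi_px))
```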

    Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net

    Hyperspectral imaging can help to better understand the characteristics of different materials compared with traditional imaging systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach that merges an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model that takes into consideration the observation models of the low-resolution images and the low-rankness of the HrHS image along its spectral mode. We then design an iterative algorithm to solve the model using the proximal gradient method and, by unfolding this algorithm, construct a deep network, called MS/HS Fusion Net, in which the proximal operators and model parameters are learned by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, compared with state-of-the-art methods along this line of research. Comment: 10 pages, 7 figures.
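    The unfolding idea can be sketched as follows (our reading of the general technique, not the authors' network): each stage takes a gradient step on two data-fitting terms and replaces the proximal operator with a small CNN. In this toy version the spatial blur is omitted, the downsampling gradient is approximated with average pooling and nearest-neighbor upsampling, and the spectral response is a learned parameter.

```python
# Minimal PyTorch sketch of proximal-gradient unfolding, under stated assumptions.
import torch
import torch.nn as nn

class ProxCNN(nn.Module):
    def __init__(self, bands):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, bands, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)              # learned proximal mapping (residual form)

class UnfoldedFusionNet(nn.Module):
    def __init__(self, hs_bands=31, ms_bands=3, stages=4, scale=8):
        super().__init__()
        self.stages = nn.ModuleList(ProxCNN(hs_bands) for _ in range(stages))
        self.step = nn.Parameter(torch.tensor(0.1))               # learned step size
        self.srf = nn.Parameter(torch.rand(ms_bands, hs_bands))   # spectral response
        self.scale = scale

    def forward(self, lr_hs, hr_ms):
        # initialize with an upsampled version of the LR HSI
        x = nn.functional.interpolate(lr_hs, scale_factor=self.scale, mode='bicubic')
        for prox in self.stages:
            # gradient of || downsample(x) - lr_hs ||^2 (spatial blur omitted here)
            down = nn.functional.avg_pool2d(x, self.scale)
            g_h = nn.functional.interpolate(down - lr_hs, scale_factor=self.scale,
                                            mode='nearest')
            # gradient of || R x - hr_ms ||^2 along the spectral mode
            rx = torch.einsum('mc,bchw->bmhw', self.srf, x)
            g_m = torch.einsum('mc,bmhw->bchw', self.srf, rx - hr_ms)
            x = prox(x - self.step * (g_h + g_m))
        return x

# toy usage: 31-band LR HSI (16x16) fused with 3-band HR MSI (128x128)
net = UnfoldedFusionNet()
out = net(torch.rand(1, 31, 16, 16), torch.rand(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 31, 128, 128])
```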