394 research outputs found

    Hyperspectral super-resolution of locally low rank images from complementary multisource data

    Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images (MSI) in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSI are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to hyperspectral super-resolution via local dictionary learning using endmember induction algorithms (HSR-LDL-EIA). We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
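    The patch-wise idea can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' HSR-LDL-EIA code: within a patch the HSI spans a low-rank subspace, so a truncated SVD serves as the local spectral dictionary, and abundances are fit from the MSI patch. The spectral response matrix `R`, the rank `k`, and all sizes are toy assumptions.

    ```python
    import numpy as np

    def fuse_patch(hsi_patch, msi_patch, R, k):
        """Per-patch fusion: local low-rank spectral basis from the HSI patch
        (rank k <= number of MS bands), abundances from the MSI patch."""
        U, _, _ = np.linalg.svd(hsi_patch, full_matrices=False)
        D = U[:, :k]                               # local spectral dictionary
        # abundances A from the MSI patch: msi_patch ~= R @ D @ A
        A, *_ = np.linalg.lstsq(R @ D, msi_patch, rcond=None)
        return D @ A                               # super-resolved HS patch

    # toy example: a rank-k patch is recovered almost exactly
    rng = np.random.default_rng(0)
    L_h, L_m, k, n = 30, 4, 3, 64                  # HS bands, MS bands, rank, pixels
    R = rng.random((L_m, L_h)) / L_h               # toy spectral response (HS -> MS)
    Z = rng.random((L_h, k)) @ rng.random((k, n))  # rank-k ground-truth patch
    Z_hat = fuse_patch(Z, R @ Z, R, k)
    err = np.linalg.norm(Z_hat - Z) / np.linalg.norm(Z)
    ```

    Because the local rank is at most the number of MS bands, the least-squares step `R @ D` is well-posed, which is exactly the point of working per patch.
    
    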

    Super-resolution of hyperspectral images using local spectral unmixing

    For many remote sensing applications it is preferable to have images with both high spectral and spatial resolution. In this regard, hyperspectral and multispectral images have complementary characteristics in terms of spectral and spatial resolution. In this paper we propose an approach for the fusion of low spatial resolution hyperspectral images with high spatial resolution multispectral images in order to obtain super-resolution (spatial and spectral) hyperspectral images. The proposed approach is based on the assumption that, since both the hyperspectral and multispectral images are acquired over the same scene, the corresponding endmembers should be the same. In a first step, the hyperspectral image is spectrally downsampled in order to match the multispectral one. Then an endmember extraction algorithm is run on the downsampled hyperspectral image, and the subsequent abundance estimation is performed on the multispectral one. Finally, the extracted endmembers are upsampled back to the original hyperspectral space and used to reconstruct the super-resolution hyperspectral image according to the abundances obtained from the multispectral image.
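    The four-step pipeline can be sketched as follows. This is a hedged toy version: a simplistic greedy pure-pixel picker stands in for the (unspecified) endmember extraction algorithm, abundances are plain least squares without the usual non-negativity/sum-to-one constraints, and "upsampling" the endmembers is done by reading the full spectra at the selected pixel locations. All sizes, the response matrix `R`, and the pure-pixel assumption are illustrative.

    ```python
    import numpy as np

    def pick_endmembers(X, k):
        """Toy pure-pixel selection: greedily take the pixel with the largest
        residual after projecting out the endmembers found so far."""
        idx = [int(np.argmax(np.linalg.norm(X, axis=0)))]
        for _ in range(k - 1):
            E = X[:, idx]
            P = np.eye(X.shape[0]) - E @ np.linalg.pinv(E)
            idx.append(int(np.argmax(np.linalg.norm(P @ X, axis=0))))
        return idx

    def fuse(hsi, msi, R, k):
        hsi_ds = R @ hsi                           # 1) spectral downsampling
        idx = pick_endmembers(hsi_ds, k)           # 2) endmember extraction
        A, *_ = np.linalg.lstsq(hsi_ds[:, idx], msi, rcond=None)  # 3) abundances from MSI
        return hsi[:, idx] @ A                     # 4) endmembers back in HS space

    # toy scene: pure pixels present in the low-resolution HSI
    rng = np.random.default_rng(1)
    L_h, L_m, k = 40, 5, 3
    E = rng.random((L_h, k))                                        # true endmembers
    A_low = np.hstack([np.eye(k), rng.dirichlet(np.ones(k), 30).T])
    A_high = rng.dirichlet(np.ones(k), 200).T
    R = rng.random((L_m, L_h)) / L_h
    hsi, Z_true = E @ A_low, E @ A_high
    Z_hat = fuse(hsi, R @ Z_true, R, k)
    err = np.linalg.norm(Z_hat - Z_true) / np.linalg.norm(Z_true)
    ```
    
    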

    Hyperspectral and Multispectral Image Fusion using Optimized Twin Dictionaries

    Spectral and spatial dictionaries have been widely used in fusing low-spatial-resolution hyperspectral (LH) images and high-spatial-resolution multispectral (HM) images. However, using only a spectral dictionary is insufficient for preserving spatial information, and vice versa. To address this problem, a new LH and HM image fusion method based on optimized twin dictionaries, termed OTD, is proposed in this paper. The fusion problem of OTD is formulated analytically in the framework of sparse representation, as an optimization over twin spectral-spatial dictionaries and their corresponding sparse coefficients. More specifically, the spectral dictionary representing the generalized spectra and its spectral sparse coefficients are optimized by utilizing the observed LH and HM images in the spectral domain, while the spatial dictionary representing the spatial information and its spatial sparse coefficients are optimized by modeling the remaining high-frequency information in the spatial domain. In addition, since no non-negativity constraints are imposed, the alternating direction method of multipliers (ADMM) is employed to carry out the above optimization. Comparisons with related state-of-the-art fusion methods on various datasets demonstrate that the proposed OTD method achieves better fusion performance in both the spatial and spectral domains.
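    The alternating optimization keeps one dictionary fixed while updating its sparse coefficients. A minimal sketch of that coefficient subproblem follows, using the iterative soft-thresholding algorithm (ISTA) rather than the paper's ADMM solver; the dictionary, regularization weight, and sizes are illustrative, not the OTD implementation.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding: the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def sparse_code(D, y, lam, n_iter=500):
        """Sparse coefficients for a fixed dictionary D:
        min_a 0.5*||y - D a||^2 + lam*||a||_1, solved by ISTA."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz const of gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            a = soft(a + step * (D.T @ (y - D @ a)), step * lam)
        return a

    # toy check: a sparse signal is recovered from its dictionary representation
    rng = np.random.default_rng(2)
    D = rng.standard_normal((30, 10))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    a_true = np.zeros(10)
    a_true[[1, 4, 7]] = [1.0, -2.0, 1.5]
    a_hat = sparse_code(D, D @ a_true, lam=1e-3)
    err = np.linalg.norm(a_hat - a_true)
    ```
    
    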

    Fusing Multiple Multiband Images

    We consider the problem of fusing an arbitrary number of multiband, i.e., panchromatic, multispectral, or hyperspectral, images belonging to the same scene. We use the well-known forward observation and linear mixture models with Gaussian perturbations to formulate the maximum-likelihood estimator of the endmember abundance matrix of the fused image. We calculate the Fisher information matrix for this estimator and examine the conditions for the uniqueness of the estimator. We use a vector total-variation penalty term together with nonnegativity and sum-to-one constraints on the endmember abundances to regularize the derived maximum-likelihood estimation problem. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, as well as coping with potential ill-posedness of the fusion problem. We solve the resultant convex optimization problem using the alternating direction method of multipliers. We utilize the circular convolution theorem in conjunction with the fast Fourier transform to alleviate the computational complexity of the proposed algorithm. Experiments with multiband images constructed from real hyperspectral datasets reveal the superior performance of the proposed algorithm in comparison with the state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images.
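    The computational trick the abstract mentions — circular convolution plus the FFT — is worth making concrete. Under circular boundary conditions a blur operator is diagonalized by the 2-D FFT, so a quadratic subproblem of the kind ADMM produces has a closed-form frequency-domain solution. The following hedged sketch shows that idea in isolation (a ridge-regularized deblurring step, with a made-up separable kernel), not the paper's full algorithm.

    ```python
    import numpy as np

    def solve_blur_quadratic(y, h, mu):
        """Closed-form solution of min_x ||h (*) x - y||^2 + mu*||x||^2,
        where (*) is 2-D circular convolution: the blur operator is
        diagonalized by the FFT, so the solve is elementwise."""
        H = np.fft.fft2(h, s=y.shape)
        X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + mu)
        return np.real(np.fft.ifft2(X))

    # toy check: blur circularly, then invert via the frequency domain
    rng = np.random.default_rng(3)
    x = rng.random((16, 16))
    g = np.array([0.2, 0.6, 0.2])
    h = np.outer(g, g)                              # separable toy blur kernel
    y = np.real(np.fft.ifft2(np.fft.fft2(h, s=x.shape) * np.fft.fft2(x)))
    x_hat = solve_blur_quadratic(y, h, mu=1e-9)
    err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
    ```

    Each ADMM iteration can reuse this elementwise division, which is what keeps the per-iteration cost near that of a few FFTs.
    
    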

    Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning

    Extensive attention has been paid to enhancing the spatial resolution of hyperspectral (HS) images with the aid of multispectral (MS) images in remote sensing. However, the ability to fuse HS and MS images remains limited, particularly in large-scale scenes, due to the limited acquisition of HS images. Alternatively, we super-resolve MS images in the spectral domain by means of partially overlapped HS images, yielding a novel and promising topic: spectral superresolution (SSR) of MS imagery. This is a challenging and less investigated task due to its high ill-posedness in inverse imaging. To this end, we develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS-MS dictionary pairs from overlapped regions. J-SLoL infers and recovers the unknown hyperspectral signals over a larger coverage by sparse coding on the learned dictionary pair. Furthermore, we validate the SSR performance on three HS-MS datasets (two for classification and one for unmixing) in terms of reconstruction, classification, and unmixing by comparing with several existing state-of-the-art baselines, showing the effectiveness and superiority of the proposed J-SLoL algorithm. The codes and datasets are available at https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the RS community.
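    The paired-dictionary mechanism behind such spectral super-resolution can be sketched very simply: code each MS-only pixel against the MS side of a coupled dictionary learned on the overlap, then decode with the HS side. In the hedged toy below, the overlap pixels themselves serve as the dictionary pair and a ridge penalty stands in for J-SLoL's sparse and low-rank terms; `R`, the rank `r`, and all sizes are invented for illustration.

    ```python
    import numpy as np

    def spectral_sr(hs_overlap, ms_overlap, ms_new, mu=1e-6):
        """Code new MS pixels against the MS dictionary (ridge regression in
        place of sparse/low-rank penalties), decode with the paired HS one."""
        G = ms_overlap.T @ ms_overlap + mu * np.eye(ms_overlap.shape[1])
        A = np.linalg.solve(G, ms_overlap.T @ ms_new)   # codes for new pixels
        return hs_overlap @ A                           # recovered HS spectra

    # toy scene: all spectra live in a common low-rank spectral subspace
    rng = np.random.default_rng(4)
    L_h, L_m, r = 50, 6, 4
    B = rng.random((L_h, r))                       # shared spectral basis
    R = rng.random((L_m, L_h)) / L_h               # spectral response (HS -> MS)
    hs_ov = B @ rng.random((r, 40))                # overlap: HS and MS observed
    hs_new = B @ rng.random((r, 20))               # outside overlap: MS only
    Z_hat = spectral_sr(hs_ov, R @ hs_ov, R @ hs_new)
    err = np.linalg.norm(Z_hat - hs_new) / np.linalg.norm(hs_new)
    ```

    The low-rank spectral subspace is what makes the MS measurements sufficient to pin down the codes, which is the structural assumption J-SLoL exploits.
    
    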

    Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution

    In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains is equally important. However, due to hardware limitations, one can only expect to acquire images of high resolution in either the spatial or the spectral domain. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain an HR HSI. Existing deep learning-based solutions are all supervised, requiring a large training set and the availability of HR HSI, which is unrealistic. Here, we make the first attempt to solve the HSI-SR problem using an unsupervised encoder-decoder architecture with the following distinctive features. First, it is composed of two encoder-decoder networks, coupled through a shared decoder, in order to preserve the rich spectral information of the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between representations is minimized in order to reduce the spectral distortion. We refer to the proposed architecture as unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN as compared to the state of the art. (Accepted by the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Spotlight.)
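    The two physical constraints the Dirichlet prior encodes (non-negative, sum-to-one abundances) and the angular criterion can be illustrated without any deep-learning machinery. The sketch below is not the uSDN network; it just shows that a softmax output satisfies both constraints by construction, and how an angular difference between representations is measured. Sizes are arbitrary.

    ```python
    import numpy as np

    def softmax(z, axis=0):
        """Maps arbitrary scores to non-negative, sum-to-one vectors,
        mirroring how abundance-like representations are constrained."""
        e = np.exp(z - z.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def spectral_angle(a, b):
        """Per-column angle between two sets of representation vectors;
        minimizing this limits spectral distortion between modalities."""
        cos = (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    rng = np.random.default_rng(5)
    h = softmax(rng.standard_normal((8, 100)))     # 8-dim codes for 100 pixels
    constraints_hold = np.allclose(h.sum(axis=0), 1.0) and bool((h >= 0).all())
    ang = spectral_angle(h, h)                     # angle of each code with itself
    ```
    
    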