
    Two-stage fusion of thermal hyperspectral and visible RGB image by PCA and guided filter

    Nowadays, advanced remote sensing technology allows us to acquire multi-sensor and multi-resolution data from the same region. Fusing these data sources for classification remains a challenging problem. In this paper, we propose a novel algorithm for hyperspectral (HS) image pansharpening with two-stage guided filtering in the PCA (principal component analysis) domain. In the first stage, we downsample the high-resolution RGB image to the spatial resolution of the original low-resolution HS image and use a guided filter to transfer the image details (e.g. edges) of the downsampled RGB image to the original HS image in the PCA domain. In the second stage, we upsample the resulting HS image from the first stage using the original high-resolution RGB image and a guided filter in the PCA domain. This yields a clear improvement over an earlier approach with one-stage guided filtering in the PCA domain. Experimental results on the fusion of a low spatial-resolution thermal infrared HS image and a high spatial-resolution visible RGB image from the 2014 IEEE GRSS Data Fusion Contest are very encouraging.
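    The core operation in this approach, transferring spatial detail from a guide image into one band of a low-resolution image, can be sketched with a minimal guided filter (in the sense of He et al.). This is an illustrative sketch, not the authors' implementation; the window radius `r`, regularizer `eps`, and the synthetic images are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=4, eps=1e-3):
    """Filter `src` so that it follows the edge structure of `guide`.

    Both inputs are 2-D float arrays; the filter fits a local linear
    model q = a*guide + b in each (2r+1)x(2r+1) window.
    """
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    m_I, m_p = mean(guide), mean(src)
    var_I = mean(guide * guide) - m_I * m_I
    cov_Ip = mean(guide * src) - m_I * m_p
    a = cov_Ip / (var_I + eps)        # local linear coefficient
    b = m_p - a * m_I
    return mean(a) * guide + mean(b)  # averaged coefficients -> output

# Toy usage: restore detail to a blurred band using a sharp guide.
rng = np.random.default_rng(0)
guide = rng.random((64, 64))              # stand-in for an RGB-derived guide
band = uniform_filter(guide, size=9)      # stand-in for a low-res PCA band
sharp = guided_filter(guide, band)
```

    In the paper's pipeline this step would be applied to the leading PCA components of the HS image rather than to raw bands.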

    A New Pansharpening Approach for Hyperspectral Images

    We first briefly review recent papers on pansharpening of hyperspectral (HS) images. We then present a recent pansharpening approach called hybrid color mapping (HCM) and summarize a few of its variants. Using two hyperspectral images, we illustrate the advantages of HCM by comparing it with 10 state-of-the-art algorithms.
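    The central idea of a color-mapping approach of this kind can be sketched as a linear least-squares fit: learn an affine map from color bands to hyperspectral bands where both are available, then apply it at high resolution. The band counts, the bias column, and the synthetic data below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_low, n_hs = 500, 30
rgb_low = rng.random((n_low, 3))               # low-res RGB pixels (rows)
T_true = rng.random((4, n_hs))                 # hidden affine color->spectrum map
X = np.hstack([rgb_low, np.ones((n_low, 1))])  # append a bias column
hs_low = X @ T_true                            # matching low-res HS pixels

# Least-squares estimate of the affine mapping from the low-res pair.
T, *_ = np.linalg.lstsq(X, hs_low, rcond=None)

# Apply the learned map to high-resolution color pixels (synthetic here)
# to predict a high-resolution hyperspectral image.
rgb_high = rng.random((2000, 3))
hs_high = np.hstack([rgb_high, np.ones((2000, 1))]) @ T
```

    With exact linear data the fit recovers the mapping; on real imagery the residual of this regression is what the method's variants work to reduce.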

    Guided Nonlocal Patch Regularization and Efficient Filtering-Based Inversion for Multiband Fusion

    In multiband fusion, an image with a high spatial and low spectral resolution is combined with an image with a low spatial but high spectral resolution to produce a single multiband image having high spatial and spectral resolutions. This comes up in remote sensing applications such as pansharpening (MS+PAN), hyperspectral sharpening (HS+PAN), and HS-MS fusion (HS+MS). Remote sensing images are textured and have repetitive structures. Motivated by nonlocal patch-based methods for image restoration, we propose a convex regularizer that (i) takes into account long-distance correlations, (ii) penalizes patch variation, which is more effective than pixel variation for capturing texture information, and (iii) uses the higher spatial resolution image as a guide image for weight computation. We come up with an efficient ADMM algorithm for optimizing the regularizer along with a standard least-squares loss function derived from the imaging model. The novelty of our algorithm is that by expressing patch variation as filtering operations and by judiciously splitting the original variables and introducing latent variables, we are able to solve the ADMM subproblems efficiently using FFT-based convolution and soft-thresholding. As far as the reconstruction quality is concerned, our method is shown to outperform state-of-the-art variational and deep learning techniques. Comment: Accepted in IEEE Transactions on Computational Imaging.
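    The two computational primitives the abstract mentions can be sketched in isolation (this is illustrative, not the authors' solver): elementwise soft-thresholding, which is the proximal map of the l1 norm, and a closed-form FFT solve of a regularized least-squares subproblem whose forward operator is a circular convolution, which the DFT diagonalizes.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_conv_ls(kernel, y, rho):
    """argmin_x 0.5*||k (*) x - y||^2 + 0.5*rho*||x||^2,
    where (*) is 2-D circular convolution.

    In the Fourier domain the normal equations become elementwise:
    X = conj(K) * Y / (|K|^2 + rho).
    """
    K = np.fft.fft2(kernel, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + rho)
    return np.real(np.fft.ifft2(X))
```

    In an ADMM iteration these would alternate: the FFT solve updates the image variable, and soft-thresholding updates the latent patch-variation variable.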

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously degrade the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, which include image super-resolution, image fusion to enhance spatial, spectral, and temporal resolutions, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization

    Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image has a lower spatial and higher spectral resolution, in contrast to a panchromatic image, which usually has a high spatial and a low spectral resolution. Pan-sharpening is a fusion of these two complementary images that produces an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new method of pan-sharpening based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The paper compares the results of the proposed method with several state-of-the-art pan-sharpening methods. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using evaluation criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform all the existing methods.
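    A pixel-level Lagrange-multiplier formulation of spectral consistency can be sketched as follows (a hedged illustration, not the paper's exact model; the band weights `w` and sample values are assumptions). For one pixel, we seek the fused spectrum x closest to the multispectral spectrum m subject to the constraint that its weighted band sum reproduces the panchromatic value p. Setting the gradient of the Lagrangian 0.5*||x - m||^2 + lam*(p - w.T x) to zero gives the closed form x = m + w * (p - w.T m) / (w.T w).

```python
import numpy as np

def fuse_pixel(m, p, w):
    """Closest spectrum to m (least squares) satisfying w @ x == p."""
    lam = (p - w @ m) / (w @ w)   # Lagrange multiplier
    return m + lam * w

m = np.array([0.30, 0.40, 0.20, 0.10])   # MS pixel (4 bands)
w = np.array([0.25, 0.25, 0.25, 0.25])   # band weights synthesizing PAN
p = 0.35                                 # observed PAN value at this pixel
x = fuse_pixel(m, p, w)                  # exactly spectrally consistent
```

    The variance-matching case (b) leads to a similar per-pixel stationarity condition, with the multiplier enforcing the statistical constraint instead.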

    CrossDiff: Exploring Self-Supervised Representation of Pansharpening via Cross-Predictive Diffusion Model

    Fusion of a panchromatic (PAN) image and the corresponding multispectral (MS) image is known as pansharpening, which aims to combine the abundant spatial details of PAN and the spectral information of MS. Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution. When taking original MS and PAN images as inputs, they often obtain sub-optimal results due to the scale variation. In this paper, we propose to explore the self-supervised representation of pansharpening by designing a cross-predictive diffusion model, named CrossDiff. Training proceeds in two stages. In the first stage, we introduce a cross-predictive pretext task to pre-train the UNet structure based on a conditional DDPM; in the second stage, the encoders of the UNets are frozen to directly extract spatial and spectral features from PAN and MS, and only the fusion head is trained to adapt to the pansharpening task. Extensive experiments show the effectiveness and superiority of the proposed model compared with state-of-the-art supervised and unsupervised methods. Besides, cross-sensor experiments also verify the generalization ability of the proposed self-supervised representation learners to other satellites' datasets. We will release our code for reproducibility.