    Target-adaptive CNN-based pansharpening

    We recently proposed a convolutional neural network (CNN) for remote sensing image pansharpening, obtaining a significant performance gain over the state of the art. In this paper, we explore a number of architectural and training variations to this baseline, achieving further performance gains with a lightweight network which trains very fast. Leveraging this latter property, we propose a target-adaptive usage modality which ensures very good performance even in the presence of a mismatch with respect to the training set, and even across different sensors. The proposed method, published online as an off-the-shelf software tool, allows users to perform fast and high-quality CNN-based pansharpening of their own target images on general-purpose hardware.
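
    The abstract does not spell out the adaptation procedure, so the sketch below only illustrates the general idea of target-adaptive fine-tuning: a pretrained, lightweight pansharpening CNN is briefly fine-tuned on a reduced-resolution version of the target image itself before being applied at full resolution. The placeholder network, the Wald-style training-pair construction, and all hyperparameters are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPansharpenNet(nn.Module):
    """Placeholder lightweight CNN (assumed, not the paper's architecture):
    takes the upsampled MS stack concatenated with Pan and predicts a residual."""
    def __init__(self, bands=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, bands, 3, padding=1))

    def forward(self, ms_up, pan):
        return ms_up + self.body(torch.cat([ms_up, pan], dim=1))

def target_adapt(model, ms, pan, steps=50, lr=1e-4, scale=4):
    """Briefly fine-tune `model` on the target image itself, then pansharpen it.
    ms: (1, B, h, w) target MS image; pan: (1, 1, H, W) target Pan, H = scale*h."""
    # Build a reduced-resolution training pair from the target image itself:
    # degrade MS and Pan by the resolution ratio, use the original MS as reference.
    ms_lr = F.avg_pool2d(ms, scale)
    pan_lr = F.avg_pool2d(pan, scale)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ms_lr_up = F.interpolate(ms_lr, scale_factor=scale, mode="bicubic",
                                 align_corners=False)
        loss = F.l1_loss(model(ms_lr_up, pan_lr), ms)
        loss.backward()
        opt.step()
    # Inference at full resolution with the adapted weights.
    with torch.no_grad():
        ms_up = F.interpolate(ms, scale_factor=scale, mode="bicubic",
                              align_corners=False)
        return model(ms_up, pan)

# Toy usage: random 4-band MS and Pan with a resolution ratio of 4.
net = TinyPansharpenNet(bands=4)
out = target_adapt(net, torch.rand(1, 4, 32, 32), torch.rand(1, 1, 128, 128), steps=2)
print(out.shape)  # torch.Size([1, 4, 128, 128])
```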

    Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover

    Whenever vegetated areas are monitored over time, phenological changes in land cover should be decoupled from changes in acquisition conditions, such as atmospheric components, Sun and satellite heights, and the imaging instrument. This especially holds when the multispectral (MS) bands are sharpened for spatial resolution enhancement by means of a panchromatic (Pan) image of higher resolution, a process referred to as pansharpening. In this paper, we provide evidence that pansharpening of visible/near-infrared (VNIR) bands benefits from a correction of the path-radiance term introduced by the atmosphere during the fusion process. This holds whenever the fusion mechanism emulates the radiative transfer model governing the acquisition of the Earth's surface from space, that is, for methods exploiting a multiplicative, or contrast-based, injection model of spatial details extracted from the Pan image into the interpolated MS bands. The path radiance should be estimated and subtracted from each band before the multiplication by Pan is performed. Both empirical and model-based estimation techniques of MS path radiances are compared within the framework of optimized algorithms. Simulations carried out on two GeoEye-1 observations of the same agricultural landscape on different dates highlight that de-hazing the MS bands before fusion is beneficial for an accurate detection of seasonal changes in the scene, as measured by the normalized difference vegetation index (NDVI).
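
    In generic notation (ours, not necessarily the paper's), the haze-corrected multiplicative injection described above subtracts the per-band path radiance before applying the Pan-to-lowpass-Pan contrast ratio; one common form then adds the haze term back so that the output stays in at-sensor radiance units:

    \widehat{MS}_k = \bigl(\widetilde{MS}_k - L_k\bigr)\,\frac{P}{P_L} + L_k

    where \widetilde{MS}_k is the k-th interpolated MS band, L_k its estimated path-radiance (haze) term, P the Pan image, and P_L its lowpass-filtered version matched to the MS spatial frequency content.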

    Contrast and Error-Based Fusion Schemes for Multispectral Image Pansharpening

    DDRF: Denoising Diffusion Model for Remote Sensing Image Fusion

    The denoising diffusion model, as a generative model, has recently received a lot of attention in the field of image generation thanks to its powerful generative capability. However, diffusion models have not yet received sufficient attention in the field of image fusion. In this article, we introduce the diffusion model to the image fusion field, treating the image fusion task as image-to-image translation and designing two different conditional injection modulation modules (i.e., style transfer modulation and wavelet modulation) to inject coarse-grained style information and fine-grained high-frequency and low-frequency information into the diffusion UNet, thereby generating fused images. In addition, we also discuss residual learning and the selection of training objectives for the diffusion model in the image fusion task. Extensive experimental results based on quantitative and qualitative assessments, compared with benchmarks, demonstrate state-of-the-art results and good generalization performance in image fusion tasks. Finally, we hope that our method can inspire other works and provide insight into this field, so that the diffusion model can be better applied to image fusion tasks. Code will be released for better reproducibility.
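
    The paper's exact modules are not given in the abstract, so the following is only a minimal sketch, under assumptions, of what "style transfer modulation" (per-channel scale and shift driven by a coarse condition embedding) and "wavelet modulation" (injection of low- and high-frequency subbands of a guidance image) could look like inside a diffusion UNet block. All module and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleModulation(nn.Module):
    """FiLM-style modulation: predict per-channel scale and shift from a
    global (coarse-grained) condition embedding."""
    def __init__(self, cond_dim, channels):
        super().__init__()
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, feat, cond):
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return feat * (1 + scale) + shift

class WaveletModulation(nn.Module):
    """Inject low- and high-frequency structure from a guidance image
    (e.g., the Pan band) using a single-level Haar decomposition."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(4, channels, kernel_size=1)

    @staticmethod
    def haar_dwt(x):
        # Single-level Haar DWT via strided averaging/differencing.
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 4   # low-frequency approximation
        lh = (a - b + c - d) / 4   # detail subband (horizontal differences)
        hl = (a + b - c - d) / 4   # detail subband (vertical differences)
        hh = (a - b - c + d) / 4   # detail subband (diagonal differences)
        return torch.cat([ll, lh, hl, hh], dim=1)

    def forward(self, feat, guide):
        sub = self.haar_dwt(guide)                       # (B, 4, H/2, W/2)
        sub = F.interpolate(sub, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        return feat + self.proj(sub)

# Example: modulate a 64-channel UNet feature map with a 128-d condition
# embedding and a single-band guidance image.
feat = torch.randn(2, 64, 32, 32)
cond = torch.randn(2, 128)
guide = torch.randn(2, 1, 64, 64)
feat = StyleModulation(128, 64)(feat, cond)
feat = WaveletModulation(64)(feat, guide)
print(feat.shape)  # torch.Size([2, 64, 32, 32])
```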

    Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization

    Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image has lower spatial and higher spectral resolution, in contrast to a panchromatic image, which usually has high spatial and low spectral resolution. Pan-sharpening is the fusion of these two complementary images to produce an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new pan-sharpening method based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The paper compares the results of the proposed method with several state-of-the-art pan-sharpening methods. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform all the existing methods.
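
    As a purely illustrative example of the Lagrange-multiplier mechanics at pixel level (this formulation and the weights w_k are our assumptions, not the paper's exact objectives), one can keep the fused bands \hat{x}_k close to the interpolated MS values \tilde{x}_k while enforcing a linear spectral-consistency constraint against the Pan value p:

    \mathcal{L}(\hat{x}_1,\dots,\hat{x}_B,\lambda)
      = \sum_{k=1}^{B} \bigl(\hat{x}_k - \tilde{x}_k\bigr)^2
      + \lambda \Bigl( \sum_{k=1}^{B} w_k\,\hat{x}_k - p \Bigr)

    Setting \partial\mathcal{L}/\partial\hat{x}_k = 0 gives \hat{x}_k = \tilde{x}_k - \tfrac{\lambda}{2} w_k, and substituting into the constraint yields \lambda = 2\bigl(\sum_k w_k\tilde{x}_k - p\bigr)/\sum_k w_k^2, i.e., a closed-form per-pixel solution.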

    A New Pansharpening Approach for Hyperspectral Images

    We first briefly review recent papers on pansharpening of hyperspectral (HS) images. We then present a recent pansharpening approach called hybrid color mapping (HCM). A few variants of HCM are then summarized. Using two hyperspectral images, we illustrate the advantages of HCM by comparing it with 10 state-of-the-art algorithms.
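
    The simplest HCM variant can be summarized as learning a color-to-spectrum mapping by least squares at the low resolution where the hyperspectral cube is available, then applying that mapping to the full-resolution color/Pan bands. The sketch below assumes an affine per-pixel mapping; variable names and the toy data are illustrative, not the authors' code.

```python
import numpy as np

def hcm_pansharpen(hs_low, color_low, color_high):
    """hs_low:     (h, w, B)  low-resolution hyperspectral cube
    color_low:     (h, w, C)  color/Pan bands downsampled to the HS grid
    color_high:    (H, W, C)  full-resolution color/Pan bands
    returns:       (H, W, B)  estimated high-resolution hyperspectral cube"""
    h, w, B = hs_low.shape
    C = color_low.shape[-1]
    # Stack pixels as rows and append a constant term (affine mapping).
    X = np.concatenate([color_low.reshape(-1, C),
                        np.ones((h * w, 1))], axis=1)        # (h*w, C+1)
    Y = hs_low.reshape(-1, B)                                # (h*w, B)
    # Least-squares color-to-spectrum mapping T such that X @ T ~= Y.
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Apply the same mapping to every high-resolution pixel.
    H, W, _ = color_high.shape
    Xh = np.concatenate([color_high.reshape(-1, C),
                         np.ones((H * W, 1))], axis=1)
    return (Xh @ T).reshape(H, W, B)

# Toy usage with random data of consistent sizes.
hs = np.random.rand(20, 20, 50)
col_lo = np.random.rand(20, 20, 4)
col_hi = np.random.rand(80, 80, 4)
print(hcm_pansharpen(hs, col_lo, col_hi).shape)  # (80, 80, 50)
```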

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyperellipsoidal surface instead of a hyperspherical surface. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered Pan. The regression of the squared MS bands, instead of the Euclidean radius used by HCS, makes the intensity component no longer lie on a hypersphere in the vector space of the MS samples, but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze, in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to that of some of the most advanced methods.
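
    A rough sketch of the ingredients named in the abstract (haze subtraction, regression of the squared lowpass-filtered Pan on the squared MS bands to obtain a hyper-ellipsoidal intensity, multiplicative detail injection) is given below. Details the abstract does not specify, including how the haze term is handled after injection, are assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def fuse(ms_interp, pan, pan_low, haze):
    """ms_interp: (H, W, B)  MS bands interpolated to the Pan grid
    pan, pan_low: (H, W)     Pan image and its lowpass-filtered version
    haze:         (B,)       per-band path-radiance (haze) estimates"""
    ms_dh = np.clip(ms_interp - haze, 0, None)       # de-hazed MS bands
    H, W, B = ms_dh.shape
    # Multivariate regression: pan_low**2 ~ sum_k w_k * ms_dh_k**2
    A = (ms_dh ** 2).reshape(-1, B)
    b = (pan_low ** 2).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Nonlinear intensity lying on a hyperellipsoid in the MS vector space.
    intensity = np.sqrt(np.clip(A @ w, 1e-12, None)).reshape(H, W)
    # Contrast-based (multiplicative) detail injection; haze added back (assumed).
    gain = pan / np.clip(intensity, 1e-12, None)
    return ms_dh * gain[..., None] + haze

# Toy usage with synthetic data of consistent shapes.
ms = np.random.rand(64, 64, 4) + 0.1
p = np.random.rand(64, 64) + 0.1
print(fuse(ms, p, p, np.full(4, 0.05)).shape)   # (64, 64, 4)
```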