
    Quality assessment by region in SPOT images fused by means of the dual-tree complex wavelet transform

    This work is motivated by providing and evaluating a fusion algorithm for remotely sensed images, i.e., the fusion of a high-spatial-resolution panchromatic (PAN) image with a multispectral (MS) image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for computing an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model that establishes how information is extracted from the PAN band and how that information is injected into the low-spatial-resolution MS bands. The approach was applied to SPOT 5 images, in which some bands fall outside the PAN spectrum. We propose an optional step in the quality evaluation protocol: studying the fusion quality by regions, where each region represents a specific feature of the image. The results show that the DT-CWT-based approach offers good spatial quality while retaining the spectral information of the original SPOT 5 images. The additional step facilitates the identification of the regions most affected by the fusion process.
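    The region-wise evaluation step described above can be sketched as follows; the RMSE metric, the synthetic two-region labeling, and the array shapes are illustrative assumptions, not the paper's exact protocol:

    ```python
    import numpy as np

    def rmse_by_region(reference, fused, labels):
        """Return {region_label: RMSE between the reference and the fused band}."""
        scores = {}
        for lab in np.unique(labels):
            mask = labels == lab
            diff = reference[mask] - fused[mask]
            scores[int(lab)] = float(np.sqrt(np.mean(diff ** 2)))
        return scores

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))                        # reference MS band
    fus = ref + 0.05 * rng.standard_normal((64, 64))  # simulated fusion result
    regions = np.zeros((64, 64), dtype=int)           # two regions: left/right halves
    regions[:, 32:] = 1
    scores = rmse_by_region(ref, fus, regions)
    ```

    A per-region score table like this makes it easy to see which land-cover class is most degraded by the fusion, which is the point of the paper's extra evaluation step.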

    Target-adaptive CNN-based pansharpening

    We recently proposed a convolutional neural network (CNN) for remote sensing image pansharpening, obtaining a significant performance gain over the state of the art. In this paper, we explore a number of architectural and training variations to this baseline, achieving further performance gains with a lightweight network that trains very fast. Leveraging this latter property, we propose a target-adaptive usage modality that ensures very good performance even in the presence of a mismatch with respect to the training set, and even across different sensors. The proposed method, published online as an off-the-shelf software tool, allows users to perform fast, high-quality CNN-based pansharpening of their own target images on general-purpose hardware.

    Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net

    Hyperspectral imaging can help better characterize different materials compared with traditional imaging systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model that takes into consideration the observation models of the low-resolution images and the low-rankness of the HrHS image along the spectral mode. We then design an iterative algorithm to solve the model by exploiting the proximal gradient method. By unfolding the designed algorithm, we construct a deep network, called MS/HS Fusion Net, in which the proximal operators and model parameters are learned by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, over state-of-the-art methods along this line of research.
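    The proximal-gradient iteration that the network unfolds can be illustrated with a minimal numpy sketch, where a hand-written soft-threshold (an l1 prox) stands in for the learned proximal operator; the matrix, step size, and penalty are illustrative assumptions, not the paper's fusion model:

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1 (the paper learns this with a CNN)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad(A, y, lam=0.05, iters=200):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)               # gradient of the data-fit term
            x = soft_threshold(x - step * grad, step * lam)  # prox step
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 50))
    x_true = np.zeros(50)
    x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]         # sparse ground truth
    y = A @ x_true
    x_hat = prox_grad(A, y)
    ```

    Unfolding replaces the fixed number of iterations with network layers, so `soft_threshold` and `step` become learnable per-layer modules.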

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyperellipsoidal surface instead of a hyperspherical one. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered Pan. The regression of the squared MS bands, instead of the Euclidean radius used by HCS, makes the intensity component lie not on a hypersphere in the vector space of the MS samples but on a hyperellipsoid. Furthermore, before fusion, the interpolated MS bands are corrected for atmospheric haze in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and performance slightly superior to that of some of the most advanced methods.
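    The regression-based intensity described above can be sketched as follows: the squared lowpass Pan is regressed on the squared interpolated MS bands, and the square root of the fit is the hyperellipsoidal intensity. The synthetic data, band count, and least-squares solver are assumptions for illustration:

    ```python
    import numpy as np

    def hyperellipsoidal_intensity(ms, pan_low):
        """ms: (bands, H, W) interpolated MS; pan_low: (H, W) lowpass Pan.
        Returns (regression weights, intensity component)."""
        X = (ms ** 2).reshape(ms.shape[0], -1).T      # squared MS bands as columns
        t = (pan_low ** 2).ravel()                    # squared lowpass Pan
        w, *_ = np.linalg.lstsq(X, t, rcond=None)     # multivariate linear regression
        intensity = np.sqrt(np.clip(np.tensordot(w, ms ** 2, axes=1), 0.0, None))
        return w, intensity

    rng = np.random.default_rng(2)
    ms = rng.random((4, 32, 32))
    true_w = np.array([0.1, 0.3, 0.4, 0.2])           # synthetic spectral weights
    pan_low = np.sqrt(np.tensordot(true_w, ms ** 2, axes=1))
    w, intensity = hyperellipsoidal_intensity(ms, pan_low)
    ```

    With unit weights this reduces to the Euclidean radius used by HCS; unequal weights are what turn the hypersphere into a hyperellipsoid.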

    Multi-resolution analysis techniques and nonlinear PCA for hybrid pansharpening applications

    Hyperspectral images have a higher spectral resolution (i.e., a larger number of bands covering the electromagnetic spectrum) but a lower spatial resolution than multispectral or panchromatic acquisitions. To increase the usability and interpretability of the data, hyperspectral images with both high spectral and high spatial resolution are desired. This can be achieved by combining the hyperspectral image with a high-spatial-resolution panchromatic image. These techniques, generally known as pansharpening, can be divided into component substitution (CS) and multi-resolution analysis (MRA) based methods. In general, CS methods produce fused images with high spatial quality, but the fused images suffer from spectral distortions. On the other hand, images obtained with MRA techniques are not as sharp as those of CS methods, but they are spectrally consistent. Both substitution and filtering approaches are considered adequate when applied to multispectral and PAN images, but they have many drawbacks when the low-resolution image is hyperspectral. Thus, one of the main challenges in hyperspectral pansharpening is to improve the spatial resolution while preserving as much of the original spectral information as possible. An effective solution to these problems has been found in hybrid approaches, which combine the better spatial information of CS with the more accurate spectral information of MRA techniques. In general, a hybrid approach uses a CS technique to project the original data into a low-dimensionality space. The PAN image is then fused with one or more features by means of an MRA approach. Finally, the inverse projection is used to obtain the enhanced image in the original data space.
    These methods effectively enhance the spatial resolution of the hyperspectral image without significant spectral distortions and, at the same time, reduce the computational load of the entire process. In this paper, we focus on the use of Nonlinear Principal Component Analysis (NLPCA) for projecting the image into a low-dimensionality feature space. Although NLPCA has been shown to better represent the intrinsic information of hyperspectral images in the feature space, an analysis of the impact of different fusion techniques applied to the nonlinear principal components, aimed at defining the optimal framework for hybrid pansharpening, has not yet been carried out. More specifically, we analyze the overall impact of several widely used MRA pansharpening algorithms applied in the nonlinear feature space. The results obtained on both synthetic and real data demonstrate that an accurate selection of the pansharpening method can lead to an effective improvement of the enhanced hyperspectral image in terms of spectral quality and spatial consistency, as well as a strong reduction in computational time.
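    The project/fuse/back-project pipeline described above can be sketched with linear PCA standing in for NLPCA and a crude box-filter highpass standing in for the MRA stage; the data, filter size, and unit injection gain are all illustrative assumptions:

    ```python
    import numpy as np

    def box_lowpass(img, k=5):
        """Crude separable box filter, a stand-in for a proper MRA lowpass."""
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = p[i:i + k, j:j + k].mean()
        return out

    def hybrid_pansharpen(hs, pan):
        """hs: (bands, H, W) interpolated hyperspectral cube; pan: (H, W)."""
        b, h, w = hs.shape
        X = hs.reshape(b, -1)
        mean = X.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(X - mean, full_matrices=False)  # PCA via SVD
        pcs = (U.T @ (X - mean)).reshape(b, h, w)  # forward projection (CS step)
        detail = pan - box_lowpass(pan)            # MRA-extracted spatial detail
        pcs[0] += detail                           # inject into first component only
        Y = U @ pcs.reshape(b, -1) + mean          # inverse projection
        return Y.reshape(b, h, w)

    rng = np.random.default_rng(5)
    hs = rng.random((6, 24, 24))                   # synthetic hyperspectral cube
    pan = rng.random((24, 24))                     # synthetic panchromatic image
    out = hybrid_pansharpen(hs, pan)
    flat = hybrid_pansharpen(hs, np.full((24, 24), 0.5))  # featureless PAN
    ```

    Because only the first component is modified, the remaining components pass through the inverse projection untouched, which is how the hybrid scheme limits spectral distortion; a featureless PAN injects no detail and the cube is reconstructed exactly.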

    Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover

    Whenever vegetated areas are monitored over time, phenological changes in land cover should be decoupled from changes in acquisition conditions, such as atmospheric composition, Sun and satellite elevations, and the imaging instrument. This especially holds when the multispectral (MS) bands are sharpened for spatial resolution enhancement by means of a panchromatic (Pan) image of higher resolution, a process referred to as pansharpening. In this paper, we provide evidence that pansharpening of visible/near-infrared (VNIR) bands benefits from a correction of the path-radiance term introduced by the atmosphere during the fusion process. This holds whenever the fusion mechanism emulates the radiative transfer model governing the acquisition of the Earth's surface from space, that is, for methods exploiting a multiplicative, or contrast-based, model for injecting the spatial details extracted from the Pan image into the interpolated MS bands. The path radiance should be estimated and subtracted from each band before the product by Pan is computed. Both empirical and model-based techniques for estimating the MS path radiances are compared within the framework of optimized algorithms. Simulations carried out on two GeoEye-1 observations of the same agricultural landscape on different dates highlight that de-hazing the MS bands before fusion is beneficial to an accurate detection of seasonal changes in the scene, as measured by the normalized difference vegetation index (NDVI).
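    The de-hazed multiplicative injection described above amounts to subtracting the per-band path radiance, scaling by the Pan contrast ratio, and adding the path radiance back. A minimal sketch on synthetic data (the lowpass Pan and the haze values are assumptions, not estimates from a real scene):

    ```python
    import numpy as np

    def dehazed_multiplicative_fusion(ms, pan, pan_low, path_radiance):
        """ms: (bands, H, W) interpolated MS; path_radiance: (bands,) haze estimates."""
        lp = path_radiance[:, None, None]
        ratio = pan / np.maximum(pan_low, 1e-12)   # spatial detail as a contrast ratio
        return (ms - lp) * ratio + lp              # de-hazed multiplicative injection

    rng = np.random.default_rng(3)
    ms = rng.uniform(0.2, 1.0, (4, 16, 16))        # synthetic interpolated MS bands
    pan_low = np.full((16, 16), 0.5)               # lowpass-filtered Pan
    pan = pan_low * rng.uniform(0.9, 1.1, (16, 16))
    haze = np.array([0.05, 0.04, 0.03, 0.02])      # per-band path-radiance estimates
    fused = dehazed_multiplicative_fusion(ms, pan, pan_low, haze)
    ```

    Subtracting the haze before applying the ratio is what makes the injected contrast consistent with surface reflectance rather than at-sensor radiance, which is why the correction helps preserve NDVI-based change detection.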

    Band-wise Hyperspectral Image Pansharpening using CNN Model Propagation

    Hyperspectral pansharpening has received growing interest in the last few years, as testified by a large number of research papers and challenges. It consists of a pixel-level fusion between a lower-resolution hyperspectral datacube and a higher-resolution single-band image, the panchromatic image, with the goal of providing a hyperspectral datacube at panchromatic resolution. Thanks to their powerful representational capabilities, deep learning models have provided unprecedented results on many general-purpose image processing tasks. However, when moving to domain-specific problems such as this one, the advantages over traditional model-based approaches are much less clear-cut, for several contextual reasons. Scarcity of training data, lack of ground truth, and data-shape variability are some of the factors that limit the generalization capacity of state-of-the-art deep learning networks for hyperspectral pansharpening. To cope with these limitations, in this work we propose a new deep learning method that nests a simple single-band unsupervised pansharpening model in a sequential band-wise adaptive scheme, where each band is pansharpened by refining the model tuned on the preceding one. In this way, a simple model is propagated along the wavelength dimension, adaptively and flexibly, with no need for a fixed number of spectral bands and no need for large, expensive, labeled training datasets. The proposed method achieves very good results on our datasets, outperforming both traditional and deep learning reference methods. The implementation of the proposed method can be found at https://github.com/giu-guarino/R-PN
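    The band-wise propagation scheme described above can be sketched as a warm-started tuning loop: the parameters fitted on band k seed the fit for band k+1. A two-parameter gain/bias model and plain gradient descent stand in for the paper's CNN and its unsupervised loss; all data here are synthetic:

    ```python
    import numpy as np

    def tune_band(pan, target, params, lr=0.5, steps=500):
        """Fit target ≈ gain*pan + bias by gradient descent, warm-started from params."""
        gain, bias = params
        for _ in range(steps):
            err = gain * pan + bias - target
            gain -= lr * np.mean(err * pan)        # gradient step on the gain
            bias -= lr * np.mean(err)              # gradient step on the bias
        return gain, bias

    rng = np.random.default_rng(4)
    pan = rng.random((32, 32))
    # Adjacent bands drift smoothly, which is what makes warm-starting effective:
    bands = [0.8 * pan + 0.1, 0.82 * pan + 0.12, 0.85 * pan + 0.15]
    params = (1.0, 0.0)                            # initial model
    sharpened = []
    for band in bands:
        params = tune_band(pan, band, params)      # propagate tuned parameters
        sharpened.append(params[0] * pan + params[1])
    ```

    Because neighboring bands are spectrally close, each refinement starts near its optimum, which is the intuition behind propagating one lightweight model along the wavelength dimension instead of training a large multi-band network.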

    A method to better account for modulation transfer functions in ARSIS-based pansharpening methods

    Multispectral (MS) images provided by Earth observation satellites generally have a poor spatial resolution, while panchromatic (PAN) images exhibit a spatial resolution two to four times better. Data fusion is a means of synthesizing MS images at a higher spatial resolution than the originals by exploiting the high spatial resolution of the PAN. This process is often called pansharpening. The synthesis property states that the synthesized MS images should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. Methods based on the concept Amélioration de la Résolution Spatiale par Injection de Structures (ARSIS) are able to deliver synthesized images with good spectral quality, but their geometrical quality can still be improved. We propose a more precise definition of the synthesis property in terms of geometry. We then present a method that explicitly takes into account the difference in modulation transfer function (MTF) between PAN and MS in the fusion process. This method is applied to an existing ARSIS-based fusion method, the à trous wavelet transform with Model 3. Simulated images of the Pleiades and SPOT-5 sensors are used to illustrate the performance of the approach. Although this paper is limited in methods and data, we observe a better restitution of the geometry and an improvement in all indices classically used in pansharpening quality budgets. We also present a means of assessing how well the synthesis property is respected from an MTF point of view.
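    Accounting for the MS sensor's MTF typically means filtering with a lowpass whose gain at the Nyquist frequency matches the sensor's published value. A minimal frequency-domain sketch with a Gaussian-shaped MTF (the Nyquist gain of 0.3 is an illustrative assumption; real sensors each have their own measured values):

    ```python
    import numpy as np

    def mtf_gaussian(shape, nyquist_gain=0.3):
        """2-D Gaussian MTF with the prescribed amplitude at the Nyquist frequency."""
        fy = np.fft.fftfreq(shape[0])[:, None]     # cycles/sample, Nyquist at ±0.5
        fx = np.fft.fftfreq(shape[1])[None, :]
        sigma2 = -0.25 / (2.0 * np.log(nyquist_gain))  # makes H(0.5) = nyquist_gain
        return np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma2))

    def mtf_filter(img, nyquist_gain=0.3):
        """Apply the MTF-matched lowpass to an image in the frequency domain."""
        H = mtf_gaussian(img.shape, nyquist_gain)
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    smoothed = mtf_filter(np.random.default_rng(6).random((32, 32)))
    ```

    Using such a filter when extracting details from PAN makes the injected spatial information consistent with what the MS sensor actually attenuates, which is the core of the correction the paper proposes.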