
    Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution

    In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains is equally important. However, due to hardware limitations, one can usually acquire images of high resolution in only one of the two domains. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain an HR HSI. Existing deep learning-based solutions are all supervised, requiring a large training set and the availability of HR HSI, which is unrealistic. Here, we make the first attempt to solve the HSI-SR problem using an unsupervised encoder-decoder architecture with the following unique features. First, it is composed of two encoder-decoder networks coupled through a shared decoder, in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between the representations is minimized in order to reduce spectral distortion. We refer to the proposed architecture as the unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN compared to the state-of-the-art. Comment: Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Spotlight.
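    The coupled design is easiest to see as two pixel-wise encoders feeding one shared decoder. Below is a minimal PyTorch sketch of that idea; the layer sizes, the number of basis vectors, and the use of a softmax as a stand-in for the sparse Dirichlet constraint are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoupledAutoencoder(nn.Module):
        def __init__(self, hsi_bands=31, msi_bands=3, n_basis=10):
            super().__init__()
            self.enc_hsi = nn.Sequential(nn.Linear(hsi_bands, 64), nn.ReLU(),
                                         nn.Linear(64, n_basis))
            self.enc_msi = nn.Sequential(nn.Linear(msi_bands, 64), nn.ReLU(),
                                         nn.Linear(64, n_basis))
            # Shared decoder: its weights act as a spectral basis learned from
            # the HSI branch and reused by the MSI branch.
            self.decoder = nn.Linear(n_basis, hsi_bands, bias=False)

        def represent(self, x, encoder):
            # Softmax keeps representations non-negative and sum-to-one, the
            # two physical constraints a sparse Dirichlet prior encodes.
            return F.softmax(encoder(x), dim=-1)

        def forward(self, hsi_pix, msi_pix):
            a_hsi = self.represent(hsi_pix, self.enc_hsi)
            a_msi = self.represent(msi_pix, self.enc_msi)
            # In practice the MSI reconstruction would also pass through the
            # sensor's spectral response to map back to MSI bands (omitted).
            return self.decoder(a_hsi), self.decoder(a_msi), a_hsi, a_msi

    def angular_loss(a_hsi, a_msi, eps=1e-7):
        # Minimize the angle between the two modalities' representations to
        # limit spectral distortion.
        cos = F.cosine_similarity(a_hsi, a_msi, dim=-1)
        return torch.acos(cos.clamp(-1 + eps, 1 - eps)).mean()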

    Super-resolving multiresolution images with band-independent geometry of multispectral pixels

    A new resolution enhancement method is presented for multispectral and multi-resolution images, such as those provided by the Sentinel-2 satellites. Starting from the highest-resolution bands, band-dependent information (reflectance) is separated from information that is common to all bands (the geometry of scene elements). This model is then applied to unmix low-resolution bands, preserving their reflectance while propagating the band-independent information to preserve sub-pixel details. A reference implementation is provided, with an application example for super-resolving Sentinel-2 data. Comment: Source code with a ready-to-use script for super-resolving Sentinel-2 data is available at http://nicolas.brodu.net/recherche/superres
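    As a toy illustration of the separation (not the paper's actual algorithm), the NumPy sketch below redistributes each low-resolution pixel's reflectance over its subpixels using positive geometry weights shared across bands; the 2x factor and the weighting scheme are assumptions. The block mean of the output equals the input pixel, so reflectance is preserved while sub-pixel detail is injected.

    import numpy as np

    def super_resolve_band(lr_band, hr_geometry, factor=2):
        """lr_band: (H, W); hr_geometry: (H*factor, W*factor) positive weights
        derived from the high-resolution bands (band-independent geometry)."""
        H, W = lr_band.shape
        hr = np.empty((H * factor, W * factor))
        for i in range(H):
            for j in range(W):
                block = hr_geometry[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
                weights = block / block.sum()
                # Weights sum to 1, so the mean over the block stays equal to
                # the low-resolution value: reflectance is preserved.
                hr[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = (
                    lr_band[i, j] * weights * factor**2)
        return hr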

    Toward reduction of artifacts in fused images

    Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, into the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect grows as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm applied through a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT) using the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. Implementing this approach with the WAT method allows the fusion process to adapt to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
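    For concreteness, a minimal NumPy sketch of a local fractal-dimension map via box counting follows; the window size, binarization threshold, and box scales are illustrative assumptions rather than the paper's exact configuration.

    import numpy as np

    def box_counting_dimension(window):
        """Estimate the FD of a binary 2-D window via box counting."""
        sizes = [s for s in (2, 4, 8, 16) if s <= min(window.shape)]
        counts = []
        for s in sizes:
            h, w = window.shape[0] // s, window.shape[1] // s
            # Count boxes of side s that contain at least one foreground pixel.
            boxes = window[:h*s, :w*s].reshape(h, s, w, s).any(axis=(1, 3))
            counts.append(max(boxes.sum(), 1))
        # FD is the slope of log(count) versus log(1/size).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    def fd_map(image, win=32, thresh=None):
        """Slide a window over the image and estimate a local FD per position."""
        t = image.mean() if thresh is None else thresh
        binary = image > t
        out = np.zeros((binary.shape[0] // win, binary.shape[1] // win))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = box_counting_dimension(
                    binary[i*win:(i+1)*win, j*win:(j+1)*win])
        # Index the map between 0 and 1, as the abstract describes.
        return (out - out.min()) / (np.ptp(out) + 1e-12)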

    Pan-sharpening of WorldView-2 images with deep learning

    In recent years, the exponential growth of interest in Deep Learning has had a huge impact on improving image resolution. In particular, enhancing the quality of remote sensing imagery is a field where many models have been proposed by different researchers. One of these approaches is pan-sharpening, which takes advantage of paired satellite images to raise the resolution of multispectral or hyperspectral images. In this project, a model from the literature will be adapted for WorldView-2 satellite imagery and modified to improve upon the results currently reported for the model. Experimental results will be compared between the adapted model and the modified one so that the effectiveness of the adjustments can be assessed.

    Data fusion: taking into account the modulation transfer function in ARSIS-based pansharpening methods

    Multispectral (MS) images provided by satellites have a poor spatial resolution, while panchromatic (PAN) images exhibit a spatial resolution two or four times better. Data fusion is a means to synthesize MS images at a higher spatial resolution than the original by exploiting the high spatial resolution of the PAN image; this process is often called pan-sharpening. The synthesized multispectral images should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. Methods based on the "Amélioration de la Résolution Spatiale par Injection de Structures" (ARSIS) concept are able to deliver synthesized images with good spectral quality, but their geometrical quality can still be enhanced. We propose to consider the characteristics of the sensor to improve the geometrical quality, taking explicitly into account the modulation transfer function (MTF) of the sensor in the fusion process. Though this study is limited in methods and data, we observe a better restitution of the geometry and an improvement in the majority of the quality indices classically used in pan-sharpening. The communication also presents a means to assess whether the synthesis property is respected from an MTF point of view.
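    A common way to model the MTF, sketched below under assumed values, is a Gaussian low-pass whose gain at the MS Nyquist frequency matches a nominal figure (0.3 here); only the PAN details that such a filter removes are then injected into the upsampled MS band. This is a hedged illustration of the general idea, not the ARSIS specification.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def mtf_matched_sigma(nyquist_gain=0.3, ratio=4):
        """Std of a Gaussian whose frequency response exp(-2*pi^2*sigma^2*f^2)
        equals nyquist_gain at f = 1/(2*ratio) cycles per PAN pixel."""
        f = 1.0 / (2.0 * ratio)
        return np.sqrt(-np.log(nyquist_gain) / (2.0 * np.pi**2 * f**2))

    def mtf_details(pan, nyquist_gain=0.3, ratio=4):
        """High-frequency PAN structures the MS sensor cannot resolve."""
        pan_low = gaussian_filter(pan, sigma=mtf_matched_sigma(nyquist_gain, ratio))
        return pan - pan_low

    def fuse_band(ms_band_lr, pan, ratio=4, nyquist_gain=0.3):
        ms_up = zoom(ms_band_lr, ratio, order=3)  # upsample MS to the PAN grid
        # Additive injection of the MTF-filtered details (a simple stand-in
        # for the ARSIS injection model).
        return ms_up + mtf_details(pan, nyquist_gain, ratio)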

    W-NetPan: Double-U network for inter-sensor self-supervised pan-sharpening

    Get PDF
    The increasing availability of remote sensing data allows dealing with spatial-spectral limitations by means of pan-sharpening methods. However, fusing inter-sensor data poses important challenges, in terms of resolution differences, sensor-dependent deformations and ground-truth data availability, that demand more accurate pan-sharpening solutions. In response, this paper proposes a novel deep learning-based pan-sharpening model, termed the double-U network for self-supervised pan-sharpening (W-NetPan). In more detail, the proposed architecture adopts an innovative W-shape that integrates two U-Net segments which sequentially work for spatially matching and fusing inter-sensor multi-modal data. In this way, a synergic effect is produced where the first segment resolves inter-sensor deviations while stimulating the second one to achieve a more accurate data fusion. Additionally, a joint loss formulation is proposed for effectively training the proposed model without external data supervision. The experimental comparison, conducted over four coupled Sentinel-2 and Sentinel-3 datasets, reveals the advantages of W-NetPan with respect to several of the most important state-of-the-art pan-sharpening methods available in the literature. The codes related to this paper will be available at https://github.com/rufernan/WNetPan
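    A minimal PyTorch sketch of the W-shape follows: two small U-Net segments in sequence, the first spatially matching the upsampled coarse (e.g. Sentinel-3) bands to the fine (e.g. Sentinel-2) ones and the second fusing them. Channel counts, depth, and band numbers are illustrative assumptions, not the released model.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

    class TinyUNet(nn.Module):
        """One-level U-Net segment with a single skip connection."""
        def __init__(self, cin, cout):
            super().__init__()
            self.down = conv_block(cin, 32)
            self.pool = nn.MaxPool2d(2)
            self.mid = conv_block(32, 64)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.out = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, cout, 1))

        def forward(self, x):
            d = self.down(x)
            m = self.up(self.mid(self.pool(d)))
            return self.out(torch.cat([d, m], dim=1))

    class WNet(nn.Module):
        def __init__(self, coarse_bands=21, fine_bands=4):
            super().__init__()
            # First U: spatially match the upsampled coarse image to the fine one.
            self.align = TinyUNet(coarse_bands + fine_bands, coarse_bands)
            # Second U: fuse the aligned coarse bands with the fine bands; a
            # self-supervised joint loss would constrain both outputs.
            self.fuse = TinyUNet(coarse_bands + fine_bands, coarse_bands)

        def forward(self, coarse_up, fine):
            aligned = self.align(torch.cat([coarse_up, fine], dim=1))
            fused = self.fuse(torch.cat([aligned, fine], dim=1))
            return aligned, fused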