
    Impact of Feature Representation on Remote Sensing Image Retrieval

    Remote sensing images are acquired using special platforms and sensors, and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared to normal Red, Green, Blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of two steps: feature representation, followed by finding images similar to a query image. Feature representation plays an important part in the performance of the retrieval process. This research work focuses on the impact of feature representation on the performance of remote sensing image retrieval. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the retrieval process.
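    The two retrieval steps described above can be sketched as follows. This is a minimal illustration, not the study's method: feature extraction is stubbed with a mean-spectrum descriptor (a hypothetical choice), and similarity is plain cosine similarity.

```python
import numpy as np

def extract_features(image):
    """Represent an (H, W, bands) image by its mean spectrum.
    A stand-in for a real (e.g., learned) feature descriptor."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def retrieve(query, archive, k=3):
    """Indices of the k archive images most similar to the query,
    ranked by cosine similarity between feature vectors."""
    q = extract_features(query)
    feats = np.stack([extract_features(img) for img in archive])
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
archive = [rng.random((8, 8, 4)) for _ in range(10)]
query = archive[7]                      # query with a known archive image
print(retrieve(query, archive)[0])      # image 7 ranks first
```

    As the abstract notes, retrieval quality hinges on the first step: the more discriminative the feature vectors, the better the ranking separates relevant from irrelevant images.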

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolutions, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Spectral Super-Resolution of Satellite Imagery with Generative Adversarial Networks

    Hyperspectral (HS) data provides the most accurate interpretation of the surface, as it offers fine spectral information with hundreds of narrow contiguous bands, whereas the bands of multispectral (MS) data cover larger wavelength portions of the electromagnetic spectrum. This difference is noticeable in applications such as agriculture, geosciences, and astronomy. However, HS sensors are scarce on Earth-observing spacecraft due to their high cost. In this study, we propose a novel loss function for generative adversarial networks as a spectral-oriented and general-purpose solution to spectral super-resolution of satellite imagery. The proposed architecture learns a mapping from MS to HS data, generating nearly 20x more bands than the given input. We show that we outperform state-of-the-art methods in visual interpretation and statistical metrics.
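    The abstract does not spell out the proposed loss function; as an illustration of a spectral-oriented term commonly used in this setting, the sketch below computes the spectral angle mapper (SAM), which penalizes angular deviation between generated and reference per-pixel spectra and is insensitive to per-pixel scaling.

```python
import numpy as np

def sam_loss(generated, reference, eps=1e-8):
    """Mean spectral angle (radians) between per-pixel spectra of two
    (H, W, bands) cubes; near 0 when spectral shapes match."""
    g = generated.reshape(-1, generated.shape[-1])
    r = reference.reshape(-1, reference.shape[-1])
    cos = np.sum(g * r, axis=1) / (
        np.linalg.norm(g, axis=1) * np.linalg.norm(r, axis=1) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()

rng = np.random.default_rng(0)
hs = rng.random((4, 4, 200))                  # toy 200-band reference cube
print(sam_loss(hs, hs))                       # near 0: identical spectra
print(sam_loss(hs, 2.0 * hs))                 # near 0: SAM ignores scaling
print(sam_loss(hs, rng.random((4, 4, 200))))  # larger: unrelated spectra
```

    In a GAN setting, such a term would typically be added to the adversarial loss so the generator is rewarded for spectral shape fidelity, not just realism.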

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    Get PDF
    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information, as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, which learn to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task using large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features on top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
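    The label propagation idea can be sketched in isolation. This is a generic graph-based label propagation scheme, not X-ModalNet's module: the affinity matrix here is hand-made, whereas the paper builds an updatable graph from high-level network features.

```python
import numpy as np

def propagate_labels(W, Y, labeled_mask, alpha=0.9, n_iter=50):
    """Spread seed labels over a graph: F <- alpha * S @ F + (1 - alpha) * Y,
    with S the row-normalized affinity matrix and Y one-hot seed labels."""
    S = W / W.sum(axis=1, keepdims=True)
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
        F[labeled_mask] = Y[labeled_mask]      # clamp the known labels
    return F.argmax(axis=1)

# Two 3-node clusters joined by one weak edge; one labeled node per cluster.
W = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0                                  # node 0 seeded as class 0
Y[3, 1] = 1.0                                  # node 3 seeded as class 1
labeled = np.array([True, False, False, True, False, False])
print(propagate_labels(W, Y, labeled))         # [0 0 0 1 1 1]
```

    Each cluster inherits the label of its seed node, which is the behavior that lets a few annotated pixels label the rest of the graph.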

    Deep learning-based change detection in remote sensing images: a review

    Get PDF
    Images gathered from different satellites are widely available these days due to the fast development of remote sensing (RS) technology. These images significantly enrich the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at distinct times and is used for numerous applications, such as urban area development, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been used extensively in change detection, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques, namely supervised, unsupervised, and semi-supervised, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. In the end, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
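    As a point of reference for the DL methods surveyed, the classical baseline is image differencing followed by thresholding. The sketch below uses a simple mean-plus-k-standard-deviations threshold (a simplification; Otsu's method is the more common choice).

```python
import numpy as np

def change_map(img_t1, img_t2, k=2.0):
    """Binary change map from two co-registered grayscale images:
    a pixel is 'changed' if its absolute difference exceeds
    mean + k * std of the difference image."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    return diff > diff.mean() + k * diff.std()

rng = np.random.default_rng(1)
t1 = rng.random((32, 32))
t2 = t1 + 0.02 * rng.standard_normal((32, 32))  # small sensor noise
t2[10:15, 10:15] += 1.0                          # a genuinely changed patch
cm = change_map(t1, t2)
print(cm[10:15, 10:15].all())                    # True: patch detected
print(int(cm.sum()))                             # 25: no false alarms
```

    DL approaches replace both the hand-crafted difference and the global threshold with learned representations, which is where the accuracy gains discussed in the review come from.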

    Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency

    Hyperspectral images (HSI), whose abundant spectral information reflects material properties, usually have low spatial resolution due to hardware limits. Meanwhile, multispectral images (MSI), e.g., RGB images, have high spatial resolution but deficient spectral signatures. Hyperspectral and multispectral image fusion can be a cost-effective and efficient way of acquiring images with both high spatial resolution and high spectral resolution. Many of the conventional HSI and MSI fusion algorithms rely on known spatial degradation parameters (i.e., the point spread function), known spectral degradation parameters (i.e., the spectral response function), or both. Another class of deep learning-based models relies on ground-truth high spatial resolution HSI and needs large amounts of paired training images when working in a supervised manner. Both of these classes of models are limited in practical fusion scenarios. In this paper, we propose an unsupervised HSI and MSI fusion model based on cycle consistency, called CycFusion. CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI), and the desired high spatial resolution HSI (HrHSI) is considered to be an intermediate feature map in the transformation networks. CycFusion can be trained with the objective functions of marginal matching in single transforms and cycle consistency in double transforms. Moreover, the estimated point spread function (PSF) and spectral response function (SRF) are embedded in the model as pre-training weights, which further enhances the practicality of our proposed model. Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods. The code of this paper is available at https://github.com/shuaikaishi/CycFusion for reproducibility.
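    The cycle-consistency objective can be illustrated with linear stand-ins for the two transformation networks (the paper's networks and its marginal-matching term are omitted; the HS-to-MS map A and its pseudo-inverse below are hypothetical choices).

```python
import numpy as np

def cycle_consistency_loss(x_hsi, hsi_to_msi, msi_to_hsi):
    """Mean squared error between x and its double transform
    msi_to_hsi(hsi_to_msi(x))."""
    x_rec = msi_to_hsi(hsi_to_msi(x_hsi))
    return float(np.mean((x_rec - x_hsi) ** 2))

bands_hs, bands_ms = 100, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((bands_hs, bands_ms))    # toy HS -> MS projection
f = lambda x: x @ A                              # forward transform
g = lambda y: y @ np.linalg.pinv(A)              # backward transform

x = rng.standard_normal((50, bands_hs))          # 50 pixel spectra
print(cycle_consistency_loss(x, f, g))           # large: A discards bands
x_in_range = x @ A @ np.linalg.pinv(A)           # data the cycle can keep
print(cycle_consistency_loss(x_in_range, f, g))  # ~0: cycle-consistent
```

    Training the two mappings to drive this loss down, jointly with marginal matching, is what lets the intermediate HrHSI representation emerge without paired supervision.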

    A Spectral Diffusion Prior for Hyperspectral Image Super-Resolution

    Fusion-based hyperspectral image (HSI) super-resolution aims to produce a high-spatial-resolution HSI by fusing a low-spatial-resolution HSI and a high-spatial-resolution multispectral image. Such an HSI super-resolution process can be modeled as an inverse problem, where prior knowledge is essential for obtaining the desired solution. Motivated by the success of diffusion models, we propose a novel spectral diffusion prior for fusion-based HSI super-resolution. Specifically, we first investigate the spectrum generation problem and design a spectral diffusion model to model the spectral data distribution. Then, in the maximum a posteriori framework, we keep the transition information between every two neighboring states during the reverse generative process, and thereby embed the knowledge of the trained spectral diffusion model into the fusion problem in the form of a regularization term. At last, we treat each generation step of the final optimization problem as a subproblem and employ Adam to solve these subproblems in reverse sequence. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed approach. The code of the proposed approach will be available at https://github.com/liuofficial/SDP