
    Spatiotemporal Fusion of Land Surface Temperature Based on a Convolutional Neural Network

    Due to the tradeoff between spatial and temporal resolutions commonly encountered in remote sensing, no single satellite sensor can provide fine spatial resolution land surface temperature (LST) products with frequent coverage. This situation greatly limits applications that require LST data with fine spatiotemporal resolution. Here, a deep learning-based spatiotemporal temperature fusion network (STTFN) method for the generation of fine spatiotemporal resolution LST products is proposed. In STTFN, a multiscale fusion convolutional neural network is employed to build the complex nonlinear relationship between input and output LSTs. Thus, unlike other LST spatiotemporal fusion approaches, STTFN is able to learn the potentially complicated relationships from training data without manually designed mathematical rules, making it more flexible and intelligent than other methods. In addition, two target fine spatial resolution LST images are predicted and then integrated by a spatiotemporal-consistency (STC) weighting function to take advantage of the STC of LST data. A set of analyses using two real LST data sets obtained from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS) was undertaken to evaluate the ability of STTFN to generate fine spatiotemporal resolution LST products. The results show that, compared with three classic fusion methods [the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), the spatiotemporal integrated temperature fusion model (STITFM), and the two-stream convolutional neural network for spatiotemporal image fusion (StfNet)], the proposed network produced the most accurate outputs [average root mean square error (RMSE) 0.971].
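The abstract does not give the exact form of the STC weighting function, but the idea of blending two fine-resolution predictions according to spatiotemporal consistency can be sketched as follows. This is a minimal illustration, not the authors' implementation: the assumption (made here, not stated above) is that each prediction is weighted by the inverse magnitude of the coarse-resolution temporal change at its base date.

```python
import numpy as np

def stc_weighted_fusion(pred_from_t1, pred_from_t2,
                        coarse_diff_t1, coarse_diff_t2, eps=1e-6):
    """Blend two fine-resolution LST predictions, favouring the one
    whose base date shows the smaller temporal change at coarse
    resolution (a spatiotemporal-consistency proxy; illustrative only)."""
    w1 = 1.0 / (np.abs(coarse_diff_t1) + eps)
    w2 = 1.0 / (np.abs(coarse_diff_t2) + eps)
    return (w1 * pred_from_t1 + w2 * pred_from_t2) / (w1 + w2)
```

With equal coarse-resolution changes the result reduces to a plain average; as one base date becomes more consistent with the target date, its prediction dominates.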

    A Latent Encoder Coupled Generative Adversarial Network (LE-GAN) for Efficient Hyperspectral Image Super-resolution

    Realistic hyperspectral image (HSI) super-resolution (SR) techniques aim to generate a high-resolution (HR) HSI with higher spectral and spatial fidelity from its low-resolution (LR) counterpart. The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution. However, the optimisation process of existing GAN-based models frequently suffers from mode collapse, leading to limited capacity for spectral-spatial invariant reconstruction. This may cause spectral-spatial distortion in the generated HSI, especially with a large upscaling factor. To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN), which can map the generated spectral-spatial features from the image space to the latent space and produce a coupling component to regularise the generated samples. Essentially, we treat an HSI as a high-dimensional manifold embedded in a latent space. Thus, the optimisation of GAN models is converted to the problem of learning the distributions of high-resolution HSI samples in the latent space, making the distributions of the generated super-resolution HSIs closer to those of their original high-resolution counterparts. We have conducted experimental evaluations of the model's super-resolution performance and of its capability in alleviating mode collapse. The proposed approach has been tested and validated on two real HSI datasets from different sensors (i.e. AVIRIS and UHD-185) for various upscaling factors and added noise levels, and compared with state-of-the-art super-resolution models (i.e. HyCoNet, LTTR, BAGAN, SRGAN, WGAN). Comment: 18 pages, 10 figures.
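The latent-encoder coupling described above can be sketched in miniature. This is a toy illustration of the regulariser's shape only, not the LE-GAN architecture: the linear "generator" and "encoder" matrices, the latent dimension, and the weight `lam` are all stand-ins invented here for the deep networks and hyperparameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W_g = rng.standard_normal((64, 16))   # toy "generator": latent (16) -> features (64)
W_e = rng.standard_normal((16, 64))   # toy "encoder": features -> latent

def latent_coupling_loss(z):
    """Re-encode the generated sample and penalise the distance between
    the recovered latent code and the original one, pulling generated
    samples back toward the latent manifold (sketch of the coupling term)."""
    fake = W_g @ z          # generated sample in feature space
    z_rec = W_e @ fake      # latent code recovered by the encoder
    return float(np.mean((z_rec - z) ** 2))

# In training this term would be added to the adversarial loss,
# e.g. total = adversarial_loss + lam * latent_coupling_loss(z),
# with lam a tuning weight (hypothetical here).
```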

    Guided Nonlocal Patch Regularization and Efficient Filtering-Based Inversion for Multiband Fusion

    In multiband fusion, an image with high spatial and low spectral resolution is combined with an image with low spatial but high spectral resolution to produce a single multiband image having high spatial and spectral resolutions. This comes up in remote sensing applications such as pansharpening (MS+PAN), hyperspectral sharpening (HS+PAN), and HS-MS fusion (HS+MS). Remote sensing images are textured and have repetitive structures. Motivated by nonlocal patch-based methods for image restoration, we propose a convex regularizer that (i) takes into account long-distance correlations, (ii) penalizes patch variation, which is more effective than pixel variation for capturing texture information, and (iii) uses the higher spatial resolution image as a guide image for weight computation. We derive an efficient ADMM algorithm for optimizing the regularizer along with a standard least-squares loss function derived from the imaging model. The novelty of our algorithm is that, by expressing patch variation as filtering operations and by judiciously splitting the original variables and introducing latent variables, we are able to solve the ADMM subproblems efficiently using FFT-based convolution and soft-thresholding. As far as reconstruction quality is concerned, our method is shown to outperform state-of-the-art variational and deep learning techniques. Comment: Accepted in IEEE Transactions on Computational Imaging.
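The two subproblem types mentioned above (FFT-based convolution solves and soft-thresholding) are standard ADMM building blocks and can be sketched generically. This is not the paper's algorithm: the circular-boundary assumption, the filter set, and the penalty parameter `rho` are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm (typical ADMM z-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def x_update_fft(rhs, kernels, rho, shape):
    """Solve (sum_k K_k^T K_k + rho I) x = rhs in O(n log n), assuming
    each filter K_k acts as a circular convolution so the normal
    equations diagonalise under the 2-D FFT."""
    denom = rho * np.ones(shape)
    for k in kernels:
        Kf = np.fft.fft2(k, s=shape)      # zero-pad kernel to image size
        denom += np.abs(Kf) ** 2          # |K_hat|^2 on the Fourier grid
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

With no filters, the x-update degenerates to a simple scaling by `1/rho`, which is a handy sanity check.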

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolutions, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Development and implementation of image fusion algorithms based on wavelets

    Image fusion is a process of blending the complementary as well as the common features of a set of images to generate a resultant image with superior information content from both subjective and objective analysis points of view. The objective of this research work is to develop novel image fusion algorithms and their applications in various fields such as crack detection, multi-spectral sensor image fusion, medical image fusion, and edge detection of multi-focus images. The first part of this research work deals with a novel crack detection technique based on Non-Destructive Testing (NDT) for cracks in walls, suppressing the diversity and complexity of wall images. It employs different edge tracking algorithms such as Hyperbolic Tangent (HBT) filtering and the Canny edge detection algorithm. The second part deals with a novel edge detection approach for multi-focused images by means of complex-wavelet-based image fusion. An illumination-invariant HBT filter is applied, followed by adaptive thresholding to obtain the real edges. The shift invariance and directionally selective diagonal filtering, as well as the ease of implementation, of the Dual-Tree Complex Wavelet Transform (DT-CWT) ensure robust sub-band fusion. This helps in avoiding the ringing artefacts that are more pronounced in the Discrete Wavelet Transform (DWT). Fusion using the DT-CWT also solves the problems of low contrast and blocking effects. In the third part, an improved DT-CWT-based image fusion technique has been developed to compose a resultant image with better perceptual as well as quantitative image quality indices. A bilateral sharpness-based weighting scheme has been implemented for the high-frequency coefficients, taking both the gradient and its phase coherence into account.
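The wavelet-domain fusion rule underlying work like the above (average the low-frequency band, keep the stronger detail coefficients) can be sketched with a single-level 2-D Haar transform. The Haar transform is a deliberately simple stand-in for the DT-CWT used in the thesis, and the max-absolute fusion rule is a common baseline, not the bilateral sharpness-based weighting described above.

```python
import numpy as np

def haar2(x):
    """Single-level 2-D Haar transform: approximation + 3 detail bands."""
    a = (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[1::2, 0::2] + x[0::2, 1::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[1::2, 0::2] - x[0::2, 1::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[1::2, 0::2] - x[0::2, 1::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    y = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    y[0::2, 0::2] = a + h + v + d
    y[1::2, 0::2] = a - h + v - d
    y[0::2, 1::2] = a + h - v - d
    y[1::2, 1::2] = a - h - v + d
    return y

def fuse(img1, img2):
    """Average the approximation band; for each detail band keep the
    coefficient with the larger magnitude (baseline fusion rule)."""
    (a1, *d1), (a2, *d2) = haar2(img1), haar2(img2)
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(d1, d2)]
    return ihaar2((a1 + a2) / 2, *details)
```

Fusing an image with itself returns the image unchanged, which verifies that the transform pair is exact.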

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environment monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The works selected provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead

    Panchromatic and multispectral image fusion, termed pan-sharpening, aims to merge the spatial and spectral information of the source images into a fused one that has higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images. It has been widely applied to image interpretation and pre-processing for various applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships among panchromatic and multispectral images. In recent years, the fast development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques. However, the field lacks a comprehensive overview of recent advances boosted by the rise of AI and DL. This paper provides a comprehensive review of pan-sharpening methods that adopt four different paradigms: component substitution, multiresolution analysis, degradation model, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined, covering various assessment methods for reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment, and summarize the development trends in these areas, which provide useful methodological practices for researchers and professionals. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.
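The component-substitution paradigm mentioned above has a classic, very compact instance: the Brovey transform, which rescales each multispectral band by the ratio of the panchromatic image to the bands' mean intensity. A minimal numpy sketch (the equal-weight intensity and the `eps` guard are simplifying assumptions; real pipelines weight bands by sensor spectral response):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-12):
    """Brovey-transform pan-sharpening (component substitution).
    ms:  (H, W, B) multispectral image already resampled to the PAN grid
    pan: (H, W) panchromatic image
    Each band is scaled by pan / intensity, injecting PAN spatial detail
    while preserving the band ratios (hence the spectral content)."""
    intensity = ms.mean(axis=2)                       # simple intensity component
    return ms * (pan / (intensity + eps))[..., None]  # per-pixel ratio scaling
```

When the panchromatic image happens to equal the intensity component, the output reduces to the input multispectral image, which is a useful sanity check.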