
    Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks

    Convolutional Neural Networks (CNNs) have consistently achieved state-of-the-art results in image Super-Resolution (SR), representing an exceptional opportunity for the remote sensing field to extract further information and knowledge from captured data. However, most of the work published in the literature has so far focused on the Single-Image Super-Resolution problem. At present, satellite-based remote sensing platforms offer huge data availability with high temporal resolution and low spatial resolution. In this context, the presented research proposes a novel residual attention model (RAMS) that efficiently tackles the multi-image super-resolution task, simultaneously exploiting spatial and temporal correlations to combine multiple images. We introduce the mechanism of visual feature attention with 3D convolutions to achieve an aware data fusion and information extraction from the multiple low-resolution images, transcending the limitations of the local region of convolutional operations. Moreover, having multiple inputs of the same scene, our representation learning network makes extensive use of nested residual connections to let redundant low-frequency signals flow through and to focus the computation on the more important high-frequency components. Extensive experimentation and evaluation against other available solutions, for either single- or multi-image super-resolution, demonstrate that the proposed deep learning-based solution can be considered state-of-the-art for Multi-Image Super-Resolution in remote sensing applications.
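    As an illustration of the attention mechanism described above, the following is a minimal sketch (in PyTorch, not the authors' released RAMS code) of a residual feature-attention block built on 3D convolutions; the class name, channel count, and reduction factor are illustrative assumptions.

```python
# Illustrative sketch of a residual feature-attention block with 3D convolutions:
# features extracted from a stack of low-resolution frames are re-weighted
# channel-wise by a learned attention gate before a residual addition.
import torch
import torch.nn as nn

class ResidualFeatureAttention3D(nn.Module):  # hypothetical name, not the RAMS module
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )
        # Channel attention: squeeze over the temporal and spatial axes,
        # then excite through a small bottleneck and a sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        features = self.body(x)
        return x + features * self.attention(features)

# Example: 32 feature maps over a stack of 9 low-resolution 32x32 frames.
block = ResidualFeatureAttention3D(channels=32)
out = block(torch.randn(1, 32, 9, 32, 32))  # output keeps the input shape
```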

    Deep learning based single image super-resolution: a survey

    Single image super-resolution has attracted increasing attention and has a wide range of applications in satellite imaging, medical imaging, computer vision, security surveillance imaging, remote sensing, object detection, and recognition. Recently, deep learning techniques have emerged and blossomed, producing “the state-of-the-art” in many domains. Owing to their capability in feature extraction and mapping, they are very helpful for predicting the high-frequency details lost in low-resolution images. In this paper, we give an overview of recent advances in deep learning-based models and methods that have been applied to single image super-resolution tasks. We also summarize, compare, and discuss various models from the past and present for a comprehensive understanding, and finally provide open problems and possible directions for future research.
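    As a concrete starting point for the class of models covered by such surveys, below is a minimal SRCNN-style network, one of the earliest deep-learning baselines for single-image super-resolution; the 9-5-5 layer design follows the well-known baseline, but the snippet is only an illustrative sketch and not code from the survey itself.

```python
# Minimal SRCNN-style model: three convolutions applied to a low-resolution
# image that has already been upsampled (e.g. bicubically) to the target size;
# the network learns to restore the missing high-frequency detail.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, upscaled_lr: torch.Tensor) -> torch.Tensor:
        return self.net(upscaled_lr)

model = SRCNN()
sr = model(torch.randn(1, 1, 128, 128))  # (batch, channel, height, width)
```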

    Guided Depth Super-Resolution by Deep Anisotropic Diffusion

    Performing super-resolution of a depth image using the guidance from an RGB image is a problem that concerns several fields, such as robotics, medical imaging, and remote sensing. While deep learning methods have achieved good results on this problem, recent work has highlighted the value of combining modern methods with more formal frameworks. In this work, we propose a novel approach that combines guided anisotropic diffusion with a deep convolutional network and advances the state of the art for guided depth super-resolution. The edge transferring/enhancing properties of the diffusion are boosted by the contextual reasoning capabilities of modern networks, and a strict adjustment step guarantees perfect adherence to the source image. We achieve unprecedented results on three commonly used benchmarks for guided depth super-resolution. The performance gain compared to other methods is largest at larger scales, such as x32 scaling. Code for the proposed method will be made available to promote reproducibility of our results.
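    To make the diffusion component concrete, here is a minimal, hedged sketch of a single guided anisotropic diffusion step in which the conductance is computed from the RGB guide, so depth edges aligned with image edges are preserved; the function name and parameter values are assumptions, and the paper's full method additionally couples the diffusion with a deep network and an adjustment step.

```python
# One classical Perona-Malik-style diffusion step on a depth map, with the
# edge-stopping conductance derived from the (grayscale, [0, 1]-normalized)
# RGB guide rather than from the depth itself.
import numpy as np

def guided_diffusion_step(depth: np.ndarray, guide_gray: np.ndarray,
                          dt: float = 0.2, kappa: float = 0.05) -> np.ndarray:
    # Finite differences of the depth map in the four axis directions.
    d_n = np.roll(depth, -1, axis=0) - depth
    d_s = np.roll(depth, 1, axis=0) - depth
    d_e = np.roll(depth, -1, axis=1) - depth
    d_w = np.roll(depth, 1, axis=1) - depth

    # Conductance from the *guide* image: close to 1 in homogeneous regions,
    # near 0 across strong RGB edges (exponential edge-stopping function).
    g_n = np.exp(-((np.roll(guide_gray, -1, axis=0) - guide_gray) / kappa) ** 2)
    g_s = np.exp(-((np.roll(guide_gray, 1, axis=0) - guide_gray) / kappa) ** 2)
    g_e = np.exp(-((np.roll(guide_gray, -1, axis=1) - guide_gray) / kappa) ** 2)
    g_w = np.exp(-((np.roll(guide_gray, 1, axis=1) - guide_gray) / kappa) ** 2)

    # Explicit update: diffuse depth only where the guide is smooth.
    return depth + dt * (g_n * d_n + g_s * d_s + g_e * d_e + g_w * d_w)
```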

    Super-resolution land cover mapping by deep learning

    Super-resolution mapping (SRM) is a technique to estimate a fine spatial resolution land cover map from coarse spatial resolution fractional proportion images. SRM is often based explicitly on the use of a spatial pattern model that represents the land cover mosaic at the fine spatial resolution. Recently developed deep learning methods have considerable potential as an alternative approach for SRM, based on learning the spatial pattern of land cover from existing fine resolution data such as land cover maps. This letter proposes a deep learning-based SRM algorithm (DeepSRM). A deep convolutional neural network was first trained to estimate a fine resolution indicator image for each class from the coarse resolution fractional image, and all indicator images were then combined to create the final fine resolution land cover map based on the maximal value strategy. The results of an experiment undertaken with simulated images show that DeepSRM was superior to conventional hard classification and a suite of popular SRM algorithms, yielding the most accurate land cover representation. Consequently, methods such as DeepSRM may help exploit the potential of remote sensing as a source of accurate land cover information.
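    The final combination step described above (the maximal value strategy) can be sketched in a few lines; the function below is an illustrative assumption of how per-class indicator images might be merged, not the DeepSRM code itself.

```python
# Merge per-class fine-resolution indicator images into one land cover map by
# assigning each fine pixel the class with the maximal indicator value.
import numpy as np

def combine_indicator_maps(indicators: np.ndarray) -> np.ndarray:
    """indicators: (num_classes, H, W) stack of per-class indicator images.
    Returns an (H, W) label map holding the winning class index per pixel."""
    return np.argmax(indicators, axis=0)

# Example with 4 land cover classes on a 120x120 fine-resolution grid.
fine_map = combine_indicator_maps(np.random.rand(4, 120, 120))
```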

    SRDA-Net: Super-Resolution Domain Adaptation Networks for Semantic Segmentation

    Recently, Unsupervised Domain Adaptation has been proposed to address the domain shift problem in the semantic segmentation task, but it may perform poorly when the source and target domains have different resolutions. In this work, we design a novel end-to-end semantic segmentation network, the Super-Resolution Domain Adaptation Network (SRDA-Net), which simultaneously performs super-resolution and domain adaptation. These characteristics exactly meet the requirements of semantic segmentation for remote sensing images, which usually involve various resolutions. SRDA-Net comprises three deep neural networks: a Super-Resolution and Segmentation (SRS) model that focuses on recovering the high-resolution image and predicting the segmentation map; a pixel-level domain classifier (PDC) that tries to distinguish which domain the images come from; and an output-space domain classifier (ODC) that discriminates which domain the pixel label distributions come from. PDC and ODC act as the discriminators, and SRS is treated as the generator. Through adversarial learning, SRS tries to align the source and target domains in pixel-level visual appearance and in the output space. Experiments are conducted on two remote sensing datasets with different resolutions. SRDA-Net performs favorably against the state-of-the-art methods in terms of accuracy and visual quality. Code and models are available at https://github.com/tangzhenjie/SRDA-Net.
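    The generator/discriminator arrangement described above can be sketched as follows; the tiny modules, tensor sizes, and loss wiring are illustrative placeholders built under assumed shapes, not the released SRDA-Net models.

```python
# Hedged sketch of the adversarial wiring: the SRS generator yields a
# super-resolved image plus segmentation logits; a pixel-level discriminator
# (PDC) judges images and an output-space discriminator (ODC) judges label
# distributions, each trying to tell the source domain from the target domain.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRS(nn.Module):  # placeholder generator, not the paper's SRS model
    def __init__(self, classes: int = 5):
        super().__init__()
        self.sr = nn.Sequential(nn.Upsample(scale_factor=2),
                                nn.Conv2d(3, 3, 3, padding=1))
        self.seg = nn.Conv2d(3, classes, 3, padding=1)

    def forward(self, x):
        img = self.sr(x)
        return img, self.seg(img)  # super-resolved image, segmentation logits

def patch_disc(in_ch: int) -> nn.Module:  # placeholder PatchGAN-style discriminator
    return nn.Sequential(nn.Conv2d(in_ch, 16, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2),
                         nn.Conv2d(16, 1, 4, stride=2, padding=1))

srs, pdc, odc = TinySRS(), patch_disc(3), patch_disc(5)
source_lr, target_lr = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)

src_img, src_seg = srs(source_lr)
tgt_img, tgt_seg = srs(target_lr)

def domain_loss(disc: nn.Module, x: torch.Tensor, is_source: bool) -> torch.Tensor:
    out = disc(x)
    label = torch.ones_like(out) if is_source else torch.zeros_like(out)
    return F.binary_cross_entropy_with_logits(out, label)

# Discriminators learn to separate the domains; the generator (via g_adv)
# pushes target outputs to look like source ones, aligning both the
# pixel-level appearance and the output-space label distributions.
disc_loss = (domain_loss(pdc, src_img.detach(), True)
             + domain_loss(pdc, tgt_img.detach(), False)
             + domain_loss(odc, F.softmax(src_seg, 1).detach(), True)
             + domain_loss(odc, F.softmax(tgt_seg, 1).detach(), False))
g_adv = (domain_loss(pdc, tgt_img, True)
         + domain_loss(odc, F.softmax(tgt_seg, 1), True))
```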

    Repeat multiview panchromatic super-resolution restoration using the UCL MAGiGAN system

    High spatial resolution imaging data are always considered desirable in the field of remote sensing, particularly for Earth observation. However, given the physical constraints of the imaging instruments themselves, one needs to trade off spatial resolution against launch mass as well as the telecommunications bandwidth available for transmitting data back to Earth. In this paper, we present a newly developed super-resolution restoration system, called MAGiGAN, based on our original GPT-SRR system combined with deep learning image networks, which is able to achieve up to 4x resolution enhancement using multi-angle repeat images as input.