
    Image super-resolution with dense-sampling residual channel-spatial attention networks for multi-temporal remote sensing image classification

    Image super-resolution (SR) techniques can benefit a wide range of applications in the remote sensing (RS) community, including image classification. This is particularly relevant for classification of time series data, since RS datasets with long temporal coverage generally have limited spatial resolution. Recent advances in deep learning have brought new opportunities for enhancing the spatial resolution of historic RS data. Numerous convolutional neural network (CNN)-based methods have shown superior performance in building efficient end-to-end SR models for natural images, but such models have rarely been exploited to promote image classification based on multispectral RS data. This paper proposes a novel CNN-based framework to enhance the spatial resolution of time series multispectral RS images. The proposed SR model employs Residual Channel Attention Networks (RCAN) as a backbone structure and, on top of this structure, uniquely integrates tailored channel-spatial attention and dense-sampling mechanisms for performance improvement. Subsequently, state-of-the-art CNN-based classifiers are incorporated to produce classification maps from the enhanced time series data. The experiments showed that the proposed SR model achieves clearly better performance than RCAN and other (deep learning-based) SR techniques, especially in a domain adaptation context, i.e., leveraging Sentinel-2 images for generating SR Landsat images. Furthermore, the experimental results confirmed that the enhanced multi-temporal RS images bring substantial improvement to fine-grained multi-temporal land use classification.
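    The tailored channel-spatial attention module is not detailed in the abstract; the sketch below shows, in PyTorch, one common way to combine channel and spatial attention in a feature block. The module name, reduction ratio, and 7x7 spatial kernel are illustrative assumptions, not the authors' exact design.

# A minimal sketch of a combined channel-spatial attention block; names and
# hyperparameters are illustrative assumptions, not the paper's exact module.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze global statistics, re-weight feature channels.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: pool across channels, learn a per-pixel weight map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_fc(x)                      # channel re-weighting
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)            # spatial re-weighting

feats = torch.randn(2, 64, 32, 32)                      # B x C x H x W feature maps
print(ChannelSpatialAttention(64)(feats).shape)         # torch.Size([2, 64, 32, 32])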

    Analysis of deep learning architectures for turbulence mitigation in long-range imagery

    In long-range imagery, the atmosphere along the line of sight can produce unwanted visual effects. Random variations in the refractive index of the air cause light to shift and distort; when captured by a camera, this randomly induced variation results in blurred and spatially distorted images. The removal of such effects is greatly desired. Many traditional methods can reduce the effects of turbulence within images, but they require complex optimisation procedures or have large computational complexity. The use of deep learning for image processing has become commonplace, with neural networks able to outperform traditional methods in many fields. This paper presents an evaluation of various deep learning architectures on the task of turbulence mitigation. The core disadvantage of deep learning is its dependence on a large quantity of relevant data, and for turbulence mitigation real-life data is difficult to obtain, as a clean undistorted reference image is not always available. Turbulent images were therefore generated with a turbulence simulator, which accurately represents atmospheric conditions and applies the resulting spatial distortions to clean images. This paper provides a comparison between current state-of-the-art image reconstruction convolutional neural networks. Each network is trained on simulated turbulence data and then assessed on a series of test images. It is shown that the networks are unable to produce high-quality output images, but they can reduce the effects of spatial warping within the test images. The paper provides a critical analysis of the effectiveness of applying deep learning to this task; deep learning shows potential in this field and can be used to make further improvements in the future.
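    The paper relies on a turbulence simulator to create training pairs; the sketch below illustrates, under stated assumptions, how spatial warping of a clean image can be approximated with a smoothed random displacement field and grid resampling in PyTorch. The displacement scale and smoothing kernel are illustrative and do not reproduce the simulator used in the paper.

# A minimal sketch of turbulence-like spatial warping for generating training
# pairs; the displacement strength and smoothing are illustrative assumptions.
import torch
import torch.nn.functional as F

def simulate_turbulence_warp(clean: torch.Tensor, strength: float = 0.01) -> torch.Tensor:
    """clean: (B, C, H, W) in [0, 1]; returns a spatially distorted copy."""
    b, _, h, w = clean.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.eye(2, 3).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(theta, clean.shape, align_corners=False)   # (B, H, W, 2)
    # Random displacement field, low-pass filtered so nearby pixels shift coherently.
    disp = torch.randn(b, 2, h, w) * strength
    disp = F.avg_pool2d(disp, kernel_size=9, stride=1, padding=4)
    return F.grid_sample(clean, grid + disp.permute(0, 2, 3, 1),
                         align_corners=False, padding_mode="border")

clean = torch.rand(1, 3, 128, 128)
distorted = simulate_turbulence_warp(clean)
print(distorted.shape)  # torch.Size([1, 3, 128, 128])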

    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, because a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
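    The abstract does not detail the network used for angular enhancement; the sketch below shows, as an assumption-laden illustration, a small CNN that synthesises an intermediate sub-aperture view from two neighbouring views, which is one simple way to frame angular super-resolution. Layer widths and the two-view input are illustrative, not the paper's architecture.

# A minimal sketch of angular super-resolution for a light field: a small CNN
# regresses an in-between sub-aperture view from two neighbouring views.
import torch
import torch.nn as nn

class ViewSynthesisCNN(nn.Module):
    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, left_view: torch.Tensor, right_view: torch.Tensor) -> torch.Tensor:
        # Stack the two neighbouring sub-aperture views along the channel axis
        # and predict the intermediate view directly.
        return self.net(torch.cat([left_view, right_view], dim=1))

views = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
middle = ViewSynthesisCNN()(*views)
print(middle.shape)  # torch.Size([1, 3, 64, 64])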

    Unsupervised MRI Super-Resolution Using Deep External Learning and Guided Residual Dense Network with Multimodal Image Priors

    Deep learning techniques have led to state-of-the-art single image super-resolution (SISR) with natural images. Pairs of high-resolution (HR) and low-resolution (LR) images are used to train the deep learning model (mapping function). These techniques have also been applied to medical image super-resolution (SR). Compared with natural images, medical images have several unique characteristics. First, there are no HR images for training in real clinical applications because of the limitations of imaging systems and clinical requirements. Second, HR images of other modalities are often available (e.g., HR T1-weighted images are available for enhancing LR T2-weighted images). In this paper, we propose an unsupervised SISR technique based on simple prior knowledge of the human anatomy; this technique does not require HR images for training. Furthermore, we present a guided residual dense network, which incorporates a residual dense network with a guided deep convolutional neural network for enhancing the resolution of LR images by referring to different HR images of the same subject. Experiments on a publicly available brain MRI database showed that our proposed method achieves better performance than the state-of-the-art methods.
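    The guided residual dense network is only described at a high level above; the sketch below illustrates the general idea of guided super-resolution with a residual dense block: an upsampled LR T2-weighted slice is concatenated with an HR T1-weighted slice of the same subject and refined. Block depth, growth rate, and the single-block design are illustrative assumptions, not the proposed network.

# A minimal sketch of guided SR with one residual dense block; sizes and the
# single-block layout are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels: int = 32, growth: int = 16, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(layers)]
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # local feature fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:                        # dense connections inside the block
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning

class GuidedSR(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.head = nn.Conv2d(2, channels, 3, padding=1)   # upsampled LR T2 + HR T1 guide
        self.body = ResidualDenseBlock(channels)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_t2: torch.Tensor, hr_t1: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(lr_t2, size=hr_t1.shape[-2:], mode="bicubic", align_corners=False)
        return up + self.tail(self.body(self.head(torch.cat([up, hr_t1], dim=1))))

sr = GuidedSR()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 128, 128))
print(sr.shape)  # torch.Size([1, 1, 128, 128])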