120 research outputs found

    Integrated GANs: Semi-Supervised SAR Target Recognition

    With the advantage of working in all weather conditions, day and night, synthetic aperture radar (SAR) imaging systems have great application value. As an efficient image generation and recognition model, generative adversarial networks (GANs) have been applied to SAR image analysis and have achieved promising performance. However, the cost of labeling a large number of SAR images limits the performance of the developed approaches and aggravates the mode-collapse problem. This paper presents a novel approach named Integrated GANs (I-GAN), which consists of a conditional GAN, an unconditional GAN, and a classifier, to achieve semi-supervised generation and recognition simultaneously. The unconditional GAN assists the conditional GAN in increasing the diversity of the generated images. A co-training method for the conditional GAN and the classifier is proposed to enrich the training samples. Since our model is capable of representing training images with rich characteristics, the classifier can achieve better recognition accuracy. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset prove that our method achieves better accuracy than other state-of-the-art techniques when labeled samples are insufficient.
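    The abstract above pairs a classifier with adversarial training so that unlabeled images still contribute to learning. A minimal NumPy sketch of such a combined semi-supervised objective is given below; the function, the loss weighting `lam`, and the random inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_loss(logits_labeled, labels, d_real, d_fake, lam=0.5):
    """Combine supervised cross-entropy on labeled samples with an
    unsupervised adversarial term on real/generated samples.

    logits_labeled : (N, C) classifier outputs for labeled images
    labels         : (N,) integer class labels
    d_real, d_fake : (M,) discriminator probabilities in (0, 1)
    lam            : weight of the adversarial term (assumed, not from the paper)
    """
    probs = softmax(logits_labeled)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # Standard GAN discriminator loss: push real samples toward 1, fake toward 0.
    adv = -np.mean(np.log(d_real + 1e-12) + np.log(1.0 - d_fake + 1e-12))
    return ce + lam * adv

rng = np.random.default_rng(0)
loss = semi_supervised_loss(rng.normal(size=(8, 10)),
                            rng.integers(0, 10, size=8),
                            rng.uniform(0.6, 0.99, size=16),
                            rng.uniform(0.01, 0.4, size=16))
print(loss)
```

    In a full training loop the adversarial term would be backpropagated through the discriminator while the cross-entropy term updates the shared classifier, which is how the unlabeled samples enrich the labeled ones.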

    Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery

    This work has been accepted by IEEE TGRS for publication. The majority of optical observations acquired via spaceborne Earth imagery are affected by clouds. While there is numerous prior work on reconstructing cloud-covered information, previous studies are oftentimes confined to narrowly defined regions of interest, raising the question of whether an approach can generalize to a diverse set of observations acquired at variable cloud coverage or in different regions and seasons. We target the challenge of generalization by curating a large novel data set for training new cloud removal approaches, and we evaluate on two recently proposed performance metrics of image quality and diversity. Our data set is the first publicly available to contain a global sample of co-registered radar and optical observations, cloudy as well as cloud-free. Based on the observation that cloud coverage varies widely between clear skies and absolute coverage, we propose a novel model that can deal with either extreme and evaluate its performance on our proposed data set. Finally, we demonstrate the superiority of training models on real over synthetic data, underlining the need for a carefully curated data set of real observations. To facilitate future research, our data set is made available online.
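    A common input strategy for this kind of multi-sensor fusion is to stack the co-registered Sentinel-1 SAR channels with the cloudy Sentinel-2 optical bands into a single multi-channel tensor that a reconstruction network then maps to a cloud-free image. The sketch below illustrates only that stacking step; the channel counts and the function name are assumptions, not the paper's code:

```python
import numpy as np

def fuse_sar_optical(s1, s2_cloudy):
    """Stack co-registered Sentinel-1 SAR channels with cloudy
    Sentinel-2 optical channels into a single network input.

    s1        : (2, H, W)  SAR backscatter (VV, VH polarizations)
    s2_cloudy : (13, H, W) optical bands, partly cloud-covered
    returns   : (15, H, W) fused input for a reconstruction model
    """
    assert s1.shape[1:] == s2_cloudy.shape[1:], "images must be co-registered"
    return np.concatenate([s1, s2_cloudy], axis=0)

fused = fuse_sar_optical(np.zeros((2, 64, 64)), np.zeros((13, 64, 64)))
print(fused.shape)  # (15, 64, 64)
```

    Because radar penetrates clouds, the SAR channels provide a structural prior for exactly the pixels where the optical bands carry no information.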

    Spectral-Spatial Neural Networks and Probabilistic Graph Models for Hyperspectral Image Classification

    Pixel-wise hyperspectral image (HSI) classification has been actively studied since it shares similar characteristics with related computer vision tasks, including image classification, object detection, and semantic segmentation, but also possesses inherent differences. The research surrounding HSI classification sheds light on an approach to bridge computer vision and remote sensing. Modern deep neural networks dominate and repeatedly set new records in all image recognition challenges, largely due to their excellence in extracting discriminative features through multi-layer nonlinear transformation. However, three challenges hinder the direct adoption of convolutional neural networks (CNNs) for HSI classification. First, typical HSIs contain hundreds of spectral channels that encode abundant pixel-wise spectral information, leading to the curse of dimensionality. Second, HSIs usually have relatively small numbers of annotated pixels for training along with large numbers of unlabeled pixels, resulting in the problem of generalization. Third, the scarcity of annotations and the complexity of HSI data induce noisy classification maps, which are a common issue in various types of remotely sensed data interpretation. Recent studies show that incorporating data attributes into the design of fundamental components of deep neural networks can improve their representational capacity and thus help these models achieve better recognition performance. To the best of our knowledge, no research has exploited this finding or proposed corresponding models for supervised HSI classification given enough labeled HSI data. In cases of limited labeled HSI samples for training, conditional random fields (CRFs) are an effective graph model to impose data-agnostic constraints upon the intermediate outputs of trained discriminators.
Although CRFs have been widely used to enhance HSI classification performance, the integration of deep learning and probabilistic graph models in the framework of semi-supervised learning remains an open question. To this end, this thesis presents supervised spectral-spatial residual networks (SSRNs) and semi-supervised generative adversarial network (GAN)-based models that account for the characteristics of HSIs and make three main contributions. First, spectral and spatial convolution layers are introduced to learn representative HSI features for supervised learning models. Second, generative adversarial networks (GANs) composed of spectral/spatial convolution and transposed-convolution layers are proposed to take advantage of adversarial training using limited amounts of labeled data for semi-supervised learning. Third, fully-connected CRFs are adopted to impose smoothness constraints on the predictions of the trained discriminators of GANs to enhance HSI classification performance. Empirical evidence acquired by experimental comparison to state-of-the-art models validates the effectiveness and generalizability of the SSRN, SS-GAN, and GAN-CRF models.
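    The spectral and spatial convolution layers mentioned above operate along different axes of the HSI cube: one filters each pixel's spectrum, the other filters each band's spatial neighborhood. The NumPy sketch below separates the two operations to illustrate the idea; it is a didactic simplification, not the SSRN implementation:

```python
import numpy as np

def spectral_conv(cube, kernel):
    """Convolve each pixel's spectrum with a 1-D kernel (valid mode).
    cube: (H, W, B) hyperspectral cube; kernel: (k,) spectral filter."""
    H, W, B = cube.shape
    k = len(kernel)
    out = np.zeros((H, W, B - k + 1))
    for i in range(B - k + 1):
        out[..., i] = np.tensordot(cube[..., i:i + k], kernel, axes=([2], [0]))
    return out

def spatial_conv(cube, kernel):
    """Apply one 2-D spatial filter to every band (valid mode).
    cube: (H, W, B); kernel: (k, k) spatial filter."""
    H, W, B = cube.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1, B))
    for r in range(H - k + 1):
        for c in range(W - k + 1):
            out[r, c] = np.tensordot(cube[r:r + k, c:c + k, :], kernel,
                                     axes=([0, 1], [0, 1]))
    return out

# A small random cube: 9x9 pixels, 20 spectral bands.
cube = np.random.default_rng(1).normal(size=(9, 9, 20))
features = spatial_conv(spectral_conv(cube, np.ones(5) / 5), np.ones((3, 3)) / 9)
print(features.shape)  # (7, 7, 16)
```

    Factorizing the filtering this way addresses the curse of dimensionality directly: the spectral stage reduces the hundreds of bands before any spatial parameters are learned.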

    SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks - Optimization, Opportunities and Limits

    Due to its capability to acquire images at any time and in any weather, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not familiar with the impact of distance-dependent imaging, signal intensities detected in the radar spectrum, or image characteristics related to speckle and post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized in order to generate alternative SAR image representations based on the combination of SAR images (starting point) and optical images (reference) for training. Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and the discussion of opportunities and drawbacks related to this application of deep learning. Case-study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide the basis to explain fundamental limitations affecting the SAR-to-optical image translation idea but also indicate benefits from alternative SAR image representations.
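    Conditional adversarial networks for image-to-image translation typically train the generator with two terms: an adversarial term that makes outputs look like real optical images, and an L1 reconstruction term that keeps them close to the paired reference. The sketch below shows that standard combined objective (as popularized by pix2pix); the weighting `lam=100` is the common pix2pix default and is not taken from this paper:

```python
import numpy as np

def generator_loss(d_fake, generated, target, lam=100.0):
    """Image-translation generator objective: fool the discriminator
    while staying close to the paired optical reference.

    d_fake    : (N,) discriminator probabilities in (0, 1) for generated images
    generated : (N, H, W) generated optical-like images
    target    : (N, H, W) reference optical images
    lam       : L1 weight (pix2pix default; assumed, not from this paper)
    """
    adv = -np.mean(np.log(d_fake + 1e-12))   # non-saturating adversarial term
    l1 = np.mean(np.abs(generated - target)) # pixel-wise reconstruction term
    return adv + lam * l1

rng = np.random.default_rng(2)
g_loss = generator_loss(rng.uniform(0.1, 0.9, size=4),
                        rng.normal(size=(4, 8, 8)),
                        rng.normal(size=(4, 8, 8)))
print(g_loss)
```

    The L1 term is what ties the output to the specific paired scene; with the adversarial term alone, the generator could produce any plausible optical image rather than a translation of the input SAR scene.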

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures


    Weakly Supervised Segmentation of SAR Imagery Using Superpixel and Hierarchically Adversarial CRF

    Synthetic aperture radar (SAR) image segmentation aims at generating homogeneous regions from a pixel-based image and is the basis of image interpretation. However, most existing segmentation methods neglect appearance and spatial consistency during feature extraction and also require a large amount of training data. In addition, pixel-based processing cannot meet real-time requirements. We hereby present a weakly supervised algorithm to perform segmentation of high-resolution SAR images. For effective segmentation, the input image is first over-segmented into a set of primitive superpixels. The algorithm combines hierarchical conditional generative adversarial nets (CGANs) and conditional random fields (CRFs). The CGAN-based networks can leverage abundant unlabeled data to learn parameters, reducing their reliance on labeled samples. In order to preserve neighborhood consistency in the feature extraction stage, the hierarchical CGAN is composed of two sub-networks, which extract the information of the central superpixels and of the corresponding background superpixels, respectively. Afterwards, a CRF is utilized to perform label optimization using the concatenated features. Quantitative experiments on an airborne SAR image dataset prove that the proposed method can effectively learn feature representations and achieve accuracy competitive with state-of-the-art segmentation approaches. More specifically, our algorithm has a higher Cohen's kappa coefficient and overall accuracy, and its computation time is less than that of current mainstream pixel-level semantic segmentation networks.
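    The superpixel step above is what lets the method avoid pixel-level processing: each over-segmented region is summarized by one feature vector before any network or CRF sees it. A minimal sketch of that aggregation is shown below; the function name, the mean-pooling choice, and the toy inputs are illustrative, not the paper's implementation:

```python
import numpy as np

def superpixel_features(image, segments):
    """Average pixel features within each superpixel.

    image    : (H, W, C) feature image (e.g., SAR intensity channels)
    segments : (H, W) integer superpixel labels from an over-segmentation
               algorithm such as SLIC
    returns  : (n_superpixels, C) one feature vector per superpixel
    """
    labels = np.unique(segments)
    return np.stack([image[segments == s].mean(axis=0) for s in labels])

# Toy 4x4 image with 2 channels, split into a left and a right superpixel.
img = np.arange(32, dtype=float).reshape(4, 4, 2)
seg = np.array([[0, 0, 1, 1]] * 4)
feats = superpixel_features(img, seg)
print(feats.shape)  # (2, 2)
```

    Working on a few hundred region vectors instead of millions of pixels is what makes the reported speed advantage over pixel-level semantic segmentation networks plausible.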