
    Image Diversification via Deep Learning based Generative Models

    Machine-learning-driven pattern recognition from imagery, such as object detection, has become prevalent in society due to the high demand for autonomy and recent remarkable advances in the technology. Machine learning acquires an abstraction of existing data and enables inference of patterns in future inputs. However, it requires a vast number of training images that cover the distribution of future inputs well in order to predict the proper patterns, and in many cases it is impracticable to prepare a sufficient variety of images. To address this problem, this thesis seeks methods to diversify image datasets so as to fully enable the capability of machine-learning-driven applications. Focusing on the plausible image synthesis ability of generative models, we investigate a number of approaches to expand the variety of output images using image-to-image translation, mixup, and diffusion models, along with a technique that makes the diffusion approach efficient in both computation and training data. First, we propose the combined use of unpaired image-to-image translation and mixup for data augmentation on limited non-visible imagery. Second, we propose diffusion-based image-to-image translation that generates higher-quality images than previous adversarial-training-based translation methods. Third, we propose a patch-wise, discretely conditioned diffusion training method that reduces computation and improves robustness on small training datasets. Subsequently, we discuss a remaining open challenge concerning evaluation and the direction of future work. Lastly, we draw an overall conclusion after discussing the social impact of this research field.
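    As a rough illustration of the mixup component mentioned in this abstract, the sketch below blends a batch of images and labels with a Beta-distributed coefficient. This is a generic mixup sketch; the function name `mixup_batch` and the parameter `alpha` are our own illustrative choices, not taken from the thesis.

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.2, rng=None):
    """Blend a batch with a shuffled copy of itself (generic mixup sketch).

    images: float array of shape (N, H, W, C); labels: one-hot array of shape (N, K).
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    perm = rng.permutation(len(images))     # pair each sample with a random partner
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_images, mixed_labels
```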

    The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion

    While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the SEN1-2 dataset to foster deep learning research in SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and creation of artificial optical images from SAR input data. Since SEN1-2 is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion. Comment: accepted for publication in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (online from October 2018).
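    A minimal sketch of iterating over corresponding SAR/optical patch pairs, as a dataset like SEN1-2 would be consumed for tasks such as SAR image colorization. The directory layout ('s1_*' and 's2_*' sibling folders with identically named PNG patches) is an assumption for illustration only and should be adjusted to the actual release.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_patch_pairs(root, split="ROIs1158_spring", limit=None):
    """Yield (sar, optical) patch pairs as float arrays in [0, 1].

    Assumes sibling 's1_*' (Sentinel-1 SAR) and 's2_*' (Sentinel-2 optical)
    folders containing identically named PNG patches; hypothetical layout.
    """
    root = Path(root) / split
    sar_files = sorted(root.glob("s1_*/**/*.png"))
    for i, sar_path in enumerate(sar_files):
        if limit is not None and i >= limit:
            break
        opt_path = Path(str(sar_path).replace("s1_", "s2_", 1))
        if not opt_path.exists():
            continue  # skip patches without an optical counterpart
        sar = np.asarray(Image.open(sar_path), dtype=np.float32) / 255.0
        opt = np.asarray(Image.open(opt_path), dtype=np.float32) / 255.0
        yield sar, opt
```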

    Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling

    The abstract is provided in the attachment.

    Data Augmentation and Classification of Sea-Land Clutter for Over-the-Horizon Radar Using AC-VAEGAN

    In the sea-land clutter classification of sky-wave over-the-horizon radar (OTHR), imbalanced and scarce data lead to poor performance of deep-learning-based classification models. To solve this problem, this paper proposes an improved auxiliary classifier generative adversarial network (AC-GAN) architecture, namely the auxiliary classifier variational autoencoder generative adversarial network (AC-VAEGAN). AC-VAEGAN can synthesize higher-quality sea-land clutter samples than AC-GAN and serve as an effective tool for data augmentation. Specifically, a 1-dimensional convolutional AC-VAEGAN architecture is designed to synthesize sea-land clutter samples. Additionally, an evaluation method combining traditional GAN-domain evaluation with statistical signal-domain evaluation is proposed to assess the quality of the synthetic samples. Using a dataset of OTHR sea-land clutter, both the quality of the synthetic samples and the data augmentation performance of AC-VAEGAN are verified. Further, the effect of AC-VAEGAN as a data augmentation method on the classification performance for imbalanced and scarce sea-land clutter samples is validated. The experimental results show that the quality of samples synthesized by AC-VAEGAN is better than that of AC-GAN, and that data augmentation with AC-VAEGAN improves classification performance in the case of imbalanced and scarce sea-land clutter samples. Comment: 13 pages, 16 figures.
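    To make the 1-dimensional convolutional, auxiliary-classifier idea concrete, the PyTorch sketch below shows a 1-D conv discriminator with both a real/fake head and a class head, in the general spirit of the AC-GAN family the abstract refers to. The class name, layer sizes, and kernel choices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AuxClassifierDiscriminator1D(nn.Module):
    """1-D conv discriminator with a real/fake head and a class head (AC-GAN style sketch)."""

    def __init__(self, in_channels=1, num_classes=2, base=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, base, kernel_size=7, stride=2, padding=3),
            nn.LeakyReLU(0.2),
            nn.Conv1d(base, base * 2, kernel_size=7, stride=2, padding=3),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),                        # collapse the signal axis
        )
        self.real_fake = nn.Linear(base * 2, 1)             # adversarial (real vs. synthetic) logit
        self.classifier = nn.Linear(base * 2, num_classes)  # auxiliary sea/land class logits

    def forward(self, x):                                   # x: (batch, channels, length)
        h = self.features(x).squeeze(-1)
        return self.real_fake(h), self.classifier(h)

if __name__ == "__main__":
    d = AuxClassifierDiscriminator1D()
    adv_logit, class_logits = d(torch.randn(4, 1, 512))     # batch of 4 hypothetical clutter sequences
    print(adv_logit.shape, class_logits.shape)
```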

    AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network

    Significant resources have been spent collecting and storing large and heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of the available data is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to the labeling process is the use of synthetically generated data produced with artificial intelligence. Instead of labeling real images, we can generate synthetic data based on arbitrary labels. In this way, training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetically generated radar images based on modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery. We also tested the quality of a state-of-the-art contour detection algorithm on synthetic data and on different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks. However, the synthetic images generated by GANs cannot be used alone for training a neural network (training on synthetic and testing on real data) because they cannot simulate all of the radar characteristics, such as noise or Doppler effects. To the best of our knowledge, this is the first work to create radar sounder imagery based on generative adversarial networks.
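    The finding that synthetic images help only when combined with real ones can be mirrored in training code by simply pooling the two sources for the training split. The sketch below uses PyTorch's ConcatDataset; the dataset names are placeholders, and both datasets are assumed to yield (image, label) pairs with identical shapes and label conventions.

```python
from torch.utils.data import ConcatDataset, DataLoader

def make_augmented_loader(real_dataset, synthetic_dataset, batch_size=16):
    """Mix GAN-generated patches with real radar patches for the training set only;
    evaluation should still run on real imagery alone."""
    combined = ConcatDataset([real_dataset, synthetic_dataset])
    return DataLoader(combined, batch_size=batch_size, shuffle=True, num_workers=2)
```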