SAR Target Image Generation Method Using Azimuth-Controllable Generative Adversarial Network
Sufficient synthetic aperture radar (SAR) target images are very important
for research and development. However, available SAR target images are often
limited in practice, which hinders progress in SAR applications. In this
paper, we propose an azimuth-controllable generative adversarial network that
generates precise SAR target images at an intermediate azimuth between the
azimuths of two given SAR images. The network contains three main parts: a
generator, a discriminator, and a predictor. Through the proposed network
structure, the generator extracts and fuses the optimal target features from
the two input SAR target images to generate a new SAR target image. A
similarity discriminator and an azimuth predictor are then designed: the
similarity discriminator differentiates generated SAR target images from real
SAR images to ensure the accuracy of the generated images, while the azimuth
predictor measures the azimuth difference between the generated and the
desired images to ensure azimuth controllability. The proposed network can
therefore generate precise SAR images whose azimuths are well controlled by
the network inputs; generating target images at different azimuths mitigates
the small-sample problem to some degree and benefits research on SAR images.
Extensive experimental results show the superiority of the proposed method in
azimuth controllability and accuracy of SAR target image generation.
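The target azimuth the network is conditioned on lies between the azimuths of the two input images, which requires handling the 360° wrap-around. A minimal sketch of that circular interpolation (not the paper's implementation; the function name and the interpolation weight `alpha` are illustrative assumptions):

```python
def intermediate_azimuth(az1, az2, alpha=0.5):
    """Interpolate between two azimuths (degrees), respecting 360-degree wrap-around.

    alpha=0.5 gives the midpoint along the shorter arc between the two azimuths.
    """
    # Shortest signed angular difference from az1 to az2, in (-180, 180]
    diff = (az2 - az1 + 180.0) % 360.0 - 180.0
    return (az1 + alpha * diff) % 360.0
```

For example, the midpoint of 350° and 10° is 0°, not the naive arithmetic mean of 180°.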
Super-Resolution for Overhead Imagery Using DenseNets and Adversarial Learning
Recent advances in generative adversarial learning allow new modalities of
image super-resolution by learning low-to-high-resolution mappings. In this
paper we present our work using Generative Adversarial Networks (GANs) with
applications to overhead and satellite imagery. We have experimented with
several state-of-the-art architectures. We propose a GAN-based architecture
using densely connected convolutional neural networks (DenseNets) that can
super-resolve overhead imagery by a factor of up to 8x. We have also
investigated the resolution limits of these networks. We report results on
several publicly available datasets, including SpaceNet data and the IARPA
Multi-View Stereo Challenge, and compare performance with other
state-of-the-art architectures.
Comment: 9 pages, 9 figures, WACV 2018 submission
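Super-resolution results on such datasets are commonly compared via peak signal-to-noise ratio (PSNR); the abstract does not name its metrics, so the following is only a generic sketch of that standard measure on flattened pixel arrays:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher is better; identical images give infinity.
    """
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```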
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, RS inevitably draws on many of the same theories as CV, e.g.,
statistics, fusion, and machine learning, to name a few. This means that the
RS community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools, and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures
and learning algorithms for spectral, spatial, and temporal data, (vi)
transfer learning, (vii) an improved theoretical understanding of DL systems,
(viii) high barriers to entry, and (ix) training and optimizing the DL.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Deep learning methods applied to digital elevation models: state of the art
Deep Learning (DL) has a wide variety of applications in various thematic
domains, including spatial information. Although with limitations, it is also
starting to be considered in operations related to Digital Elevation Models
(DEMs). This study aims to review the methods of DL applied in the field of
altimetric spatial information in general, and DEMs in particular. Void
Filling (VF), Super-Resolution (SR), landform classification, and hydrography
extraction are just some of the operations where traditional methods are
being replaced by DL methods. Our review concludes that although these
methods have great potential, there are aspects that need to be improved.
More appropriate terrain information or algorithm parameterisation are some
of the challenges that this methodology still needs to face.
Funding: project 'Functional Quality of Digital Elevation Models in Engineering' of the State Research Agency of Spain, PID2019-106195RB-I00/AEI/10.13039/50110001103
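Void Filling is one of the operations listed above; as a point of reference for what the DL methods replace, a traditional baseline simply estimates each NoData cell from its valid neighbours. A minimal single-pass sketch (the grid representation and NoData sentinel value are illustrative assumptions, not taken from the study):

```python
NODATA = -9999.0  # common DEM NoData sentinel; illustrative choice

def fill_voids(dem):
    """One pass of neighbour-mean void filling on a 2-D grid (list of lists).

    Each NoData cell is replaced by the mean of its valid 8-neighbours;
    cells with no valid neighbours are left untouched.
    """
    rows, cols = len(dem), len(dem[0])
    out = [row[:] for row in dem]
    for r in range(rows):
        for c in range(cols):
            if dem[r][c] != NODATA:
                continue
            neigh = [dem[rr][cc]
                     for rr in (r - 1, r, r + 1)
                     for cc in (c - 1, c, c + 1)
                     if 0 <= rr < rows and 0 <= cc < cols
                     and (rr, cc) != (r, c) and dem[rr][cc] != NODATA]
            if neigh:
                out[r][c] = sum(neigh) / len(neigh)
    return out
```

Deep methods aim to recover terrain structure that such local averaging smooths away.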
Deep Transductive Transfer Learning for Automatic Target Recognition
One of the major obstacles in designing an automatic target recognition (ATR)
algorithm is that there are often labeled images in one domain (e.g., the
infrared source domain) but no annotated images in the other target domains
(e.g., visible, SAR, LIDAR). Automatically annotating these images is
therefore essential for building a robust classifier in the target domain
based on the labeled images of the source domain. Transductive transfer
learning is an effective way to adapt a network to a new target domain by
utilizing a pretrained ATR network in the source domain. We propose an
unpaired transductive transfer learning framework in which a CycleGAN model
and a well-trained ATR classifier in the source domain are used to construct
an ATR classifier in the target domain without any labeled data in the target
domain. We employ a CycleGAN model to transfer mid-wave infrared (MWIR)
images to visible (VIS) domain images (or visible to MWIR). To train the
transductive CycleGAN, we optimize a cost function consisting of the
adversarial, identity, cycle-consistency, and categorical cross-entropy
losses for both the source and target classifiers. In this paper, we perform
a detailed experimental analysis on the challenging DSIAC ATR dataset. The
dataset consists of ten classes of vehicles at different poses and at
distances ranging from 1 to 5 kilometers in both the MWIR and VIS domains. In
our experiment, we assume that the images in the VIS domain are the unlabeled
target dataset. We first detect and crop the vehicles from the raw images and
then project them to a common distance of 2 kilometers. Our proposed
transductive CycleGAN achieves 71.56% accuracy in classifying the
visible-domain vehicles of the DSIAC ATR dataset.
Comment: 10 pages, 5 figures
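The cost function described above combines four kinds of terms. A schematic of how such terms are typically weighted and summed in CycleGAN-style training (the lambda weights here are illustrative placeholders, not values from the paper):

```python
def total_loss(adv, identity, cycle, cce_src, cce_tgt,
               lam_id=5.0, lam_cyc=10.0, lam_cls=1.0):
    """Weighted sum of the loss terms named in the abstract.

    adv       -- adversarial loss
    identity  -- identity-mapping loss
    cycle     -- cycle-consistency loss
    cce_src/cce_tgt -- categorical cross-entropy for the source/target classifiers
    """
    return adv + lam_id * identity + lam_cyc * cycle + lam_cls * (cce_src + cce_tgt)
```

In practice each term is computed per batch and the weighted total is backpropagated through the generators and classifiers jointly.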