Map Generation from Large Scale Incomplete and Inaccurate Data Labels
Accurately and globally mapping human infrastructure is an important and
challenging task, with applications in routing, regulation compliance
monitoring, and natural disaster response management. In this paper we
present progress in developing an algorithmic pipeline and distributed compute
system that automates the process of map creation using high resolution aerial
images. Unlike previous studies, most of which use datasets that are available
only in a few cities across the world, we utilize publicly available imagery
and map data, both of which cover the contiguous United States (CONUS). We
approach the technical challenge of inaccurate and incomplete training data
by adopting state-of-the-art convolutional neural network architectures such as
the U-Net and the CycleGAN to incrementally generate maps with increasingly
more accurate and more complete labels of man-made infrastructure such as roads
and houses. Since scaling the mapping task to CONUS calls for parallelization,
we then adopted an asynchronous distributed stochastic parallel gradient
descent training scheme to distribute the computational workload onto a cluster
of GPUs with nearly linear speed-up.
Comment: This paper is accepted at KDD 202
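The asynchronous parallel SGD scheme mentioned above can be illustrated with a minimal lock-free (Hogwild-style) sketch: several workers update a shared parameter vector concurrently without synchronization. The toy linear-regression objective, learning rate, and two-worker split are illustrative assumptions, not details from the paper.

```python
import threading

# Shared parameter vector, updated asynchronously (without locks) by all workers.
params = [0.0, 0.0]

def grad(p, x, y):
    """Gradient of squared error for the linear model y_hat = p[0]*x + p[1]."""
    err = (p[0] * x + p[1]) - y
    return [2 * err * x, 2 * err]

def worker(data, lr=0.02, epochs=500):
    for _ in range(epochs):
        for x, y in data:
            g = grad(params, x, y)
            # Each worker writes into the shared vector asynchronously.
            params[0] -= lr * g[0]
            params[1] -= lr * g[1]

# Toy dataset for y = 2x + 1, split across two asynchronous workers.
data = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
threads = [threading.Thread(target=worker, args=(data[i::2],)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each worker's data shard is consistent with the same underlying model, the lock-free updates still converge to the shared optimum; in a real cluster the same idea is applied to GPU workers exchanging gradients with a parameter store.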
Compressively Sensed Image Recognition
Compressive Sensing (CS) theory asserts that sparse signal reconstruction is
possible from a small number of linear measurements. Although CS enables
low-cost linear sampling, it requires non-linear and costly reconstruction.
Recent literature works show that compressive image classification is possible
in CS domain without reconstruction of the signal. In this work, we introduce a
DCT-based method that extracts binary discriminative features directly from CS
measurements. These CS measurements can be obtained by using (i) a random or a
pseudo-random measurement matrix, or (ii) a measurement matrix whose elements
are learned from the training data to optimize the given classification task.
We further introduce feature fusion by concatenating Bag of Words (BoW)
representation of our binary features with one of the two state-of-the-art
CNN-based feature vectors. We show that our fused feature outperforms the
state-of-the-art in both cases.
Comment: 6 pages, submitted/accepted, EUVIP 201
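The feature-extraction idea above can be sketched in a few lines: take pseudo-random compressive measurements of a signal, apply a DCT to the measurement vector, and binarize the coefficients into a bit string. The dimensions, the Gaussian measurement matrix, the sign-based binarization, and the fixed seed are illustrative assumptions, not the paper's exact design.

```python
import math
import random

def dct_ii(v):
    """Plain O(n^2) DCT-II of a vector."""
    n = len(v)
    return [sum(v[k] * math.cos(math.pi * (k + 0.5) * j / n) for k in range(n))
            for j in range(n)]

def cs_binary_features(signal, m, seed=0):
    """Binary features from m compressive measurements of `signal`."""
    rng = random.Random(seed)
    n = len(signal)
    # Case (i) above: a pseudo-random Gaussian measurement matrix Phi (m x n), m << n.
    phi = [[rng.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(n)] for _ in range(m)]
    # Compressive measurements y = Phi * x.
    y = [sum(phi[i][k] * signal[k] for k in range(n)) for i in range(m)]
    # DCT of the measurements, then sign binarization -> binary feature vector.
    return [1 if c >= 0 else 0 for c in dct_ii(y)]

x = [math.sin(0.3 * t) for t in range(64)]   # toy 64-sample signal
bits = cs_binary_features(x, m=16)
```

Case (ii), a learned measurement matrix, would replace the random `phi` with one optimized on training data; the downstream binarization step is unchanged.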
Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network
The detection performance of small objects in remote sensing images is not
satisfactory compared to large objects, especially in low-resolution and noisy
images. A generative adversarial network (GAN)-based model called enhanced
super-resolution GAN (ESRGAN) shows remarkable image enhancement performance,
but reconstructed images miss high-frequency edge information. Therefore,
object detection performance degrades for small objects on recovered noisy and
low-resolution remote sensing images. Inspired by the success of the edge-enhanced
GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN
(EESRGAN) to improve the image quality of remote sensing images and use
different detector networks in an end-to-end manner where detector loss is
backpropagated into the EESRGAN to improve the detection performance. We
propose an architecture with three components: ESRGAN, Edge Enhancement Network
(EEN), and Detection network. We use residual-in-residual dense blocks (RRDB)
for both the ESRGAN and EEN, and for the detector network, we use the faster
region-based convolutional network (FRCNN) (two-stage detector) and the single-shot
multi-box detector (SSD) (one-stage detector). Extensive experiments on a
public (car overhead with context) and a self-assembled (oil and gas storage
tank) satellite dataset show superior performance of our method compared to the
standalone state-of-the-art object detectors.
Comment: This paper contains 27 pages and is accepted for publication in the MDPI
Remote Sensing journal. GitHub Repository:
https://github.com/Jakaria08/EESRGAN (Implementation
- …
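The end-to-end idea in the abstract above, backpropagating the detector loss into the super-resolution generator, can be shown numerically with scalar stand-ins: a joint loss combines a detection term and a reconstruction term, and the chain rule carries the detection error back into the generator's weight. The scalar "networks", targets, step size, and the weighting `lam` are illustrative assumptions, not the EESRGAN architecture itself.

```python
def joint_grad(w_sr, w_det, x, hr, det_target, lam=0.5):
    """Joint loss and its gradient w.r.t. the SR generator weight w_sr."""
    z = w_sr * x                       # "super-resolved" output
    d = w_det * z                      # "detector" score on the SR output
    det_loss = (d - det_target) ** 2   # detection loss on SR output
    sr_loss = (z - hr) ** 2            # reconstruction (SR) loss
    # Chain rule: the detection term flows back into w_sr via d(d)/dz = w_det,
    # so the detector shapes the generator, not just the SR loss.
    g_wsr = 2 * (d - det_target) * w_det * x + lam * 2 * (z - hr) * x
    return det_loss + lam * sr_loss, g_wsr

# One gradient step on the generator weight reduces the joint loss.
w_sr, w_det = 0.5, 1.2
loss0, g = joint_grad(w_sr, w_det, x=1.0, hr=2.0, det_target=2.4)
loss1, _ = joint_grad(w_sr - 0.05 * g, w_det, x=1.0, hr=2.0, det_target=2.4)
```

In the full system the same coupling is realized by autograd through the detector (FRCNN or SSD) into the EESRGAN generator, rather than by a hand-written derivative.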