Effective Use of Dilated Convolutions for Segmenting Small Object Instances in Remote Sensing Imagery
Thanks to recent advances in CNNs, solid improvements have been made in
semantic segmentation of high resolution remote sensing imagery. However, most
of the previous works have not fully taken into account the specific
difficulties that exist in remote sensing tasks. One such difficulty is
that objects are small and crowded in remote sensing imagery. To tackle
this challenging task, we propose a novel architecture: a local
feature extraction (LFE) module attached on top of a dilated front-end module.
The LFE module is based on our finding that aggressively increasing the
dilation factor fails to aggregate local features, owing to the sparsity of the
kernel, and is detrimental to small objects. The proposed LFE module solves
this problem by aggregating local features with decreasing dilation factors. We
tested our network on three remote sensing datasets and achieved remarkably
good results on all of them, especially for small objects.
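The sparsity problem described above can be illustrated with a small sketch (not the authors' code; the dilation factors below are hypothetical, chosen only to show the effect). Stacking 3x3 convolutions whose dilation factors all grow leaves "gridding" holes: some nearby input positions are never sampled at all. Appending a tail with decreasing dilation factors, in the spirit of the LFE module, fills those holes:

```python
from itertools import product

def support(dilations, kernel=3):
    """1-D input offsets that a stack of stride-1 dilated convolutions
    can actually see (the support of the composed kernel)."""
    half = kernel // 2
    taps = [[t * d for t in range(-half, half + 1)] for d in dilations]
    return {sum(combo) for combo in product(*taps)}

front_end = [2, 4, 8]  # aggressively increasing dilation (hypothetical stack)
covered = support(front_end)
span = range(min(covered), max(covered) + 1)
holes = [p for p in span if p not in covered]
print(len(holes))  # → 14: every odd offset in range is never sampled

with_lfe = support(front_end + [2, 1])  # LFE-style decreasing-dilation tail
span2 = range(min(with_lfe), max(with_lfe) + 1)
print([p for p in span2 if p not in with_lfe])  # → []: no holes remain
```

The sketch is one-dimensional for clarity; the same argument applies per axis in 2-D. Small objects are hit hardest by the holes because their entire extent can fall between the sampled positions.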
Mapping solar array location, size, and capacity using deep learning and overhead imagery
The effective integration of distributed solar photovoltaic (PV) arrays into
existing power grids will require access to high-quality data: the location,
power capacity, and energy generation of individual solar PV installations.
Unfortunately, existing methods for obtaining this data are limited in their
spatial resolution and completeness. We propose a general framework for
accurately and cheaply mapping individual PV arrays, and their capacities, over
large geographic areas. At the core of this approach is a deep learning
algorithm called SolarMapper - which we make publicly available - that can
automatically map PV arrays in high resolution overhead imagery. We estimate
the performance of SolarMapper on a large dataset of overhead imagery across
three cities in California, USA. We also describe a procedure for deploying
SolarMapper to new geographic regions, so that it can be utilized by others. We
demonstrate the effectiveness of the proposed deployment procedure by using it
to map solar arrays across the entire US state of Connecticut (CT). Using these
results, we demonstrate that we achieve highly accurate estimates of total
installed PV capacity within each of CT's 168 municipal regions.
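A minimal sketch of how mapped array footprints can be turned into a capacity estimate. This is an assumption for illustration, not necessarily the SolarMapper procedure: it multiplies detected panel area by an assumed power density (the 150 W/m² figure is a hypothetical illustrative value):

```python
# Hypothetical area-to-capacity conversion; NOT the authors' method.
POWER_DENSITY_W_PER_M2 = 150.0  # assumed panel power density (illustrative)

def estimated_capacity_kw(areas_m2):
    """Total installed capacity (kW) for a list of detected PV-array areas."""
    return sum(areas_m2) * POWER_DENSITY_W_PER_M2 / 1000.0

# Three detected arrays of 40, 25, and 60 square metres:
print(estimated_capacity_kw([40.0, 25.0, 60.0]))  # → 18.75 (kW)
```

Summing such per-array estimates within each municipal boundary would yield region-level capacity totals of the kind reported for Connecticut.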
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented, large-scale, influential
challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
Learning Aerial Image Segmentation from Online Maps
This study deals with semantic segmentation of high-resolution (aerial)
images where a semantic class label is assigned to each pixel via supervised
classification as a basis for automatic map generation. Recently, deep
convolutional neural networks (CNNs) have shown impressive performance and have
quickly become the de-facto standard for semantic segmentation, with the added
benefit that task-specific feature design is no longer necessary. However, a
major downside of deep learning methods is that they are extremely data-hungry,
thus aggravating the perennial bottleneck of supervised classification:
obtaining enough annotated training data. On the other hand, it has been observed
that they are rather robust against noise in the training labels. This opens up
the intriguing possibility to avoid annotating huge amounts of training data,
and instead train the classifier from existing legacy data or crowd-sourced
maps which can exhibit high levels of noise. The question addressed in this
paper is: can training with large-scale, publicly available labels replace a
substantial part of the manual labeling effort and still achieve sufficient
performance? Such data will inevitably contain a significant portion of errors,
but in return virtually unlimited quantities of it are available in larger
parts of the world. We adapt a state-of-the-art CNN architecture for semantic
segmentation of buildings and roads in aerial images, and compare its
performance when using different training data sets, ranging from manually
labeled, pixel-accurate ground truth of the same city to automatic training
data derived from OpenStreetMap data from distant locations. Our results
indicate that satisfactory performance can be obtained with significantly less
manual annotation effort by exploiting noisy, large-scale training data.
Comment: Published in IEEE Transactions on Geoscience and Remote Sensing.