State-of-the-art and gaps for deep learning on limited training data in remote sensing
Deep learning usually requires big data, with respect to both volume and
variety. However, most remote sensing applications only have limited training
data, of which a small subset is labeled. Herein, we review three
state-of-the-art approaches in deep learning to combat this challenge. The
first topic is transfer learning, in which some aspects of one domain, e.g.,
features, are transferred to another domain. The next is unsupervised learning,
e.g., autoencoders, which operate on unlabeled data. The last is generative
adversarial networks, which can generate realistic-looking data that can fool
both a deep learning network and a human. The aim of this article is
to raise awareness of this dilemma, to direct the reader to existing work, and
to highlight current gaps that need solving.
Comment: arXiv admin note: text overlap with arXiv:1709.0030
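As a minimal illustration of the first reviewed approach (transfer learning via feature transfer), the sketch below keeps a "pretrained" feature extractor frozen and fits only a tiny task head on a handful of labeled samples. Everything here is hypothetical scaffolding (`frozen_features`, `fit_centroid_head` and the toy features are assumptions, not taken from the reviewed papers):

```python
# Hypothetical sketch of feature transfer: a frozen "pretrained" extractor
# is reused as-is, and only a small nearest-centroid head is fit on the
# few labeled samples available, mimicking limited-label remote sensing.

def frozen_features(x):
    """Stand-in for a pretrained network's feature layer (weights frozen)."""
    return [x, x * x]  # toy fixed 2-D feature vector

def fit_centroid_head(samples, labels):
    """Fit a nearest-centroid classifier head on extracted features."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        f = frozen_features(x)
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the class whose feature centroid is nearest."""
    f = frozen_features(x)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Only four labeled samples, yet the frozen features still separate classes.
centroids = fit_centroid_head([0.1, 0.2, 0.9, 1.0],
                              ["water", "water", "urban", "urban"])
print(predict(centroids, 0.95))  # prints "urban"
```

The point of the sketch is the division of labor: the expensive representation comes from the source domain, while only the small head is estimated from scarce target labels.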
Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks
Detecting and classifying targets in video streams from surveillance cameras
is a cumbersome, error-prone and expensive task. Often, the incurred costs are
prohibitive for real-time monitoring. This leads to data being stored locally
or transmitted to a central storage site for post-incident examination. The
required communication links and archiving of the video data are still
expensive and this setup excludes preemptive actions to respond to imminent
threats. An effective way to overcome these limitations is to build a smart
camera that transmits alerts when relevant video sequences are detected. Deep
neural networks (DNNs) have come to outperform humans in visual classification
tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be
extended to make use of higher-dimensional input data such as multispectral
data. We explore this opportunity in terms of achievable accuracy and required
computational effort. To analyze the precision of DNNs for scene labeling in an
urban surveillance scenario we have created a dataset with 8 classes obtained
in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR
snapshot sensor to assess the potential of multispectral image data for target
classification. We evaluate several new DNNs, showing that the spectral
information fused together with the RGB frames can be used to improve the
accuracy of the system or to achieve similar accuracy with a 3x smaller
computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even
for scarcely occurring, but particularly interesting classes, such as cars, 75%
of the pixels are labeled correctly with errors occurring only around the
border of the objects. This high accuracy was obtained with a training set of
only 30 labeled images, paving the way for fast adaptation to various
application scenarios.
Comment: Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and Background Signatures I
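The input-level fusion the abstract describes (RGB frames combined with the 25-channel VIS-NIR snapshot) can be sketched as a per-pixel channel concatenation producing a 28-channel input for the network. The function names and channel layout below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of input-level fusion: per pixel, the 3 RGB channels
# and the 25 VIS-NIR snapshot channels are concatenated into one 28-channel
# vector, which a ConvNet would then consume as a higher-dimensional input.

RGB_CHANNELS = 3
SPECTRAL_CHANNELS = 25

def fuse_pixel(rgb, spectral):
    """Concatenate RGB and multispectral values for one co-registered pixel."""
    if len(rgb) != RGB_CHANNELS or len(spectral) != SPECTRAL_CHANNELS:
        raise ValueError("unexpected channel counts")
    return list(rgb) + list(spectral)

def fuse_image(rgb_image, spectral_image):
    """Fuse two co-registered images (nested lists: rows of pixel vectors)."""
    return [[fuse_pixel(r, s) for r, s in zip(rr, sr)]
            for rr, sr in zip(rgb_image, spectral_image)]

# A 2x2 image: every fused pixel carries 3 + 25 = 28 channels.
rgb = [[[0.1, 0.2, 0.3]] * 2] * 2
spec = [[[0.0] * 25] * 2] * 2
fused = fuse_image(rgb, spec)
print(len(fused[0][0]))  # prints 28
```

In a real pipeline this concatenation is simply a wider first convolution layer; the accuracy-versus-compute trade-off the abstract reports comes from how much of the network is spent digesting those extra channels.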
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images
Semantic segmentation models based on convolutional neural networks (CNNs)
have gained much attention in relation to remote sensing and have achieved
remarkable performance for the extraction of buildings from high-resolution
aerial images. However, the issue of limited generalization for unseen images
remains. When there is a domain gap between the training and test datasets,
CNN-based segmentation models trained by a training dataset fail to segment
buildings for the test dataset. In this paper, we propose segmentation networks
based on a domain adaptive transfer attack (DATA) scheme for building
extraction from aerial images. The proposed system combines the domain transfer
and adversarial attack concepts. Based on the DATA scheme, the distribution of
the input images can be shifted to that of the target images while turning
images into adversarial examples against a target network. Defending against
adversarial examples adapted to the target domain can overcome the performance
degradation due to the domain gap and increase the robustness of the
segmentation model. Cross-dataset experiments and the ablation study are
conducted for the three different datasets: the Inria aerial image labeling
dataset, the Massachusetts building dataset, and the WHU East Asia dataset.
Compared to the performance of the segmentation network without the DATA
scheme, the proposed method shows improvements in the overall IoU. Moreover,
the proposed method is verified to outperform both feature adaptation (FA) and
output space adaptation (OSA).
Comment: 11 pages, 12 figures
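The adversarial-attack ingredient that DATA builds on can be illustrated with a generic FGSM-style sign-gradient step, which nudges an input in the direction that increases a loss. The toy scalar loss and numeric gradient below are illustrative assumptions, not the authors' scheme:

```python
# Hypothetical FGSM-style sketch of the adversarial-attack building block:
# shift an input by a small step along the sign of the loss gradient,
# x_adv = x + eps * sign(dL/dx). Toy scalar loss for illustration only.

def loss(x, target):
    """Toy squared-error loss between a scalar input and a target value."""
    return (x - target) ** 2

def numeric_grad(f, x, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

def fgsm_step(x, target, eps=0.1):
    """One sign-gradient step that increases the loss (the 'attack')."""
    g = numeric_grad(lambda v: loss(v, target), x)
    sign = 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return x + eps * sign

x = 0.5
x_adv = fgsm_step(x, target=0.0, eps=0.1)
print(round(x_adv, 3))  # prints 0.6: moved away from the target, raising the loss
```

In the DATA setting the perturbation is chosen against the target network while the input distribution is shifted toward the target domain; training to resist such examples is what yields the reported robustness gain.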
The role of earth observation in an integrated deprived area mapping “system” for low-to-middle income countries
Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term of urban slums. Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11—Sustainable Cities and Communities). First, the data available for cities worldwide is patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed, and, individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived area maps. In this review, we first assess the strengths and limitations of existing deprived area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.
Mapping solar array location, size, and capacity using deep learning and overhead imagery
The effective integration of distributed solar photovoltaic (PV) arrays into
existing power grids will require access to high-quality data: the location,
power capacity, and energy generation of individual solar PV installations.
Unfortunately, existing methods for obtaining this data are limited in their
spatial resolution and completeness. We propose a general framework for
accurately and cheaply mapping individual PV arrays, and their capacities, over
large geographic areas. At the core of this approach is a deep learning
algorithm called SolarMapper - which we make publicly available - that can
automatically map PV arrays in high resolution overhead imagery. We estimate
the performance of SolarMapper on a large dataset of overhead imagery across
three US cities in California. We also describe a procedure for deploying
SolarMapper to new geographic regions, so that it can be utilized by others. We
demonstrate the effectiveness of the proposed deployment procedure by using it
to map solar arrays across the entire US state of Connecticut (CT). Using these
results, we demonstrate that we achieve highly accurate estimates of total
installed PV capacity within each of CT's 168 municipal regions.
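Going from per-pixel detections to a regional capacity estimate can be sketched as a simple area-and-density calculation: count PV pixels, convert to ground area via the imagery's resolution, and apply an assumed capacity density. The numbers below (0.3 m/pixel, 0.15 kW/m²) are illustrative assumptions, not SolarMapper's calibrated values:

```python
# Hypothetical sketch of capacity estimation from a segmentation output:
# count detected PV pixels, convert to ground area using the image's
# resolution, then apply an assumed capacity density per square meter.
# All constants are illustrative assumptions, not the paper's calibration.

def estimated_capacity_kw(mask, meters_per_pixel=0.3, kw_per_m2=0.15):
    """Estimate installed capacity from a binary PV segmentation mask."""
    pv_pixels = sum(sum(row) for row in mask)      # detected PV pixels
    area_m2 = pv_pixels * meters_per_pixel ** 2    # ground area covered
    return area_m2 * kw_per_m2                     # assumed kW/m^2 density

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(round(estimated_capacity_kw(mask), 4))  # prints 0.054
```

Summing such per-array estimates over all detections in a municipality gives the kind of regional totals the abstract reports for Connecticut's 168 municipal regions.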