Learning Aerial Image Segmentation from Online Maps
This study deals with semantic segmentation of high-resolution (aerial)
images where a semantic class label is assigned to each pixel via supervised
classification as a basis for automatic map generation. Recently, deep
convolutional neural networks (CNNs) have shown impressive performance and have
quickly become the de-facto standard for semantic segmentation, with the added
benefit that task-specific feature design is no longer necessary. However, a
major downside of deep learning methods is that they are extremely data-hungry,
thus aggravating the perennial bottleneck of supervised classification, to
obtain enough annotated training data. On the other hand, it has been observed
that they are rather robust against noise in the training labels. This opens up
the intriguing possibility to avoid annotating huge amounts of training data,
and instead train the classifier from existing legacy data or crowd-sourced
maps which can exhibit high levels of noise. The question addressed in this
paper is: can training with large-scale, publicly available labels replace a
substantial part of the manual labeling effort and still achieve sufficient
performance? Such data will inevitably contain a significant portion of errors,
but in return virtually unlimited quantities of it are available in larger
parts of the world. We adapt a state-of-the-art CNN architecture for semantic
segmentation of buildings and roads in aerial images, and compare its
performance when using different training data sets, ranging from manually
labeled, pixel-accurate ground truth of the same city to automatic training
data derived from OpenStreetMap data from distant locations. Our results
indicate that satisfactory performance can be obtained with significantly less
manual annotation effort by exploiting noisy large-scale training data.
Comment: Published in IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
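The abstract's core idea of deriving pixel-level training labels from OpenStreetMap vector data can be sketched as polygon rasterization: each building footprint polygon is burned into a binary mask aligned with the image grid. The following is a minimal, self-contained illustration using even-odd ray casting in pure NumPy; real pipelines would use a geospatial library and georeferenced transforms, and the polygon and grid sizes here are purely illustrative.

```python
import numpy as np

def rasterize_polygon(vertices, height, width):
    """Rasterize one polygon (list of (row, col) vertices) into a binary
    mask via even-odd ray casting -- a toy stand-in for converting an
    OpenStreetMap building footprint into a pixel-accurate training label."""
    verts = np.asarray(vertices, dtype=float)
    mask = np.zeros((height, width), dtype=np.uint8)
    for r in range(height):
        for c in range(width):
            y, x = r + 0.5, c + 0.5  # test the pixel centre
            inside = False
            n = len(verts)
            for i in range(n):
                y1, x1 = verts[i]
                y2, x2 = verts[(i + 1) % n]
                # does a horizontal ray from the pixel centre cross edge i?
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            mask[r, c] = inside
    return mask

# A square footprint spanning rows/cols 1..5 inside an 8x8 image tile
mask = rasterize_polygon([(1, 1), (1, 5), (5, 5), (5, 1)], 8, 8)
print(int(mask.sum()))  # 16 interior pixels
```

Misregistration between the OSM geometry and the imagery shows up as label noise at mask borders, which is exactly the kind of noise the paper argues CNN training tolerates.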
Case studies on data-rich and data-poor countries
The aim of Work Package 5 is to assess the needs of decision-makers and end-users involved in
the process of post-disaster recovery and to provide useful guidance, tools and recommendations
for extracting information from the affected area to help with their decisions. This report follows
from Deliverables D5.1 "Comparison of outcomes with end-user needs" and D5.2 "Semi-automated
data extraction", where the team had set out to explore the needs of decision-makers and
suggested protocols for tools to address their information requirements. This report begins with a
summary of findings from the scenario planning game and a review of end-user priorities; it will
then describe the methods of detecting post-disaster recovery evaluation and monitoring attributes
to aid decision making.
The proposed methods in the deliverables D2.6 "Supervised/Unsupervised change detection"
and D5.2 "Semi-automated data extraction" for use in post-disaster recovery evaluation and
monitoring are tested in detail for data-poor and data-rich scenarios. Semi-automated and
automated methods of finding the recovery indicators pertaining to early recovery and monitoring
are discussed.
Step-by-step guidance for an analyst to follow in order to prepare the images and GIS data layers
necessary to execute the semi-automated and automated methods is provided in section
2. The outputs are presented in detail using case studies in section 3. In order to develop and
assess the proposed detection methods, images from two case studies, namely Van in Turkey and
Muzaffarabad in Pakistan, both recovering from recent earthquakes, have been used to highlight
the differences between data-rich and data-poor countries, and hence the constraints on the
outputs of the proposed methods.
MapSnap System to Perform Vector-to-Raster Fusion
As the availability of geospatial data increases, there is a growing need to match these datasets together. However, since these datasets often vary in their origins and spatial accuracy, they frequently do not correspond well to each other, which creates multiple problems. To accurately align vectors with imagery, analysts currently either: 1) manually move the vectors, 2) perform a labor-intensive spatial registration of vectors to imagery, 3) move imagery to vectors, or 4) redigitize the vectors from scratch and transfer the attributes. All of these are time-consuming and labor-intensive operations. Automated matching and fusing of vector datasets has been a subject of research for years, and strides are being made. However, much less has been done on matching or fusing vector and raster data. While there are initial forays into this research area, the approaches are not robust. The objective of this work is to design and build robust software called MapSnap to conflate vector and image data in an automated/semi-automated manner. This paper reports the status of the MapSnap project, which includes: (i) the overall algorithmic approach and system architecture, (ii) a tiling approach to deal with large datasets and to tune MapSnap parameters, (iii) a time comparison of MapSnap with redigitizing the vectors from scratch and transferring the attributes, and (iv) an accuracy comparison of MapSnap with manual adjustment of vectors. The paper concludes with a discussion of future work, including addressing the general problem of continuously and rapidly updating vector data, and fusing vector data with other data.
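The tiling approach mentioned in point (ii) can be sketched as splitting a large raster into fixed-size windows so that conflation parameters can be tuned per tile. This is a generic illustration, not MapSnap's actual implementation; the tile size, overlap, and function name are assumptions for the example.

```python
import numpy as np

def tile_raster(raster, tile, overlap=0):
    """Split a 2-D raster into (possibly overlapping) square tiles,
    returning (top-left offset, view) pairs -- a sketch of the kind of
    tiling used to process large datasets piecewise."""
    h, w = raster.shape[:2]
    step = tile - overlap
    tiles = []
    for r in range(0, max(h - overlap, 1), step):
        for c in range(0, max(w - overlap, 1), step):
            tiles.append(((r, c), raster[r:r + tile, c:c + tile]))
    return tiles

# An 8x8 raster split into four non-overlapping 4x4 tiles
grid = np.arange(64).reshape(8, 8)
tiles = tile_raster(grid, tile=4)
print(len(tiles))  # 4
```

Per-tile processing also bounds memory use and lets locally varying misalignment between vectors and imagery be corrected with locally estimated parameters.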
Detecting Urban Road Changes using Segmentation and Vector Analysis
The rapid growth of urbanization is driving increased road infrastructure development. Detecting and monitoring changes in urban road areas is challenging for city planners. This research proposes using semantic segmentation and vector analysis on high-resolution images to identify road network changes. A U-Net model, pre-trained on the Massachusetts roads dataset, performs semantic segmentation, predicting labels for a specific area using temporal data with co-registration to reduce distortions. The predicted labels are then converted to shapefiles, enabling vector analysis to pinpoint and characterize road network alterations. Satellite images from Google Earth archives demonstrate the change detection process.
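Before the vectorisation step, the comparison of the two dates reduces to differencing co-registered binary road masks. The sketch below illustrates that raster-side step only; it is a minimal assumption-laden stand-in for the paper's pipeline, with the masks, function name, and encoding (+1 new road, -1 removed road) chosen for the example.

```python
import numpy as np

def road_changes(mask_t1, mask_t2):
    """Pixel-level change map between two co-registered binary road masks
    (e.g. segmentation predictions for two acquisition dates):
    +1 = new road, -1 = removed road, 0 = unchanged."""
    m1 = np.asarray(mask_t1, dtype=np.int8)
    m2 = np.asarray(mask_t2, dtype=np.int8)
    return m2 - m1

t1 = np.array([[1, 1, 0],
               [0, 0, 0]])
t2 = np.array([[1, 0, 0],
               [1, 1, 1]])
change = road_changes(t1, t2)
print(int((change == 1).sum()), int((change == -1).sum()))  # 3 new, 1 removed
```

In the full pipeline the connected regions of this change map would be polygonised into shapefiles so the alterations can be measured and attributed in a GIS.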