
    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution aerial images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, which aggravates the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that CNNs are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier on existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when trained on different data sets, ranging from manually labeled, pixel-accurate ground truth for the same city to automatic training data derived from OpenStreetMap for distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.

    Comment: Published in IEEE Transactions on Geoscience and Remote Sensing
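    To make the setup concrete, the following is a minimal sketch, in PyTorch, of per-pixel supervised training on aerial tiles whose label masks have been rasterized from OpenStreetMap polygons. The tiny network, tile size, and class list are illustrative assumptions; the paper adapts a much larger state-of-the-art segmentation CNN.

# Minimal sketch (not the authors' exact architecture): training a small
# fully convolutional network on aerial tiles with noisy OSM-derived masks.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # illustrative: background, building, road


class TinyFCN(nn.Module):
    """Toy fully convolutional segmenter standing in for the much larger
    state-of-the-art CNN adapted in the paper."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimizer, images, masks):
    """One supervised step; `masks` may come from noisy OSM rasters."""
    optimizer.zero_grad()
    logits = model(images)                             # (B, C, H, W)
    loss = nn.functional.cross_entropy(logits, masks)  # masks: (B, H, W) long
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyFCN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in batch: in practice, tiles cut from aerial imagery and
    # label masks rasterized from OpenStreetMap building/road geometry.
    images = torch.rand(4, 3, 128, 128)
    masks = torch.randint(0, NUM_CLASSES, (4, 128, 128))
    print("loss:", train_step(model, opt, images, masks))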

    Extracting Context Information from Aerial Imagery for Aiding Threat Detection

    Advances in computer vision have led to the development of algorithms that can extract semantic information from images and video in order to make high-level inferences from data. One of the major steps toward extracting semantic information is identifying the useful contextual information present in the scene. In this research, we present a novel technique to extract context information from aerial imagery using concatenated vectors of low-level features. The objective of this research is to aid the identification of threats along the right of way of energy pipelines. The key observation is that aerial imagery consists of various image segments, such as roads, buildings, and trees, along with large areas of plain ground, and that each of these segments has definitive properties in terms of low-level features. The information content of plain ground is minimal compared to other regions of the image. We exploit this characteristic in a simple thresholding procedure, based on relative variance and entropy, for fast background elimination (see the sketch below). Trees are rich in textural content, buildings have higher contrast, and roads have discriminative color features. We extract local phase and local contrast information using the monogenic signal model. These features are used to train a support vector machine (SVM), which is then used for classification. To refine the segmentation, we apply morphological operations to the classifier output. We present results obtained with the proposed method on various data sets captured with different camera sensors.
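    As an illustration of the background-elimination step, the following is a minimal sketch of tile-wise thresholding on local variance and entropy. The tile size and threshold values are illustrative assumptions, not values from the poster; the subsequent monogenic-signal features and SVM stage are omitted.

# Minimal sketch of background elimination by local variance and entropy.
# Tile size and thresholds are illustrative assumptions, not paper values.
import numpy as np


def tile_stats(gray, tile=32):
    """Yield (row, col, variance, entropy) for each non-overlapping tile
    of a grayscale image with values in [0, 255]."""
    h, w = gray.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = gray[r:r + tile, c:c + tile]
            var = patch.var()
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -(p * np.log2(p)).sum()
            yield r, c, var, entropy


def background_mask(gray, tile=32, var_thresh=50.0, ent_thresh=3.0):
    """Mark tiles whose variance AND entropy fall below the thresholds as
    background (plain ground), so later stages can skip those regions."""
    mask = np.zeros(gray.shape, dtype=bool)
    for r, c, var, ent in tile_stats(gray, tile):
        if var < var_thresh and ent < ent_thresh:
            mask[r:r + tile, c:c + tile] = True
    return mask


if __name__ == "__main__":
    # Synthetic example: flat ground with one textured region in the middle.
    img = np.full((128, 128), 120, dtype=np.uint8)
    img[32:96, 32:96] = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    bg = background_mask(img.astype(np.float64))
    print("background fraction:", bg.mean())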