
    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, they have been observed to be rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when trained on different data sets, ranging from manually labeled, pixel-accurate ground truth for the same city to automatic training data derived from OpenStreetMap data for distant locations. Our results indicate that satisfactory performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.
    Comment: Published in IEEE Transactions on Geoscience and Remote Sensing
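    The core observation above — that pixel-wise classifiers trained on partly wrong labels can still recover a good decision rule — can be illustrated with a minimal sketch. This is not the paper's CNN pipeline; it is a toy logistic "classifier head" on synthetic per-pixel features, with a fraction of labels flipped to mimic noisy OpenStreetMap-derived training masks. All names and parameters here are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for per-pixel features: class 1 ("building") is
    # linearly separable from class 0 ("background") in a 2-D feature space.
    n = 2000
    X = rng.normal(size=(n, 2))
    y_true = (X @ np.array([1.5, -1.0]) > 0).astype(float)

    # Simulate noisy map-derived labels by flipping ~20% of the pixels.
    flip = rng.random(n) < 0.2
    y_noisy = np.where(flip, 1.0 - y_true, y_true)

    # Train a logistic classifier on the NOISY labels with plain
    # gradient descent on the cross-entropy loss.
    w = np.zeros(2)
    lr = 0.5
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y_noisy) / n

    # Evaluate against the CLEAN ground truth: with roughly symmetric label
    # noise, errors average out over many pixels and accuracy stays high.
    pred = (X @ w > 0).astype(float)
    acc = (pred == y_true).mean()
    print(f"accuracy vs clean labels: {acc:.2f}")
    ```

    The same effect is what the paper exploits at scale: symmetric label noise mostly cancels in the aggregate gradient, so the learned decision boundary tracks the clean signal rather than the noise.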

    A Data-Driven Edge-Preserving D-bar Method for Electrical Impedance Tomography

    In Electrical Impedance Tomography (EIT), the internal conductivity of a body is recovered from current and voltage measurements taken at its surface. The reconstruction task is a highly ill-posed nonlinear inverse problem, which is very sensitive to noise and requires the use of regularized solution methods, of which D-bar is the only proven method. The resulting EIT images have low spatial resolution due to the smoothing caused by the low-pass filtering inherent in the regularization. In many applications, such as medical imaging, it is known \emph{a priori} that the target contains sharp features such as organ boundaries, as well as approximate ranges for realistic conductivity values. In this paper, we use this information in a new edge-preserving EIT algorithm, based on the original D-bar method coupled with a deblurring flow that is stopped at minimal data discrepancy. The method makes heavy use of a novel data fidelity term based on the so-called {\em CGO sinogram}. This nonlinear data step provides superior robustness over traditional EIT data formats, such as current-to-voltage matrices or Dirichlet-to-Neumann operators, for commonly used current patterns.
    Comment: 24 pages, 11 figures
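    The "deblurring flow stopped at minimal data discrepancy" idea can be sketched in a toy 1-D analogue. This is not the authors' D-bar/CGO code: the blur below stands in for the low-pass regularization, a simple unsharp-masking iteration stands in for the deblurring flow, and all names, kernel sizes, and step counts are illustrative assumptions. What the sketch does show is the stopping rule: sharpen iteratively, track the discrepancy against the measured data, and keep the iterate where it is smallest.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def blur(u, k=5):
        """Moving-average blur, standing in for low-pass regularization."""
        kernel = np.ones(k) / k
        return np.convolve(u, kernel, mode="same")

    # Piecewise-constant "conductivity" with sharp edges (e.g. organ boundaries).
    truth = np.zeros(200)
    truth[60:140] = 1.0

    data = blur(truth) + 0.01 * rng.normal(size=truth.size)  # noisy measurements
    recon = blur(data)                                       # smooth initial reconstruction

    # Deblurring flow: repeatedly add back the high-frequency residual
    # (unsharp masking), and stop at minimal data discrepancy.
    init_err = np.linalg.norm(blur(recon) - data)
    best, best_err = recon, init_err
    u = recon
    for _ in range(50):
        u = u + 0.5 * (u - blur(u))           # one sharpening step
        err = np.linalg.norm(blur(u) - data)  # discrepancy in "data space"
        if err < best_err:
            best, best_err = u, err

    print(f"discrepancy: initial {init_err:.3f}, at stopping point {best_err:.3f}")
    ```

    The design point is that sharpening is not run to convergence: left unchecked it amplifies noise, so the flow is terminated by the data itself, at the iterate most consistent with the measurements.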