
    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, which aggravates the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.
    Comment: Published in IEEE Transactions on Geoscience and Remote Sensing
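    The abstract does not spell out how the OpenStreetMap-derived labels are produced, so the sketch below is only a minimal illustration of one common way to turn OSM vectors into noisy per-pixel training masks: rasterising building footprints and buffered road centrelines onto the image grid. The file names, class codes and the 4 m road buffer width are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch: derive noisy segmentation masks from OpenStreetMap vectors.
# File paths, class codes, and the road buffer width are assumed placeholders.
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize

BACKGROUND, BUILDING, ROAD = 0, 1, 2

with rasterio.open("aerial_tile.tif") as src:       # assumed aerial image tile
    height, width = src.height, src.width
    transform, crs = src.transform, src.crs

buildings = gpd.read_file("osm_buildings.geojson").to_crs(crs)
roads = gpd.read_file("osm_roads.geojson").to_crs(crs)

# Roads come as centrelines; buffer them to an assumed nominal width (metres).
road_polygons = roads.geometry.buffer(4.0)

# Burn buildings and roads into one label raster; later shapes overwrite earlier ones.
shapes = [(geom, BUILDING) for geom in buildings.geometry] + \
         [(geom, ROAD) for geom in road_polygons]

labels = rasterize(
    shapes,
    out_shape=(height, width),
    transform=transform,
    fill=BACKGROUND,
    dtype="uint8",
)

np.save("osm_labels.npy", labels)   # noisy training mask aligned with the image grid
```

    A mask produced this way inherits the registration and completeness errors of the underlying map data, which is exactly the kind of label noise the study exploits.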

    The 1356 Basel earthquake: an interdisciplinary revision

    Within historical times, one of the most damaging events in intra-plate Europe was the 1356 Basel earthquake. Given the event's significance for assessing regional seismic hazard in central Europe, an interdisciplinary project was launched in 2005 to re-examine it. Our effort aimed to incorporate techniques from history, seismology, archaeology, palaeoseismology and engineering. New and reinterpreted historical data from Basel and its surroundings, together with archaeological findings on buildings that survived the event and still exist, enabled this macroseismic assessment. Palaeoseismological studies combined with historical evidence provided additional data. For the surrounding areas, archaeology offers sparse information on some castles and churches, sometimes supported by historical records. A contemporary source allows some reconstruction of the stronger fore- and aftershocks. This expanded information base improves our understanding of the event's damage and consequences. For the city of Basel, the relatively abundant archaeological data allowed us to assess the macroseismic intensity statistically at IX, although the pattern of damage was scattered. Data points for the expected area of damage around Basel are not evenly distributed. The absence of historical and archaeological findings for southern Germany might be due to archival problems; future investigation may improve this situation. Our results confirm that the Basel earthquake was the most destructive known event in central Europe. Intensities up to VIII are found within a radius of about 30 km. Analysis of the macroseismic field confirms our former assessment of the event and shows an epicenter located about 10 km south of Basel. The most probable range for the moment magnitude Mw is between 6.7 and 7.

    Audience-dependent explanations for AI-based risk management tools: a survey

    Artificial Intelligence (AI) is one of the most sought-after innovations in the financial industry. With its growing popularity, however, comes the call for AI-based models to be understandable and transparent. Yet explaining the inner mechanisms of these algorithms, and how their outputs should be interpreted, is entirely audience-dependent. The established literature fails to match the increasing number of explainable AI (XAI) methods with the different stakeholders' explainability needs. This study addresses this gap by exploring how various stakeholders within the Swiss financial industry view explainability in their respective contexts. Based on a series of interviews with practitioners in the financial industry, we provide an in-depth review and discussion of their views on the potential and limitations of current XAI techniques for addressing these different explanation requirements.

    Sub-kilometre scale distribution of snow depth on Arctic sea ice from Soviet drifting stations

    The sub-kilometre scale distribution of snow depth on Arctic sea ice impacts atmosphere-ice fluxes of energy and mass, and is of importance for satellite estimates of sea-ice thickness from both radar and lidar altimeters. While information about the mean of this distribution is increasingly available from modelling and remote sensing, the full distribution cannot yet be resolved. We analyse 33 539 snow depth measurements from 499 transects taken at Soviet drifting stations between 1955 and 1991 and derive a simple statistical distribution for snow depth over multi-year ice as a function of only the mean snow depth. We then evaluate this snow depth distribution against snow depth transects spanning first-year and multi-year ice from the MOSAiC, SHEBA and AMSR-Ice field campaigns. Because the distribution can be generated using only the mean snow depth, it can be used to downscale several existing snow depth products for use in flux modelling and altimetry studies.
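    The abstract does not state the functional form of the derived distribution, so the following is only a hedged sketch of the general idea of generating a full snow depth distribution from the mean alone. Here a gamma distribution with a fixed, assumed shape parameter stands in for the paper's empirically derived form; the function name, shape value and example numbers are illustrative assumptions.

```python
# Illustrative sketch only: a snow depth distribution parameterized by its mean.
# The fixed gamma shape k is an assumed placeholder, not the fitted value from the paper.
import numpy as np

def sample_snow_depths(mean_depth_m, n, k=4.0, rng=None):
    """Draw n snow depths (metres) from a gamma distribution with mean mean_depth_m.

    k is an assumed, fixed shape parameter; since the gamma mean is k * scale,
    the mean depth alone determines the scale.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = mean_depth_m / k
    return rng.gamma(shape=k, scale=scale, size=n)

# Example: expand a 0.30 m mean snow depth into a plausible sub-kilometre distribution.
depths = sample_snow_depths(0.30, n=10_000, rng=np.random.default_rng(0))
print(round(depths.mean(), 3), np.round(np.percentile(depths, [5, 50, 95]), 3))
```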