    Usability of VGI for validation of land cover maps

    Volunteered Geographic Information (VGI) represents a growing source of potentially valuable data for many applications, including land cover map validation. It is still an emerging field, and while many different approaches can be used to derive value from VGI, its use also carries both advantages and drawbacks. It is therefore timely to take stock of the subject, and the aim of this article is to review the use of VGI as reference data for land cover map validation. The main platforms and types of VGI that are used, and that are potentially useful, are analysed. Since quality is a fundamental issue in map validation, the procedures used by VGI-collecting platforms to increase and control data quality are reviewed, and a framework for VGI quality assessment is proposed. Cases where VGI was used as an additional data source to assist in map validation are reviewed, as well as cases where only VGI was used, indicating the procedures applied to assess VGI quality and fitness for use. A discussion and conclusions are presented on best practices, future potential and the challenges of using VGI for land cover map validation.

    TaLAM: Mapping Land Cover in Lowlands and Uplands with Satellite Imagery

    End-of-Project Report. The Towards Land Cover Accounting and Monitoring (TaLAM) project is part of Ireland’s response to creating a national land cover mapping programme. Its aim is to demonstrate how the new digital map of Ireland, Prime2, from Ordnance Survey Ireland (OSI), can be combined with satellite imagery to produce land cover maps.

    Enabling Decision-Support Systems through Automated Cell Tower Detection

    Cell phone coverage and high-speed service gaps persist in rural areas in sub-Saharan Africa, impacting public access to mobile-based financial, educational, and humanitarian services. Improving maps of telecommunications infrastructure can help inform strategies to eliminate gaps in mobile coverage. Deep neural networks, paired with remote sensing images, can be used for object detection of cell towers, eliminating the need for inefficient and burdensome manual mapping to find objects over large geographic regions. In this study, we demonstrate a partially automated workflow to train an object detection model to locate cell towers using OpenStreetMap (OSM) features and high-resolution Maxar imagery. For model fine-tuning and evaluation, we curated a diverse dataset of over 6,000 unique images of cell towers in 26 countries in eastern, southern, and central Africa using automatically generated annotations from OSM points. Our model achieves an average precision at 50% Intersection over Union (IoU) (AP@50) of 81.2 with good performance across different geographies and out-of-sample testing. Accurate localization of cell towers can yield more accurate cell coverage maps, in turn enabling improved delivery of digital services for decision-support applications.
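The AP@50 metric above counts a predicted box as a true positive only when its Intersection over Union with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation for axis-aligned boxes (the exact evaluation code used in the study is not given in the abstract, so this is illustrative only):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (empty if boxes are disjoint).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At AP@50, this detection (IoU = 1/3) would NOT count as a true positive.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333...
```

AP@50 then averages precision over recall levels, using this 0.5 IoU cutoff to match detections to ground truth.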

    Convolutional Neural Networks for Water segmentation using Sentinel-2 Red, Green, Blue (RGB) composites and derived Spectral Indices

    Near-real time water segmentation with medium resolution satellite imagery plays a critical role in water management. Automated water segmentation of satellite imagery has traditionally been achieved using spectral indices. Spectral water segmentation is limited by environmental factors and requires human expertise to be applied effectively. In recent years, the use of convolutional neural networks (CNNs) for water segmentation has been successful when used on high-resolution satellite imagery, but to a lesser extent for medium resolution imagery. Existing studies have been limited to geographically localized datasets, and reported metrics have been benchmarked against a limited range of spectral indices. This study seeks to determine if a single CNN based on Red, Green, Blue (RGB) image classification can effectively segment water on a global scale and outperform traditional spectral methods. Additionally, this study evaluates the extent to which smaller datasets (of very complex pattern, e.g. harbour megacities) can be used to improve globally applicable CNNs within a specific region. Multispectral imagery from the European Space Agency Sentinel-2 satellite (10 m spatial resolution) was sourced. Test sites were selected in Florida, New York, and Shanghai to represent a globally diverse range of waterbody typologies. Region-specific spectral water segmentation algorithms were developed on each test site to represent benchmarks of spectral index performance. DeepLabV3-ResNet101 was trained on 33,311 semantically labelled true-colour samples. The resulting model was retrained on three smaller subsets of the data, specific to New York, Shanghai and Florida. CNN predictions reached a maximum mean intersection over union result of 0.986 and F1-score of 0.983. At the Shanghai test site, the CNN’s predictions outperformed the spectral benchmark, primarily due to the CNN’s ability to process contextual features at multiple scales. In all test cases, retraining the networks on localized subsets of the dataset improved the localized region’s segmentation predictions. The CNNs presented are suitable for cloud-based deployment and could contribute to the wider use of satellite imagery for water management.
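The spectral-index baselines mentioned above typically threshold a band ratio such as McFeeters' NDWI, computed from the green and near-infrared bands (Sentinel-2 B3 and B8). A minimal sketch, assuming reflectance arrays as input; the specific indices and thresholds used in the study are not stated in the abstract, and the zero threshold here is just a common default:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Classify water pixels via NDWI = (Green - NIR) / (Green + NIR).

    Water reflects green light strongly and absorbs NIR, so NDWI is
    positive over water and negative over most land and vegetation.
    """
    green = np.asarray(green, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # guard against /0
    return ndwi > threshold

# Two water-like pixels (high green, low NIR) and two land-like pixels.
green = np.array([[0.10, 0.25], [0.08, 0.30]])
nir = np.array([[0.30, 0.05], [0.25, 0.02]])
print(ndwi_water_mask(green, nir))  # [[False  True] [False  True]]
```

A CNN, by contrast, can use spatial context (shorelines, ship wakes, building shadows) that a per-pixel index like this cannot, which is the advantage the abstract reports at the Shanghai site.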

    Water level prediction from social media images with a multi-task ranking approach

    Floods are among the most frequent and catastrophic natural disasters and affect millions of people worldwide. It is important to create accurate flood maps to plan (offline) and conduct (real-time) flood mitigation and flood rescue operations. Arguably, images collected from social media can provide useful information for that task, which would otherwise be unavailable. We introduce a computer vision system that estimates water depth from social media images taken during flooding events, in order to build flood maps in (near) real-time. We propose a multi-task (deep) learning approach, where a model is trained using both a regression and a pairwise ranking loss. Our approach is motivated by the observation that a main bottleneck for image-based flood level estimation is training data: it is difficult and requires a lot of effort to annotate uncontrolled images with the correct water depth. We demonstrate how to efficiently learn a predictor from a small set of annotated water levels and a larger set of weaker annotations that only indicate in which of two images the water level is higher, and are much easier to obtain. Moreover, we provide a new dataset, named DeepFlood, with 8145 annotated ground-level images, and show that the proposed multi-task approach can predict the water level from a single, crowd-sourced image with ~11 cm root mean square error. Comment: Accepted in ISPRS Journal 202
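The multi-task objective described above combines a regression loss on the few images with metric depth labels and a pairwise ranking loss on the many image pairs that only say which image shows the higher water level. A minimal sketch of one plausible form of such a loss; the exact formulation, margin, and weighting in the paper are not given in the abstract, so these are assumptions:

```python
import numpy as np

def regression_loss(preds, targets):
    """Mean squared error on the small set of images with metric depth labels."""
    return float(np.mean((np.asarray(preds) - np.asarray(targets)) ** 2))

def pairwise_ranking_loss(pred_hi, pred_lo, margin=0.1):
    """Hinge loss on ordered pairs: the image known to show the higher
    water level should be predicted at least `margin` above the other."""
    diff = np.asarray(pred_hi) - np.asarray(pred_lo)
    return float(np.mean(np.maximum(0.0, margin - diff)))

def total_loss(preds, targets, pred_hi, pred_lo, w_rank=1.0, margin=0.1):
    """Weighted sum of the two task losses (w_rank is a hypothetical weight)."""
    return regression_loss(preds, targets) + w_rank * pairwise_ranking_loss(
        pred_hi, pred_lo, margin
    )

# Two labelled images plus two ranked pairs; ranked pairs need no depth labels.
preds, targets = np.array([0.5, 1.2]), np.array([0.6, 1.0])
hi, lo = np.array([1.2, 0.9]), np.array([0.4, 0.85])
print(round(total_loss(preds, targets, hi, lo), 4))  # 0.05
```

The ranking term is what lets the cheap pairwise annotations shape the predictor, which is the data-efficiency argument the abstract makes.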