    Large-scale Weakly Supervised Learning for Road Extraction from Satellite Imagery

    Automatic road extraction from satellite imagery using deep learning is a viable alternative to traditional manual mapping and has therefore received considerable attention recently. However, most existing methods are supervised and require pixel-level labeling, which is tedious and error-prone. To make matters worse, the earth has a diverse range of terrain, vegetation, and man-made objects, and it is well known that models trained in one area generalize poorly to other areas. Varying shooting conditions, such as lighting and angle, as well as different image processing techniques, further complicate the issue, and it is impractical to develop training data that covers all image styles. This paper proposes to leverage OpenStreetMap road data as weak labels and large-scale satellite imagery to pre-train semantic segmentation models. Our extensive experimental results show that prediction accuracy increases with the amount of weakly labeled data, as well as with the road density in the areas chosen for training. Using as much as 100 times more data than the widely used DeepGlobe road dataset, our model with the D-LinkNet architecture and the ResNet-50 backbone exceeds the top performer on the current DeepGlobe leaderboard. Furthermore, due to large-scale pre-training, our model generalizes much better than those trained with only the curated datasets, implying great application potential.
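
    The core idea of using OpenStreetMap as a weak label source is to burn road vector geometries into raster masks aligned with each image tile. The sketch below illustrates one plausible way to do this with rasterio and shapely; the function name, the GeoJSON-style input, and the buffer width (standing in for road width, in the tile's map units) are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: rasterize OSM road centrelines into weak pixel labels
# aligned with a satellite tile. buffer_m is an assumed road half-width in the
# tile's map units (re-project first if the CRS is geographic).
import rasterio
from rasterio import features
from shapely.geometry import shape

def osm_roads_to_mask(road_geojson_features, tile_path, buffer_m=4.0):
    """Burn buffered road geometries into a binary mask matching the tile grid."""
    with rasterio.open(tile_path) as src:
        transform, out_shape = src.transform, (src.height, src.width)
    # Buffer centrelines so they cover a plausible road width.
    shapes = [
        (shape(f["geometry"]).buffer(buffer_m), 1)
        for f in road_geojson_features
    ]
    mask = features.rasterize(
        shapes,
        out_shape=out_shape,
        transform=transform,
        fill=0,          # background
        dtype="uint8",
    )
    return mask  # weak label: 1 = road, 0 = background
```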

    Post-analysis of OSM-GAN Spatial Change Detection

    Keeping crowdsourced maps up-to-date is important for a wide range of location-based applications (route planning, urban planning, navigation, tourism, etc.). We propose a novel map updating mechanism that combines the latest freely available remote sensing data with the current state of online vector map data to train a Deep Learning (DL) neural network. It uses a Generative Adversarial Network (GAN) to perform image-to-image translation, followed by segmentation and raster-vector comparison processes to identify changes to map features (e.g. buildings, roads, etc.) relative to existing map data. This paper evaluates various GAN models trained with sixteen different datasets designed for use by our change detection/map updating procedure. Each GAN model is evaluated quantitatively and qualitatively to select the most accurate DL model for use in future spatial change detection applications.
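
    The raster-vector comparison step can be pictured as overlap scoring between a mask predicted from fresh imagery and masks rasterized from the current map features. The sketch below is a minimal illustration of that comparison; the function name, the per-feature mask representation, and the 0.5 IoU threshold are assumptions, not values from the paper.

```python
# Hypothetical sketch of the raster-vector comparison step: compare a
# segmentation mask predicted from new imagery against binary masks rasterized
# from existing map features, and flag features whose overlap is low.
import numpy as np

def flag_changed_features(predicted_mask, feature_masks, iou_threshold=0.5):
    """Return indices of map features that no longer match the prediction."""
    changed = []
    for idx, fmask in enumerate(feature_masks):
        inter = np.logical_and(predicted_mask, fmask).sum()
        union = np.logical_or(predicted_mask, fmask).sum()
        iou = inter / union if union else 0.0
        if iou < iou_threshold:
            changed.append(idx)  # candidate for a map update
    return changed
```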

    Data Acquisition and Processing for GeoAI Models to Support Sustainable Agricultural Practices

    There are growing opportunities to leverage new technologies and data sources to address global problems related to sustainability, climate change, and biodiversity loss. The emerging discipline of GeoAI, resulting from the convergence of AI and geospatial science, makes it possible to harness the increasingly available open Earth Observation data collected from different constellations of satellites and sensors with high spatial, spectral, and temporal resolutions. However, transforming these raw data into high-quality datasets that can be used for training AI models, and specifically deep learning models, is technically challenging. This paper describes the process and results of synthesizing labelled datasets that can be used for training AI models (specifically Convolutional Neural Networks) to determine agricultural land use patterns and support decisions for sustainable farming. In our opinion, this work is a significant step forward in addressing the paucity of usable datasets for developing scalable GeoAI models for sustainable agriculture.
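
    A typical step in synthesizing such labelled datasets is to clip an Earth Observation raster to each field parcel and pair the resulting chip with the parcel's declared crop class. The snippet below is a minimal sketch of that pairing under assumed names (the "crop_class" property and GeoJSON-style parcel input are illustrative, not from the paper).

```python
# Hypothetical sketch: clip a raster to one labelled field parcel and return
# an (image_chip, label) training pair.
import rasterio
from rasterio.mask import mask as clip_to_geometry

def make_training_chip(raster_path, parcel):
    """Return (image_chip, crop_label) for one labelled field parcel."""
    with rasterio.open(raster_path) as src:
        chip, _ = clip_to_geometry(src, [parcel["geometry"]], crop=True)
    return chip, parcel["properties"]["crop_class"]
```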

    Enhancing safeguards analysts' geospatial usage.


    Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image

    Vision-based identification of lane areas and lane markings on the road is an indispensable function for intelligent driving vehicles, especially for localization, mapping, and planning tasks. However, due to the increasing complexity of traffic scenes, such as occlusion and discontinuity, detecting lanes and lane markings from an image captured by a monocular camera remains persistently challenging. Lanes and lane markings have a strong positional correlation and are constrained by a spatial geometry prior of the driving scene. Most existing studies explore only a single task, i.e., either lane marking or lane detection, and do not consider the inherent connection between the two or exploit modeling of this relationship to improve the detection performance of both tasks. In this paper, we establish a novel multi-task encoder-decoder framework for the simultaneous detection of lanes and lane markings. The approach deploys a dual-branch architecture to extract image information at different scales. By revealing the spatial constraints between lanes and lane markings, we propose interactive attention learning over their feature information, which involves a Deformable Feature Fusion module for feature encoding, a Cross-Context module as an information decoder, a Cross-IoU loss, and a Focal-style loss weighting for robust training. Without bells and whistles, our method achieves state-of-the-art results on lane marking detection (32.53% IoU, 81.61% accuracy) and lane segmentation (91.72% mIoU) on the BDD100K dataset, an improvement of 6.33% IoU and 11.11% accuracy in lane marking detection and 0.22% mIoU in lane detection over previous methods.
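
    At its simplest, a multi-task framework of this kind shares one image encoder and attaches a separate decoder head per task. The PyTorch sketch below illustrates that shared-encoder, dual-head layout only; the backbone choice, channel sizes, and class counts are assumptions, and the paper's Deformable Feature Fusion, Cross-Context, and interactive attention modules are not reproduced here.

```python
# Hypothetical sketch of a shared-encoder, dual-head network for joint
# lane-area segmentation and lane-marking detection.
import torch.nn as nn
import torchvision

class DualTaskLaneNet(nn.Module):
    def __init__(self, num_lane_classes=2, num_marking_classes=2):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Shared encoder: ResNet-50 up to (and including) its last conv stage.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # One lightweight decoder head per task.
        self.lane_head = nn.Sequential(
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_lane_classes, 1),
        )
        self.marking_head = nn.Sequential(
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_marking_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        # Upsample both task outputs back to the input resolution.
        up = lambda y: nn.functional.interpolate(
            y, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return up(self.lane_head(feats)), up(self.marking_head(feats))
```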

    Imagery to the Crowd, MapGive, and the CyberGIS: Open Source Innovation in the Geographic and Humanitarian Domains

    The MapGive initiative is a State Department project designed to increase the amount of free and open geographic data in areas either experiencing, or at risk of, a humanitarian emergency. To accomplish this, MapGive seeks to link the cognitive surplus and goodwill of volunteer mappers, who freely contribute their time and effort to map at-risk areas, with the purchasing power of the United States Government (USG), which can act as a catalyzing force by making updated high-resolution commercial satellite imagery available for volunteer mapping. Leveraging the CyberGIS, a geographic computing infrastructure built from open source software, MapGive publishes updated satellite imagery as web services that can be quickly and easily accessed via the internet, allowing volunteer mappers to trace the imagery and extract visible features like roads and buildings without having to process the imagery themselves. The resulting baseline geographic data, critical to addressing humanitarian data gaps, are stored in the OpenStreetMap (OSM) database, a free, editable geographic database of the world released under a license that keeps the data open in perpetuity and guarantees equal access for all. MapGive is built upon a legal, policy, and technological framework developed during the Imagery to the Crowd phase of the project. Philosophically, these projects are grounded in the open source software movement and the application of commons-based peer production models to geographic data. These concepts are reviewed, as is a reconception of Geographic Information Systems (GIS) called GIS 2.0.

    The future of Earth observation in hydrology

    In just the past 5 years, the field of Earth observation has progressed beyond the offerings of conventional space-agency-based platforms to include a plethora of sensing opportunities afforded by CubeSats, unmanned aerial vehicles (UAVs), and smartphone technologies that are being embraced by both for-profit companies and individual researchers. Over the previous decades, space agency efforts have brought forth well-known and immensely useful satellites such as the Landsat series and the Gravity Recovery and Climate Experiment (GRACE) system, with costs typically of the order of 1 billion dollars per satellite and with concept-to-launch timelines of the order of 2 decades (for new missions). More recently, the proliferation of smartphones has helped to miniaturize sensors and energy requirements, facilitating advances in the use of CubeSats that can be launched by the dozens, while providing ultra-high (3-5 m) resolution sensing of the Earth on a daily basis. Start-up companies that did not exist a decade ago now operate more satellites in orbit than any space agency, and at costs that are a mere fraction of traditional satellite missions. With these advances come new space-borne measurements, such as real-time high-definition video for tracking air pollution, storm-cell development, flood propagation, precipitation monitoring, or even for constructing digital surfaces using structure-from-motion techniques. Closer to the surface, measurements from small unmanned drones and tethered balloons have mapped snow depths and floods and estimated evaporation at sub-metre resolutions, pushing back on spatio-temporal constraints and delivering new process insights. At ground level, precipitation has been measured using signal attenuation between antennae mounted on cell phone towers, while the proliferation of mobile devices has enabled citizen scientists to catalogue photos of environmental conditions, estimate daily average temperatures from battery state, and sense other hydrologically important variables such as channel depths using commercially available wireless devices. Global internet access is being pursued via high-altitude balloons, solar planes, and hundreds of planned satellite launches, providing a means to exploit the "internet of things" as an entirely new measurement domain. Such global access will enable real-time collection of data from billions of smartphones or from remote research platforms. This future will produce petabytes of data that can only be accessed via cloud storage and will require new analytical approaches to interpret. The extent to which today's hydrologic models can usefully ingest such massive data volumes is unclear. Nor is it clear whether this deluge of data will be usefully exploited, either because the measurements are superfluous, inconsistent, or not accurate enough, or simply because we lack the capacity to process and analyse them. What is apparent is that the tools and techniques afforded by this array of novel and game-changing sensing platforms present our community with a unique opportunity to develop new insights that advance fundamental aspects of the hydrological sciences. To accomplish this will require more than just an application of the technology: in some cases, it will demand a radical rethink of how we utilize and exploit these new observing systems.