    Semantic Segmentation and Change Detection in Satellite Imagery

    Processing of satellite images using deep learning and computer vision methods is needed for urban planning, crop assessment, disaster management, and rescue and recovery operations. Deep learning methods that are trained on ground-based imagery do not translate well to satellite imagery. In this thesis, we focus on the tasks of semantic segmentation and change detection in satellite imagery. A segmentation framework is presented based on existing waterfall-based modules. The proposed framework, called PyramidWASP, or PyWASP for short, can be used with two modules. PyWASP with the Waterfall Atrous Spatial Pooling (WASP) module investigates the effects of adding a feature pyramid network (FPN) to WASP. PyWASP with the improved WASP module (WASPv2) determines the effects of adding pyramid features to WASPv2. The pyramid features incorporate multi-scale feature representation into the network, which is useful for high-resolution satellite images, as they are known for containing objects of varying scales. The two networks are tested on two datasets containing satellite images and one dataset containing ground-based images. The change detection method identifies building differences in registered satellite images of areas that have undergone drastic changes due to natural disasters. The proposed method is called Siamese Vision Transformers for Change Detection, or SiamViT-CD for short. Vision transformers have been gaining popularity recently, as they learn features well by using positional embedding information and a self-attention module. In this method, the Siamese branches, containing vision transformers with shared weights and parameters, accept a pair of satellite images and generate embedded patch-wise transformer features. These features are then processed by a classifier for patch-level change detection. The classifier predictions are further processed to generate change maps, and the final predicted mask contains damage levels for all the buildings in the image. The robustness of the method is also tested by adding weather-related disturbances to the satellite images.
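The Siamese structure described in the abstract (two shared-weight branches embedding patches of an image pair, followed by a patch-level decision) can be sketched in miniature. The following is a toy NumPy illustration, not the thesis's actual SiamViT-CD implementation: a single shared linear projection stands in for the shared-weight vision transformers, and the patch size, embedding dimension, and distance threshold are all made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 4   # patch size (illustrative)
DIM = 8     # embedding dimension (illustrative)

# Shared projection weights: both Siamese branches use the same matrix,
# standing in for the shared-weight vision transformers in the abstract.
W = rng.standard_normal((PATCH * PATCH, DIM))

def embed_patches(img, w=W):
    """Split a square grayscale image into non-overlapping patches and
    project each flattened patch with the shared weights."""
    h, _ = img.shape
    n = h // PATCH
    patches = img.reshape(n, PATCH, n, PATCH).swapaxes(1, 2).reshape(n * n, -1)
    return patches @ w

def change_map(img_a, img_b, thresh=1.0):
    """Patch-level change mask from the distance between embeddings."""
    ea, eb = embed_patches(img_a), embed_patches(img_b)
    dist = np.linalg.norm(ea - eb, axis=1)
    return dist > thresh

before = rng.random((16, 16))
after = before.copy()
after[:4, :4] = 1.0 - after[:4, :4]   # simulate damage in the top-left patch
mask = change_map(before, after)
print(mask.reshape(4, 4))             # only the top-left patch is flagged
```

Unchanged patches yield identical embeddings (zero distance), so only the altered patch crosses the threshold; the real method replaces the linear projection with transformer features and the threshold with a learned classifier.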

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures


    Remote Sensing and Deep Learning to Understand Noisy OpenStreetMap

    The OpenStreetMap (OSM) project is an open-source, community-based, user-generated street map/data service. It is the most popular crowdsourcing project within the state of the art. Although the geometrical features and tags of annotations in OSM are usually precise (particularly in metropolitan areas), there are instances where volunteer mapping is inaccurate. Despite the appeal of using OSM semantic information with remote sensing images to train deep learning models, the quality of the crowdsourced data is inconsistent. High-resolution remote sensing image segmentation is a mature application in many fields, such as urban planning, map updating, city sensing, and others. Typically, supervised methods trained with annotated data may learn to anticipate the object location, but misclassification may occur due to noise in the training data. This article combines Very High Resolution (VHR) remote sensing data with computer vision methods to deal with noisy OSM data. This work addresses OSM misalignment ambiguity (positional inaccuracy) with respect to satellite imagery and uses a Convolutional Neural Network (CNN) approach to detect missing buildings in OSM. We propose a translation method to align the OSM vector data with the satellite data. This strategy increases the correlation between the imagery and the building vector data, reducing the noise in the OSM data. A series of experiments demonstrates that our approach plays a significant role in (1) resolving the misalignment issue, (2) instance-semantic segmentation of buildings with missing building information in OSM (never labeled, or constructed between image acquisitions), and (3) change detection mapping. The strong precision (0.96) and recall (0.96) results demonstrate the viability of combining high-resolution satellite imagery with OSM for building detection/change detection using a deep learning approach.
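The core alignment idea (translate the OSM vector layer until its correlation with the imagery is maximized) can be illustrated with a toy exhaustive search over integer pixel shifts. This is a minimal sketch under assumed simplifications, not the article's method: the masks, shift range, and overlap score are all illustrative.

```python
import numpy as np

def best_shift(osm_mask, image_mask, max_shift=3):
    """Exhaustively search integer (dy, dx) translations of the OSM mask
    and keep the one with the highest pixel-wise agreement with the
    image-derived mask. A toy stand-in for the alignment step."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(osm_mask, dy, axis=0), dx, axis=1)
            score = float((shifted * image_mask).sum())
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Toy data: a building footprint in the image, and the same footprint
# recorded in OSM but offset by (2, 1) pixels.
image_mask = np.zeros((12, 12))
image_mask[4:8, 4:8] = 1
osm_mask = np.roll(np.roll(image_mask, -2, axis=0), -1, axis=1)

print(best_shift(osm_mask, image_mask))  # → (2, 1)
```

Applying the recovered shift restores the overlap between the vector layer and the imagery, which is the correlation-raising effect the abstract describes.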

    Utilization of Deep Learning for Mapping Land Use Change Based on a Geographic Information System: A Case Study of Liquefaction

    This study aims to extract buildings and roads and determine the extent of changes before and after the liquefaction disaster. The research method used is automatic extraction. The data used are Google Earth images from 2017 and 2018. The data analysis technique uses deep learning within a Geographic Information System. The results showed that, before the disaster, the extracted built-up area was 23.61 ha, the undeveloped area was 147.53 ha, and the total road length was 35.50 km. After the liquefaction disaster, the remaining built-up area was 1.20 ha, meaning 22.41 ha of buildings were lost to the disaster. Of the 35.50 km of roads, only 11.20 km remained, so 24.30 km of roads were lost. Deep learning in Geographic Information Systems (GIS) is proliferating and has many advantages in all aspects of life, including technology, geography, health, education, social life, and disasters.
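The reported losses follow directly from the before/after figures in the abstract; a quick arithmetic check:

```python
# Before/after figures reported in the abstract (ha for area, km for roads).
built_before_ha, built_after_ha = 23.61, 1.20
road_before_km, road_after_km = 35.50, 11.20

lost_built_ha = round(built_before_ha - built_after_ha, 2)   # buildings lost
lost_road_km = round(road_before_km - road_after_km, 2)      # roads lost
print(lost_built_ha, lost_road_km)  # → 22.41 24.3
```

The differences reproduce the 22.41 ha of lost buildings and 24.30 km of lost roads stated in the abstract.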

    DAHiTrA: Damage Assessment Using a Novel Hierarchical Transformer Architecture

    This paper presents DAHiTrA, a novel deep-learning model with hierarchical transformers to classify building damage based on satellite images in the aftermath of hurricanes. An automated building damage assessment provides critical information for decision making and resource allocation in rapid emergency response. Satellite imagery provides real-time, high-coverage information and offers opportunities to inform large-scale post-disaster building damage assessment. In addition, deep-learning methods have been shown to be promising in classifying building damage. In this work, a novel transformer-based network is proposed for assessing building damage. This network leverages hierarchical spatial features at multiple resolutions and captures temporal differences in the feature domain after applying a transformer encoder to the spatial features. The proposed network achieves state-of-the-art performance when tested on a large-scale disaster damage dataset (xBD) for building localization and damage classification, as well as on the LEVIR-CD dataset for change detection tasks. In addition, we introduce a new high-resolution satellite imagery dataset, Ida-BD (related to Hurricane Ida in Louisiana in 2021), for domain adaptation, to further evaluate the capability of the model to be applied to newly damaged areas with scarce data. The domain adaptation results indicate that the proposed model can be adapted to a new event with only limited fine-tuning. Hence, the proposed model advances the current state of the art through better performance and domain adaptation. Also, Ida-BD provides a higher-resolution annotated dataset for future studies in this field.
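The two ingredients the abstract names (hierarchical spatial features at multiple resolutions, and temporal differencing in the feature domain) can be sketched with a toy NumPy pyramid. This is an illustrative stand-in only: average pooling replaces DAHiTrA's learned encoder and transformer, and all sizes are made up.

```python
import numpy as np

def pyramid_features(img, levels=3):
    """Shared 'encoder': average-pool the image into progressively coarser
    feature maps (a stand-in for hierarchical spatial features)."""
    feats, f = [], img
    for _ in range(levels):
        feats.append(f)
        h, w = f.shape
        f = f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return feats

def temporal_difference_map(img_a, img_b, levels=3):
    """Per-level feature differences, upsampled back to full resolution
    and averaged into a single change-score map."""
    fa, fb = pyramid_features(img_a, levels), pyramid_features(img_b, levels)
    h, w = img_a.shape
    fused = np.zeros((h, w))
    for a, b in zip(fa, fb):
        diff = np.abs(a - b)
        scale = h // diff.shape[0]
        fused += np.kron(diff, np.ones((scale, scale)))  # nearest upsample
    return fused / levels

rng = np.random.default_rng(1)
pre = rng.random((8, 8))
post = pre.copy()
post[0:4, 0:4] += 1.0   # simulate damage in one quadrant
score = temporal_difference_map(pre, post)
print(score[:4, :4].mean() > score[4:, 4:].mean())  # → True
```

Differencing at every pyramid level lets coarse levels flag large changed regions while fine levels keep their boundaries sharp, which is the intuition behind fusing multi-resolution temporal differences.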