
    Towards post-disaster debris identification for precise damage and recovery assessments from UAV and satellite images


    TOWARDS A MORE EFFICIENT DETECTION OF EARTHQUAKE INDUCED FAÇADE DAMAGES USING OBLIQUE UAV IMAGERY

    Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at a more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for this task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights; and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected by an earthquake in 2009.
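
The abstract does not include the implementation, but step ii), using the extracted building geometry as a proxy for façade location in the oblique images, essentially amounts to projecting 3D façade corners into each oblique view and keeping only the enclosed image region. Below is a minimal sketch of that idea in NumPy, assuming a simple pinhole camera with made-up intrinsics and pose; the actual framework relies on the photogrammetric camera poses recovered from the UAV flights.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points into an image with a pinhole camera model.

    points_3d: (N, 3) world coordinates; K: (3, 3) intrinsics;
    R: (3, 3) rotation and t: (3,) translation mapping world to camera frame.
    """
    cam = (R @ points_3d.T).T + t            # world -> camera coordinates
    pix = (K @ cam.T).T                      # camera -> homogeneous pixel coordinates
    return pix[:, :2] / pix[:, 2:3]          # perspective division

def facade_roi(facade_corners_3d, K, R, t, image_shape):
    """Axis-aligned bounding box of a façade's projection, clipped to the image."""
    uv = project_points(facade_corners_3d, K, R, t)
    h, w = image_shape
    u_min, v_min = np.clip(uv.min(axis=0), 0, [w - 1, h - 1]).astype(int)
    u_max, v_max = np.clip(uv.max(axis=0), 0, [w - 1, h - 1]).astype(int)
    return u_min, v_min, u_max, v_max        # crop this region before damage classification

# Toy example: a 10 m wide, 8 m tall façade plane 20 m in front of a synthetic camera.
facade = np.array([[0, 0, 20], [10, 0, 20], [10, 8, 20], [0, 8, 20]], dtype=float)
K = np.array([[2000, 0, 2000], [0, 2000, 1500], [0, 0, 1]], dtype=float)
R, t = np.eye(3), np.zeros(3)
print(facade_roi(facade, K, R, t, image_shape=(3000, 4000)))   # (2000, 1500, 3000, 2300)
```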

    DAMAGE DETECTION ON BUILDING FAÇADES USING MULTI-TEMPORAL AERIAL OBLIQUE IMAGERY

    Over the past decades, special interest has been given to remote-sensing imagery to automate the detection of damaged buildings, given the large areas it may cover and the possibility of automating the damage detection process, compared with lengthy and costly ground observations. Currently, most image-based damage detection approaches rely on Convolutional Neural Networks (CNNs), which determine whether a given image patch shows damage in a binary classification approach. However, such approaches are often trained on image samples containing only debris and rubble piles, since they usually aim at detecting partially or totally collapsed buildings from remote-sensing imagery. Hence, they might not be applicable when the aim is to detect façade damages, which also include spalling, cracks, and other small signs of damage. Only a few studies focus their damage analysis on the façade, and a multi-temporal approach is still missing. In this paper, a multi-temporal approach specifically designed for the image classification of façade damages is presented. To this end, three multi-temporal approaches are compared with two mono-temporal ones. For the multi-temporal approaches, the objective is to understand the optimal fusion of the two imagery epochs within a CNN. The results show that the multi-temporal approaches outperform the mono-temporal ones by up to 22% in accuracy.
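
The network details are not given in the abstract, but the core idea of fusing the two imagery epochs inside a CNN can be illustrated with a small feature-level fusion network. The following PyTorch sketch assumes a shared convolutional encoder and concatenation fusion, one of several possible fusion strategies and not necessarily the variant the paper finds optimal.

```python
import torch
import torch.nn as nn

class TwoEpochFusionCNN(nn.Module):
    """Binary façade-damage classifier that fuses a pre- and a post-event patch.

    Each epoch passes through a shared convolutional encoder, the two feature
    vectors are concatenated, and a small head predicts damaged / undamaged.
    """
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared weights for both epochs
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (B, 32) per epoch
        )
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, num_classes),
        )

    def forward(self, pre_patch, post_patch):
        f_pre = self.encoder(pre_patch)
        f_post = self.encoder(post_patch)
        return self.head(torch.cat([f_pre, f_post], dim=1))   # feature-level fusion

# Smoke test with random 64x64 RGB patches from the two epochs.
model = TwoEpochFusionCNN()
pre, post = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
print(model(pre, post).shape)   # torch.Size([4, 2])
```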

    Identification of structurally damaged areas in airborne oblique images using a visual-bag-of-words approach

    Automatic post-disaster mapping of building damage using remote sensing images is an important and time-critical element of disaster management. The characteristics of remote sensing images available immediately after the disaster are not certain, since they may vary in terms of capturing platform, sensor-view, image scale, and scene complexity. Therefore, a generalized method for damage detection that is impervious to the mentioned image characteristics is desirable. This study aims to develop a method to perform grid-level damage classification of remote sensing images by detecting the damage corresponding to debris, rubble piles, and heavy spalling within a defined grid, regardless of the aforementioned image characteristics. The Visual-Bag-of-Words (BoW) is one of the most widely used and proven frameworks for image classification in the field of computer vision. The framework adopts a feature representation strategy that has been shown to be more efficient for image classification, regardless of scale and clutter, than conventional global feature representations. In this study, supervised models using various radiometric descriptors (histogram of gradient orientations (HoG) and Gabor wavelets) and classifiers (SVM, Random Forests, and AdaBoost) were developed for damage classification based on both BoW and conventional global feature representations, and tested with four datasets that vary according to the aforementioned image characteristics. The BoW framework outperformed conventional global feature representation approaches in all scenarios (i.e., for all combinations of feature descriptors, classifiers, and datasets), and produced an average accuracy of approximately 90%. Particularly encouraging was an accuracy improvement of 14% (from 77% to 91%) produced by BoW over global representation for the most complex dataset, which was used to test the generalization capability.
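
For illustration, the BoW pipeline described above (local HoG descriptors, a clustered codebook of visual words, and a per-grid histogram fed to a supervised classifier) can be sketched with scikit-image and scikit-learn. The patch size, codebook size, and random toy images below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans
from sklearn.svm import SVC

PATCH, WORDS = 16, 32   # local-patch size (px) and codebook size; illustrative values

def local_descriptors(image):
    """HoG descriptor for every PATCH x PATCH tile of a grayscale image."""
    h, w = image.shape
    descs = []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            tile = image[y:y + PATCH, x:x + PATCH]
            descs.append(hog(tile, orientations=9,
                             pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.array(descs)

def bow_histogram(image, codebook):
    """Represent an image as a normalized histogram of visual-word assignments."""
    words = codebook.predict(local_descriptors(image))
    hist = np.bincount(words, minlength=WORDS).astype(float)
    return hist / hist.sum()

# Toy data: random 64x64 grid cells standing in for damaged / undamaged samples.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

# Build the visual-word codebook from all local descriptors, then train an SVM.
codebook = KMeans(n_clusters=WORDS, n_init=10, random_state=0)
codebook.fit(np.vstack([local_descriptors(im) for im in images]))
X = np.array([bow_histogram(im, codebook) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```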

    Bridge Inspection: Human Performance, Unmanned Aerial Systems and Automation

    Unmanned aerial systems (UASs) have become of considerable private and commercial interest for a variety of jobs and entertainment in the past 10 years. This paper is a literature review of the state of practice for the United States bridge inspection programs and outlines how automated and unmanned bridge inspections can be made suitable for present and future needs. At its best, current technology limits UAS use to an assistive tool that lets the inspector perform a bridge inspection faster, more safely, and without traffic closure. The major challenges for UASs are satisfying restrictive Federal Aviation Administration regulations, control issues in GPS-denied environments, pilot expenses and availability, the time and cost allocated to tuning, maintenance, post-processing time, and acceptance of the collected data by bridge owners. Using UASs with self-navigation abilities and improving image-processing algorithms to provide results in near real time could revolutionize the bridge inspection industry by providing accurate, multi-use, autonomous three-dimensional models and damage identification.

    Non-Contact Evaluation Methods for Infrastructure Condition Assessment

    The United States' infrastructure, e.g., roads and bridges, is in a critical condition. Inspection, monitoring, and maintenance of this infrastructure in the traditional manner can be expensive, dangerous, time-consuming, and tied to the judgment of the human inspector. Non-contact methods can help overcome these challenges. In this dissertation, two aspects of non-contact methods are explored: inspections using unmanned aerial systems (UASs), and condition assessment using image processing and machine learning techniques. It presents a set of investigations to determine a guideline for remote, autonomous bridge inspections.

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book
