Building and Infrastructure Defect Detection and Visualization Using Drone and Deep Learning Technologies

Abstract

This paper presents an accurate and stable method for object and defect detection and visualization on buildings and infrastructure facilities. The method uses drones and cameras to collect three-dimensional (3D) point clouds via photogrammetry, and uses orthographic or arbitrary views of the target objects to generate feature images of the points' spectral, elevation, and normal features. U-Net is implemented for pixelwise segmentation in object and defect detection using multiple feature images. The method was validated on four applications: on-site path detection, pavement crack detection, highway slope detection, and building facade window detection. The comparative experimental results confirmed that U-Net with multiple features achieves better pixelwise segmentation performance than using each single feature separately. The developed method can detect objects and defects of different shapes, including striped objects, thin objects, recurring and regularly shaped objects, and bulky objects, which will improve the accuracy and efficiency of inspection, assessment, and management of buildings and infrastructure facilities.
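As a rough illustration of the multi-feature segmentation step described above, the sketch below stacks hypothetical spectral (RGB), elevation, and surface-normal feature images rendered from a point-cloud view into one multi-channel input and passes it through a small U-Net for pixelwise classification. The channel layout, network depth, and PyTorch implementation are assumptions for illustration only, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' implementation): combine per-view
# feature images (spectral/RGB, elevation, normals) into a multi-channel
# input and segment it pixelwise with a small U-Net in PyTorch.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 conv + BN + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class SmallUNet(nn.Module):
    """Compact U-Net: 2 encoder levels, bottleneck, 2 decoder levels with skips."""

    def __init__(self, in_channels=7, num_classes=2):  # 3 RGB + 1 elevation + 3 normal (assumed layout)
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection from e1
        return self.head(d1)                                 # per-pixel class logits


if __name__ == "__main__":
    # Hypothetical feature images rendered from an orthographic point-cloud view:
    rgb = torch.rand(1, 3, 256, 256)        # spectral (color) feature image
    elevation = torch.rand(1, 1, 256, 256)  # per-pixel elevation feature image
    normals = torch.rand(1, 3, 256, 256)    # surface-normal feature image
    x = torch.cat([rgb, elevation, normals], dim=1)  # multi-feature input
    logits = SmallUNet(in_channels=7, num_classes=2)(x)
    print(logits.shape)  # torch.Size([1, 2, 256, 256]) pixelwise class scores
```

In this sketch, concatenating the feature images along the channel dimension is one straightforward way to let the network use spectral, elevation, and normal information jointly, consistent with the paper's finding that multiple features outperform any single feature.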
