
    A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data

    In this paper we present a practical approach for generating an occlusion-free textured 3D map of urban facades through the synergistic use of terrestrial images, 3D point clouds and area-based information. Particularly in dense urban environments, the many urban objects in front of the facades cause significant difficulties for several stages of computational building modeling. The major challenges lie, on the one hand, in extracting complete 3D facade quadrilateral delimitations and, on the other hand, in generating occlusion-free facade textures. For these reasons, we describe a straightforward approach for completing and recovering facade geometry and textures by exploiting the complementarity of terrestrial multi-source imagery and area-based information.
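
    A minimal sketch (not the authors' implementation) of the occlusion-recovery idea: given two terrestrial facade images rectified to the same facade plane and per-pixel occlusion masks (e.g. obtained by projecting the 3D point cloud into each view), pixels occluded in one view are borrowed from the other. All array names and shapes below are illustrative assumptions.

    ```python
    import numpy as np

    def merge_facade_textures(tex_a, mask_a, tex_b, mask_b):
        """tex_*: HxWx3 rectified facade textures; mask_*: HxW booleans
        (True = pixel occluded by a foreground object)."""
        out = tex_a.copy()
        # Pixels occluded in A but visible in B are taken from B.
        fill = mask_a & ~mask_b
        out[fill] = tex_b[fill]
        # Pixels occluded in both views remain flagged for later inpainting.
        unresolved = mask_a & mask_b
        return out, unresolved

    # Usage with random placeholder data:
    h, w = 480, 640
    tex_a, tex_b = np.random.rand(h, w, 3), np.random.rand(h, w, 3)
    mask_a, mask_b = np.zeros((h, w), bool), np.zeros((h, w), bool)
    texture, todo = merge_facade_textures(tex_a, mask_a, tex_b, mask_b)
    ```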

    ZRG: A High Resolution 3D Residential Rooftop Geometry Dataset for Machine Learning

    In this paper we present the Zeitview Rooftop Geometry (ZRG) dataset. ZRG contains thousands of high-resolution orthomosaics of aerial imagery of residential rooftops with corresponding digital surface models (DSMs), 3D rooftop wireframes, and point clouds generated from multiview imagery, for the purpose of residential rooftop geometry and scene understanding. We perform thorough benchmarks to illustrate the numerous applications unlocked by this dataset and provide baselines for the tasks of roof outline extraction, monocular height estimation, and planar roof structure extraction.
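
    As one hedged illustration (not part of the ZRG release or its baselines), a coarse roof mask can be derived from a DSM tile by thresholding height above an estimated local ground level. The placeholder tile, threshold and ground estimate below are assumptions.

    ```python
    import numpy as np

    def roof_mask_from_dsm(dsm, min_height_above_ground=2.0):
        """dsm: HxW array of surface heights in metres."""
        ground = np.percentile(dsm, 5)  # crude local ground estimate
        return dsm > ground + min_height_above_ground

    dsm = np.random.rand(256, 256) * 3.0 + 100.0  # placeholder tile
    mask = roof_mask_from_dsm(dsm)
    print("roof pixels:", int(mask.sum()))
    ```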

    3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction

    Reconstructing urban areas in 3D from satellite raster images has been a long-standing and challenging goal of both academic and industrial research. The rare methods that achieve this today at Level of Detail 2 (LOD2) rely on procedural, geometry-based approaches and need stereo images and/or LIDAR data as input. We propose a method for urban 3D reconstruction named KIBS (Keypoints Inference By Segmentation), which comprises two novel features: i) a fully deep-learning-based approach for the 3D detection of roof sections, and ii) only a single (non-orthogonal) satellite raster image as model input. This is achieved in two steps: i) a Mask R-CNN model performs a 2D segmentation of the buildings' roof sections, and, after the segmented pixels are blended into the RGB satellite raster image, ii) another identical Mask R-CNN model infers the heights-to-ground of the roof sections' corners via panoptic segmentation, yielding a full 3D reconstruction of the buildings and the city. We demonstrate the potential of the KIBS method by reconstructing different urban areas in a few minutes, with a Jaccard index for the 2D segmentation of individual roof sections of 88.55% and 75.21% on our two data sets respectively, and a mean height error over correctly segmented pixels for the 3D reconstruction of 1.60 m and 2.06 m on our two data sets respectively, hence within the LOD2 precision range.
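
    As a hedged illustration of the two metrics quoted above (not the KIBS code), the Jaccard index of a predicted roof-section mask and the mean absolute height error over correctly segmented pixels can be computed as follows; the array names are assumptions.

    ```python
    import numpy as np

    def jaccard_index(pred_mask, gt_mask):
        """Intersection over union of two boolean masks."""
        inter = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        return inter / union if union else 1.0

    def mean_height_error(pred_heights, gt_heights, correct_mask):
        """Mean absolute height error over correctly segmented pixels."""
        return np.abs(pred_heights[correct_mask] - gt_heights[correct_mask]).mean()

    # Example with two random 0/1 masks:
    gt = np.random.rand(64, 64) > 0.5
    pred = np.random.rand(64, 64) > 0.5
    print(round(jaccard_index(pred, gt), 3))
    ```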

    Automated 3D scene reconstruction from open geospatial data sources: airborne laser scanning and a 2D topographic database

    Open geospatial data sources provide opportunities for low-cost 3D scene reconstruction. In this study, based on a sparse airborne laser scanning (ALS) point cloud (0.8 points/m²) obtained from open source databases, a building reconstruction pipeline for CAD building models was developed. The pipeline includes voxel-based roof patch segmentation, extraction of the key points representing the roof patch outline, step edge identification and adjustment, and CAD building model generation. The advantage of our method lies in generating CAD building models without enforcing parallel edges or building regularization. Furthermore, although it has been challenging to use sparse datasets for 3D building reconstruction, our results demonstrate great potential for such applications. In this paper, we also investigated the applicability of open geospatial datasets for 3D road detection and reconstruction. Road center lines were acquired from an open source 2D topographic database, and ALS data were utilized to obtain the height and width of the road. A constrained search method (CSM) was developed for road width detection: a given road is split into patches according to height and direction criteria, the road edges are detected patch by patch, and the road width is determined by the average distance from the edge points to the center line. As a result, 3D roads were reconstructed from ALS data and a topographic database.
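
    A small sketch of the road-width step (not the authors' code), under one reasonable reading of "average distance from the edge points to the central line": the width is taken as the sum of the mean perpendicular distances of the left and right edge points to the local center-line segment. The segment endpoints and edge-point arrays are illustrative assumptions.

    ```python
    import numpy as np

    def point_to_line_distance(points, p0, p1):
        """Perpendicular distance of Nx2 points to the infinite line through p0, p1."""
        d = (p1 - p0) / np.linalg.norm(p1 - p0)
        v = points - p0
        # distance = |component of v orthogonal to the line direction| (2D cross product)
        return np.abs(v[:, 0] * d[1] - v[:, 1] * d[0])

    def road_width(left_edge_pts, right_edge_pts, p0, p1):
        # Width = mean distance of left edge + mean distance of right edge.
        return (point_to_line_distance(left_edge_pts, p0, p1).mean()
                + point_to_line_distance(right_edge_pts, p0, p1).mean())

    # Example with synthetic edge points 4 m and 3.5 m from the line y = 0:
    p0, p1 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    left = np.array([[2.0, 4.0], [5.0, 4.0]])
    right = np.array([[3.0, -3.5], [7.0, -3.5]])
    print(road_width(left, right, p0, p1))  # -> 7.5
    ```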

    Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. It combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved, and automation is one of the key focus areas of this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops with complex, rigorously defined boundaries and vertical walls that connect the rooftops to the ground. The graph-cuts algorithm, based on local analysis of the properties of the local implicit surface patch, is used to separate vegetation from the rest of the scene content. The ground terrain and building rooftop footprints are then extracted using a two-step hierarchical Euclidean clustering strategy. The method adopts a divide-and-conquer scheme: once the building footprints are segmented from the terrain and vegetated areas, the whole scene is divided into independent processing units that represent candidate rooftop points. For each individual building region, significant rooftop features are detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each rooftop feature is calculated using a minimum bounding box fitting technique and is used to guide the refinement of the shapes and boundaries of the rooftop parts. The boundaries of all these features are refined to produce a precise description. Once the rooftop description is obtained, polygonal mesh models are generated by creating surface patches whose outlines are defined by the detected vertices, producing triangulated mesh models suitable for many applications, such as 3D mapping, urban planning and augmented reality.
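
    A hedged sketch of the principal-orientation step: the abstract names minimum bounding box fitting, and here the minimum-area rectangle is approximated by a coarse angle sweep rather than exact rotating calipers. The footprint array and angular resolution are assumptions for illustration.

    ```python
    import numpy as np

    def principal_orientation(points_xy, step_deg=0.5):
        """points_xy: Nx2 footprint points; returns the rotation angle in
        degrees, in [0, 90), that axis-aligns the tightest bounding box
        found by the sweep."""
        best_angle, best_area = 0.0, np.inf
        for deg in np.arange(0.0, 90.0, step_deg):
            t = np.deg2rad(deg)
            rot = np.array([[np.cos(t), -np.sin(t)],
                            [np.sin(t),  np.cos(t)]])
            p = points_xy @ rot.T                 # rotate footprint by `deg`
            extent = p.max(axis=0) - p.min(axis=0)
            area = extent[0] * extent[1]          # axis-aligned bounding-box area
            if area < best_area:
                best_area, best_angle = area, deg
        return best_angle

    # Example: an L-shaped footprint rotated by 30 degrees.
    footprint = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 3], [0, 3]], float)
    t = np.deg2rad(30.0)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    print(principal_orientation(footprint @ R.T))  # -> 60.0, i.e. back to axis alignment
    ```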