A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data
In this paper we present a practical approach for generating an
occlusion-free textured 3D map of urban facades by the synergistic use of
terrestrial images, 3D point clouds and area-based information. Particularly in
dense urban environments, the abundance of urban objects in front of the
facades causes significant difficulties at several stages of computational
building modeling. Major challenges lie on the one hand in extracting complete
3D facade quadrilateral delimitations and on the other hand in generating
occlusion-free facade textures. For these reasons, we describe a
straightforward approach for completing and recovering facade geometry and
textures by exploiting the data complementarity of terrestrial multi-source
imagery and area-based information.
ZRG: A High Resolution 3D Residential Rooftop Geometry Dataset for Machine Learning
In this paper we present the Zeitview Rooftop Geometry (ZRG) dataset. ZRG
contains thousands of samples of high resolution orthomosaics of aerial imagery
of residential rooftops with corresponding digital surface models (DSM), 3D
rooftop wireframes, and multiview imagery generated point clouds for the
purpose of residential rooftop geometry and scene understanding. We perform
thorough benchmarks to illustrate the numerous applications unlocked by this
dataset and provide baselines for the tasks of roof outline extraction,
monocular height estimation, and planar roof structure extraction.
3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction
Reconstructing urban areas in 3D from satellite raster images has been a
long-standing and challenging goal of both academic and industrial research.
The rare methods that achieve this objective at Level of Detail 2 (LOD2) rely
on procedural approaches based on geometry, and need stereo images and/or LiDAR
data as input. We propose here a method for urban 3D reconstruction named
KIBS (Keypoints Inference By Segmentation), which comprises two novel
features: i) a full deep learning approach for the 3D detection of the roof
sections, and ii) only one single (non-orthogonal) satellite raster image as
model input. This is achieved in two steps: i) a Mask R-CNN model performs a
2D segmentation of the buildings' roof sections; then, after blending the
segmented pixels into the RGB satellite raster image, ii) another identical
Mask R-CNN model infers the height-to-ground of the roof sections' corners via
panoptic segmentation, yielding a full 3D reconstruction of the
buildings and city. We demonstrate the potential of the KIBS method by
reconstructing different urban areas in a few minutes, with a Jaccard index for
the 2D segmentation of individual roof sections of … and … on our two data sets
respectively, and a mean height error of the correctly segmented pixels for the
3D reconstruction of … m and … m on our two data sets respectively, hence
within the LOD2 precision range.
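The segmentation quality reported above is a Jaccard index (intersection over union). As a minimal illustration, assuming segmentation masks are represented as sets of (row, col) pixel coordinates, it can be computed as:

```python
def jaccard_index(pred: set, truth: set) -> float:
    """Intersection-over-union of two pixel sets; 1.0 means a perfect match."""
    if not pred and not truth:
        return 1.0  # convention: two empty masks agree perfectly
    return len(pred & truth) / len(pred | truth)

# Toy example: two overlapping roof-section masks given as (row, col) pixels.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(jaccard_index(pred, truth), 2))  # 3 shared / 5 total = 0.6
```

In practice the masks come from the Mask R-CNN output, but the metric itself is independent of the model.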
Automated 3D scene reconstruction from open geospatial data sources: airborne laser scanning and a 2D topographic database
Open geospatial data sources provide opportunities for low-cost 3D scene reconstruction. In this study, based on a sparse airborne laser scanning (ALS) point cloud (0.8 points/m²) obtained from open source databases, a building reconstruction pipeline for CAD building models was developed. The pipeline includes voxel-based roof patch segmentation, extraction of the key points representing the roof patch outline, step edge identification and adjustment, and CAD building model generation. The advantage of our method lies in generating CAD building models without enforcing edges to be parallel and without a building-regularization step. Furthermore, although it has been challenging to use sparse datasets for 3D building reconstruction, our results demonstrate great potential for such applications. In this paper, we also investigated the applicability of open geospatial datasets for 3D road detection and reconstruction. Road central lines were acquired from an open source 2D topographic database. ALS data were utilized to obtain the height and width of the road. A constrained search method (CSM) was developed for road width detection. The CSM splits a given road into patches according to height and direction criteria. The road edges were detected patch by patch, and the road width was determined by the average distance from the edge points to the central line. As a result, 3D roads were reconstructed from ALS data and a topographic database.
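The road-width step above (average distance from detected edge points to the central line) can be sketched as follows. This is a minimal illustration, not the authors' code; doubling the mean point-to-centerline distance to obtain the full width is an added assumption based on edge points lying on both sides of the line:

```python
import math

def point_to_line_dist(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

def road_width(edge_points, a, b):
    """Width as twice the mean edge-to-centerline distance (assumes both sides sampled)."""
    mean = sum(point_to_line_dist(p, a, b) for p in edge_points) / len(edge_points)
    return 2.0 * mean

# Toy road patch: centerline along the x-axis, edges 3 m away on each side.
center_a, center_b = (0.0, 0.0), (10.0, 0.0)
edges = [(1.0, 3.0), (4.0, -3.0), (7.0, 3.0), (9.0, -3.0)]
print(road_width(edges, center_a, center_b))  # 6.0
```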
Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds
Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect the rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the implicit surface patch. The ground terrain and building rooftop footprints are then extracted using a two-step hierarchical Euclidean clustering strategy. The method presented here adopts a divide-and-conquer scheme: once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual, independent processing units that represent potential rooftop points. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique and is used to guide the refinement of the shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined to produce a strict description.
Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches whose outlines are defined by the detected vertices, producing triangulated mesh models. These triangulated mesh models are suitable for many applications, such as 3D mapping, urban planning, and augmented reality.
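The principal-orientation step above can be illustrated with a short sketch. The paper fits a minimum bounding box; here PCA on the footprint points is used as a common, simpler stand-in, purely for illustration:

```python
import numpy as np

def principal_orientation(points: np.ndarray) -> float:
    """Angle (radians, in [0, pi)) of the dominant axis of a 2D point set, via PCA."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                 # 2x2 covariance of x and y
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
    return float(np.arctan2(major[1], major[0]) % np.pi)

# Toy rooftop edge: points scattered along a roughly 45-degree line.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.1], [4.0, 3.9]])
angle = np.degrees(principal_orientation(pts))  # close to 45 degrees
```

The resulting angle can then guide snapping of boundary segments to be parallel or perpendicular to the dominant axis.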
A Sparsity-Inducing Optimization-Based Algorithm for Planar Patches Extraction from Noisy Point-Cloud Data
Currently, much of the manual labor needed to generate as-built Building Information Models (BIMs) of existing facilities is spent converting raw Point Cloud Datasets (PCDs) to BIM descriptions. Automating the PCD conversion process can drastically reduce the cost of generating as-built BIMs. Due to the widespread existence of planar structures in civil infrastructure, detecting and extracting planar patches from raw PCDs is a fundamental step in the conversion pipeline from PCDs to BIMs. However, existing methods cannot effectively handle both the automatic detection and the extraction of planar patches from infrastructure PCDs, either because of the large scale and model complexity of civil infrastructure or because they require extra constraints or prior information. To address this problem, this paper presents a novel framework for automatically detecting and extracting planar patches from large-scale, noisy raw PCDs. The proposed method automatically detects planar structures, estimates the parametric plane models, and determines the boundaries of the planar patches. The first step recovers linear dependence relationships amongst points in the PCD by solving a group-sparsity-inducing optimization problem. Next, a spectral clustering procedure based on the recovered linear dependence relationships segments the PCD. Then, for each segmented group, model parameters of the extracted planes are estimated via Singular Value Decomposition (SVD) and Maximum Likelihood Estimation Sample Consensus (MLESAC). Finally, the α-shape algorithm detects the boundaries of planar structures based on a projection of the data onto the planar model. The proposed approach is evaluated comprehensively through experiments on two types of PCDs from real-world infrastructure, one captured directly by laser scanners and the other reconstructed from video using structure-from-motion techniques.
To evaluate the performance comprehensively, five evaluation metrics are proposed, each measuring a different aspect of performance. Experimental results reveal that the proposed method outperforms existing methods, in the sense that it automatically and accurately extracts planar patches from large-scale raw PCDs without any extra constraints or user assistance. This is the accepted manuscript; the final version is available from Wiley at http://onlinelibrary.wiley.com/doi/10.1111/mice.12063/abstract
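The SVD-based plane-parameter estimation mentioned in the abstract can be sketched as follows. This is a generic least-squares plane fit, not the authors' implementation: the plane normal is the right singular vector of the centered data matrix associated with the smallest singular value.

```python
import numpy as np

def fit_plane_svd(points: np.ndarray):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # Rows of vt are right singular vectors; the last one (smallest singular
    # value) is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Toy PCD patch: noisy samples of the plane z = 0.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
z = rng.normal(0.0, 0.01, size=(100, 1))
pts = np.hstack([xy, z])
c, n = fit_plane_svd(pts)  # n should be close to (0, 0, +/-1)
```

In the full pipeline such a fit would be wrapped in a robust estimator like MLESAC to reject outliers before the α-shape boundary step.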