A Sparsity-Inducing Optimization-Based Algorithm for Planar Patches Extraction from Noisy Point-Cloud Data
Currently, much of the manual labor needed to generate as-built Building Information Models (BIMs) of existing facilities is spent converting raw Point Cloud Datasets (PCDs) into BIM descriptions. Automating the PCD conversion process can drastically reduce the cost of generating as-built BIMs. Because planar structures are widespread in civil infrastructure, detecting and extracting planar patches from raw PCDs is a fundamental step in the conversion pipeline from PCDs to BIMs. However, existing methods cannot both automatically detect and extract planar patches from infrastructure PCDs: they either fail to cope with the large scale and model complexity of civil infrastructure, or they require extra constraints or prior information. To address this problem, this paper presents a novel framework for automatically detecting and extracting planar patches from large-scale, noisy raw PCDs. The proposed method automatically detects planar structures, estimates the parametric plane models, and determines the boundaries of the planar patches. The first step recovers linear dependence relationships amongst points in the PCD by solving a group-sparsity-inducing optimization problem. Next, a spectral clustering procedure based on the recovered linear dependence relationships segments the PCD. Then, for each segmented group, model parameters of the extracted planes are estimated via Singular Value Decomposition (SVD) and Maximum Likelihood Estimation Sample Consensus (MLESAC). Finally, the α-shape algorithm detects the boundaries of planar structures based on a projection of the data onto the planar model. The proposed approach is evaluated comprehensively by experiments on two types of PCDs from real-world infrastructure: one captured directly by laser scanners and the other reconstructed from video using structure-from-motion techniques.
To evaluate the performance comprehensively, five evaluation metrics are proposed which measure different aspects of performance. Experimental results reveal that the proposed method outperforms existing methods, in the sense that it automatically and accurately extracts planar patches from large-scale raw PCDs without any extra constraints or user assistance. This is the accepted manuscript. The final version is available from Wiley at http://onlinelibrary.wiley.com/doi/10.1111/mice.12063/abstract
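The SVD-based plane-estimation step mentioned in the abstract can be sketched as follows. This is a minimal illustration of fitting a plane to a point group via SVD, not the authors' implementation; the sparsity-inducing optimization, spectral clustering, MLESAC, and α-shape stages are omitted, and the function name is ours:

```python
import numpy as np

def fit_plane_svd(points):
    """Fit a plane n·x = d to an (N, 3) point set via SVD.

    The unit normal is the right singular vector associated with the
    smallest singular value of the centered data, i.e. the direction
    of least variance.
    """
    centroid = points.mean(axis=0)
    # SVD of the centered points; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]            # direction of least variance
    d = normal @ centroid      # plane offset along the normal
    return normal, d

# Noisy samples of the plane z = 2.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = 2.0 + 0.01 * rng.standard_normal(200)
n, d = fit_plane_svd(pts)   # n close to ±(0, 0, 1), |d| close to 2
```

Note the sign ambiguity: SVD may return either orientation of the normal, so downstream code should compare normals up to sign.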
Detection of key components of existing bridge in point cloud datasets
The cost and effort of modelling existing bridges from point clouds currently outweighs the perceived benefits of the resulting model. Automating the point-cloud-to-Bridge-Information-Models process can drastically reduce the manual effort and cost involved. Previous research has achieved the automatic generation of surface primitives combined with rule-based classification to create labelled construction models from point clouds. These methods work very well on synthetic datasets or in idealized cases. However, real bridge point clouds are often incomplete and contain unevenly distributed points. Bridge geometries are also complex: they are defined by horizontal alignments, vertical elevations, and cross-sections. These characteristics are the reasons behind the performance issues existing methods have on real datasets. In this paper, we propose to tackle this challenge with a novel top-down method for major bridge component detection. Our method bypasses the surface generation process altogether. First, it uses a slicing algorithm to separate the deck assembly from the pier assemblies. It then detects pier caps using their surface normals, and uses oriented bounding boxes and density histograms to segment the girders. Finally, the method merges over-segments into individual labelled point clusters. Experimental results indicate an average detection precision of 99.2%, recall of 98.3%, and F1-score of 98.7%. This is the first method to achieve reliable detection performance on real bridge datasets. This sets a solid foundation for researchers attempting to derive rich IFC (Industry Foundation Classes) models from individual point clusters.
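The slicing idea behind the deck/pier separation can be illustrated with a toy vertical-density histogram: a bridge deck shows up as a dominant, point-dense horizontal slab. The function below is a hypothetical simplification for illustration only, not the paper's algorithm, which also uses surface normals, oriented bounding boxes, and over-segment merging:

```python
import numpy as np

def slice_deck(points, bin_size=0.1):
    """Split an (N, 3) bridge point cloud into deck and pier points.

    Builds a density histogram over the z (vertical) coordinate and
    labels the densest horizontal slice as the deck slab.
    Returns boolean masks (deck_mask, pier_mask).
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, _ = np.histogram(z, bins=edges)
    peak = np.argmax(counts)                 # densest horizontal slice
    deck_lo, deck_hi = edges[peak], edges[peak + 1]
    deck_mask = (z >= deck_lo) & (z < deck_hi)
    return deck_mask, ~deck_mask

# Toy cloud: a sparse vertical pier column plus a dense deck at z ≈ 5.05.
rng = np.random.default_rng(1)
pier = np.column_stack([np.zeros(100), np.zeros(100), np.linspace(0.0, 5.0, 100)])
deck = np.column_stack([rng.uniform(-5, 5, 2000), rng.uniform(-2, 2, 2000),
                        np.full(2000, 5.05)])
cloud = np.vstack([pier, deck])
deck_mask, pier_mask = slice_deck(cloud)
```

A single-bin threshold like this would fail on decks with varying elevation, which is one reason the actual method works on sliced cross-sections rather than a global histogram.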
From images to augmented 3D models: improved visual SLAM and augmented point cloud modeling
This thesis investigates the problem of using monocular image sequences to generate augmented models. The problem is decomposed into two subproblems: monocular visual simultaneous localization and mapping (VSLAM), and point cloud data (PCD) modeling. Accordingly, the thesis comprises two major parts.
The first part, comprising Chapters 2, 3, and 4, aims to leverage system observability theory to improve VSLAM accuracy. In Chapter 2, a piecewise-linear system is developed to model VSLAM, and two necessary conditions for making VSLAM completely observable are proved. Based on the first condition, an instantaneous condition for complete observability, the "Optimally Observable and Minimal Cardinality (OOMC) VSLAM" is presented in Chapter 3. The OOMC algorithm selects the feature subset of minimal required cardinality that forms the strongest observable VSLAM subsystem. The selected feature subset is further used to improve data association in VSLAM. Based on the second condition, a temporal condition for complete observability, the "Good Features (GF) to Track for VSLAM" is presented in Chapter 4. The GF algorithm ranks individual features according to their contributions to system observability. Benchmarking experiments of both the OOMC and GF algorithms demonstrate improvements in VSLAM performance.
The second part, comprising Chapters 5 and 6, aims to solve the PCD modeling problem in a geometry-driven manner. Chapter 5 presents an algorithm to model PCDs with planar patches via a sparsity-inducing optimization. Chapter 6 extends the PCD modeling to models based on quadratic surface primitives. A method is further developed to retrieve the high-level semantic information of the model components. Evaluation on PCDs generated from VSLAM demonstrates the effectiveness of these geometry-driven PCD modeling approaches. Ph.D. thesis.
State of research in automatic as-built modelling
This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001
Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling. We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting some parts of this research.
Generating bridge geometric digital twins from point clouds
The automation of digital twinning for existing bridges from point clouds remains unsolved. Extensive manual effort is required to extract object point clusters from point clouds and then fit them with accurate 3D shapes. Previous research yielded methods that can automatically generate surface primitives combined with rule-based classification to create labelled cuboids and cylinders. While these methods work well on synthetic datasets or in simplified cases, they encounter huge challenges when dealing with real-world point clouds. In addition, bridge geometries, defined with curved alignments and varying elevations, are much more complicated than idealized cases. None of the existing methods can handle these difficulties reliably. The proposed framework employs bridge engineering knowledge that mimics the intelligence of human modellers to detect and model reinforced concrete bridge objects in imperfect point clouds. It directly produces labelled 3D objects in Industry Foundation Classes format without generating low-level shape primitives. Experiments on ten bridge point clouds indicate the framework achieves an overall detection F1-score of 98.4%, an average modelling accuracy of 7.05 cm, and an average modelling time of merely 37.8 seconds. This is the first framework of its kind to achieve high and reliable performance of geometric digital twin generation for existing bridges.
Detection of Structural Components in Point Clouds of Existing RC Bridges
The cost and effort of modelling existing bridges from point clouds currently outweighs the perceived benefits of the resulting model. There is a pressing need to automate this process. Previous research has achieved the automatic generation of surface primitives combined with rule-based classification to create labelled cuboids and cylinders from point clouds. While these methods work well on synthetic datasets or in idealized cases, they encounter huge challenges when dealing with real-world bridge point clouds, which are often unevenly distributed and suffer from occlusions. In addition, real bridge geometries are complicated. In this paper, we propose a novel top-down method to tackle these challenges for detecting slab, pier, pier cap, and girder components in reinforced concrete bridges. This method uses a slicing algorithm to separate the deck assembly from the pier assemblies. It then detects and segments pier caps using their surface normals, and girders using oriented bounding boxes and density histograms. Finally, our method merges over-segments into individually labelled point clusters. The results of 10 real-world bridge point cloud experiments indicate that our method achieves very high detection performance. This is the first method of its kind to achieve robust detection performance for the four component types in reinforced concrete bridges and to directly produce labelled point clusters. Our work provides a solid foundation for future work in generating rich Industry Foundation Classes models from the labelled point clusters.
Generating bridge geometric digital twins from point clouds
The automation of digital twinning for existing bridges from point clouds remains unresolved. Previous research yielded methods that can generate surface primitives combined with rule-based classification to create labelled cuboids and cylinders. While these methods work well on synthetic datasets or in simplified cases, they encounter huge challenges when dealing with real-world point clouds. The proposed framework employs bridge engineering knowledge that mimics the intelligence of human modellers to detect and model reinforced concrete bridge objects in imperfect point clouds. Experiments on ten bridge point clouds indicate the framework can achieve high and reliable performance of geometric digital twin generation for existing bridges. This research is funded by EPSRC, the EU Infravation SeeBridge project under Grant No. 31109806.0007, and the Trimble Research Fund.
Long-Term Simultaneous Localization and Mapping in Dynamic Environments.
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within this representation. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem to find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge.
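The "probabilistic optimization problem" described above can be made concrete with a toy one-dimensional pose graph: odometry constraints between consecutive poses plus one loop closure, solved as a linear least-squares problem. The measurements, equal constraint weighting, and variable names are invented for illustration; real SLAM systems work over SE(2)/SE(3) poses with information-weighted residuals:

```python
import numpy as np

# Toy 1-D pose graph: poses x0..x3. Each constraint (i, j, z) says the
# displacement x_j - x_i was measured as z.
constraints = [
    (0, 1, 1.10),   # noisy odometry, true step = 1.0
    (1, 2, 0.90),
    (2, 3, 1.05),
    (0, 3, 3.00),   # loop closure: total displacement observed as 3.0
]

n_poses = 4
A = np.zeros((len(constraints) + 1, n_poses))
b = np.zeros(len(constraints) + 1)
for row, (i, j, z) in enumerate(constraints):
    A[row, i], A[row, j] = -1.0, 1.0   # residual: (x_j - x_i) - z
    b[row] = z
A[-1, 0] = 1.0      # anchor x0 = 0 to remove the global-shift gauge freedom
b[-1] = 0.0

# Under Gaussian noise, the maximum-likelihood trajectory is the
# least-squares solution of A x = b.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# x ≈ [0.0, 1.0875, 1.975, 3.0125]: the loop closure pulls the raw
# odometry sum (3.05) toward the observed 3.0.
```

The same structure scales up: each sensor constraint adds sparse rows to A, which is why sparse solvers and complexity-control techniques (as in this thesis) matter for long-term operation.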
The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. The contribution of this thesis has three main components. First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in SLAM systems that explicitly model the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes with time.
We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions; to long-term dynamics including seasonal conditions and structural changes caused by construction.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd