
    Detection of key components of existing bridge in point cloud datasets

    The cost and effort of modelling existing bridges from point clouds currently outweigh the perceived benefits of the resulting model. Automating the point-cloud-to-Bridge-Information-Model process can drastically reduce the manual effort and cost involved. Previous research has achieved automatic generation of surface primitives combined with rule-based classification to create labelled construction models from point clouds. These methods work very well on synthetic datasets or in idealized cases. However, real bridge point clouds are often incomplete and contain unevenly distributed points. Bridge geometries are also complex: they are defined by horizontal alignments, vertical elevations, and cross-sections. These characteristics explain the performance issues existing methods have on real datasets. In this paper, we propose to tackle this challenge with a novel top-down method for major bridge component detection. Our method bypasses the surface generation process altogether. First, it uses a slicing algorithm to separate the deck assembly from the pier assemblies. It then detects pier caps using their surface normals, and uses oriented bounding boxes and density histograms to segment the girders. Finally, the method merges over-segments into individual labelled point clusters. Experimental results indicate an average detection precision of 99.2%, recall of 98.3%, and F1-score of 98.7%. This is the first method to achieve reliable detection performance on real bridge datasets, setting a solid foundation for researchers attempting to derive rich IFC (Industry Foundation Classes) models from individual point clusters.
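    The deck/pier slicing step described in this abstract can be illustrated at toy scale. The sketch below is an illustrative reconstruction, not the authors' implementation: the bin size, the relative density threshold, and the assumption that the deck shows up as the densest horizontal band near the top are all assumptions.

```python
import numpy as np

def slice_deck_from_piers(points, bin_size=0.1, rel_thresh=0.5):
    """Separate a bridge point cloud into a deck assembly and pier
    assemblies by histogramming point heights: the deck slab appears
    as the densest band of horizontal slices.
    points: (N, 3) array with z as the vertical axis.
    Returns (deck_points, pier_points)."""
    z = points[:, 2]
    counts, edges = np.histogram(
        z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    peak = int(np.argmax(counts))
    # Grow the deck band downward while slices stay comparably dense.
    lo = peak
    while lo > 0 and counts[lo - 1] >= rel_thresh * counts[peak]:
        lo -= 1
    deck_mask = z >= edges[lo]
    return points[deck_mask], points[~deck_mask]
```

    On a cloud whose top slab is much denser than the slender piers below it, the height histogram peaks at the slab, and everything below the grown band is returned as the pier assemblies.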

    From images to augmented 3D models: improved visual SLAM and augmented point cloud modeling

    This thesis investigates the problem of using monocular image sequences to generate augmented models. The problem is decomposed into two subproblems: monocular visual simultaneous localization and mapping (VSLAM), and point cloud data (PCD) modeling. Accordingly, the thesis comprises two major parts. The first part, comprising Chapters 2, 3, and 4, leverages system observability theory to improve VSLAM accuracy. In Chapter 2, a piecewise linear system is developed to model VSLAM, and two conditions are proved necessary for VSLAM to be completely observable. Based on the first condition, an instantaneous condition for complete observability, the "Optimally Observable and Minimal Cardinality (OOMC) VSLAM" is presented in Chapter 3. The OOMC algorithm selects the feature subset of minimal required cardinality that forms the strongest observable VSLAM subsystem. The selected feature subset is further used to improve data association in VSLAM. Based on the second condition, a temporal condition for complete observability, the "Good Features (GF) to Track for VSLAM" is presented in Chapter 4. The GF algorithm ranks individual features according to their contributions to system observability. Benchmarking experiments of both the OOMC and GF algorithms demonstrate improvements in VSLAM performance. The second part, comprising Chapters 5 and 6, solves the PCD modeling problem in a geometry-driven manner. Chapter 5 presents an algorithm to model PCDs with planar patches via a sparsity-inducing optimization. Chapter 6 extends PCD modeling to models based on quadratic surface primitives. A method is further developed to retrieve high-level semantic information about the model components. Evaluation on PCDs generated from VSLAM demonstrates the effectiveness of these geometry-driven PCD modeling approaches. Ph.D.
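    The planar-patch modeling described in this abstract ultimately rests on fitting planes to point subsets. A minimal total-least-squares plane fit, a standard building block rather than the thesis's sparsity-inducing optimization, can be written as:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) point set by total least squares:
    the plane passes through the centroid, and its normal is the
    right singular vector of the centred data with the smallest
    singular value. Returns (centroid, unit_normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```

    For points sampled from a horizontal plane, the recovered normal is (up to sign) the vertical axis, and the centroid lies on the plane.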

    State of research in automatic as-built modelling

    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001. Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling. We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant No. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting parts of this research.

    Generating bridge geometric digital twins from point clouds

    The automation of digital twinning for existing bridges from point clouds remains unsolved. Extensive manual effort is required to extract object point clusters from point clouds and then fit them with accurate 3D shapes. Previous research yielded methods that can automatically generate surface primitives combined with rule-based classification to create labelled cuboids and cylinders. While these methods work well on synthetic datasets or in simplified cases, they encounter huge challenges when dealing with real-world point clouds. In addition, bridge geometries, defined with curved alignments and varying elevations, are much more complicated than idealized cases. None of the existing methods can handle these difficulties reliably. The proposed framework employs bridge engineering knowledge, mimicking the intelligence of human modellers, to detect and model reinforced concrete bridge objects in imperfect point clouds. It directly produces labelled 3D objects in Industry Foundation Classes (IFC) format without generating low-level shape primitives. Experiments on ten bridge point clouds indicate the framework achieves an overall detection F1-score of 98.4%, an average modelling accuracy of 7.05 cm, and an average modelling time of merely 37.8 seconds. This is the first framework of its kind to achieve high and reliable performance in geometric digital twin generation for existing bridges.
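    Detection-and-modelling pipelines of this kind commonly describe each extracted point cluster with an oriented bounding box before fitting a labelled shape to it. The sketch below is a generic PCA-based box fit, an assumed building block for illustration, not the authors' method:

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based oriented bounding box of an (N, 3) point cluster.
    Returns (centroid, axes, half_extents): axes are the principal
    directions (as rows), half_extents the box half-sizes along them."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # Principal axes are the eigenvectors of the sample covariance.
    _, vecs = np.linalg.eigh(np.cov(centred.T))
    axes = vecs.T
    # Extents come from the projections onto the principal axes.
    proj = centred @ axes.T
    half = (proj.max(axis=0) - proj.min(axis=0)) / 2
    return centroid, axes, half
```

    For an elongated cluster such as a girder, the largest half-extent aligns with the girder's long axis, which is what makes the box useful for segmentation.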

    Generating bridge geometric digital twins from point clouds

    The automation of digital twinning for existing bridges from point clouds remains unresolved. Previous research yielded methods that can generate surface primitives combined with rule-based classification to create labelled cuboids and cylinders. While these methods work well on synthetic datasets or in simplified cases, they encounter huge challenges when dealing with real-world point clouds. The proposed framework employs bridge engineering knowledge, mimicking the intelligence of human modellers, to detect and model reinforced concrete bridge objects in imperfect point clouds. Experiments on ten bridge point clouds indicate the framework can achieve high and reliable performance of geometric digital twin generation for existing bridges. This research is funded by EPSRC, the EU Infravation SeeBridge project under Grant No. 31109806.0007, and the Trimble Research Fund

    Long-Term Simultaneous Localization and Mapping in Dynamic Environments.

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within this representation. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem: find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge. The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. The contribution of this thesis has three main components. First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in SLAM systems that explicitly model the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes over time.
We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions; to long-term dynamics including seasonal conditions and structural changes caused by construction. Ph.D. thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd
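    The probabilistic optimization at the heart of SLAM, finding the most likely trajectory given all motion and loop-closure constraints, can be shown at toy scale with a 1-D pose graph. This is an illustrative sketch of the general formulation, not the thesis's system:

```python
import numpy as np

def solve_pose_graph_1d(n_poses, constraints):
    """Solve a tiny 1-D pose graph by linear least squares.
    constraints: list of (i, j, z) meaning x_j - x_i ≈ z, covering both
    odometry edges (j = i + 1) and loop closures. Pose 0 is anchored at
    the origin to remove the gauge freedom. Returns the pose estimates."""
    rows, rhs = [], []
    anchor = np.zeros(n_poses)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    for i, j, z in constraints:
        r = np.zeros(n_poses)
        r[i], r[j] = -1.0, 1.0
        rows.append(r)
        rhs.append(z)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

    With three odometry steps of 1.0 and a loop closure claiming the total displacement is only 2.7, the least-squares solution spreads the 0.3 of inconsistency evenly across all four constraints instead of trusting any single one.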