
    Towards building information modelling for existing structures

    The transformation of cities from the industrial age (unsustainable) to the knowledge age (sustainable) is essentially a ‘whole life cycle’ process consisting of planning, development, operation, reuse and renewal. During this transformation, a multi-disciplinary knowledge base created from studies and research about built environment aspects (historical, architectural, archaeological, environmental, social, economic, etc.) is critical. Although there is a growing number of 3D VR modelling applications, some built environment applications, such as disaster management, environmental simulation, and computer-aided architectural design and planning, require models that go beyond 3D graphical visualization: models that are multifunctional, interoperable, intelligent, and multi-representational. Advanced digital mapping technologies such as 3D laser scanning can be enablers of effective e-planning, consultation and communication of users’ views during the planning, design, construction and life cycle process of the built environment. For example, 3D laser scanning enables digital documentation of buildings, sites and physical objects for reconstruction and restoration, and it facilitates the creation of educational resources within the built environment as well as the reconstruction of the built environment. When the data captured by these technologies are processed and modelled into BIM (Building Information Modelling), they can drive productivity gains by promoting a free flow of information between departments, divisions, offices and sites, and between organisations, their contractors and partners. These technologies are therefore key enablers of new approaches to the ‘whole life cycle’ process within the built and human environment for the 21st century. The paper describes research towards Building Information Modelling for existing structures via point cloud data captured with 3D laser scanner technology. A case study building is elaborated to demonstrate how to produce 3D CAD models and BIM models of existing structures based on the designated technique.
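
    The scan-to-BIM workflow described above starts from raw point clouds, so a minimal sketch of the first geometric step may help; it is not the paper's designated technique, and the Open3D library, the input file "scan.ply", and all thresholds are assumptions made for illustration.

```python
# Hedged sketch: extract dominant planar surfaces (walls, floors) from a
# laser-scan point cloud as a first step toward as-built CAD/BIM geometry.
# Library, filename, and thresholds are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")        # raw laser-scan point cloud
pcd = pcd.voxel_down_sample(voxel_size=0.02)     # thin the cloud to ~2 cm spacing

planes, rest = [], pcd
for _ in range(5):                               # peel off up to five dominant planes
    model, inliers = rest.segment_plane(distance_threshold=0.01,
                                        ransac_n=3,
                                        num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)

for (a, b, c, d), _ in planes:                   # plane equation ax + by + cz + d = 0
    print(f"plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")
```

    Each recovered plane can then be trimmed to its inlier footprint and mapped to a BIM element (wall, slab) in a downstream modelling tool.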

    Relational Reasoning Network (RRN) for Anatomical Landmarking

    Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for craniomaxillofacial (CMF) bones. Available methods require segmentation of the object of interest for precise landmarking. Unlike those, our purpose in this study is to perform anatomical landmarking using the inherent relations of CMF bones without explicitly segmenting them. We propose a new deep network architecture, called relational reasoning network (RRN), to accurately learn the local and the global relations of the landmarks. Specifically, we are interested in learning landmarks in the CMF region: mandible, maxilla, and nasal bones. The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units and without the need for segmentation. Given a few landmarks as input, the proposed system accurately and efficiently localizes the remaining landmarks on the aforementioned bones. For a comprehensive evaluation of RRN, we used cone-beam computed tomography (CBCT) scans of 250 patients. The proposed system identifies the landmark locations very accurately even when there are severe pathologies or deformations in the bones. The proposed RRN has also revealed unique relationships among the landmarks that help us reason about the informativeness of the landmark points. RRN is invariant to the order of landmarks, and it allowed us to discover the optimal configurations (number and location) of landmarks to be localized within the object of interest (mandible) or nearby objects (maxilla and nasal bones). To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning. Comment: 10 pages, 6 figures, 3 tables
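
    Because the abstract above describes regressing the remaining landmarks from a few given ones through learned pairwise relations, a small order-invariant relation model may clarify the idea; this is a hedged sketch in PyTorch, not the authors' RRN, and the landmark counts and layer sizes are assumptions.

```python
# Hedged sketch (not the authors' RRN): regress missing 3D landmarks from a few
# known ones by summing a learned relation function over all ordered pairs of
# inputs, which makes the prediction invariant to the input landmark order.
import torch
import torch.nn as nn

class PairwiseRelationRegressor(nn.Module):
    def __init__(self, n_out=10, hidden=128):
        super().__init__()
        self.g = nn.Sequential(                  # relation function over landmark pairs
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(                  # aggregated relations -> missing landmarks
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out * 3))
        self.n_out = n_out

    def forward(self, x):                        # x: (batch, n_known, 3)
        b, n, _ = x.shape
        xi = x.unsqueeze(2).expand(b, n, n, 3)   # landmark i in each pair (i, j)
        xj = x.unsqueeze(1).expand(b, n, n, 3)   # landmark j in each pair (i, j)
        rel = self.g(torch.cat([xi, xj], dim=-1)).sum(dim=(1, 2))  # order-invariant pooling
        return self.f(rel).view(b, self.n_out, 3)

known = torch.randn(4, 5, 3)                     # 4 scans, 5 given landmarks each
pred = PairwiseRelationRegressor()(known)        # -> (4, 10, 3) predicted landmarks
```

    Summing over pairs, rather than concatenating them, is what gives the order invariance noted in the abstract; the actual RRN additionally uses dense-block units and operates end to end on CBCT data.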

    State of research in automatic as-built modelling

    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001
    Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling.
    We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting some parts of this research.