
    Automatic Façade Segmentation for Thermal Retrofit

    Abstract. In this paper we present an automated method to derive highly detailed 3D vector models of modern building facades from terrestrial laser scanning data. The developed procedure can be divided into two main steps: first, the main elements constituting the facade are identified by means of a segmentation process; then the 3D vector model is generated, incorporating priors on architectural scenes. The identification of the main facade elements is based on random sampling and detection of planar elements, with topology information included in the process to reduce under- and over-segmentation problems. The prevalence of straight lines and orthogonal intersections is then exploited in the vector model generation phase to set additional constraints that enforce automated modeling. At the same time, a further classification is performed, enriching the data with semantics by means of a classification tree. The main application field for these vector models is the design of external insulation thermal retrofits. In particular, we present a possible application to the energy efficiency evaluation of buildings by means of Infrared Thermography data overlaid on the facade model.
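
    A minimal sketch of the plane-detection step described above is given below: RANSAC-style random sampling of candidate planes from a facade point cloud. It is an illustration only, not the authors' pipeline; the NumPy-based implementation, the thresholds, and the function name ransac_plane are assumptions.

    ```python
    # Illustrative RANSAC-style plane detection for a facade point cloud.
    # Not the authors' implementation; thresholds and structure are assumed.
    import numpy as np

    def ransac_plane(points, n_iter=500, dist_thresh=0.02, rng=None):
        """Return a boolean inlier mask for the dominant plane in an (N, 3) array."""
        rng = rng or np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            # Sample three distinct points and form a candidate plane.
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                  # degenerate (collinear) sample
                continue
            normal /= norm
            # Distance of every point to the candidate plane through p0.
            dist = np.abs((points - p0) @ normal)
            inliers = dist < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    # Repeatedly extracting the dominant plane and removing its inliers yields
    # the main planar facade elements on which the segmentation can build.
    ```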

    Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization

    This paper presents a method for the 3D reconstruction of a piecewise-planar surface from range images, typically laser scans with millions of points. The reconstructed surface is a watertight polygonal mesh that conforms to observations at a given scale in the visible planar parts of the scene, and that is plausible in hidden parts. We formulate surface reconstruction as a discrete optimization problem based on detected and hypothesized planes. One of our major contributions, besides a treatment of data anisotropy and novel surface hypotheses, is a regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. Compared to classical area-based regularization, it better captures surface complexity and is therefore better suited for man-made environments, such as buildings. To handle the underlying higher-order potentials, which are problematic for MRF optimizers, we formulate the minimization as a sparse mixed-integer linear programming problem and obtain an approximate solution using a simple relaxation. Experiments show that it is fast and reaches near-optimal solutions.
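
    The key computational idea described above, relaxing an integer program and solving it approximately, can be illustrated with a toy sketch using scipy.optimize.linprog. The costs and constraints below are placeholders, not the paper's actual formulation.

    ```python
    # Toy sketch of solving a binary facet-selection ILP by LP relaxation:
    # relax x_i in {0, 1} to x_i in [0, 1], solve the LP, then round.
    # Costs and constraints are placeholders, not the paper's formulation.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 0.2, 0.8, 0.3])        # per-facet cost incl. edge/corner terms
    A_eq = np.array([[1, 1, 0, 0],            # e.g. "select exactly one facet
                     [0, 0, 1, 1]])           #  per cell" consistency constraints
    b_eq = np.array([1, 1])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(c))
    x_relaxed = res.x                          # fractional solution of the relaxation
    x_binary = (x_relaxed > 0.5).astype(int)   # naive rounding back to {0, 1}
    print(x_relaxed, x_binary)
    ```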

    Automated body volume acquisitions from 3D structured-light scanning

    Whole-body and segmental volumes are highly related to the health and medical condition of individuals. However, the traditional manual post-processing of raw 3D scanned data is time-consuming and requires technical expertise. The purpose of this study was to develop bespoke software for obtaining whole-body and segmental volumes from raw 3D scanned data automatically, and to establish its accuracy and reliability. The bespoke software applied Stitched Puppet model fitting techniques to deform template models to the raw 3D scanned data in order to identify the segmental endpoints and determine their locations. The software then used the locations of the segmental endpoints to set segmental boundaries on the reconstructed meshes and to calculate body volumes. The whole-body and segmental volumes (head & neck, torso, arms, and legs) of 29 participants processed by the traditional manual operation were regarded as the reference and compared to the measurements obtained with the bespoke software using intra-method and inter-method relative technical errors of measurement. The results showed that the errors in whole-body volumes and most segmental volumes acquired from the bespoke software were less than 5%. Overall, the bespoke software developed in this study can complete the post-processing tasks without any technical expertise, and the obtained whole-body and segmental volumes can achieve good accuracy for some applications in health and medicine.
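
    For the final volume computation step, a common approach (offered here only as a hedged sketch, not as what the bespoke software actually does) is to sum signed tetrahedron volumes over a watertight triangle mesh; the array layout and function name below are assumptions.

    ```python
    # Volume of a closed (watertight) triangle mesh via signed tetrahedron volumes.
    # Illustrative only; array layout and function name are assumptions.
    import numpy as np

    def mesh_volume(vertices, faces):
        """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
        v0 = vertices[faces[:, 0]]
        v1 = vertices[faces[:, 1]]
        v2 = vertices[faces[:, 2]]
        # Signed volume of the tetrahedron spanned by each triangle and the origin.
        signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
        return abs(signed.sum())

    # Segmental volumes follow the same idea once each segment is closed off at the
    # boundary planes defined by the identified segmental endpoints.
    ```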

    Semantizing Complex 3D Scenes using Constrained Attribute Grammars

    We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraints. It offers a practical conceptual interface, which is crucial for writing large, maintainable specifications. As recursion is inadequate for expressing large collections of items, we introduce maximal operators, which are essential to reduce the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations -- in practice only a few, thanks to relevant constraints. We evaluate this technique for building model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.
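
    To make the role of the maximal operators concrete, the toy sketch below groups detected window boxes into maximal vertically aligned columns in a single step instead of via recursive pairwise rules. The data layout, tolerance, and function name are invented for illustration and are not the paper's formalism.

    ```python
    # Toy illustration of a "maximal" grouping rule: collect ALL vertically aligned
    # windows into one column at once, rather than recursing over pairs.
    # Data layout and tolerance are invented for illustration.

    windows = [
        {"id": 1, "x": 1.0, "y": 0.5},
        {"id": 2, "x": 1.0, "y": 2.5},
        {"id": 3, "x": 4.1, "y": 0.5},
        {"id": 4, "x": 4.0, "y": 2.6},
    ]

    def maximal_columns(items, x_tol=0.2):
        """Maximal groups of items whose x coordinates agree within x_tol."""
        columns = []
        for item in sorted(items, key=lambda w: w["x"]):
            if columns and abs(item["x"] - columns[-1][-1]["x"]) <= x_tol:
                columns[-1].append(item)   # extend the current maximal group
            else:
                columns.append([item])     # start a new group
        return columns

    for col in maximal_columns(windows):
        print([w["id"] for w in col])      # -> [1, 2] and [4, 3]
    ```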

    Learning to Construct 3D Building Wireframes from 3D Line Clouds

    Line clouds, though under-investigated in previous work, potentially encode more compact structural information of buildings than point clouds extracted from multi-view images. In this work, we propose the first network to process line clouds for building wireframe abstraction. The network takes a line cloud as input, i.e., an unstructured and unordered set of 3D line segments extracted from multi-view images, and outputs a 3D wireframe of the underlying building, which consists of a sparse set of 3D junctions connected by line segments. We observe that a line patch, i.e., a group of neighboring line segments, encodes sufficient contour information to predict the existence and even the 3D position of a potential junction, as well as the likelihood of connectivity between two query junctions. We therefore introduce a two-layer Line-Patch Transformer to extract junctions and connectivities from sampled line patches to form a 3D building wireframe model. We also introduce a synthetic dataset of multi-view images with ground-truth 3D wireframes. Extensive experiments show that our reconstructed 3D wireframe models significantly improve upon multiple baseline building reconstruction methods. The code and data can be found at https://github.com/Luo1Cheng/LC2WF.
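
    The notion of a line patch described above can be illustrated with a small sampling routine: represent each 3D segment by its two endpoints and group it with its k nearest neighbours by midpoint distance. The array layout, the value of k, and the function name below are assumptions for illustration, not the released LC2WF code.

    ```python
    # Illustrative line-patch sampling: each segment plus its k nearest neighbours
    # (by midpoint distance) forms one patch. Not the released LC2WF implementation.
    import numpy as np

    def sample_line_patches(segments, k=8):
        """segments: (N, 6) array of [x1, y1, z1, x2, y2, z2] per 3D line segment."""
        midpoints = (segments[:, :3] + segments[:, 3:]) / 2.0
        # Pairwise midpoint distances, shape (N, N).
        d = np.linalg.norm(midpoints[:, None, :] - midpoints[None, :, :], axis=-1)
        # For every segment, indices of its k nearest segments (itself included).
        nn = np.argsort(d, axis=1)[:, :k]
        return segments[nn]                    # (N, k, 6) line patches

    patches = sample_line_patches(np.random.rand(100, 6), k=8)
    print(patches.shape)                       # (100, 8, 6)
    ```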

    Appearance stylization of Manhattan world buildings

