
    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Get PDF
    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge and cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline with the development of novel algorithms for consistent density scanning and automated information extraction for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets. The research work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that handles cohesive point clouds through adaptive simplification and an accurate layout extraction approach, without generating an intermediate model. Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding. A hierarchical clustering algorithm is developed to handle the vast geometric diversity ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, whereas combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters from geometrically similar regions. Finally, a progressive scan line based, side-ratio constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
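
    A minimal sketch of the rapid HSV-based clustering idea described in this abstract, assuming the point cloud is given as NumPy arrays of coordinates and RGB colors in [0, 1]; the hue-bin count and thresholds are illustrative assumptions, not the thesis parameters.

```python
# Hypothetical sketch: rapid HSV-based clustering of a colored point cloud.
# points: (N, 3) coordinates; rgb: (N, 3) colors in [0, 1].
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_clusters(points, rgb, hue_bins=12, min_points=100):
    """Group points by dominant hue; low-saturation points fall into a 'gray' bin."""
    hsv = rgb_to_hsv(rgb)                      # (N, 3): hue, saturation, value
    labels = np.full(len(points), -1, dtype=int)
    gray = hsv[:, 1] < 0.15                    # unsaturated points (white/gray walls)
    labels[gray] = hue_bins                    # dedicated bin for achromatic points
    labels[~gray] = (hsv[~gray, 0] * hue_bins).astype(int) % hue_bins
    # Keep only bins with enough support to count as an object cluster.
    keep = [b for b in range(hue_bins + 1) if np.count_nonzero(labels == b) >= min_points]
    return {b: points[labels == b] for b in keep}
```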

    Extraction of main levels of a building from a large point cloud

    Get PDF
    Horizontal levels are reference entities, the basis of man-made environments. Their creation is the first step for various applications including BIM (Building Information Modelling). BIM is an emerging methodology, widely used for new constructions and increasingly applied to existing buildings (scan-to-BIM). The as-built BIM process is still mainly manual or semi-automatic and therefore highly time-consuming. The automation of as-built BIM is a challenging topic in the research community. This study is part of ongoing research into the scan-to-BIM process regarding the extraction of the principal structure of a building. More specifically, we present a strategy to automatically detect the building levels from a large point cloud obtained with a terrestrial laser scanner survey. The identification of the horizontal planes is the first indispensable step to produce an as-built BIM model. Our algorithm, developed in C++, is based on plane extraction by means of the RANSAC algorithm, followed by minimization of the sum of squared point-to-plane distances. Moreover, this paper takes an in-depth look at the influence of data resolution on the accuracy of plane extraction and at the accuracy necessary for the construction of a BIM model. A laser scanner survey of a three-storey building, composed of 36 scan stations, produced a point cloud of about 550 million points. The estimated plane parameters at different data resolutions are analysed in terms of distance from the full point cloud resolution.
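
    The plane-fitting step described above, RANSAC extraction followed by minimization of the sum of squared point-to-plane distances, can be sketched as follows. This is an illustrative NumPy version, not the authors' C++ implementation; the iteration count and inlier tolerance are assumed values.

```python
# Sketch: RANSAC selection of a dominant plane, then a least-squares refit
# that minimizes the sum of squared point-to-plane distances over the inliers.
import numpy as np

def fit_plane_lsq(pts):
    """Least-squares plane through pts: returns (centroid, unit normal)."""
    c = pts.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    return c, vt[-1]

def ransac_plane(pts, n_iter=500, dist_tol=0.02, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:          # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        d = np.abs((pts - sample[0]) @ n)     # point-to-plane distances
        inliers = d < dist_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refinement: refit on all inliers to minimize the squared distances.
    return fit_plane_lsq(pts[best_inliers])
```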

    Reverse-engineering of architectural buildings based on a hybrid modeling approach

    Get PDF
    We thank the MENSI and REALVIZ companies for their helpful comments and the following people for providing images from their work: Francesca De Domenico (Fig. 1), Kyung-Tae Kim (Fig. 9). The CMN (the French national center for heritage buildings) is also acknowledged for the opportunity to demonstrate our approach on the Hotel de Sully in Paris. We thank Tudor Driscu for his help with the English translation. This article presents a set of theoretical reflections and technical demonstrations that constitute a new methodological base for architectural surveying and representation using computer graphics techniques. The problem we treat relates to three distinct concerns: the surveying of architectural objects, the construction and semantic enrichment of their geometrical models, and their handling for the extraction of dimensional information. A hybrid approach to 3D reconstruction is described. This new approach combines range-based modeling and image-based modeling techniques; it integrates the concept of architectural feature-based modeling. To develop this concept, a first process of extraction and formalization of architectural knowledge, based on the analysis of architectural treatises, is carried out. Then, the identified features are used to produce a template shape library. Finally, the problem of the overall model structure and organization is addressed.

    Digitizing and 3D Modeling of Urban Environments and Roads using Vehicle-Borne Laser Scanner System

    No full text
    In this paper we present a system for three-dimensional environment modeling. It consists of an instrumented vehicle equipped with a 2D laser range scanner for data mapping, and GPS, INS and odometers for vehicle positioning and attitude information. The advantage of this system is its ability to perform data acquisition during vehicle navigation; the only sensor needed is a basic 2D scanner, as opposed to traditional, expensive 3D sensors. The system integrates the raw laser range data with the vehicle's internal state estimator and is capable of reconstructing the 3D geometry of the environment by real-time geo-referencing. We propose a high-level representation of the urban scene while identifying, automatically and in real time, certain types of objects existing in this environment. Our modeling is thus articulated around three principal axes: segmentation, decimation, and 3D reconstruction and visualization. The road is the most important object for us; some road features, such as curvature and width, are extracted.
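
    A hedged sketch of the geo-referencing step the paper relies on: each 2D scan line is projected into world coordinates using the vehicle pose (position and roll/pitch/yaw) interpolated from GPS/INS at the scan timestamp. The function names, frame conventions and sensor-offset handling here are assumptions for illustration only.

```python
# Illustrative geo-referencing of one 2D laser scan line with a known vehicle pose.
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll (radians), Z-Y-X convention."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def georeference_scan_line(ranges, angles, vehicle_pos, vehicle_ypr, sensor_offset):
    """Turn polar 2D laser returns into 3D world points for one scan line."""
    # Points in the scanner frame: the 2D profile lies in the sensor's x-z plane.
    local = np.stack([ranges * np.cos(angles),
                      np.zeros_like(ranges),
                      ranges * np.sin(angles)], axis=1)
    R = rotation_zyx(*vehicle_ypr)
    # Apply the vehicle attitude and translate by its position (GPS/INS estimate).
    return vehicle_pos + (local + sensor_offset) @ R.T
```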

    Digital 3D documentation of cultural heritage sites based on terrestrial laser scanning

    Get PDF

    3D Road Environment Modeling Applied to Visibility Mapping: An Experimental Comparison

    Full text link

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Get PDF
    Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families: - Time-of-flight (terrestrial and aerial LiDAR). - Photogrammetry (street-level, satellite, and aerial imagery). - Human-edited vector data (cadastre and other map sources). Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, whereas the geometric information is usually lacking. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions: - Effective and feature-preserving simplification of massive point clouds. - Normal estimation algorithms explicitly designed for LiDAR data. - A low-stretch panoramic representation for point clouds. - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction. - Color improvement through heuristic techniques and the registration of LiDAR and imagery data. - Efficient and faithful visualization of massive point clouds using image-based techniques.
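
    As an illustration of one of the listed contributions, normal estimation tailored to LiDAR data, the following sketch shows the standard local-PCA estimator with normals oriented toward the known scanner position, the extra cue a terrestrial LiDAR acquisition provides. It is not the thesis algorithm, and the neighborhood size k is an assumption.

```python
# Illustrative sketch: PCA normal estimation with scanner-aware orientation.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, scanner_pos, k=16):
    """points: (N, 3) array; scanner_pos: (3,) position of the terrestrial scanner."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbors per point
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        cov = np.cov((nbr_pts - nbr_pts.mean(axis=0)).T)
        # Normal = eigenvector of the smallest eigenvalue of the local covariance.
        w, v = np.linalg.eigh(cov)
        n = v[:, 0]
        # Orient toward the scanner: the scanned surface must face the sensor.
        if np.dot(scanner_pos - points[i], n) < 0:
            n = -n
        normals[i] = n
    return normals
```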

    Surface reconstruction for planning and navigation of liver resections

    Get PDF
    Computer-assisted systems for planning and navigation of liver resection procedures rely on the use of patient-specific 3D geometric models obtained from computed tomography. In this work, we propose the application of Poisson surface reconstruction (PSR) to obtain 3D models of the liver surface with applications to planning and navigation of liver surgery. In order to apply PSR, we introduce an efficient transformation of the segmentation data based on the computation of gradient fields. One of the advantages of PSR is that it requires only one control parameter, allowing the process to be fully automatic once the optimal value is estimated. Validation of our results is performed via comparison with 3D models obtained by state-of-the-art Marching Cubes incorporating Laplacian smoothing and decimation (MCSD). Our results show that PSR provides smooth liver models with a better accuracy/complexity trade-off than those obtained by MCSD. After estimating the optimal parameter, automatic reconstruction of liver surfaces using PSR is achieved while keeping processing time similar to MCSD. Models from this automatic approach show an average reduction of 79.59% in polygons compared to the MCSD models, while presenting similar smoothness properties. Concerning visual quality, despite this reduction in polygons, clinicians perceive the quality of automatic PSR models to be the same as that of complex MCSD models; moreover, clinicians perceive a significant improvement in visual quality for automatic PSR models compared to optimal (in terms of accuracy/complexity) MCSD models. The median reconstruction error using automatic PSR was as low as 1.03 ± 0.23 mm, which makes the method suitable for clinical applications. Automatic PSR is currently employed at Oslo University Hospital to obtain patient-specific liver models in selected patients undergoing laparoscopic liver resection.
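
    A minimal sketch of the single-parameter PSR step, using the open-source Open3D implementation rather than the authors' pipeline; the input file name, normal-estimation settings and octree depth are assumptions. The octree depth plays the role of the single control parameter the abstract refers to.

```python
# Illustrative Poisson surface reconstruction on an oriented point cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("liver_surface_points.ply")       # hypothetical input
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))  # PSR needs normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("liver_psr.ply", mesh)
```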

    3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets

    Get PDF
    This research deals with the issues concerning the processing, management and representation, for further dissemination, of the large amount of 3D data that modern geomatic techniques of 3D metric survey can now acquire and store. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets. Modern geomatic techniques enable the acquisition and storage of a large amount of data, with high metric and radiometric accuracy and precision, also in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potentialities and is considered one of the principal techniques to produce, at low cost, detailed 3D textured models. The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies applied to geometry and materials (i.e. for structural and static tests, for planning restoration activities or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as visualization interfaces linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may increase the visitors' emotions, and many recent applications make use of 3D contents (i.e. in virtual or augmented reality applications and through virtual museums). What all of these applications have to deal with is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (up to tens of GB), which makes them difficult to handle on common and portable devices, to publish on the internet or to manage in real-time applications. Even though recent advances produce more and more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize and share such contents, other research aims to define a common pipeline for the generation and optimization of 3D models with a reduced number of polygons, nevertheless able to satisfy detailed radiometric and geometric requests. This thesis is inserted in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at their easy sharing and visualization. In particular, this research tested a 3D model optimization, a process which aims at the generation of Low Poly models, with very low byte file size, processed starting from the data of High Poly ones, that nevertheless offer a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used. For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy).
    All the test cases were surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models were generated by means of a Structure from Motion and image matching pipeline. On the final High Poly models, different optimization and decimation tools were tested, with the final aim of evaluating the quality of the information that can be extracted from the final optimized models in comparison to that of the original High Poly ones. This study showed how tools borrowed from Computer Graphics offer great potentialities also in the Cultural Heritage field. This approach may in fact meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects, with a variety of sizes and shapes, also on multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys and so on.
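
    As a hedged illustration of the High Poly to Low Poly optimization step, the sketch below uses Open3D's quadric decimation rather than the game-engine toolchain employed in the thesis; the file names and target triangle count are assumptions.

```python
# Illustrative mesh decimation from a High Poly to a Low Poly model.
import open3d as o3d

high_poly = o3d.io.read_triangle_mesh("frieze_high_poly.obj")    # hypothetical input
low_poly = high_poly.simplify_quadric_decimation(target_number_of_triangles=50_000)
low_poly.compute_vertex_normals()                                 # restore shading
o3d.io.write_triangle_mesh("frieze_low_poly.obj", low_poly)
```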