80 research outputs found
Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources
Over the last few years, there has been remarkable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real-time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya.
In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments.
Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
- Time-of-flight (terrestrial and aerial LiDAR).
- Photogrammetry (street-level, satellite, and aerial imagery).
- Human-edited vector data (cadastre and other map sources).
Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort.
Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. A further advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality.
In this thesis, we analyze in-depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings. These are terrestrial LiDAR for geometric information and street-level imagery for color information.
Our main goal is the processing and completion of detailed 3D urban representations. For this, we will work with multiple data sources and combine them when possible to produce models that can be inspected in real-time. Our research has focused on the following contributions:
- Effective and feature-preserving simplification of massive point clouds.
- Normal estimation algorithms explicitly designed for LiDAR data (a generic baseline is sketched after this list).
- Low-stretch panoramic representation for point clouds.
- Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
- Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
- Efficient and faithful visualization of massive point clouds using image-based techniques.
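As a point of reference for the normal-estimation contribution above, here is a minimal sketch of the generic PCA baseline that LiDAR-specific estimators improve upon. It assumes numpy and scipy are available; the `sensor` parameter (a scanner position used to orient normals) is an illustrative simplification, not the thesis's method.

```python
# Generic PCA normal estimation for point clouds, for illustration only;
# LiDAR-specific estimators exploit scanner geometry that this ignores.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16, sensor=np.zeros(3)):
    """Per-point normals from local PCA, oriented toward the sensor."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbors (incl. self)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        # normal = direction of least variance in the neighborhood
        _, _, Vt = np.linalg.svd(patch, full_matrices=False)
        n = Vt[-1]
        # flip toward the scanner so orientation is consistent
        if n @ (sensor - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals
```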
A Surface Reconstruction Method for In-Detail Underwater 3D Optical Mapping
Underwater range scanning techniques are starting to gain interest in underwater exploration, providing new tools to represent the seafloor. These scans, often acquired by underwater robots, usually result in an unstructured point cloud, but given the common downward-looking or forward-looking configuration of these sensors with respect to the scene, the problem of recovering a piecewise linear approximation of the scene is normally solved by approximating these 3D points with a heightmap (2.5D). Nevertheless, this representation cannot correctly represent complex structures, especially those presenting the arbitrary concavities commonly exhibited by underwater objects. We present a method devoted to full 3D surface reconstruction that does not assume any specific sensor configuration. The method is robust to common defects in raw scanned data, such as the outliers and noise often present in extreme environments such as underwater, for both sonar and optical surveys. Moreover, the proposed method does not need a manual preprocessing step. It is also generic, as it needs no information other than the points themselves, which makes it applicable to any kind of range scanning technology; we demonstrate its versatility by using it on synthetic data, controlled laser scans, and multibeam sonar surveys. Finally, given the unbeatable level of detail that optical methods can provide, we analyze the application of this method on optical datasets related to biology, geology, and archeology.
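To make the limitation discussed above concrete, here is a minimal sketch of the 2.5D heightmap baseline the abstract contrasts with: points are triangulated in their XY projection, so two surface samples sharing the same horizontal location (an overhang or a cave roof) cannot both be represented. It assumes numpy and scipy; `points` is a hypothetical N x 3 array of scan samples.

```python
# Minimal 2.5D heightmap triangulation: planar Delaunay in the XY
# projection, with Z carried along. Any overhang collapses to one
# elevation, which is exactly the failure mode motivating full 3D
# reconstruction.
import numpy as np
from scipy.spatial import Delaunay

def heightmap_mesh(points: np.ndarray):
    """Triangulate a downward-looking scan as a 2.5D heightmap.

    points: (N, 3) array of XYZ samples.
    Returns (vertices, triangles); triangles index into vertices.
    """
    tri = Delaunay(points[:, :2])     # Delaunay of the XY projection
    return points, tri.simplices      # Z survives only as an attribute
```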
Advances in 3D reconstruction
The thesis tackles the problem of 3D reconstruction of scenes from unstructured picture datasets. The state of the art is advanced on several fronts. The first contribution is a robust formulation of the structure-and-motion problem based on a hierarchical approach, as opposed to the sequential one prevalent in the literature. This methodology reduces the total computational cost by one order of magnitude, is inherently parallelizable, minimizes the error accumulation that causes drift, and eliminates the crucial dependency on the choice of the initial pair of views that is common to all competing approaches. A second contribution is a novel self-calibration procedure, very robust and tailored to the structure-and-motion task. The proposed solution is a closed-form procedure for the recovery of the plane at infinity given a rough estimate of the focal parameters of at least two cameras. This method is employed for an exhaustive search of the internal parameters, whose search space is inherently bounded by the finiteness of acquisition devices. Finally, we investigated how to visualize the obtained reconstruction results in an efficient and compelling way: to this end, several algorithms for the computation of stereo disparity are presented. Along with procedures for the automatic extraction of support planes, they have been employed to obtain a faithful, compact, and semantically significant representation of the scene as a collection of textured planes, eventually augmented by depth information encoded in relief maps. Every result has been verified by a rigorous experimental validation, comprising both qualitative and quantitative comparisons.
Doctor of Philosophy
Shape analysis is a well-established tool for processing surfaces. It is often a first step in tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces where the geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first perform reconstruction on such a point cloud prior to shape analysis, if the reconstructed geometry and topology are far from the true surface, this can have an adverse impact on the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction, in order to obtain a better understanding of the types of point clouds for which reconstruction methods have difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting normals of such challenging point clouds by using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and we apply this to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which is used for matching incomplete shapes undergoing a nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
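As background for the diffusion process mentioned above, the following is a generic sketch of diffusion (heat-kernel) distances computed from the spectrum of a kNN-graph Laplacian. This is a standard construction, not the dissertation's medial-ball-based definition; the neighborhood size, kernel bandwidth, and number of eigenmodes are illustrative choices.

```python
# Diffusion distances on a point cloud via a kNN-graph Laplacian:
# d_t(i, j)^2 = sum_k exp(-2 t lam_k) (phi_k(i) - phi_k(j))^2.
# Suitable for modest n; n_modes must be well below n.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def diffusion_distance(points, t=0.1, k=8, n_modes=30):
    """Pairwise diffusion distances on a kNN graph of the points."""
    n = len(points)
    dist, idx = cKDTree(points).query(points, k=k + 1)   # self + k neighbors
    sigma = np.median(dist[:, 1:])                       # kernel bandwidth
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    w = np.exp(-(dist[:, 1:].ravel() / sigma) ** 2)      # Gaussian weights
    W = csr_matrix((w, (rows, cols)), shape=(n, n))
    W = 0.5 * (W + W.T)                                  # symmetrize
    L = laplacian(W, normed=True)
    lam, phi = eigsh(L, k=n_modes, which='SM')           # smallest eigenpairs
    emb = phi * np.exp(-t * lam)                         # diffusion embedding
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2)
```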
Calculating the curvature shape characteristics of the human body from 3D scanner data.
In recent years, there have been significant advances in the development and manufacturing of 3D scanners capable of capturing detailed (external) images of whole human bodies. Such hardware offers the opportunity to collect information that could be used to describe, interpret, and analyse the shape of the human body for a variety of applications where shape information plays a vital role (e.g. apparel sizing and customisation; medical research in fields such as nutrition, obesity/anorexia and perceptive psychology; ergonomics for vehicle and furniture design). However, the representations delivered by such hardware typically consist of unstructured or partially structured point clouds, whereas it would be desirable to have models that allow shape-related information to be more immediately accessible. This thesis describes a method of extracting the differential geometry properties of the body surface from unorganized point cloud datasets. In effect, this is a way of constructing curvature maps that allows the detection on the surface of features that are deformable (such as ridges) rather than reformable under certain transformations. Such features could subsequently be used to interpret the topology of a human body and to enable classification according to its shape, rather than its size (as is currently the standard practice for many of the applications concerned). The background, motivation, and significance of this research are presented in chapter one. Chapter two is a literature review describing previous and current attempts to model 3D objects in general and human bodies in particular, as well as the mathematical and technical issues associated with the modelling. Chapter three presents an overview of the methodology employed throughout the research, the assumptions regarding the data to be processed, and the strategy for evaluating the results of each stage of the methodology. Chapter four describes an algorithm (and some variations) for approximating the local surface geometry around a given point of the input data set by means of a least-squares minimization. The output of this algorithm is a surface patch described in an analytic (implicit) form, which is necessary for the next step described below. The case is made for using implicit surfaces rather than more popular 3D surface representations such as parametric forms or height functions. Chapter five describes the processing needed to calculate curvature-related characteristics for each point of the input surface. This utilises the implicit surface patches generated by the algorithm described in the previous chapter, and enables the construction of a "curvature map" of the original surface, which incorporates rich information such as the principal curvatures, shape indices, and curvature directions. Chapter six describes a family of algorithms for calculating features such as ridges and umbilic points on the surface from the curvature map, in a manner that bypasses the problem of separating a vector field (i.e. the principal curvature directions) across the entire surface of an object. An alternative approach, using the focal surface information, is also considered briefly for comparison. The concluding chapter summarises the results from all steps of the processing and evaluates them in relation to the requirements set in chapter one. Directions for further research are also proposed.
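A minimal sketch of the kind of computation chapters four and five describe, under the assumption of a standard algebraic quadric fit: an implicit quadric F(x, y, z) = 0 is fitted to a point's neighborhood by least squares, and Gaussian, mean, and principal curvatures are then read from the gradient and Hessian of F using the usual implicit-surface formulas. The thesis's exact fitting variants may differ.

```python
# Fit an implicit quadric to a neighborhood, then evaluate curvature
# from grad F and the Hessian of F (standard implicit-surface formulas).
import numpy as np

def fit_quadric(nbrs):
    """Algebraic least-squares fit of F = c0 + c1 x + c2 y + c3 z
    + c4 x^2 + c5 y^2 + c6 z^2 + c7 xy + c8 xz + c9 yz = 0.

    nbrs: (K, 3) neighborhood with K >= 10; minimizes ||M c||, ||c|| = 1.
    """
    x, y, z = nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]
    M = np.column_stack([np.ones_like(x), x, y, z,
                         x * x, y * y, z * z, x * y, x * z, y * z])
    return np.linalg.svd(M, full_matrices=False)[2][-1]

def adjugate(A):
    """Adjugate (transposed cofactor matrix) of a 3x3 matrix."""
    C = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def curvatures(c, p):
    """Gaussian, mean, and principal curvatures of F = 0 at point p."""
    x, y, z = p
    g = np.array([c[1] + 2*c[4]*x + c[7]*y + c[8]*z,     # grad F
                  c[2] + 2*c[5]*y + c[7]*x + c[9]*z,
                  c[3] + 2*c[6]*z + c[8]*x + c[9]*y])
    H = np.array([[2*c[4], c[7],   c[8]],                # Hessian of F
                  [c[7],   2*c[5], c[9]],
                  [c[8],   c[9],   2*c[6]]])
    n2 = g @ g
    K = g @ adjugate(H) @ g / n2**2                      # Gaussian curvature
    Hm = (g @ H @ g - n2 * np.trace(H)) / (2 * n2**1.5)  # mean (sign: grad F)
    disc = np.sqrt(max(Hm * Hm - K, 0.0))
    return K, Hm, Hm + disc, Hm - disc                   # K, H, k1, k2
```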
State of research in automatic as-built modelling
This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001

Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling.

We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting some parts of this research.
A Survey of Surface Reconstruction from Point Clouds
The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction and provide a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.
Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects
The realism of a scene basically depends on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem.

Realistic materials are very important in computer graphics, because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials can be found, comprising very different properties. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them.

For this purpose, various analytical models already exist, but their parameterization remains difficult, as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features. Acquisition results can thus be re-used to easily and quickly create variations of the original material. These variations may be subtle, but also substantial, allowing for a wide spectrum of material appearances.

The approach presented in this thesis is not based on compression, but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well.

Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric comprises features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics would not simplify.
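For readers unfamiliar with the microfacet family referenced above, here is an illustrative Cook-Torrance-style BRDF evaluation (GGX distribution, Smith shadowing, Schlick Fresnel). This is a textbook formulation, not the specific decomposed representation the thesis stores in 2D textures; the roughness remapping and `f0` default are common conventions assumed here.

```python
# Illustrative microfacet BRDF: GGX NDF, Smith-Schlick shadowing,
# Schlick Fresnel, plus a Lambertian diffuse term.
import numpy as np

def ggx_brdf(n, l, v, albedo, roughness, f0=0.04):
    """Evaluate the BRDF for unit normal n, light l, and view v vectors.

    albedo: diffuse color (3,); roughness in (0, 1]; f0: Fresnel at 0 deg.
    """
    h = l + v
    h = h / np.linalg.norm(h)                        # half vector
    nl = max(n @ l, 1e-6)
    nv = max(n @ v, 1e-6)
    nh = max(n @ h, 0.0)
    a2 = roughness ** 4                              # alpha = roughness^2
    D = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)     # GGX NDF
    k = (roughness + 1.0) ** 2 / 8.0                 # Smith-Schlick k
    G = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))
    F = f0 + (1.0 - f0) * (1.0 - max(h @ v, 0.0)) ** 5       # Schlick
    specular = D * G * F / (4.0 * nl * nv)
    return np.asarray(albedo) / np.pi + specular     # diffuse + specular
```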
3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets
This research addresses the processing, management, and representation for further dissemination of the large amount of 3D data that can now be acquired and stored with modern geomatic techniques for 3D metric survey. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets.

Modern geomatic techniques enable the acquisition and storage of large amounts of data with high metric and radiometric accuracy and precision, even in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potential and is considered one of the principal techniques for producing detailed 3D textured models at low cost.

The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation, and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies applied to geometry and materials (i.e. for structural and static tests, for planning restoration activities, or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may increase visitors' emotional engagement, and many recent applications make use of 3D contents (i.e. in virtual or augmented reality applications and through virtual museums).

What all of these applications have to deal with is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (up to tens of GB), which makes them difficult to handle on common and portable devices, to publish on the internet, or to manage in real-time applications. Even though recent advances produce ever more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize, and share such contents, other research aims to define a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that is nevertheless able to satisfy detailed radiometric and geometric requirements.

This thesis is set in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization, a process that aims at generating Low Poly models with very low file sizes, processed starting from the data of High Poly ones, that nevertheless offer a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used.

For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist; a Roman marble statue, preserved in the Civic Archaeological Museum of Torino; and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy). All the test cases were surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models were generated by means of a Structure from Motion and image matching pipeline. On the final High Poly models, different optimization and decimation tools were tested with the aim of evaluating the quality of the information that can be extracted from the final optimized models in comparison to that of the original High Poly ones. This study showed how tools borrowed from Computer Graphics offer great potential in the Cultural Heritage field as well. This approach may meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects, with a variety of sizes and shapes, and to multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys, and so on.
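As a concrete picture of the High Poly to Low Poly step discussed above, here is a minimal sketch using Open3D's quadric-error decimation. The thesis itself evaluated game-industry tools, so this only stands in for the generic operation; the file names and the 5% target are hypothetical.

```python
# High Poly -> Low Poly via quadric-error decimation (Open3D).
# "statue_highpoly.ply" / "statue_lowpoly.ply" are hypothetical paths.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("statue_highpoly.ply")
mesh.compute_vertex_normals()

target = len(mesh.triangles) // 20     # e.g. keep ~5% of the triangles
lowpoly = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
lowpoly.compute_vertex_normals()       # re-derive shading normals

o3d.io.write_triangle_mesh("statue_lowpoly.ply", lowpoly)
print(len(mesh.triangles), "->", len(lowpoly.triangles), "triangles")
```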
- …