Surface Reconstruction from Unorganized Point Cloud Data via Progressive Local Mesh Matching
This thesis presents an integrated triangle mesh processing framework for surface reconstruction based on Delaunay triangulation. It features an innovative multi-level inheritance priority queuing mechanism for seeking and updating the optimal local manifold mesh at each data point. The proposed algorithms aim to generate a watertight triangle mesh interpolating all the input points once all the fully matched local manifold meshes (umbrellas) are found. Compared to existing reconstruction algorithms, the proposed algorithms can automatically reconstruct a watertight interpolating triangle mesh without additional hole-filling or manifold post-processing. The resulting surface effectively recovers the sharp features of the scanned physical object and reliably captures its correct topology and geometric shape. The main Umbrella Facet Matching (UFM) algorithm and its two extended algorithms are documented in detail in the thesis. The UFM algorithm implements the core surface reconstruction framework, driven by a multi-level inheritance priority queuing mechanism that follows the progressive matching results of local meshes. The first extended algorithm presents a new combinatorial normal vector estimation method for point cloud data that depends on local mesh matching results and benefits sharp-feature reconstruction. The second extended algorithm addresses the sharp-feature preservation issue in surface reconstruction through the proposed normal vector cone (NVC) filtering. The effectiveness of these algorithms has been demonstrated using both simulated and real-world point cloud data sets. For each algorithm, multiple case studies are performed and analyzed to validate its performance.
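The priority-driven matching described above can be illustrated with a minimal sketch. All names, the cost values, and the data layout below are hypothetical stand-ins, not the thesis's actual UFM implementation; the point is only to show how a priority queue keyed on (inheritance level, cost) lets the best local candidate for each point win first.

```python
import heapq

def match_umbrellas(candidates):
    """Greedy selection of one local candidate triangle per point.

    `candidates` maps a point id to a list of (cost, triangle) pairs,
    where lower cost means a better local manifold fit. Candidates are
    pushed with their rank (a stand-in for the inheritance level) as the
    primary key, so better-ranked candidates are popped first.
    """
    heap = []
    for pid, cands in candidates.items():
        for level, (cost, tri) in enumerate(sorted(cands)):
            # lower level = higher priority; cost breaks ties
            heapq.heappush(heap, (level, cost, pid, tri))
    matched = {}
    while heap:
        level, cost, pid, tri = heapq.heappop(heap)
        if pid not in matched:  # first (best) candidate for a point wins
            matched[pid] = tri
    return matched
```

In the real algorithm the queue would be re-prioritized as neighboring umbrellas are matched; this sketch only shows the static selection step.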
Multi-View Stereo with Single-View Semantic Mesh Refinement
While 3D reconstruction is a well-established and widely explored research
topic, semantic 3D reconstruction has only recently witnessed an increasing
share of attention from the Computer Vision community. Semantic annotations
in fact allow enforcing strong class-dependent priors, such as planarity for
ground and walls, which can be exploited to refine the reconstruction, often
resulting in non-trivial performance improvements. State-of-the-art methods
propose volumetric approaches to fuse RGB image data with semantic labels;
even if successful, they do not scale well and fail to output high-resolution meshes.
In this paper we propose a novel method to refine both the geometry and the
semantic labeling of a given mesh. We refine the mesh geometry by applying a
variational method that optimizes a composite energy made of a state-of-the-art
pairwise photometric term and a single-view term that models the semantic
consistency between the labels of the 3D mesh and those of the segmented
images. We also update the semantic labeling through a novel Markov Random
Field (MRF) formulation that, together with the classical data and smoothness
terms, takes into account class-specific priors estimated directly from the
annotated mesh. This is in contrast to state-of-the-art methods that are
typically based on handcrafted or learned priors. We are the first, jointly
with the very recent and seminal work of [M. Blaha et al arXiv:1706.08336,
2017], to propose the use of semantics inside a mesh refinement framework.
Differently from [M. Blaha et al arXiv:1706.08336, 2017], which adopts a more
classical pairwise comparison to estimate the flow of the mesh, we apply a
single-view comparison between the semantically annotated image and the current
3D mesh labels; this improves the robustness in case of noisy segmentations. Comment: 3D Reconstruction Meets Semantics, ICCV workshop
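The MRF formulation above combines a data term, a smoothness term, and class-specific priors. A minimal discrete sketch of such an energy, with brute-force minimization over a toy label set, is shown below; the weights, label sets, and helper names are illustrative assumptions, not the paper's actual formulation.

```python
import itertools

def mrf_energy(labels, unary, edges, smooth_w, prior):
    """Energy of a labeling over mesh facets:
    data term + Potts smoothness on adjacent facets + per-class prior.
    A generic stand-in for the paper's MRF, not its exact energy."""
    e = sum(unary[i][l] for i, l in enumerate(labels))          # data
    e += sum(smooth_w for i, j in edges if labels[i] != labels[j])  # smoothness
    e += sum(prior[l] for l in labels)                          # class prior
    return e

def minimize_brute_force(n, num_labels, unary, edges, smooth_w, prior):
    """Exhaustive minimization; only feasible for tiny toy problems."""
    best = min(itertools.product(range(num_labels), repeat=n),
               key=lambda ls: mrf_energy(ls, unary, edges, smooth_w, prior))
    return list(best)
```

Real systems would replace the brute-force search with graph cuts or message passing; the energy structure is what matters here.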
Facetwise Mesh Refinement for Multi-View Stereo
Mesh refinement is a fundamental step for accurate Multi-View Stereo. It
modifies the geometry of an initial manifold mesh to minimize the photometric
error induced in a set of camera pairs. This initial mesh is usually the output
of volumetric 3D reconstruction based on min-cut over Delaunay Triangulations.
Such methods produce a significant number of non-manifold vertices; therefore,
they require a vertex-split step to explicitly repair them. In this paper, we
extend this method to preemptively fix the non-manifold vertices by reasoning
directly on the Delaunay Triangulation and avoid most vertex splits. The main
contribution of this paper addresses the problem of choosing the camera pairs
adopted by the refinement process. We treat the problem as a mesh labeling
process, where each label corresponds to a camera pair. Differently from the
state-of-the-art methods, which use each camera pair to refine all the visible
parts of the mesh, we choose, for each facet, the best pair that enforces both
the overall visibility and coverage. The refinement step is then applied to each
facet using only the selected camera pair. This facetwise refinement helps the
process be applied as evenly as possible. Comment: Accepted as Oral at ICPR 2020
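Choosing, for each facet, the best camera pair can be sketched as a simple labeling loop. The data layout, score function, and names below are hypothetical; in the paper the score would balance visibility and coverage, which are abstracted here into a single precomputed value.

```python
def label_facets(facets, pairs, visible, score):
    """Assign each mesh facet the camera pair that sees it best.

    `visible[(pair, facet)]` says whether the pair sees the facet;
    `score[(pair, facet)]` is a precomputed quality value (a stand-in
    for the paper's visibility/coverage criterion). Facets seen by no
    pair are left unlabeled.
    """
    labeling = {}
    for f in facets:
        cands = [p for p in pairs if visible.get((p, f), False)]
        if cands:
            labeling[f] = max(cands, key=lambda p: score[(p, f)])
    return labeling
```

Each facet is then refined using only its assigned pair, instead of every pair refining every visible facet.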
A Concept For Surface Reconstruction From Digitised Data
Reverse engineering, and in particular the reconstruction of surfaces from
digitized data, is an important task in industry. With the development of new
digitizing technologies such as laser scanning or photogrammetry, real objects
can be measured or digitized quickly and cost-effectively. The result of the
digitizing process is a set of discrete 3D sample points. These sample points
have to be converted into a mathematical, continuous surface description that
can be further processed in different computer applications. The main goal of
this work is to develop a concept for such a computer-aided surface generation
tool, one that supports the new scanning technologies and meets the industry
requirements for such a product.
To this end, the requirements to be met by a surface reconstruction tool are
first determined. This market study was carried out by analysing different
departments of several companies; as a result, a catalogue of requirements is
developed. The number of tasks and applications shows the importance of a fast
and precise computer-aided reconstruction tool in industry. The main result of
the analysis is that many important applications, such as stereolithography and
copy milling, are based on triangular meshes or are able to handle such
polygonal surfaces.
Secondly, the digitizers currently available on the market and used in industry
are analysed. Every scanning system has its strengths and weaknesses. A typical
problem in digitizing is that some areas of a model cannot be digitized due to
occlusion or obstruction. The systems also differ in terms of accuracy,
flexibility, and so on. The analysis of the systems leads to a second catalogue
of requirements and tasks which have to be solved in order to provide a complete
and effective software tool. The analysis also shows that the reconstruction
problem cannot be solved fully automatically due to the many limitations of the
scanning technologies.
Based on these two catalogues of requirements, a concept for a software tool to
process digitized data is developed and presented. The concept is restricted to
the generation of polygonal surfaces. It combines automatic processes, such as
the generation of triangular meshes from digitized data, with user-interactive
tools, such as the reconstruction of sharp corners or the compensation of the
scanning-probe radius in tactile measured data.
The most difficult problem in this reconstruction process is the automatic
generation of a surface from discretely measured sample points. Hence, an
algorithm for generating triangular meshes from digitized data has been
developed. The algorithm is based on the principle of multiple-view combination.
The proposed approach is able to handle large numbers of data points (examples
with up to 20 million data points were processed). Two pre-processing
algorithms, for triangle decimation and surface smoothing, are also presented
and are part of the mesh generation process. Several practical examples, which
show the effectiveness, robustness and reliability of the algorithm, are
presented.
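The surface smoothing step mentioned above can be illustrated with a common generic scheme, Laplacian smoothing, in which each vertex is moved toward the average of its mesh neighbors. This is a standard technique sketched under assumed data structures, not the thesis's own algorithm.

```python
def laplacian_smooth(vertices, neighbors, lam=0.5, iters=10):
    """Iteratively move each vertex toward its neighbor average.

    `vertices` is a list of [x, y, z] positions; `neighbors[i]` lists
    the indices of the vertices adjacent to vertex i in the mesh; `lam`
    controls the step size per iteration.
    """
    vs = [list(v) for v in vertices]
    for _ in range(iters):
        new = []
        for i, v in enumerate(vs):
            nb = neighbors[i]
            if not nb:
                new.append(v)
                continue
            # centroid of the neighboring vertices
            avg = [sum(vs[j][k] for j in nb) / len(nb) for k in range(3)]
            # blend the vertex toward that centroid
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        vs = new
    return vs
```

Pure Laplacian smoothing shrinks the mesh over many iterations; production tools typically counteract this (e.g. Taubin smoothing), which is omitted here for brevity.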
Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping represents a core step to let
vehicles interact with the urban context. Successful mapping algorithms have
been proposed in the last decade, building the map by leveraging data from a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even though most surveying vehicles for mapping are
equipped with cameras and lidar, existing mapping algorithms usually rely on
either images or lidar data; moreover, both image-based and lidar-based systems
often represent the map as a point cloud, while a continuous textured mesh
representation would be useful for visualization and navigation purposes. In
the proposed framework, we combine the accuracy of the 3D lidar data with the
dense appearance information carried by the images, estimating a
visibility-consistent map from the lidar measurements and refining it
photometrically through the acquired images. We evaluate the proposed framework
against the KITTI dataset and we show the performance improvement with respect
to two state-of-the-art urban mapping algorithms and two widely used surface
reconstruction algorithms in Computer Graphics. Comment: accepted at IROS 2017
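Photometric refinement of a mesh against image pairs relies on a photo-consistency measure between corresponding patches. A standard such measure is zero-mean normalized cross-correlation (ZNCC), sketched below as a generic illustration; this is not the paper's implementation, and the flat-list patch format is an assumption.

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two intensity patches.

    Patches are flat lists of equal length. Returns a value in [-1, 1]:
    1 for perfectly correlated patches, -1 for anti-correlated ones,
    and 0 when either patch has zero variance.
    """
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Because ZNCC is invariant to affine intensity changes, it tolerates exposure differences between the images used to refine the mesh.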
From 3D Computer Modelling to Reality and Back
Technological advancements are faster than ever, and at the frontier are applications and mechanisms entwined with 3D computer-aided modelling, such as 3D printing, scanning and extended reality technologies. This work gives a peek behind the veil of mystery surrounding these technologies. We aim to give a brief look into each of the mentioned areas and let the reader experience them practically, mathematically and algorithmically, in the hope of bringing these three so often separated views closer together and to the reader. (Department of Mathematics Education, Faculty of Mathematics and Physics)
Automatic 3D modeling of environments (a sparse approach from images taken by a catadioptric camera)
The automatic 3D modeling of an environment from images is still an active topic in Computer Vision. Standard methods have three steps: moving a camera in the environment to take an image sequence, reconstructing the geometry of the environment, and applying a dense stereo method to obtain a surface model. In the second step, interest points are detected and matched across images, then the camera poses and a sparse cloud of 3D points corresponding to the interest points are simultaneously estimated. In the third step, all image pixels are used to reconstruct a surface of the environment, e.g. by estimating a dense cloud of 3D points. Here we propose to generate a surface directly from the sparse point cloud and the visibility information provided by the geometry reconstruction step. The advantages are low time and space complexities; this is useful, for example, for obtaining compact models of large and complete environments such as a city. To this end, a surface reconstruction method that sculpts a 3D Delaunay triangulation of the reconstructed points is proposed. The visibility information is used to classify the tetrahedra as free space or matter. A surface is then extracted so as to best separate these tetrahedra, using a greedy method and a minority of Steiner points. The 2-manifold constraint is enforced on the surface to allow standard surface post-processing such as denoising and refinement by photo-consistency optimization.
The method is also extended to the incremental case: each time a new key frame is selected in the input video, new 3D points and a new camera pose are estimated, then the reconstructed surface is updated. We study the time complexity in both cases (incremental or not). In the experiments, a low-cost catadioptric camera is used to generate textured 3D models of complete environments including buildings, ground and vegetation. A drawback of our methods is that thin scene components, e.g. tree branches and electric posts, cannot be correctly reconstructed.