A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds
This paper proposes a segmentation-free, automatic and efficient procedure to
detect general geometric quadric forms in point clouds, where clutter and
occlusions are inevitable. Our everyday world is dominated by man-made objects
which are designed using 3D primitives (such as planes, cones, spheres,
cylinders, etc.). These objects are also omnipresent in industrial
environments. This gives rise to the possibility of abstracting 3D scenes
through primitives, thereby positioning these geometric forms as an integral part
of perception and high-level 3D scene understanding.
In contrast to the state of the art, where a tailored algorithm treats each
primitive type separately, we propose to encapsulate all types in a single
robust detection procedure. At the center of our approach lies a closed-form 3D
quadric fit, operating in both primal and dual spaces and requiring as few as 4
oriented points. Around this fit, we design a novel, local null-space voting
strategy to reduce the 4-point case to 3. Voting is coupled with the well-known
RANSAC and makes our algorithm orders of magnitude faster than its conventional
counterparts. This is the first method capable of performing a generic
cross-type multi-object primitive detection in difficult scenes. Results on
synthetic and real datasets support the validity of our method.
Comment: Accepted for publication at CVPR 201
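The detection pipeline above follows the classic hypothesize-and-verify pattern. The sketch below is a generic RANSAC loop, not the paper's implementation; the `fit` and `residual` callbacks are hypothetical placeholders standing in for the paper's quadric fit and its point-to-surface distance:

```python
import numpy as np

def ransac_detect(points, normals, fit, residual,
                  min_samples=4, iters=500, tol=0.01, seed=None):
    """Generic RANSAC: repeatedly sample a minimal set of oriented
    points, fit a candidate primitive, and keep the hypothesis that
    explains the most inliers."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), size=min_samples, replace=False)
        model = fit(points[idx], normals[idx])
        if model is None:
            continue  # degenerate sample, try again
        inliers = residual(model, points) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Reducing the minimal sample size (here via the paper's null-space voting, from 4 points to 3) shrinks the number of iterations needed for a given success probability, which is where the claimed speed-up comes from.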
Generic Primitive Detection in Point Clouds Using Novel Minimal Quadric Fits
We present a novel and effective method for detecting 3D primitives in
cluttered, unorganized point clouds, without auxiliary segmentation or type
specification. We use quadric surfaces to encapsulate the basic building
blocks of our environments - planes, spheres, ellipsoids, cones, and
cylinders - in a unified fashion. Moreover, quadrics allow us to model
higher-degree-of-freedom shapes, such as hyperboloids or paraboloids, which
could be used in non-rigid settings.
We begin by contributing two novel quadric fits targeting 3D point sets that
are endowed with tangent space information. Based upon the idea of aligning the
quadric gradients with the surface normals, our first formulation is exact and
requires as few as four oriented points. The second fit approximates the first,
and reduces the computational effort. We theoretically analyze these fits with
rigor, and give algebraic and geometric arguments. Next, by re-parameterizing
the solution, we devise a new local Hough voting scheme on the null-space
coefficients that is combined with RANSAC, reducing the minimal sample from
four oriented points to three. To the best of our knowledge, this is the
first method capable of performing a generic cross-type multi-object primitive
detection in difficult scenes without segmentation. Our extensive qualitative
and quantitative results show that our method is efficient and flexible, as
well as being accurate.
Comment: Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (T-PAMI). arXiv admin note: substantial text overlap with
arXiv:1803.0719
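The gradient-alignment idea behind these fits can be illustrated as a linear least-squares problem: each oriented point contributes one on-surface constraint and two independent constraints forcing the quadric gradient to be parallel to the normal. This is a minimal sketch of that constraint structure in the 10 homogeneous quadric coefficients, not the paper's exact primal/dual formulation:

```python
import numpy as np

def quadric_rows(p, n):
    """Constraint rows on the 10 coefficients q of
    Q(x,y,z) = a x^2 + b y^2 + c z^2 + d xy + e xz + f yz + g x + h y + i z + j:
    one row for Q(p) = 0 and three rows for cross(n, grad Q(p)) = 0,
    i.e. the gradient is parallel to the oriented normal."""
    x, y, z = p
    point_row = np.array([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])
    # grad Q(p) = G @ q, with G the 3x10 matrix below
    G = np.array([
        [2*x, 0,   0,   y, z, 0, 1, 0, 0, 0],
        [0,   2*y, 0,   x, 0, z, 0, 1, 0, 0],
        [0,   0,   2*z, 0, x, y, 0, 0, 1, 0],
    ])
    # skew-symmetric matrix of n: skew(n) @ v = cross(n, v)
    N = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.vstack([point_row, N @ G])

def fit_quadric(points, normals):
    """Quadric coefficients up to scale: the right singular vector of the
    stacked constraint matrix with the smallest singular value."""
    A = np.vstack([quadric_rows(p, n) for p, n in zip(points, normals)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```

With four generic oriented points this already pins down the quadric up to scale (12 constraint rows against 10 unknowns), which matches the four-point minimal case stated in the abstract.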
A Survey of Methods for Converting Unstructured Data to CSG Models
The goal of this document is to survey existing methods for recovering CSG
representations from unstructured data such as 3D point-clouds or polygon
meshes. We review and discuss related topics such as the segmentation and
fitting of the input data. We cover techniques from solid modeling and CAD for
polyhedron-to-CSG and B-rep-to-CSG conversion. We look at approaches coming
from program synthesis, evolutionary techniques (such as genetic programming or
genetic algorithms), and deep learning methods. Finally, we conclude with a
discussion of techniques for the generation of computer programs representing
solids (not just CSG models) and higher-level representations (such as, for
example, those based on sketch-and-extrude or feature-based operations).
Comment: 29 page
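As background for what is being recovered: a CSG model is a boolean expression tree over solid primitives. With signed distance functions the boolean operations reduce to min/max, as in this generic textbook sketch (not tied to any surveyed method):

```python
import numpy as np

def sphere(center, r):
    """Signed distance to a sphere (negative inside)."""
    c = np.asarray(center, float)
    return lambda p: np.linalg.norm(np.asarray(p, float) - c) - r

def box(center, half):
    """Signed distance to an axis-aligned box with half-extents `half`."""
    c, h = np.asarray(center, float), np.asarray(half, float)
    def d(p):
        q = np.abs(np.asarray(p, float) - c) - h
        return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)
    return d

# CSG boolean operations on signed distance functions
union = lambda a, b: lambda p: min(a(p), b(p))
intersect = lambda a, b: lambda p: max(a(p), b(p))
subtract = lambda a, b: lambda p: max(a(p), -b(p))

# Example CSG tree: a unit box with a spherical hole carved out
solid = subtract(box((0, 0, 0), (1, 1, 1)), sphere((0, 0, 0), 0.5))
```

CSG recovery, as surveyed above, is the inverse problem: given samples of such a solid's surface, find the primitives and the expression tree.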
PCN: Point Completion Network
Shape completion, the problem of estimating the complete geometry of objects
from partial observations, lies at the core of many vision and robotics
applications. In this work, we propose Point Completion Network (PCN), a novel
learning-based approach for shape completion. Unlike existing shape completion
methods, PCN directly operates on raw point clouds without any structural
assumption (e.g. symmetry) or annotation (e.g. semantic class) about the
underlying shape. It features a decoder design that enables the generation of
fine-grained completions while maintaining a small number of parameters. Our
experiments show that PCN produces dense, complete point clouds with realistic
structures in the missing regions on inputs with various levels of
incompleteness and noise, including cars from LiDAR scans in the KITTI dataset.
Comment: 3DV 2018 oral. Honorable mention for the Best Paper award
High-level environment representations for mobile robots
In most robotic applications we are faced with the problem of building
a digital representation of the environment that allows the robot to
autonomously complete its tasks. This internal representation can be
used by the robot to plan a motion trajectory for its mobile base
and/or end-effector. For most man-made environments we do not have
a digital representation or it is inaccurate. Thus, the robot must
have the capability of building it autonomously. This is done by
integrating incoming sensor measurements into an internal data
structure. For this purpose, a common solution is to solve
the Simultaneous Localization and Mapping (SLAM) problem. The map
obtained by solving a SLAM problem is called "metric" and it
describes the geometric structure of the environment. A metric map is
typically made up of low-level primitives (like points or
voxels). This means that even though it represents the shape of the
objects in the robot workspace it lacks the information of which
object a surface belongs to. Having an object-level representation of
the environment has the advantage of augmenting the set of possible
tasks that a robot may accomplish. To this end, in this thesis we
focus on two aspects. We propose a formalism to represent in a uniform
manner 3D scenes consisting of different geometric primitives,
including points, lines and planes. Consequently, we derive a local
registration and a global optimization algorithm that can exploit this
representation for robust estimation. Furthermore, we present a
Semantic Mapping system capable of building an object-based
map that can be used for complex task planning and execution. Our
system exploits effective reconstruction and recognition techniques
that require no a priori information about the environment and can be
used under general conditions.
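One standard least-squares building block for the registration step mentioned above is SVD-based (Kabsch) alignment of corresponding points. The sketch below covers only the point-to-point case; the thesis's formulation also handles lines and planes, which this does not:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i],
    computed in closed form (Kabsch) from Nx3 correspondence arrays."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a full pipeline this closed-form step is typically iterated inside ICP-style data association, with plane and line features adding their own constraint types.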
Three dimensional point cloud compression and decompression using polynomials of degree one
© 2019 by the authors. The availability of cheap depth range sensors has increased the use of an enormous amount of 3D information in hand-held and head-mounted devices. This has directed a large research community to optimize point cloud storage requirements while preserving the original structure of the data with an acceptable attenuation rate. Point cloud compression algorithms have been developed to occupy less storage space by focusing on features such as color, texture, and geometric information. In this work, we propose a novel lossy point cloud compression and decompression algorithm that optimizes storage space requirements by preserving the geometric information of the scene. Segmentation is performed using a region growing segmentation algorithm. Points within the boundaries of the surfaces are discarded, since they can be recovered through polynomial equations of degree one in the decompression phase. We have compared the proposed technique with existing techniques using publicly available datasets of indoor architectural scenes. The results show that the proposed technique outperforms the others in compression rate and RMSE within an acceptable time scale.
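The core geometric idea, recovering a planar segment's points from a degree-one polynomial, can be sketched as follows. This is a hypothetical illustration only; the paper's actual segment encoding and boundary handling are more involved:

```python
import numpy as np

def compress_segment(points):
    """Fit z = a*x + b*y + c over one planar segment by least squares and
    keep only the (x, y) coordinates plus the three plane coefficients."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return points[:, :2], coeffs

def decompress_segment(xy, coeffs):
    """Recover full 3D points by evaluating the stored degree-one polynomial."""
    a, b, c = coeffs
    z = a * xy[:, 0] + b * xy[:, 1] + c
    return np.c_[xy, z]
```

Two caveats this sketch glosses over: a near-vertical plane has no z = f(x, y) form, so a robust implementation would fit in a local segment frame, and a real scheme stores only the segment boundary rather than all (x, y) samples.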
Man-made Surface Structures from Triangulated Point Clouds
Photogrammetry aims at reconstructing shape and dimensions of objects captured with cameras, 3D laser scanners or other spatial acquisition systems. While many acquisition techniques deliver triangulated point clouds with millions of vertices within seconds, the interpretation is usually left to the user. Especially when reconstructing man-made objects, one is interested in the underlying surface structure, which is not inherently present in the data. This includes the geometric shape of the object, e.g. cubical or cylindrical, as well as corresponding surface parameters, e.g. width, height and radius. Applications are manifold and range from industrial production control to architectural on-site measurements to large-scale city models. The goal of this thesis is to automatically derive such surface structures from triangulated 3D point clouds of man-made objects. They are defined as a compound of planar or curved geometric primitives. Model knowledge about typical primitives and relations between adjacent pairs of them should affect the reconstruction positively. After formulating a parametrized model for man-made surface structures, we develop a reconstruction framework with three processing steps: During a fast pre-segmentation exploiting local surface properties we divide the given surface mesh into planar regions. Making use of a model selection scheme based on minimizing the description length, this surface segmentation is free of control parameters and automatically yields an optimal number of segments. A subsequent refinement introduces a set of planar or curved geometric primitives and hierarchically merges adjacent regions based on their joint description length. A global classification and constraint parameter estimation combines the data-driven segmentation with high-level model knowledge. 
Therefore, we represent the surface structure with a graphical model and formulate factors based on likelihood as well as prior knowledge about parameter distributions and class probabilities. We infer the most probable setting of surface and relation classes with belief propagation and estimate an optimal surface parametrization with constraints induced by inter-regional relations. The process is specifically designed to work on noisy data with outliers and a few exceptional freeform regions not describable with geometric primitives. It yields full 3D surface structures with watertightly connected surface primitives of different types. The performance of the proposed framework is experimentally evaluated on various data sets. On small synthetically generated meshes we analyze the accuracy of the estimated surface parameters, the sensitivity w.r.t. various properties of the input data and w.r.t. model assumptions as well as the computational complexity. Additionally we demonstrate the flexibility w.r.t. different acquisition techniques on real data sets. The proposed method turns out to be accurate, reasonably fast and little sensitive to defects in the data or imprecise model assumptions.
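The description-length criterion for merging adjacent regions can be illustrated with a toy two-part MDL score. This is a deliberate simplification (fixed per-parameter cost, Gaussian residual code on unquantized residuals); the thesis's actual scoring is richer:

```python
import numpy as np

def description_length(residuals, n_params, bits_per_param=32):
    """Toy two-part MDL score: model cost plus a Gaussian code length for
    the fit residuals. Smaller is better; values can be negative because
    the residuals are not quantized in this sketch."""
    n = len(residuals)
    var = max(float(np.mean(residuals**2)), 1e-12)
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * var)
    return n_params * bits_per_param + data_bits

def should_merge(res_a, res_b, res_joint, n_params=4):
    """Merge two regions iff one primitive describes their union more
    cheaply than two separate primitives do."""
    separate = (description_length(res_a, n_params)
                + description_length(res_b, n_params))
    joint = description_length(res_joint, n_params)
    return joint < separate
```

Merging saves one set of parameter bits, so coplanar regions (whose joint residuals stay small) merge, while regions needing different primitives (whose joint residuals blow up) stay separate, mirroring the parameter-free segmentation behaviour described above.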
User-guided free-form asset modelling
In this paper a new system for piecewise primitive surface recovery on point
clouds is presented, which allows a novice user to sketch areas of interest in
order to guide the fitting process. The algorithm is demonstrated against a
benchmark technique for autonomous surface fitting and contrasted, with
empirical evidence, against existing literature on user-guided surface
recovery. We conclude that the system improves on the documented literature
both in visual quality when modelling objects composed of piecewise primitive
shapes and in its ability to fill large holes on occluded surfaces using
free-form input.