
    QuickCSG: Fast Arbitrary Boolean Combinations of N Solids

    QuickCSG computes the result of general N-polyhedron Boolean expressions without building an intermediate tree of solids. We propose a vertex-centric view of the problem, which simplifies the identification of final geometric contributions and facilitates spatial decomposition. The problem is then cast as a single KD-tree exploration, geared toward the result by early pruning of any region of space that does not contribute to the final surface. We assume strong regularity properties on the input meshes and that they are in general position. This simplifying assumption, combined with our vertex-centric approach, improves the method's speed. Complemented with a task-stealing parallelization, the algorithm achieves breakthrough performance: one to two orders of magnitude speedup over state-of-the-art CPU algorithms on Boolean operations over two to dozens of polyhedra. The algorithm also outperforms GPU implementations with approximate discretizations, while producing an output without redundant facets. Despite the restrictive assumptions on the input, we show the usefulness of QuickCSG for applications with large CSG problems and strong temporal constraints, e.g., modeling for 3D printers, reconstruction from visual hulls, and collision detection.

    Robust segmentation in laser scanning 3D point cloud data

    Segmentation is an important intermediate step in point cloud data processing and understanding. Local saliency features derived from covariance statistics via Principal Component Analysis (PCA) are frequently used for point cloud segmentation. However, PCA is well known to be sensitive to outliers, so segmentation results can be erroneous and unreliable. This paper investigates the problems of surface segmentation in laser scanning point cloud data. We propose a region-growing-based, statistically robust segmentation algorithm that uses a recently introduced fast Minimum Covariance Determinant (MCD) based robust PCA approach. Experiments on several real laser scanning datasets show that classical PCA gives unreliable and non-robust results, whereas the proposed robust PCA based method has an intrinsic ability to deal with noisy data and gives more accurate and robust results for planar and non-planar smooth surface segmentation.
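The robustness argument can be sketched in code. The concentration-step trimming below is a crude, self-contained stand-in for the fast MCD estimator the paper relies on; the 75% subset fraction, iteration count, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def robust_cov(points, h_frac=0.75, n_iter=10):
    """Crude MCD-style covariance: repeatedly refit on the h points with
    the smallest Mahalanobis distance (so-called concentration steps)."""
    n = len(points)
    h = int(h_frac * n)
    idx = np.arange(n)                       # start from all points
    for _ in range(n_iter):
        sub = points[idx]
        mu = sub.mean(axis=0)
        cov = np.cov(sub, rowvar=False)
        diff = points - mu
        # squared Mahalanobis distance of every point to the current fit
        d = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        idx = np.argsort(d)[:h]              # keep the h closest points
    return np.cov(points[idx], rowvar=False)

def normal_from_cov(cov):
    """Surface normal = eigenvector of the smallest eigenvalue."""
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues ascending
    return evecs[:, 0]
```

On a planar neighborhood contaminated with gross outliers, the classical covariance misdirects the smallest-eigenvector normal, while the trimmed estimate stays aligned with the true plane normal.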

    An evolutionary approach to the extraction of object construction trees from 3D point clouds

    To extract a construction tree from a finite set of points sampled on the surface of an object, we present an evolutionary algorithm that evolves set-theoretic expressions made of primitives fitted to the input point set and modeling operations. To keep the trees relatively simple, we use a penalty term in the objective function optimized by the evolutionary algorithm. Experiments show both the successes and the limitations of this approach.
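A toy version of such a penalized objective might look like the following; the Sphere and Union primitives, the signed-distance evaluation, and the alpha weight are all invented for this sketch rather than taken from the paper:

```python
import numpy as np

class Sphere:
    """Primitive leaf: signed distance to a sphere of center c, radius r."""
    def __init__(self, c, r):
        self.c, self.r = np.asarray(c, float), r
    def distance(self, p):
        return np.linalg.norm(p - self.c) - self.r
    def size(self):
        return 1

class Union:
    """Set-theoretic union node (min of the children's signed distances)."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def distance(self, p):
        return min(self.a.distance(p), self.b.distance(p))
    def size(self):
        return 1 + self.a.size() + self.b.size()

def fitness(tree, points, alpha=0.01):
    # geometric error: mean unsigned distance from samples to tree surface
    err = np.mean([abs(tree.distance(p)) for p in points])
    # penalty term: larger trees score worse, keeping expressions simple
    return err + alpha * tree.size()
```

With a fixed alpha, a tree containing a redundant primitive scores strictly worse than the equivalent simpler tree, which is exactly the pressure the penalty term exerts during evolution.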

    Plane-extraction from depth-data using a Gaussian mixture regression model

    We propose a novel algorithm for unsupervised extraction of piecewise planar models from depth data. Among other applications, such models are a good way of enabling autonomous agents (robots, cars, drones, etc.) to effectively perceive their surroundings and to navigate in three dimensions. We propose to do this by fitting the data with a piecewise-linear Gaussian mixture regression model whose components are skewed over planes, making them flat in appearance rather than ellipsoidal; by embedding an outlier-trimming process that is formally incorporated into the proposed expectation-maximization algorithm; and by selectively fusing contiguous, coplanar components. Part of our motivation is to estimate more accurate planes by allowing each model component to make use of all available data through probabilistic clustering. The algorithm is thoroughly evaluated against a standard benchmark and is shown to rank among the best of the existing state-of-the-art methods.
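The outlier-trimming idea can be sketched in a drastically simplified setting: an equal-weight, fixed-variance isotropic mixture (essentially soft k-means) instead of the paper's plane-skewed components. The deterministic initialization, trim fraction, and variance are all illustrative assumptions:

```python
import numpy as np

def trimmed_em(X, trim=0.1, sigma=0.5, n_iter=50):
    """Two-component, fixed-variance Gaussian mixture fit with outlier
    trimming: each iteration, the `trim` fraction of lowest-likelihood
    points is excluded from the M-step, so gross outliers cannot drag
    the means. Initialized from the first and last data point to keep
    the sketch reproducible."""
    mu = X[[0, -1]].astype(float)                      # (2, d) means
    keep = int((1 - trim) * len(X))
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)  # (n, 2) sq. dists
        # E-step: responsibilities under equal-weight isotropic Gaussians
        logp = -d2 / (2 * sigma ** 2)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        resp = p / p.sum(axis=1, keepdims=True)
        # trimming: rank points by (unnormalized) likelihood, drop the tail
        lik = np.exp(logp).sum(axis=1)
        idx = np.argsort(lik)[-keep:]
        # M-step restricted to the retained points only
        w = resp[idx]
        mu = (w.T @ X[idx]) / w.sum(axis=0)[:, None]
    return mu
```

The real model additionally carries full, plane-flattened covariances and a coplanar-fusion step; the sketch only shows how trimming slots into the EM loop.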

    A Review on Shape Engineering and Design Parameterization in Reverse Engineering


    A Survey of Methods for Converting Unstructured Data to CSG Models

    The goal of this document is to survey existing methods for recovering CSG representations from unstructured data such as 3D point clouds or polygon meshes. We review and discuss related topics such as the segmentation and fitting of the input data. We cover techniques from solid modeling and CAD for polyhedron-to-CSG and B-rep-to-CSG conversion. We look at approaches coming from program synthesis, evolutionary techniques (such as genetic programming or genetic algorithms), and deep learning methods. Finally, we conclude with a discussion of techniques for the generation of computer programs representing solids (not just CSG models) and higher-level representations (such as those based on sketch-and-extrude or feature-based operations).

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?

    Automated segmentation, detection and fitting of piping elements from terrestrial LIDAR data

    Since the invention of light detection and ranging (LIDAR) in the early 1960s, it has been adopted for use in numerous applications, from topographical mapping with airborne LIDAR platforms to surveying of urban sites with terrestrial LIDAR systems. Static terrestrial LIDAR has become an especially effective tool for surveying, in some cases replacing traditional techniques such as electronic total stations and GPS methods. Current state-of-the-art LIDAR scanners have very fine spatial resolution, generating precise 3D point cloud data with millimeter accuracy. LIDAR data can therefore provide 3D details of a scene with an unprecedented level of detail. However, automated exploitation of LIDAR data is challenging, due to the non-uniform spatial sampling of the point clouds as well as to the massive volumes of data, which may range from a few million points to hundreds of millions of points depending on the size and complexity of the scene being scanned.

    This dissertation focuses on addressing these challenges to automatically exploit large LIDAR point clouds of piping systems in industrial sites, such as chemical plants, oil refineries, and steel mills. A complete processing chain is proposed in this work, taking raw LIDAR point clouds as input and generating cylinder parameter estimates for pipe segments as output, which can then be used to produce computer-aided design (CAD) models of pipes. The processing chain consists of three stages: (1) segmentation of LIDAR point clouds, (2) detection and identification of piping elements, and (3) cylinder fitting and parameter estimation. The final output of the cylinder fitting stage gives the estimated orientation, position, and radius of each detected pipe element.

    A robust octree-based split-and-merge segmentation algorithm is proposed in this dissertation that can efficiently process LIDAR data. Following octree decomposition of the point cloud, graph theory analysis is used during the splitting process to separate points within each octant into components based on spatial connectivity. A series of connectivity criteria (proximity, orientation, and curvature) are developed for the merging process, which exploits contextual information to effectively merge cylindrical segments into complete pipes and planar segments into complete walls. Furthermore, by conducting surface fitting of segments and analyzing their principal curvatures, the proposed segmentation approach is capable of detecting and identifying the piping segments.

    A novel cylinder fitting technique is proposed to accurately estimate the cylinder parameters for each detected piping segment from the terrestrial LIDAR point cloud. Specifically, the orientation, radius, and position of each piping element must be robustly estimated in the presence of noise. An original formulation has been developed to estimate the cylinder axis orientation using gradient descent optimization of an angular distance cost function. The cost function is based on the observation that the surface normals of points on a cylinder are perpendicular to the cylinder axis. The key contribution of this algorithm is its ability to accurately estimate the cylinder orientation in the presence of noise without requiring a good initial starting point. After estimation of the cylinder's axis orientation, the radius and position are estimated in the 2D space formed by projecting the 3D cylinder point cloud onto the plane perpendicular to the cylinder's axis. With these high-quality approximations, a least squares estimation in 3D is made for the final cylinder parameters.

    Following cylinder fitting, the estimated parameters of each detected piping segment are used to generate a CAD model of the piping system. The algorithms and techniques in this dissertation form a complete processing chain that can automatically exploit large LIDAR point clouds of piping systems and generate CAD models.
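The axis-estimation step can be sketched as projected gradient descent on the unit sphere. The squared-dot-product cost below is a simplified stand-in for the dissertation's angular-distance cost function (both vanish exactly when every surface normal is perpendicular to the axis); the step size, iteration count, and random initialization are illustrative:

```python
import numpy as np

def estimate_axis(normals, lr=0.5, n_iter=200):
    """Estimate a cylinder's axis from surface normals by minimizing the
    mean squared dot product between the (unit) normals and the axis,
    using gradient descent projected back onto the unit sphere."""
    a = np.random.default_rng(0).normal(size=3)
    a /= np.linalg.norm(a)                  # random unit starting axis
    N = np.asarray(normals, float)
    for _ in range(n_iter):
        d = N @ a                           # dot products, shape (n,)
        grad = 2 * N.T @ d / len(N)         # gradient of mean(d**2)
        grad -= (grad @ a) * a              # project onto sphere's tangent
        a -= lr * grad
        a /= np.linalg.norm(a)              # stay on the unit sphere
    return a
```

Because the cost depends only on the axis direction, the estimate is independent of the cylinder's position and radius, which the abstract recovers afterwards via a 2D fit in the plane perpendicular to the axis.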

    Least Squares Fitting of Analytic Primitives on a GPU

    Metrology systems take coordinate information directly from the surface of a manufactured part and generate millions of (X, Y, Z) data points. The inspection process often involves fitting analytic primitives such as spheres, cones, tori, cylinders, and planes to the points that represent an object with the corresponding shape. Typically, a least squares fit of the shape parameters to the point set is performed. The least squares fit attempts to minimize the sum of the squares of the distances between the points and the primitive. The objective function, however, cannot be solved in closed form, and numerical minimization techniques are required to obtain the solution. As applied to primitive fitting, these techniques entail iteratively solving large systems of linear equations, generally involving large floating point numbers, until the solution has converged. The current problem in-process metrology faces is the large computational time required to analyze these millions of streaming data points. This research addresses the bottleneck using the Graphics Processing Unit (GPU), primarily developed by the computer gaming industry, to optimize the computations. The explosive growth in the programming capabilities and raw processing power of GPUs has opened up new avenues for their use in non-graphics applications. The combination of a large stream of data and the need for 3D vector operations makes the primitive shape fitting algorithms excellent candidates for processing on a GPU. The work presented in this research investigates the use of the parallel processing capabilities of the GPU to expedite specific computations involved in the fitting procedure. The least squares fit algorithms for the circle, sphere, cylinder, plane, cone, and torus have been implemented on the GPU using NVIDIA's Compute Unified Device Architecture (CUDA) and benchmarked against CPU implementations written in C++. The Gauss-Newton minimization algorithm is used to obtain the best-fit parameters for each of the aforementioned primitives. The computation times for the two implementations are compared: the GPU is about 3-4 times faster than the CPU for a relatively simple geometry such as the circle, while the factor scales to about 14 for the more complex torus.
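For the simplest of those primitives, the circle, the Gauss-Newton iteration can be sketched as follows; NumPy on the CPU stands in for the CUDA kernels, and the centroid-based initialization is a common choice rather than the thesis's exact procedure:

```python
import numpy as np

def fit_circle_gn(pts, n_iter=20):
    """Gauss-Newton least-squares circle fit. Parameters p = (cx, cy, r);
    residuals are the signed distances from the points to the circle."""
    c = pts.mean(axis=0)                          # initial center: centroid
    r = np.linalg.norm(pts - c, axis=1).mean()    # initial radius
    p = np.array([c[0], c[1], r])
    for _ in range(n_iter):
        d = pts - p[:2]
        dist = np.linalg.norm(d, axis=1)
        res = dist - p[2]                         # per-point residuals
        J = np.column_stack([-d[:, 0] / dist,     # d res / d cx
                             -d[:, 1] / dist,     # d res / d cy
                             -np.ones(len(pts))]) # d res / d r
        p += np.linalg.lstsq(J, -res, rcond=None)[0]  # Gauss-Newton step
    return p
```

Each iteration linearizes the distance residuals around the current parameters and solves the resulting linear least squares system for the update; evaluating the residuals and Jacobian rows over millions of points is the part a GPU implementation parallelizes.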