A Bayesian Approach to Manifold Topology Reconstruction
In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, possibly undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximate most-likely mesh, in a Bayesian sense, from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality when additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that statistics-based manifold reconstruction is feasible and still yields plausible results in situations where sampling conditions are violated.
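The integer-programming step can be illustrated on a tiny 1-manifold instance. A minimal sketch, assuming edge length as the cost in place of the paper's statistical objective, with the binary edge indicators brute-forced rather than handed to an LP/IP solver:

```python
import itertools

import numpy as np

# Toy stand-in for the paper's LP/IP step: reconstruct a closed 1-manifold
# (a polygon) over a handful of 2D samples. Edge length serves as a proxy
# for the negative log-likelihood (an assumption -- the paper uses analytic
# or learned statistical priors instead).
rng = np.random.default_rng(1)
angles = np.sort(rng.uniform(0, 2 * np.pi, 6))
pts = np.c_[np.cos(angles), np.sin(angles)]

edges = list(itertools.combinations(range(len(pts)), 2))
length = {e: float(np.linalg.norm(pts[e[0]] - pts[e[1]])) for e in edges}

best, best_cost = None, float("inf")
# A closed curve over n points uses exactly n edges, every vertex with degree 2.
for subset in itertools.combinations(edges, len(pts)):
    deg = [0] * len(pts)
    for i, j in subset:
        deg[i] += 1
        deg[j] += 1
    if all(d == 2 for d in deg):
        c = sum(length[e] for e in subset)
        if c < best_cost:
            best, best_cost = subset, c
```

With a real ILP solver over the same indicator variables and degree constraints, the search scales far beyond six points; the exhaustive loop here only makes the combinatorial structure visible.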
Iterative Poisson Surface Reconstruction (iPSR) for Unoriented Points
Poisson surface reconstruction (PSR) remains a popular technique for
reconstructing watertight surfaces from 3D point samples thanks to its
efficiency, simplicity, and robustness. Yet, the existing PSR method and
subsequent variants work only for oriented points. This paper shows that
an improved PSR, called iPSR, can completely eliminate the
requirement of point normals and proceed in an iterative manner. In each
iteration, iPSR takes as input point samples with normals directly computed
from the surface obtained in the preceding iteration, and then generates a new
surface with better quality. Extensive quantitative evaluation confirms that
the new iPSR algorithm converges in 5-30 iterations even with randomly
initialized normals. If initialized with a simple visibility-based heuristic,
iPSR can further reduce the number of iterations. We conduct comprehensive
comparisons with PSR and other powerful implicit-function-based methods.
Finally, we confirm iPSR's effectiveness and scalability on the AIM@SHAPE
dataset and challenging (indoor and outdoor) scenes. Code and data for this
paper are at https://github.com/houfei0801/ipsr
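The iterate-until-orientation-settles loop can be sketched with a toy reconstruction step. A minimal sketch, assuming a least-squares circle fit in place of the screened Poisson solve; the toy fit ignores the normals (unlike real PSR), so this illustrates only the control flow of iPSR:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns (center, radius)."""
    A = np.c_[2 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    (cx, cy, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), float(np.sqrt(t + cx ** 2 + cy ** 2))

def normals_from_surface(center, points):
    """Outward normals of the fitted circle at each sample point."""
    v = points - center
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
pts = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.01, (100, 2))

# Randomly initialized per-point normals, as in the paper's hardest setting.
init = rng.normal(size=pts.shape)
normals = init / np.linalg.norm(init, axis=1, keepdims=True)

for it in range(30):
    center, radius = fit_circle(pts)                 # "reconstruction" step
    new_normals = normals_from_surface(center, pts)  # normals from new surface
    change = np.abs(1.0 - (new_normals * normals).sum(axis=1)).max()
    normals = new_normals
    if change < 1e-6:                                # orientation has settled
        break
```

The toy converges almost immediately because the fit does not depend on the normals; in real iPSR each Poisson solve does, which is why the paper observes 5-30 iterations.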
A framework for hull form reverse engineering and geometry integration into numerical simulations
The thesis presents a ship-hull-form-specific reverse engineering and CAD integration framework. The reverse engineering part proposes three alternative reconstruction approaches, namely curve network, direct surface fitting, and triangulated surface reconstruction. The CAD integration part includes surface healing, region identification, and domain preparation strategies, which are used to adapt the CAD model to downstream application requirements. In general, the developed framework bridges point clouds and CAD models obtained from IGES and STL files into downstream applications.
Multiple 2D self organising map network for surface reconstruction of 3D unstructured data
Surface reconstruction is a challenging task in reverse engineering because it must represent a surface that is faithful to the original object based on the data obtained. The data obtained are mostly unstructured, so there is not enough information and an incorrect surface may be produced. Therefore, the data should be reorganised by finding the correct topology with minimum surface error. Previous studies showed that the Self Organising Map (SOM) model, the conventional surface approximation approach with Non-Uniform Rational B-Splines (NURBS) surfaces, and optimisation methods such as the Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimisation (PSO) are widely implemented in solving the surface reconstruction problem. However, these models, approaches and optimisation methods still suffer from unstructured-data and accuracy problems. Therefore, the aims of this research are to propose a Cube SOM (CSOM) model with multiple 2D SOM networks for organising the unstructured surface data, and to propose an optimised surface approximation approach for generating the NURBS surfaces. GA, DE and PSO are implemented to minimise the surface error by adjusting the NURBS control points. To test and validate the proposed model and approach, four primitive object datasets and one medical image dataset are used. To evaluate their performance, three measurements are used: Average Quantisation Error (AQE) and Number Of Vertices (NOV) for the CSOM model, and surface error for the proposed optimised surface approximation approach. The AQE of the CSOM model improved by 64% and 66% compared to 2D and 3D SOM respectively. The NOV of the CSOM model was reduced from 8000 to 2168 compared to 3D SOM. The surface error of the optimised surface approximation approach improved by 7% compared to the conventional approach.
The proposed CSOM model and optimised surface approximation approach successfully reconstructed the surfaces of all five datasets, with better performance on the three measurements used in the evaluation.
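The SOM side of such a pipeline can be illustrated in miniature. A minimal sketch, assuming a 1-D ring of units in place of the thesis's cube of 2-D SOM sheets, trained on noisy circle samples; the AQE function is the same average quantisation error used as a measurement above:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
data = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.02, (500, 2))

n_units, n_steps = 20, 2000
weights = rng.uniform(-1, 1, (n_units, 2))      # random initial codebook

def aqe(w, x):
    """Average Quantisation Error: mean distance to the best matching unit."""
    d = np.linalg.norm(x[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()

aqe_before = aqe(weights, data)
idx = np.arange(n_units)
for step in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    lr = 0.5 * (1 - step / n_steps)             # decaying learning rate
    sigma = max(3.0 * (1 - step / n_steps), 0.5)
    # neighbourhood distance on the ring topology, not in data space
    ring_d = np.minimum(np.abs(idx - bmu), n_units - np.abs(idx - bmu))
    h = np.exp(-ring_d ** 2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)
aqe_after = aqe(weights, data)
```

Moving the best matching unit together with its grid neighbours is what imposes a topology on the unstructured samples; the thesis's CSOM and the GA/DE/PSO refinement of NURBS control points build on the same principle at larger scale.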
VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS
This dissertation addresses the difficulties of semantic segmentation when dealing with an extensive collection of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as the advanced scanning techniques that are able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining.
Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location, and orientation.
First, my approach categorizes visual items from scale-invariant image primitives with similar appearance using a suite of polynomial-time algorithms that have been designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of image patterns are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations.
New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. To address semantic segmentation and reconstruction of this data, using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that first uses classification and segmentation methods to identify different object categories and then applies category-specific reconstruction techniques to create visually pleasing and complete scene models.
Active planning for underwater inspection and the benefit of adaptivity
We discuss the problem of inspecting an underwater structure, such as a submerged ship hull, with an autonomous underwater vehicle (AUV). Unlike a large body of prior work, we focus on planning the views of the AUV to improve the quality of the inspection, rather than maximizing the accuracy of a given data stream. We formulate the inspection planning problem as an extension to Bayesian active learning, and we show connections to recent theoretical guarantees in this area. We rigorously analyze the benefit of adaptive re-planning for such problems, and we prove that the potential benefit of adaptivity can be reduced from an exponential to a constant factor by changing the problem from cost minimization with a constraint on information gain to variance reduction with a constraint on cost. Such analysis allows the use of robust, non-adaptive planning algorithms that perform competitively with adaptive algorithms. Based on our analysis, we propose a method for constructing 3D meshes from sonar-derived point clouds, and we introduce uncertainty modeling through non-parametric Bayesian regression. Finally, we demonstrate the benefit of active inspection planning using sonar data from ship hull inspections with the Bluefin-MIT Hovering AUV.
(United States Office of Naval Research grants N00014-09-1-0700 and N00014-07-1-00738; National Science Foundation (U.S.) grants 0831728, CCR-0120778, and CNS-1035866.)
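The variance-reduction-with-a-cost-constraint formulation can be sketched with a toy Gaussian-process surface model and a non-adaptive greedy planner. A minimal sketch; the RBF kernel, per-view costs, and budget below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 40)[:, None]         # candidate view locations
K = np.exp(-0.5 * ((X - X.T) / 1.5) ** 2)       # GP prior covariance (RBF)
noise = 0.1
cost = rng.uniform(1.0, 3.0, len(X))            # per-view inspection cost
budget = 8.0

def posterior_trace(S):
    """Total posterior variance after observing the views in S."""
    if not S:
        return float(np.trace(K))
    Kss = K[np.ix_(S, S)] + noise * np.eye(len(S))
    Kxs = K[:, S]
    return float(np.trace(K - Kxs @ np.linalg.solve(Kss, Kxs.T)))

chosen, spent = [], 0.0
while True:
    base = posterior_trace(chosen)
    best_i, best_gain = None, 0.0
    for i in range(len(X)):
        if i in chosen or spent + cost[i] > budget:
            continue
        # variance reduction per unit cost, greedily maximized
        gain = (base - posterior_trace(chosen + [i])) / cost[i]
        if gain > best_gain:
            best_i, best_gain = i, gain
    if best_i is None:
        break
    chosen.append(best_i)
    spent += cost[best_i]
```

Because the plan is fixed before any data arrive, this is the non-adaptive setting; the paper's analysis is what justifies expecting such a planner to stay within a constant factor of an adaptive one.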
Point Cloud Data Cleaning and Refining for 3D As-Built Modeling of Built Infrastructure
Spatial sensing of built infrastructure is now a common practice within the AEC industry and results are commonly encapsulated in the form of dense point cloud data (PCD). PCD of built infrastructure might consist of millions of spatial points and it is well known that processing all these points is neither necessary nor computationally feasible. In addition, due to several reasons including hardware and/or software deficiencies, there might be several outliers that need to be removed from the PCD before further processing. As a result, cleaning and refining PCD is a paramount step in the process of spatial sensing and object-oriented modeling of built infrastructure scenes. This research work entails two parts: The first part provides an in-depth literature review on current states of practice and research on the concept of PCD cleaning. The second part presents the authors' suggested framework for cleaning and refining PCD of built infrastructure. This prototype mainly consists of three major components: (1) removing outliers; (2) filling holes and gaps on surfaces of PCD; and (3) balancing the density of different areas of PCD based on a plane recognition approach. Several case studies are presented to demonstrate the efficiency of the proposed framework. This is the author accepted manuscript. The final version is available from ASCE via https://doi.org/10.1061/9780784479827.093
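Component (1) can be sketched with the widely used statistical-outlier-removal rule: drop points whose mean distance to their k nearest neighbours exceeds the global mean by a few standard deviations. A minimal sketch; k and the 2-sigma cutoff are illustrative choices, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# A dense planar patch with a few injected artefact points far off the surface.
plane = np.c_[rng.uniform(0, 5, (300, 2)), rng.normal(0, 0.01, 300)]
artefacts = rng.uniform(-5, 10, (10, 3))
pcd = np.vstack([plane, artefacts])

k = 8
d = np.linalg.norm(pcd[:, None, :] - pcd[None, :, :], axis=2)
knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)   # skip self-distance
keep = knn_mean < knn_mean.mean() + 2.0 * knn_mean.std()
cleaned = pcd[keep]
```

The dense-patch points have uniformly small neighbour distances and survive, while the isolated artefacts are flagged; production pipelines use spatial indexing instead of the quadratic distance matrix shown here.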