Point Cloud Structural Parts Extraction based on Segmentation Energy Minimization
In this work we consider 3D point sets, which in a typical setting represent unorganized point clouds. Segmenting these point sets first requires singling out the structural components of the unknown surface that the point cloud discretely approximates. Structural components, in turn, are surface patches approximating parts of elementary geometric structures, such as planes, ellipsoids and spheres. Our approach is based on level set methods, which compute the moving front of the surface and trace the interfaces between its different parts. Level set methods are widely recognized as among the most efficient methods for segmenting both 2D images and 3D medical images, and level set methods for 3D segmentation have recently received increasing interest. We contribute a novel approach for raw point sets: based on the motion and distance functions of the level set, we introduce four energy minimization models for segmentation, each built on a distance function specified by geometric features. Finally, we evaluate the proposed algorithm on point sets simulating unorganized point clouds.
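A minimal sketch of the distance-function ingredient, in Python: each point of an unorganized cloud is labelled with the geometric primitive (here a plane or a sphere) whose surface it lies closest to, i.e. the assignment minimising a simple distance energy. All names below are invented for illustration; the paper's actual level set front evolution is not reproduced.

```python
# Toy per-primitive distance minimisation; NOT the paper's level set scheme.
import numpy as np

def plane_distance(points, normal, offset):
    """Unsigned distance of each point to the plane n . x = offset."""
    n = normal / np.linalg.norm(normal)
    return np.abs(points @ n - offset)

def sphere_distance(points, center, radius):
    """Unsigned distance of each point to the surface of a sphere."""
    return np.abs(np.linalg.norm(points - center, axis=1) - radius)

def segment(points, distance_fns):
    """Label each point with the index of the primitive of minimal distance."""
    energies = np.stack([d(points) for d in distance_fns], axis=1)
    return np.argmin(energies, axis=1)

# Usage: an unorganized cloud sampled from a plane patch and a unit sphere.
rng = np.random.default_rng(0)
plane_pts = np.column_stack([rng.uniform(-2, 2, size=(100, 2)), np.zeros(100)])
v = rng.normal(size=(100, 3))
sphere_pts = v / np.linalg.norm(v, axis=1, keepdims=True)
cloud = np.vstack([plane_pts, sphere_pts])
labels = segment(cloud, [
    lambda p: plane_distance(p, np.array([0.0, 0.0, 1.0]), 0.0),
    lambda p: sphere_distance(p, np.zeros(3), 1.0),
])
```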
Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure provenance in model data handling and initialisation, and to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, treating it much like a numerical simulation model problem in its own right. It introduces a generalised, extensible, self-documenting approach that carefully and fully describes the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural-language-based abstractions, that enables full accounts of provenance, sharing and distribution. Together with this description, a generalised, consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust, repeatable, quick to draft, rigorously verified and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics.
Comment: 18 pages, 10 figures, 1 table. Submitted for publication and under review
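To make the idea of a fully constrained, self-documenting domain description concrete, here is a hypothetical sketch in Python. Every field name below is invented for illustration and does not reflect Shingle's actual schema; the point is that the description is declarative, complete, and checkable before any mesh is generated.

```python
# Hypothetical declarative spec for a domain discretisation; field names
# are illustrative assumptions, not Shingle's real schema.
spec = {
    "domain": {
        "source": "coastline dataset",         # boundary source data
        "region": {"lat": [-60.0, -40.0], "lon": [-70.0, -30.0]},
    },
    "resolution": {
        "minimum_m": 1_000.0,                  # finest edge length allowed
        "maximum_m": 50_000.0,                 # coarsest edge length allowed
        "anisotropic": True,
    },
    "verification": {
        "expected_element_count": (1e4, 1e6),  # acceptance range, not exact
        "check_boundary_conformity": True,
    },
}

def validate(spec):
    """Check the constraint description itself, so errors surface at the
    description stage rather than after an expensive meshing run."""
    res = spec["resolution"]
    assert 0 < res["minimum_m"] <= res["maximum_m"], "resolution bounds inverted"
    lo, hi = spec["verification"]["expected_element_count"]
    assert lo < hi, "element-count acceptance range is empty"
    return True

validate(spec)
```

Because such a description is plain data, it can be versioned, shared and diffed, which is what underpins the provenance and reproducibility claims above.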
Point Cloud Framework for Rendering 3D Models Using Google Tango
This project seeks to demonstrate the feasibility of point cloud meshing for capturing and modeling three-dimensional objects on consumer smartphones and tablets. Traditional methods of capturing objects require hundreds of images, are very slow, and consume a large amount of cellular data for the average consumer. As hardware manufacturers provide the tools to capture point cloud data, software developers need a starting point for capturing and meshing point clouds to create 3D models. The project uses Google's Tango computer vision library for Android to capture point clouds on devices with depth-sensing hardware. The point clouds are combined and meshed as models for use in 3D rendering projects. We expect our results to be embraced by the Android market because capturing point clouds is fast and does not carry a large data footprint.
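The capture step is specific to Tango hardware, but the combine-and-mesh step can be illustrated independently. The sketch below uses the Open3D library as a stand-in and is an assumption-laden illustration, not the project's actual pipeline.

```python
# Sketch: merge several captured point clouds and reconstruct a mesh.
# Uses Open3D as a stand-in; the project itself targets Tango on Android.
import numpy as np
import open3d as o3d

def merge_and_mesh(scans, voxel_size=0.01):
    """Merge a list of (N, 3) point arrays and reconstruct a triangle mesh.

    Assumes the scans are already expressed in a common coordinate frame,
    e.g. via the device's pose estimates.
    """
    merged = o3d.geometry.PointCloud()
    merged.points = o3d.utility.Vector3dVector(np.vstack(scans))
    # Downsample duplicated points where successive scans overlap.
    merged = merged.voxel_down_sample(voxel_size)
    # Poisson reconstruction requires oriented normals.
    merged.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        merged, depth=8)
    return mesh
```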
Cross-calibration of Time-of-flight and Colour Cameras
Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for this multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system, to a range of automatic scene-interpretation problems, are discussed.
Comment: 18 pages, 12 figures, 3 tables
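As a rough illustration of the quantity such an evaluation decomposes, the sketch below computes a per-camera reprojection error with OpenCV. The paper's projective (3D homography) alignment and its calibration/sensor decomposition are not reproduced here; this shows only the basic error measurement they build on.

```python
# Sketch: RMS reprojection error for one camera, given its calibration.
import numpy as np
import cv2

def reprojection_rmse(object_pts, image_pts, K, dist, rvec, tvec):
    """RMS distance between detected image points and reprojected 3D points.

    object_pts: (N, 3) calibration-target points in world coordinates.
    image_pts:  (N, 2) corresponding detections in the camera image.
    K, dist:    the camera's intrinsic matrix and distortion coefficients.
    rvec, tvec: its extrinsic pose (Rodrigues rotation and translation).
    """
    projected, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - image_pts
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))
```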