
    Principal component and Voronoi skeleton alternatives for curve reconstruction from noisy point sets

    Surface reconstruction from noisy point samples must take into consideration the stochastic nature of the sample. In other words, geometric algorithms reconstructing the surface or curve should not insist on following each sampled point literally. Instead, they must interpret the sample as a "point cloud" and try to build the surface passing through the best possible (in the statistical sense) geometric locus that represents the sample. This work presents two new methods to find a Piecewise Linear approximation from a Nyquist-compliant stochastic sampling of a quasi-planar C1 curve C(u): R → R3 whose velocity vector never vanishes. One of the methods combines, in an entirely new way, Principal Component Analysis (statistical) and Voronoi-Delaunay (deterministic) approaches: it uses them to calculate the best possible tape-shaped polygon covering the planarised point set, and then approximates the manifold by the medial axis of that polygon. The other method applies Principal Component Analysis to find a direct Piecewise Linear approximation of C(u). A complexity comparison of the two methods is presented, along with a qualitative comparison with previously developed ones. It turns out that the method based solely on Principal Component Analysis is simpler and more robust for non-self-intersecting curves. For self-intersecting curves the Voronoi-Delaunay-based medial axis approach is more robust, at the price of higher computational complexity. An application is presented in the integration of meshes originating in range images of an art piece, reaching the point of complete reconstruction of a unified mesh.
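
    The PCA-based variant lends itself to a compact illustration. The sketch below is not the authors' algorithm: it assumes the noisy samples are roughly ordered along the curve and fits one line segment per local window by Principal Component Analysis; the function names and the window/step parameters are illustrative assumptions.

```python
# Hypothetical sketch of a PCA-based piecewise linear approximation of a
# noisy curve sample. It fits one line segment per local window of points;
# it is NOT the paper's method, only an illustration of the PCA idea.
import numpy as np

def pca_segment(points):
    """Fit one line segment to a 2-D point cluster via PCA."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal direction = first right singular vector of the centred data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    t = centered @ direction              # scalar coordinates along the axis
    return centroid + t.min() * direction, centroid + t.max() * direction

def piecewise_linear(points, window=20, step=10):
    """Crude piecewise linear approximation: one PCA segment per window.

    Assumes `points` are roughly ordered along the curve (e.g. by acquisition
    order), which is a strong simplification of the paper's setting.
    """
    segments = []
    for start in range(0, len(points) - window + 1, step):
        segments.append(pca_segment(points[start:start + window]))
    return segments

if __name__ == "__main__":
    u = np.linspace(0, np.pi, 200)
    noisy = np.c_[np.cos(u), np.sin(u)] + 0.01 * np.random.randn(200, 2)
    for a, b in piecewise_linear(noisy):
        print(np.round(a, 3), "->", np.round(b, 3))
```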

    Hierarchic Voronoi Skeletons

    Robust and time-efficient skeletonization of a (planar) shape, which is connectivity preserving and based on Euclidean metrics, can be achieved by first regularizing the Voronoi diagram (VD) of a shape's boundary points, i.e., by removing noise-sensitive parts of the tessellation, and then by establishing a hierarchic organization of skeleton constituents. Each component of the VD is attributed with a measure of prominence which exhibits the expected invariance under geometric transformations and noise. The second processing step, a hierarchic clustering of skeleton branches, leads to a multiresolution representation of the skeleton, termed the skeleton pyramid.
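
    As a rough illustration of the regularization step, the sketch below (not the authors' implementation) computes the Voronoi diagram of ordered boundary samples, keeps only the edges lying inside the shape, and prunes edges whose generators are close together along the contour, a crude stand-in for the prominence measure; the helper name and threshold are assumptions.

```python
# Illustrative sketch of a regularized Voronoi skeleton: keep interior
# Voronoi edges and prune those whose two generators are near neighbours
# along the boundary (a proxy for a prominence/residual measure).
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

def voronoi_skeleton(boundary_pts, prominence_threshold=20):
    """boundary_pts: (N, 2) points ordered along a closed contour."""
    vor = Voronoi(boundary_pts)
    shape = Path(boundary_pts)            # contour treated as a closed polygon
    n = len(boundary_pts)
    edges = []
    for (p, q), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
        if v1 == -1 or v2 == -1:          # skip unbounded ridges
            continue
        a, b = vor.vertices[v1], vor.vertices[v2]
        if not (shape.contains_point(a) and shape.contains_point(b)):
            continue                      # keep only edges inside the shape
        # Prominence proxy: circular index distance between the generators,
        # standing in for boundary arc length between them.
        prominence = min(abs(p - q), n - abs(p - q))
        if prominence >= prominence_threshold:
            edges.append((a, b))
    return edges

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    contour = np.c_[2 * np.cos(t), np.sin(t)]   # ellipse boundary samples
    print(len(voronoi_skeleton(contour)), "skeleton edges kept")
```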

    Voronoi Tessellation of Points with Integer Coordinates: Time-Efficient Implementation and Online Edge-List Generation

    The Voronoi tessellation in the plane can be computed in a particularly time-efficient manner for generators with integer coordinates, such as those typically acquired from a raster image. The Voronoi tessellation is constructed line by line during a single scan of the input image, simultaneously generating an edge-list data structure (DCEL) suitable for postprocessing by graph traversal algorithms. In contrast to the generic case, it can be shown that the topology of the grid permits the algorithm to run faster on complex scenes. Consequently, in Computer Vision applications, the computation of the Voronoi tessellation represents an attractive alternative to raster-based techniques in terms of both computational complexity and quality of data structures.
    Index terms: Tessellation, Computational Geometry, Delaunay triangulation, Voronoi diagram.
    1. Introduction. The concept of the Voronoi diagram (also termed Voronoi or Dirichlet tessellation) refers to one of the basic closest point probl..
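
    For context, the sketch below only reproduces the input setting, integer-coordinate generators taken from a binary raster image, and hands them to a generic library routine; the paper's single-scan, DCEL-producing algorithm is not implemented here.

```python
# Generators are the integer (row, col) coordinates of foreground pixels in a
# raster image. A generic O(n log n) library call stands in for the paper's
# dedicated scan-line construction, purely to show the data involved.
import numpy as np
from scipy.spatial import Voronoi

image = np.zeros((8, 8), dtype=bool)
image[2, 3] = image[5, 1] = image[6, 6] = image[3, 6] = image[1, 1] = True

generators = np.argwhere(image)        # integer coordinates of the "on" pixels
vor = Voronoi(generators)

print("generators:\n", generators)
print("Voronoi vertices:\n", vor.vertices)
print("ridges between generator pairs:\n", vor.ridge_points)
```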

    Curvature Dependent Skeletonization


    Tolerance-Based Feature Transforms

    Tolerance-based feature transforms (TFTs) assign to each pixel in an image not only the nearest feature pixels on the boundary (origins), but all origins from the minimum distance up to a user-defined tolerance. In this paper, we compare four simple-to-implement methods for computing TFTs on binary images. Of these methods, the Fast Marching TFT and Euclidean TFT are new. The other two extend existing distance transform algorithms. We quantitatively and qualitatively compare all algorithms on speed and accuracy of both distance and origin results. Our analysis is aimed at helping practitioners in the field to choose the right method for given accuracy and performance constraints.
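
    The definition of a TFT can be illustrated directly. The brute-force sketch below is not one of the four methods compared in the paper: for every pixel of a small binary image it computes the minimum distance to the feature set and collects all origins within a user-defined tolerance of that minimum; the function name and the O(pixels × features) approach are illustrative assumptions.

```python
# Naive tolerance-based feature transform (TFT): for each pixel, return its
# minimum Euclidean distance to the feature pixels and every feature pixel
# (origin) whose distance lies within `tolerance` of that minimum.
import numpy as np

def tolerance_feature_transform(feature_mask, tolerance=0.5):
    features = np.argwhere(feature_mask)                   # feature pixel coords
    h, w = feature_mask.shape
    result = {}
    for y in range(h):
        for x in range(w):
            d = np.linalg.norm(features - (y, x), axis=1)  # distances to all features
            dmin = d.min()
            origins = features[d <= dmin + tolerance]      # all origins in tolerance band
            result[(y, x)] = (dmin, origins)
    return result

if __name__ == "__main__":
    mask = np.zeros((5, 5), dtype=bool)
    mask[0, 0] = mask[4, 4] = True                         # two feature pixels
    dmin, origins = tolerance_feature_transform(mask, tolerance=1.0)[(2, 2)]
    print(dmin, origins)    # centre pixel is equidistant, so both are origins
```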