
    Constructing Delaunay triangulations along space-filling curves

    Incremental construction con BRIO using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used, which is often avoided in implementations. In this paper, we analyze its running time for the more typical case that points are located by walking. We show that in the worst case the algorithm needs quadratic time, but that this can only happen in degenerate cases. We show that the algorithm runs in O(n log n) time under realistic assumptions. Furthermore, we show that it runs in expected linear time for many random point distributions. This research was supported by the Deutsche Forschungsgemeinschaft within the European graduate program 'Combinatorics, Geometry, and Computation' (No. GRK 588/2) and by the Netherlands' Organisation for Scientific Research (NWO) under BRICKS/FOCUS grant number 642.065.503 and project number 639.022.707.
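
    Since the abstract turns on the BRIO insertion order (random rounds, each round sorted along a space-filling curve), here is a minimal sketch of that ordering. It is an illustration, not the paper's code; `sfc_key`, the function mapping a point to its position on the curve, is an assumed helper (one concrete choice is sketched after the next abstract).

    ```python
    import random

    def brio_order(points, sfc_key):
        # Assign each point to a round: a point retreats one round further
        # with probability 1/2, so the final round holds about half the
        # points, the round before it about a quarter, and so on.
        rounds = {}
        for p in points:
            r = 0
            while random.random() < 0.5:
                r += 1
            rounds.setdefault(r, []).append(p)
        # Insert the smallest round first and the largest (r = 0) last,
        # sorting each round along the space-filling curve so consecutive
        # insertions stay spatially close and locating by walking is cheap.
        order = []
        for r in sorted(rounds, reverse=True):
            order.extend(sorted(rounds[r], key=sfc_key))
        return order
    ```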

    Delaunay triangulations on the word RAM: towards a practical worst-case optimal algorithm

    The Delaunay triangulation of n points in the plane can be constructed in o(n log n) time when the coordinates of the points are integers from a restricted range. However, the algorithms known to achieve such running times had not been implemented so far. We explore ways to obtain a practical algorithm for Delaunay triangulations in the plane that runs in linear time for small integers. To this end, we first implement and evaluate variants of BrioDC, an algorithm known to achieve this bound. We find that our implementations of these algorithms are competitive with fast existing algorithms. Second, we implement and evaluate variants of BRIO, an algorithm that runs fast in experiments. Our variants aim to avoid bad worst-case behavior, and our squarified orders indeed provide faster point location.
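
    For points with small integer coordinates, the space-filling-curve key itself is cheap to compute. Below is a minimal sketch of the classic Hilbert-curve index (the standard rotate-and-fold formulation); it stands in for the squarified orders of the paper, whose details the abstract does not spell out.

    ```python
    def hilbert_index(x, y, order=16):
        # Map the integer point (x, y), with 0 <= x, y < 2**order, to its
        # position along the Hilbert curve of the given order.
        d = 0
        s = 1 << (order - 1)
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            # Rotate/reflect the quadrant so the pattern recurses correctly.
            if ry == 0:
                if rx == 1:
                    x = s - 1 - x
                    y = s - 1 - y
                x, y = y, x
            s >>= 1
        return d
    ```

    The `brio_order` sketch above could then be used with `sfc_key=lambda p: hilbert_index(p[0], p[1])`.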

    Optimal randomized incremental construction for guaranteed logarithmic planar point location

    Given a planar map of n segments in which we wish to efficiently locate points, we present the first randomized incremental construction of the well-known trapezoidal-map search structure that requires only expected O(n log n) preprocessing time while deterministically guaranteeing worst-case linear storage space and worst-case logarithmic query time. This settles a long-standing open problem; the best previously known construction time of such a structure, which is based on a directed acyclic graph, the so-called history DAG, with the above worst-case space and query-time guarantees, was expected O(n log^2 n). The result is based on a deeper understanding of the structure of the history DAG, its depth in relation to the length of its longest search path, and its correspondence to the trapezoidal search tree. Our results immediately extend to planar maps induced by finite collections of pairwise interior-disjoint well-behaved curves.
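
    The general shape of such guarantees can be pictured with a verify-and-rebuild loop: a random permutation violates a c log n bound on the longest search path only with small constant probability, so checking after the build and retrying on failure costs only O(1) expected rebuilds. This is only a sketch of that general idea under assumed helpers `build_history_dag` and `longest_search_path`, not the paper's actual, more refined construction.

    ```python
    import math
    import random

    def build_with_guarantees(segments, build_history_dag,
                              longest_search_path, c=4.0):
        # Rebuild with a fresh random insertion order until the search
        # structure meets the deterministic query-time bound; the loop
        # terminates after an expected constant number of iterations.
        n = len(segments)
        bound = c * math.log(n + 2)
        while True:
            perm = random.sample(segments, n)   # fresh random insertion order
            dag = build_history_dag(perm)
            if longest_search_path(dag) <= bound:
                return dag
    ```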

    A numerical algorithm for L_2 semi-discrete optimal transport in 3D

    This paper introduces a numerical algorithm to compute the L_2 optimal transport map between two measures ÎŒ and Μ, where ÎŒ derives from a density ρ defined as a piecewise linear function (supported by a tetrahedral mesh), and where Μ is a sum of Dirac masses. I first give an elementary presentation of some known results on optimal transport and then observe a relation with another problem (optimal sampling). This relation gives simple arguments to study the objective functions that characterize both problems. I then propose a practical algorithm to compute the optimal transport map between a piecewise linear density and a sum of Dirac masses in 3D. In this semi-discrete setting, Aurenhammer et al. [8th Symposium on Computational Geometry conf. proc., ACM (1992)] showed that the optimal transport map is determined by the weights of a power diagram. The optimal weights are computed by minimizing a convex objective function with a quasi-Newton method. To evaluate the value and gradient of this objective function, I propose an efficient and robust algorithm that computes at each iteration the intersection between a power diagram and the tetrahedral mesh that defines the measure ÎŒ. The numerical algorithm is experimented and evaluated on several datasets, with up to hundreds of thousands of tetrahedra and one million Dirac masses.
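
    To make the optimization step concrete, the following sketch drives the power-diagram weights by the gradient the semi-discrete setting provides: the mass ÎŒ(Pow_i(w)) each power cell captures minus the target mass Μ_i of its Dirac site. Here `power_cell_masses` is an assumed stand-in for the paper's power-diagram/tetrahedral-mesh intersection kernel, and plain gradient descent replaces the quasi-Newton method for brevity.

    ```python
    import numpy as np

    def optimal_weights(nu, power_cell_masses, steps=1000, step_size=1.0):
        # nu[i] is the mass of the i-th Dirac site; power_cell_masses(w)
        # must return mu(Pow_i(w)) for every site under the weights w.
        w = np.zeros(len(nu))
        for _ in range(steps):
            grad = power_cell_masses(w) - nu   # gradient of the convex objective
            if np.abs(grad).max() < 1e-9:      # every cell carries its target mass
                break
            # Raising w[i] grows cell i in the power diagram, so stepping
            # against the gradient moves each cell's mass toward nu[i].
            w -= step_size * grad
        return w
    ```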

    Improved Implementation of Point Location in General Two-Dimensional Subdivisions

    We present a major revamp of the point-location data structure for general two-dimensional subdivisions via randomized incremental construction, implemented in CGAL, the Computational Geometry Algorithms Library. We can now guarantee that the constructed directed acyclic graph G is of linear size and provides logarithmic query time. Via the construction of the Voronoi diagram for a given point set S of size n, this also enables nearest-neighbor queries in guaranteed O(log n) time. Another major innovation is the support of general unbounded subdivisions as well as subdivisions of two-dimensional parametric surfaces such as spheres, tori, and cylinders. The implementation is exact, complete, and general, i.e., it can also handle non-linear subdivisions. Like the previous version, the data structure supports modifications of the subdivision, such as insertions and deletions of edges, after the initial preprocessing. A major challenge is to retain the expected O(n log n) preprocessing time while providing the above (deterministic) space and query-time guarantees. We describe an efficient preprocessing algorithm, which explicitly verifies the length L of the longest query path in O(n log n) time. However, instead of using L, our implementation is based on the depth D of G. Although we prove that the worst-case ratio of D and L is Theta(n/log n), we conjecture, based on our experimental results, that this solution achieves expected O(n log n) preprocessing time.
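
    The distinction between the depth D and the longest query path L is easy to state in code: D follows every edge of the DAG, while L only follows edges a point-location query can actually take. A minimal sketch of computing D over an assumed `children` adjacency helper:

    ```python
    def dag_depth(root, children):
        # Longest root-to-leaf path, counted in edges, over *all* edges of
        # the history DAG -- always an upper bound on the longest query
        # path L, and much cheaper to maintain, at the cost of a
        # worst-case ratio of Theta(n / log n) between the two.
        memo = {}
        def height(v):
            if v not in memo:
                kids = children(v)
                memo[v] = 1 + max(map(height, kids)) if kids else 0
            return memo[v]
        return height(root)
    ```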

    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points, and the optimization of the insertion algorithm have allowed us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to insert vertices concurrently. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors: an Intel Core i7, an Intel Xeon Phi, and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.
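
    The partitioning idea is simple to sketch: order the points along the curve and hand each thread one contiguous chunk, so threads insert into mostly disjoint regions (the conflict handling by rescaling the curve and re-partitioning is left out here). `curve_key` is an assumed Moore-curve index function, not the paper's code.

    ```python
    def partition_along_curve(points, curve_key, n_threads):
        # Sort by space-filling-curve index and cut the sequence into
        # n_threads contiguous chunks; the locality of the curve keeps the
        # chunks geometrically separated, which is what lets the threads
        # insert concurrently with little synchronization.
        pts = sorted(points, key=curve_key)
        chunk = -(-len(pts) // n_threads)      # ceiling division
        return [pts[i:i + chunk] for i in range(0, len(pts), chunk)]
    ```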

    The Delaunay triangulation of a random sample of a good sample has linear size

    A good sample is a point set such that any ball of radius Δ contains a constant number of points. The Delaunay triangulation of a good sample is known to have linear size; unfortunately, this is not enough to ensure a good time complexity for the randomized incremental construction of the Delaunay triangulation. In this paper we prove that a random Bernoulli sample of a good sample has a triangulation of linear size. This result allows us to prove that the randomized incremental construction needs expected linear space and expected O(n log n) time.
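
    The sampling step at the heart of the result is one line: keep each point of the good sample independently with probability q. A minimal sketch of splitting the input accordingly (the sample would be triangulated first, the remaining points inserted afterwards):

    ```python
    import random

    def bernoulli_split(points, q=0.5):
        # Each point lands in the sample independently with probability q;
        # the paper shows the Delaunay triangulation of such a sample of a
        # good sample has expected linear size.
        sample, rest = [], []
        for p in points:
            (sample if random.random() < q else rest).append(p)
        return sample, rest
    ```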

    Delaunay triangulation and randomized constructions

    The Delaunay triangulation and the Voronoi diagram are two classic geometric structures in the field of computational geometry. Their success can perhaps be attributed to two main reasons: first, there exist practical, efficient algorithms to construct them; and second, they have an enormous number of useful applications, ranging from meshing and 3D reconstruction to interpolation.