
    Load-Balancing for Parallel Delaunay Triangulations

    Computing the Delaunay triangulation (DT) of a given point set in $\mathbb{R}^D$ is one of the fundamental operations in computational geometry. Recently, Funke and Sanders (2017) presented a divide-and-conquer DT algorithm that merges two partial triangulations by re-triangulating a small subset of their vertices - the border vertices - and combining the three triangulations efficiently via parallel hash table lookups. The input point division should therefore yield roughly equal-sized partitions for good load-balancing and also result in a small number of border vertices for fast merging. In this paper, we present a novel divide-step based on partitioning the triangulation of a small sample of the input points. In experiments on synthetic and real-world data sets, we achieve nearly perfectly balanced partitions and small border triangulations. This almost cuts the running time in half compared to non-data-sensitive division schemes on inputs exhibiting an exploitable underlying structure. Comment: Short version submitted to EuroPar 201
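
    A minimal sketch of such a sample-based divide step, assuming NumPy and SciPy; the function name, the greedy balancing heuristic, and the fallback for points outside the sample hull are illustrative choices, not the authors' implementation (which partitions the sample triangulation itself to obtain contiguous regions).

        # Illustrative sketch (not the authors' code) of a sample-based divide step
        # for parallel Delaunay triangulation: triangulate a small sample, locate
        # every input point in that coarse triangulation, and group the sample
        # simplices into roughly equal-sized partitions.
        import numpy as np
        from scipy.spatial import Delaunay

        def sample_based_partition(points, num_parts, sample_size=256, seed=0):
            rng = np.random.default_rng(seed)
            sample = points[rng.choice(len(points), sample_size, replace=False)]
            coarse = Delaunay(sample)                  # triangulate the sample
            simplex_of = coarse.find_simplex(points)   # locate each input point
            simplex_of[simplex_of < 0] = 0             # crude fallback for points outside the sample hull
            counts = np.bincount(simplex_of, minlength=len(coarse.simplices))

            # Greedily assign sample simplices to the currently lightest partition,
            # heaviest first, to balance the number of points per part.  A real
            # implementation would partition the sample triangulation's adjacency
            # graph so that partitions stay spatially contiguous.
            part_of_simplex = np.empty(len(counts), dtype=int)
            load = np.zeros(num_parts)
            for s in np.argsort(-counts):
                p = int(np.argmin(load))
                part_of_simplex[s] = p
                load[p] += counts[s]
            return part_of_simplex[simplex_of]         # partition id per input point

        points = np.random.rand(100_000, 2)
        parts = sample_based_partition(points, num_parts=8)
        print(np.bincount(parts))                      # roughly equal partition sizes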

    Conforming restricted Delaunay mesh generation for piecewise smooth complexes

    A Frontal-Delaunay refinement algorithm for mesh generation in piecewise smooth domains is described. Built using a restricted Delaunay framework, this new algorithm combines a number of novel features, including: (i) an unweighted, conforming restricted Delaunay representation for domains specified as a (non-manifold) collection of piecewise smooth surface patches and curve segments, (ii) a protection strategy for domains containing curve segments that subtend sharply acute angles, and (iii) a new class of off-centre refinement rules designed to achieve high-quality point-placement along embedded curve features. Experimental comparisons show that the new Frontal-Delaunay algorithm outperforms a classical (statically weighted) restricted Delaunay-refinement technique for a number of three-dimensional benchmark problems. Comment: To appear at the 25th International Meshing Roundtable
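
    For context, the sketch below shows the classical radius-edge quality test that Delaunay-refinement methods use to select elements for refinement. It is a simplified 2D illustration using SciPy, not the paper's 3D Frontal-Delaunay algorithm, and the sqrt(2) threshold is the standard Ruppert-style bound.

        # Sketch of the radius-edge quality test used by Delaunay-refinement
        # methods: a triangle is "bad" if circumradius / shortest edge exceeds a
        # threshold.  2D only; the paper's algorithm works on 3D piecewise smooth
        # complexes and adds off-centre point placement along curve features.
        import numpy as np
        from scipy.spatial import Delaunay

        def radius_edge_ratio(a, b, c):
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(c - a)
            lc = np.linalg.norm(a - b)
            # Triangle area via the 2x2 determinant of two edge vectors.
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            circumradius = (la * lb * lc) / (4.0 * area)   # R = abc / (4 * area)
            return circumradius / min(la, lb, lc)

        points = np.random.rand(200, 2)
        tri = Delaunay(points)
        bad = [s for s in tri.simplices
               if radius_edge_ratio(*points[s]) > np.sqrt(2.0)]   # classic bound
        print(f"{len(bad)} of {len(tri.simplices)} triangles exceed the quality bound")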

    An Output-sensitive Algorithm for Computing Projections of Resultant Polytopes

    We develop an incremental algorithm to compute the Newton polytope of the resultant, aka the resultant polytope, or its projection along a given direction. The resultant is fundamental in algebraic elimination and in implicitization of parametric hypersurfaces. Our algorithm exactly computes vertex- and halfspace-representations of the desired polytope using an oracle producing resultant vertices in a given direction. It is output-sensitive as it uses one oracle call per vertex. We overcome the bottleneck of determinantal predicates by hashing, thus accelerating execution from 18 to 100 times. We implement our algorithm using the experimental CGAL package triangulation. A variant of the algorithm computes successively tighter inner and outer approximations: when these polytopes have, respectively, 90% and 105% of the true volume, runtime is reduced up to 25 times. Our method computes instances of 5-, 6- or 7-dimensional polytopes with 35K, 23K or 500 vertices, respectively, within 2 hours. Compared to tropical geometry software, ours is faster up to dimension 5 or 6, and competitive in higher dimensions.
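
    The following is a generic, hedged sketch of the oracle-driven incremental scheme described above: grow a vertex set by querying an extreme-vertex oracle along the outer normals of the current hull's facets until no facet is violated. It uses SciPy's ConvexHull in place of the CGAL triangulation package, and the toy cube oracle stands in for a real resultant-vertex oracle.

        # Generic sketch (not the paper's implementation) of an output-sensitive,
        # oracle-driven polytope computation: extend the vertex set by querying an
        # "extreme vertex in direction w" oracle along facet normals until every
        # facet of the current hull is confirmed.
        import numpy as np
        from scipy.spatial import ConvexHull

        def incremental_polytope(oracle, dim, tol=1e-9, max_iter=1000):
            # Bootstrap with oracle vertices along +/- coordinate directions.
            dirs = np.vstack([np.eye(dim), -np.eye(dim)])
            vertices = np.unique([oracle(w) for w in dirs], axis=0)
            for _ in range(max_iter):
                hull = ConvexHull(vertices)
                grew = False
                # hull.equations rows are [normal | offset] with normal.x + offset <= 0 inside.
                for eq in hull.equations:
                    normal, offset = eq[:dim], eq[dim]
                    v = oracle(normal)                       # extreme vertex in that direction
                    if normal @ v + offset > tol:            # v lies beyond this facet
                        vertices = np.unique(np.vstack([vertices, v]), axis=0)
                        grew = True
                if not grew:
                    return hull        # V-representation (hull.points) and H-representation (hull.equations)
            raise RuntimeError("did not converge")

        # Toy oracle: extreme vertex of the hypercube {0,1}^3 in direction w.
        cube_oracle = lambda w: (np.asarray(w) > 0).astype(float)
        hull = incremental_polytope(cube_oracle, dim=3)
        print(len(hull.vertices), "vertices,", len(hull.equations), "facets")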

    Faster Geometric Algorithms via Dynamic Determinant Computation

    The computation of determinants or their signs is the core procedure in many important geometric algorithms, such as convex hull, volume and point location. As the dimension of the computation space grows, a higher percentage of the total computation time is consumed by these computations. In this paper we study the sequences of determinants that appear in geometric algorithms. The computation of a single determinant is accelerated by using the information from the previous computations in that sequence. We propose two dynamic determinant algorithms with quadratic arithmetic complexity when employed in convex hull and volume computations, and with linear arithmetic complexity when used in point location problems. We implement the proposed algorithms and perform an extensive experimental analysis. On one hand, our analysis serves as a performance study of state-of-the-art determinant algorithms and implementations. On the other hand, we demonstrate the superiority of our methods over state-of-the-art implementations of determinant and geometric algorithms. Our experimental results include speed-ups of 20 and 78 times in volume and point location computations in dimensions 6 and 11, respectively. Comment: 29 pages, 8 figures, 3 tables
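
    A small numerical sketch of the column-replacement update behind dynamic determinant maintenance: with inv(A) and det(A) cached, replacing one column updates both in quadratic time via the matrix determinant lemma and the Sherman-Morrison formula. The paper works with exact (and division-free) arithmetic; this floating-point NumPy version only illustrates the update.

        # Dynamic determinant sketch: maintain inv(A) and det(A), and when one
        # column of A is replaced, update both in O(d^2) instead of recomputing
        # from scratch in O(d^3).
        import numpy as np

        class DynamicDeterminant:
            def __init__(self, A):
                self.A = A.astype(float).copy()
                self.inv = np.linalg.inv(self.A)    # O(d^3), paid once
                self.det = np.linalg.det(self.A)

            def replace_column(self, i, u):
                w = self.inv @ u                    # O(d^2)
                factor = w[i]                       # det(A') = det(A) * (inv(A) @ u)[i]
                self.det *= factor
                # Sherman-Morrison update of the inverse for A' = A + (u - A e_i) e_i^T.
                e_i = np.eye(len(u))[i]
                self.inv -= np.outer(w - e_i, self.inv[i]) / factor
                self.A[:, i] = u
                return self.det

        rng = np.random.default_rng(1)
        A = rng.standard_normal((6, 6))
        dyn = DynamicDeterminant(A)
        u = rng.standard_normal(6)
        # Sanity check against a full recomputation of the modified matrix.
        print(np.isclose(dyn.replace_column(2, u),
                         np.linalg.det(np.column_stack([A[:, :2], u, A[:, 3:]]))))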

    Novel approaches for constructing persistent Delaunay triangulations by applying different equations and different methods

    “Delaunay triangulation and data structures are an essential field of study and research in computer science; for this reason, correct choices and an adequate design are essential for developing algorithms that store and retrieve information efficiently. Most structures, however, are ephemeral, so keeping all versions of the same data structure in separate copies is expensive. The problem is to develop data structures that can maintain different versions of themselves, minimizing memory cost while keeping the performance of operations as close as possible to that of the original structure. This research therefore examines the feasibility of spatio-temporal concepts such as persistence in the design of a Delaunay triangulation algorithm, so that queries and modifications can be made at a given time t while minimizing spatial and temporal complexity. Four new persistent data structures for Delaunay triangulation (Bowyer-Watson, Walk, Hybrid, and Graph) were proposed and developed. Results on random images and vertex databases with different data (DAG and CGAL) showed that the partially persistent version of the data structure outperforms the data structures without persistence, and the fully persistent versions advance the state of the art. All the results will allow the algorithms to minimize memory cost”--Abstract, page iii
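
    As a minimal illustration of partial persistence, the general idea the thesis applies to Delaunay structures, the sketch below uses the fat-node technique on a simple key-value store: each slot keeps its full (version, value) history, so any past version can be read while only the newest can be modified. This is an assumption-laden toy, not one of the four proposed structures.

        # Partial persistence via the "fat node" technique: every key keeps its
        # full history as (version, value) pairs, so reads can target any past
        # version while updates only extend the newest one.
        from bisect import bisect_right

        class PartiallyPersistentMap:
            def __init__(self):
                self.history = {}      # key -> list of (version, value), versions increasing
                self.version = 0

            def set(self, key, value):
                self.version += 1                    # every update creates a new version
                self.history.setdefault(key, []).append((self.version, value))
                return self.version

            def get(self, key, version=None):
                if version is None:
                    version = self.version           # default: read the newest version
                entries = self.history.get(key, [])
                versions = [v for v, _ in entries]
                i = bisect_right(versions, version)  # last update at or before `version`
                return entries[i - 1][1] if i else None

        m = PartiallyPersistentMap()
        v1 = m.set("triangle_42", "ABC")
        v2 = m.set("triangle_42", "ABD")             # e.g. an edge flip replaces the triangle
        print(m.get("triangle_42", v1), m.get("triangle_42", v2))   # ABC ABD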
