
    Unions of Onions: Preprocessing Imprecise Points for Fast Onion Decomposition

    Let $\mathcal{D}$ be a set of $n$ pairwise disjoint unit disks in the plane. We describe how to build a data structure for $\mathcal{D}$ so that for any point set $P$ containing exactly one point from each disk, we can quickly find the onion decomposition (convex layers) of $P$. Our data structure can be built in $O(n \log n)$ time and has linear size. Given $P$, we can find its onion decomposition in $O(n \log k)$ time, where $k$ is the number of layers. We also provide a matching lower bound. Our solution is based on a recursive space decomposition, combined with a fast algorithm to compute the union of two disjoint onions. Comment: 10 pages, 5 figures; a preliminary version appeared at WADS 201
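
    As a point of reference for what this data structure outputs, here is a minimal sketch of an onion decomposition computed the naive way, by repeatedly peeling the convex hull of the remaining points. It does no preprocessing and is not the paper's method; it only illustrates what the convex layers are, and all names in it are illustrative.

```python
# Naive onion decomposition (convex layers) by repeated convex-hull peeling.
# This is a baseline sketch, not the preprocessing-based structure of the paper.
from typing import List, Tuple

Point = Tuple[float, float]

def convex_hull(pts: List[Point]) -> List[Point]:
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion_decomposition(pts: List[Point]) -> List[List[Point]]:
    """Peel convex layers until no points remain."""
    remaining = list(set(pts))
    layers = []
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        hull_set = set(hull)
        remaining = [p for p in remaining if p not in hull_set]
    return layers

if __name__ == "__main__":
    P = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 2), (2, 1)]
    for i, layer in enumerate(onion_decomposition(P), 1):
        print(f"layer {i}: {layer}")
```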

    Planar Visibility: Testing and Counting

    In this paper we consider query versions of visibility testing and visibility counting. Let $S$ be a set of $n$ disjoint line segments in $\mathbb{R}^2$ and let $s$ be an element of $S$. Visibility testing is to preprocess $S$ so that we can quickly determine if $s$ is visible from a query point $q$. Visibility counting involves preprocessing $S$ so that one can quickly estimate the number of segments in $S$ visible from a query point $q$. We present several data structures for the two query problems. The structures build upon a result by O'Rourke and Suri (1984) who showed that the subset, $V_S(s)$, of $\mathbb{R}^2$ that is weakly visible from a segment $s$ can be represented as the union of a set, $C_S(s)$, of $O(n^2)$ triangles, even though the complexity of $V_S(s)$ can be $\Omega(n^4)$. We define a variant of their covering, give efficient output-sensitive algorithms for computing it, and prove additional properties needed to obtain approximation bounds. Some of our bounds rely on a new combinatorial result that relates the number of segments of $S$ visible from a point $p$ to the number of triangles in $\bigcup_{s\in S} C_S(s)$ that contain $p$. Comment: 22 pages
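
    The combinatorial result at the end of the abstract ties visibility counts to point-in-triangle counts. The sketch below shows only that final counting step, assuming some triangle cover per segment is already available; the covers themselves, the supporting data structures, and the approximation guarantees are exactly what the paper supplies and are not reproduced here.

```python
# Brute-force estimator: count triangles (over all per-segment covers) that
# contain a query point q. All inputs here are hypothetical stand-ins.
from typing import List, Tuple

Point = Tuple[float, float]
Triangle = Tuple[Point, Point, Point]

def contains(tri: Triangle, q: Point) -> bool:
    """Point-in-triangle test via consistent signs of edge cross products."""
    a, b, c = tri
    def side(p1, p2):
        return (p2[0]-p1[0])*(q[1]-p1[1]) - (p2[1]-p1[1])*(q[0]-p1[0])
    d1, d2, d3 = side(a, b), side(b, c), side(c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all signs agree (or q lies on an edge)

def visibility_count_estimate(covers: List[List[Triangle]], q: Point) -> int:
    """Count, over all covers C_S(s), the triangles that contain q."""
    return sum(contains(t, q) for cover in covers for t in cover)

if __name__ == "__main__":
    covers = [[((0, 0), (4, 0), (0, 4))], [((1, 1), (5, 1), (1, 5))]]
    print(visibility_count_estimate(covers, (1.5, 1.5)))  # 2
```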

    QuickCSG: Fast Arbitrary Boolean Combinations of N Solids

    QuickCSG computes the result for general N-polyhedron boolean expressions without an intermediate tree of solids. We propose a vertex-centric view of the problem, which simplifies the identification of final geometric contributions, and facilitates its spatial decomposition. The problem is then cast in a single KD-tree exploration, geared toward the result by early pruning of any region of space not contributing to the final surface. We assume strong regularity properties on the input meshes and that they are in general position. This simplifying assumption, in combination with our vertex-centric approach, improves the speed of the approach. Complemented with a task-stealing parallelization, the algorithm achieves breakthrough performance, one to two orders of magnitude speedups with respect to state-of-the-art CPU algorithms, on boolean operations over two to dozens of polyhedra. The algorithm also outperforms GPU implementations with approximate discretizations, while producing an output without redundant facets. Despite the restrictive assumptions on the input, we show the usefulness of QuickCSG for applications with large CSG problems and strong temporal constraints, e.g. modeling for 3D printers, reconstruction from visual hulls, and collision detection.
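
    A small sketch of the idea of evaluating an N-ary boolean expression directly, without building intermediate pairwise solids: once a point's membership in each of the N inputs is known, the expression is evaluated in one step. The membership flags and the expression encoding below are hypothetical stand-ins; QuickCSG itself classifies mesh vertices inside a KD-tree exploration, which this sketch does not attempt.

```python
# Classification step only: decide whether a point belongs to the CSG result,
# given its inside/outside flags for each of the N input solids.
from typing import Callable, List, Sequence

# A CSG expression is modelled as an arbitrary boolean function of the N flags.
CSGExpr = Callable[[Sequence[bool]], bool]

def classify(point_in_solid: List[bool], expr: CSGExpr) -> bool:
    """True if the point lies in the result of the boolean expression."""
    return expr(point_in_solid)

if __name__ == "__main__":
    # Result = (A union B) minus C, over membership flags [A, B, C].
    expr = lambda m: (m[0] or m[1]) and not m[2]
    print(classify([True, False, False], expr))  # True: inside A, outside C
    print(classify([True, False, True], expr))   # False: carved away by C
```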

    Instance and Output Optimal Parallel Algorithms for Acyclic Joins

    Massively parallel join algorithms have received much attention in recent years, while most prior work has focused on worst-case optimal algorithms. However, the worst-case optimality of these join algorithms relies on hard instances having very large output sizes, which rarely appear in practice. A stronger notion of optimality is output-optimal, which requires an algorithm to be optimal within the class of all instances sharing the same input and output size. An even stronger optimality is instance-optimal, i.e., the algorithm is optimal on every single instance, but this may not always be achievable. In the traditional RAM model of computation, the classical Yannakakis algorithm is instance-optimal on any acyclic join. But in the massively parallel computation (MPC) model, the situation becomes much more complicated. We first show that for the class of r-hierarchical joins, instance-optimality can still be achieved in the MPC model. Then, we give a new MPC algorithm for an arbitrary acyclic join with load $O(\mathrm{IN}/p + \sqrt{\mathrm{IN} \cdot \mathrm{OUT}}/p)$, where $\mathrm{IN}$ and $\mathrm{OUT}$ are the input and output sizes of the join, and $p$ is the number of servers in the MPC model. This improves the MPC version of the Yannakakis algorithm by an $O(\sqrt{\mathrm{OUT}/\mathrm{IN}})$ factor. Furthermore, we show that this is output-optimal when $\mathrm{OUT} = O(p \cdot \mathrm{IN})$, for every acyclic but non-r-hierarchical join. Finally, we give the first output-sensitive lower bound for the triangle join in the MPC model, showing that it is inherently more difficult than acyclic joins.
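
    For readers unfamiliar with the Yannakakis algorithm that the abstract uses as its RAM-model reference point, here is a minimal sequential sketch on a three-relation path join R(a,b) ⋈ S(b,c) ⋈ T(c,d): two semi-join sweeps remove dangling tuples, after which the join is assembled without intermediate blow-up. The MPC algorithm and load analysis of the paper are not reflected here.

```python
# Sequential Yannakakis sketch on an acyclic (path) join; relations are lists of tuples.
def yannakakis_path_join(R, S, T):
    # Bottom-up semi-join pass: keep only tuples that can extend toward the leaves.
    c_in_T = {c for (c, d) in T}
    S = [(b, c) for (b, c) in S if c in c_in_T]
    b_in_S = {b for (b, c) in S}
    R = [(a, b) for (a, b) in R if b in b_in_S]
    # Top-down semi-join pass: keep only tuples reachable from the root.
    b_in_R = {b for (a, b) in R}
    S = [(b, c) for (b, c) in S if b in b_in_R]
    c_in_S = {c for (b, c) in S}
    T = [(c, d) for (c, d) in T if c in c_in_S]
    # Join the fully reduced relations; every partial result extends to an output tuple.
    S_by_b, T_by_c = {}, {}
    for b, c in S:
        S_by_b.setdefault(b, []).append(c)
    for c, d in T:
        T_by_c.setdefault(c, []).append(d)
    return [(a, b, c, d)
            for (a, b) in R
            for c in S_by_b.get(b, [])
            for d in T_by_c.get(c, [])]

if __name__ == "__main__":
    R = [(1, 10), (2, 20)]
    S = [(10, 100), (20, 999)]
    T = [(100, 7)]
    print(yannakakis_path_join(R, S, T))  # [(1, 10, 100, 7)]
```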

    Self-Improving Algorithms

    We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations. Comment: 26 pages, 8 figures, preliminary versions appeared at SODA 2006 and SoCG 2008. Thorough revision to improve the presentation of the paper.
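
    A minimal sketch in the spirit of the self-improving sorter described above: a training phase collects sample inputs and derives pivots from them, and the stationary phase buckets each value by those learned pivots and finishes with small comparison sorts. The actual algorithm builds per-coordinate structures with a rigorous optimality analysis; this simplified global-pivot variant, with arbitrary parameter choices, only illustrates the train-then-exploit idea.

```python
# Simplified "train, then sort" sketch; parameters and structure are illustrative only.
import bisect
import random

class SelfImprovingSorter:
    def __init__(self, n, num_buckets=8):
        self.n = n
        self.num_buckets = num_buckets
        self.samples = []      # training inputs
        self.pivots = None     # learned bucket boundaries

    def train(self, inputs):
        """Training phase: collect sample inputs and derive pivots from their values."""
        self.samples.extend(inputs)
        flat = sorted(x for inp in self.samples for x in inp)
        step = max(1, len(flat) // self.num_buckets)
        self.pivots = flat[step::step][: self.num_buckets - 1]

    def sort(self, inp):
        """Stationary phase: distribute into learned buckets, then sort each bucket."""
        assert self.pivots is not None, "call train() first"
        buckets = [[] for _ in range(len(self.pivots) + 1)]
        for x in inp:
            buckets[bisect.bisect_right(self.pivots, x)].append(x)
        out = []
        for b in buckets:
            out.extend(sorted(b))  # buckets are small when the pivots fit the distribution
        return out

if __name__ == "__main__":
    random.seed(0)
    n = 16
    sorter = SelfImprovingSorter(n)
    sorter.train([[random.gauss(i, 1.0) for i in range(n)] for _ in range(50)])
    test = [random.gauss(i, 1.0) for i in range(n)]
    print(sorter.sort(test) == sorted(test))  # True
```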