14 research outputs found

    Unions of Onions: Preprocessing Imprecise Points for Fast Onion Decomposition

    Full text link
    Let $\mathcal{D}$ be a set of $n$ pairwise disjoint unit disks in the plane. We describe how to build a data structure for $\mathcal{D}$ so that for any point set $P$ containing exactly one point from each disk, we can quickly find the onion decomposition (convex layers) of $P$. Our data structure can be built in $O(n \log n)$ time and has linear size. Given $P$, we can find its onion decomposition in $O(n \log k)$ time, where $k$ is the number of layers. We also provide a matching lower bound. Our solution is based on a recursive space decomposition, combined with a fast algorithm to compute the union of two disjoint onions. Comment: 10 pages, 5 figures; a preliminary version appeared at WADS 201
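    As a point of reference for the problem being solved (not the paper's data structure), here is a minimal Python sketch that computes an onion decomposition by repeatedly peeling convex hulls with scipy. It runs in roughly $O(n^2 \log n)$ time and assumes points in general position, whereas the paper's preprocessing answers each query in $O(n \log k)$ time.

```python
# Naive onion decomposition (convex layers) by repeated hull peeling.
# Baseline illustration only; the paper's point is to answer such
# queries in O(n log k) time after preprocessing the disks.
import numpy as np
from scipy.spatial import ConvexHull

def onion_decomposition(points):
    """Return the convex layers of `points`, outermost first."""
    pts = np.asarray(points, dtype=float)
    layers = []
    # Assumes general position (Qhull rejects fully degenerate input).
    while len(pts) >= 3:
        hull = ConvexHull(pts)
        layers.append(pts[hull.vertices])            # current outermost layer
        pts = np.delete(pts, hull.vertices, axis=0)  # peel it off
    if len(pts) > 0:                                 # 1 or 2 leftover points
        layers.append(pts)
    return layers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.uniform(0, 10, size=(20, 2))  # stand-in for one point per disk
    for i, layer in enumerate(onion_decomposition(P)):
        print(f"layer {i}: {len(layer)} points")
```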

    Halving Balls in Deterministic Linear Time

    Full text link
    Let $\mathcal{D}$ be a set of $n$ pairwise disjoint unit balls in $\mathbb{R}^d$ and $P$ the set of their center points. A hyperplane $H$ is an $m$-separator for $\mathcal{D}$ if each closed halfspace bounded by $H$ contains at least $m$ points from $P$. This generalizes the notion of halving hyperplanes, which correspond to $n/2$-separators. The analogous notion for point sets has been well studied. Separators have various applications, for instance, in divide-and-conquer schemes. In such a scheme any ball that is intersected by the separating hyperplane may still interact with both sides of the partition. Therefore it is desirable that the separating hyperplane intersects only a small number of balls. We present three deterministic algorithms to bisect or approximately bisect a given set of disjoint unit balls by a hyperplane. Firstly, we present a simple linear-time algorithm to construct an $\alpha n$-separator for balls in $\mathbb{R}^d$, for any $0 < \alpha < 1/2$, that intersects at most $c n^{(d-1)/d}$ balls, for some constant $c$ that depends on $d$ and $\alpha$. The number of intersected balls is best possible up to the constant $c$. Secondly, we present a near-linear time algorithm to construct an $(n/2 - o(n))$-separator in $\mathbb{R}^d$ that intersects $o(n)$ balls. Finally, we give a linear-time algorithm to construct a halving line in $\mathbb{R}^2$ that intersects $O(n^{(5/6)+\epsilon})$ disks. Our results improve the runtime of a disk sliding algorithm by Bereg, Dumitrescu and Pach. In addition, our results improve and derandomize an algorithm to construct a space decomposition used by Löffler and Mulzer to construct an onion (convex layer) decomposition for imprecise points (any point resides at an unknown location within a given disk).
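    As a toy illustration of the separator definition only (none of the paper's three algorithms), the sketch below cuts a set of unit-ball centers with an axis-parallel hyperplane through their median coordinate and reports the separator quality $m$ together with the number of balls the hyperplane intersects. The function name and the random test data are invented for the example; the paper's contribution is choosing the cut so that the intersection count is provably small.

```python
# Toy illustration of an m-separator for unit balls: cut with the
# hyperplane {x : x[axis] = median of centers along that axis} and count
# how many balls it intersects. Does NOT reproduce the paper's bounds.
import numpy as np

def median_separator(centers, axis=0):
    """Return (threshold, m, intersected) for the axis-parallel hyperplane
    through the median center coordinate: m is the smaller closed-halfspace
    count, intersected is how many unit balls the hyperplane cuts."""
    c = np.asarray(centers, dtype=float)
    threshold = np.median(c[:, axis])
    left = np.count_nonzero(c[:, axis] <= threshold)
    right = np.count_nonzero(c[:, axis] >= threshold)
    m = min(left, right)                                  # separator quality
    intersected = np.count_nonzero(np.abs(c[:, axis] - threshold) <= 1.0)
    return threshold, m, intersected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Sample ball centers in R^3 (disjointness is not enforced in this toy).
    centers = rng.uniform(0, 50, size=(100, 3))
    print(median_separator(centers, axis=0))
```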

    Self-improving Algorithms for Coordinate-wise Maxima

    Full text link
    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have $n$ (unknown) independent distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_n$ of planar points. An input point set $(p_1, p_2, \ldots, p_n)$ is generated by taking an independent sample $p_i$ from each $\mathcal{D}_i$, so the input distribution $\mathcal{D}$ is the product $\prod_i \mathcal{D}_i$. A self-improving algorithm repeatedly gets input sets from the distribution $\mathcal{D}$ (which is a priori unknown) and tries to optimize its running time for $\mathcal{D}$. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution $\mathcal{D}$. Let $\mathrm{OPT}_{\mathcal{D}}$ denote the expected depth of an optimal linear comparison tree computing the maxima for distribution $\mathcal{D}$. Our algorithm eventually has an expected running time of $O(\mathrm{OPT}_{\mathcal{D}} + n)$, even though it did not know $\mathcal{D}$ to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution $\mathcal{D}$. Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures)
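    The self-improving machinery does not fit in a snippet, but the underlying problem does: the sketch below computes the coordinate-wise maxima of a planar point set with the standard sort-and-sweep, which is the $O(n \log n)$ worst-case baseline the self-improving algorithm improves upon for favourable distributions.

```python
# Coordinate-wise maxima ("staircase") of a planar point set: a point is
# maximal if no other point dominates it in both coordinates. Standard
# O(n log n) sort-and-sweep; the paper's self-improving algorithm targets
# expected time O(OPT_D + n) for a fixed (unknown) input distribution D.
def coordinatewise_maxima(points):
    """Return the maximal points, ordered by decreasing x."""
    maxima = []
    best_y = float("-inf")
    # Sort by x descending, ties by y descending, so a point is added only
    # if nothing to its right (or above at the same x) dominates it.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:            # not dominated by anything seen so far
            maxima.append((x, y))
            best_y = y
    return maxima

if __name__ == "__main__":
    pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]
    print(coordinatewise_maxima(pts))   # [(4, 1), (3, 4), (2, 6)]
```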

    Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing

    Full text link
    Motivated by the desire to cope with data imprecision, we study methods for taking advantage of preliminary information about point sets in order to speed up the computation of certain structures associated with them. In particular, we study the following problem: given a set L of n lines in the plane, we wish to preprocess L such that later, upon receiving a set P of n points, each of which lies on a distinct line of L, we can construct the convex hull of P efficiently. We show that in quadratic time and space it is possible to construct a data structure on L that enables us to compute the convex hull of any such point set P in O(n alpha(n) log* n) expected time. If we further assume that the points are "oblivious" with respect to the data structure, the running time improves to O(n alpha(n)). The analysis applies almost verbatim when L is a set of line-segments, and yields similar asymptotic bounds. We present several extensions, including a trade-off between space and query time and an output-sensitive algorithm. We also study the "dual problem" where we show how to efficiently compute the (<= k)-level of n lines in the plane, each of which lies on a distinct point (given in advance). We complement our results by Omega(n log n) lower bounds under the algebraic computation tree model for several related problems, including sorting a set of points (according to, say, their x-order), each of which lies on a given line known in advance. Therefore, the convex hull problem under our setting is easier than sorting, contrary to the "standard" convex hull and sorting problems, in which the two problems require Theta(n log n) steps in the worst case (under the algebraic computation tree model). Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at SoCG 201
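    For comparison with the preprocessed O(n alpha(n) log* n) query bound, the following sketch is the standard Theta(n log n) monotone-chain convex hull computed from scratch; it is only a baseline illustration, not the paper's data structure on the lines L.

```python
# Andrew's monotone chain: the standard Theta(n log n) convex hull that the
# preprocessing-based query bound is measured against.
def convex_hull(points):
    """Return the convex hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

if __name__ == "__main__":
    print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
    # -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```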

    Preprocessing Imprecise Points for Delaunay Triangulation: Simplified and Extended

    Get PDF
    Suppose we want to compute the Delaunay triangulation of a set P whose points are restricted to a collection R of input regions known in advance. Building on recent work by Löffler and Snoeyink, we show how to leverage our knowledge of R for faster Delaunay computation. Our approach needs no fancy machinery and optimally handles a wide variety of inputs, e.g., overlapping disks of different sizes and fat regions. Keywords: Delaunay triangulation - Data imprecision - Quadtree
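    The quadtree-based preprocessing is not reproduced here; as a baseline, the sketch below triangulates one sample point per region from scratch with scipy's Delaunay routine, the plain O(n log n) route the preprocessing is meant to beat. The disk regions and the sampling helper are invented for the example.

```python
# Plain Delaunay triangulation of one representative point per region,
# computed from scratch with scipy; the paper's point is that knowing the
# regions R in advance lets later point sets be triangulated faster.
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical input regions given as (center, radius) disks.
regions = [((0.0, 0.0), 1.0), ((3.0, 0.5), 1.2), ((1.5, 2.5), 0.8),
           ((4.0, 3.0), 1.0), ((0.5, 4.0), 0.9)]

rng = np.random.default_rng(2)

def sample_point(center, radius):
    """Pick an arbitrary point inside a disk (stands in for the precise input)."""
    angle = rng.uniform(0.0, 2.0 * np.pi)
    r = radius * np.sqrt(rng.uniform(0.0, 1.0))
    return (center[0] + r * np.cos(angle), center[1] + r * np.sin(angle))

points = np.array([sample_point(c, r) for c, r in regions])
tri = Delaunay(points)
print(tri.simplices)   # triangles as index triples into `points`
```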

    Delaunay triangulation of imprecise points in linear time after preprocessing

    Get PDF
    An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what the value of imprecise information about points is. We show how to preprocess a set of disjoint unit disks in the plane so that, if one point per disk is later specified with precise coordinates, the Delaunay triangulation can be computed in linear time. From the Delaunay, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay.
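    To illustrate the closing remark about the Gabriel graph and the Euclidean minimum spanning tree, the sketch below extracts an EMST from the Delaunay edges (the EMST is always a subgraph of the Delaunay triangulation, via the Gabriel graph). It relies on scipy and is only an illustration, not the paper's linear-time pipeline for preprocessed imprecise points.

```python
# The Euclidean minimum spanning tree is a subgraph of the Delaunay
# triangulation (via the Gabriel graph), so it can be read off the
# Delaunay edges. Illustration only, not the paper's algorithm.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def emst_from_delaunay(points):
    """Return EMST edges as (i, j, length) triples over the input points."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    # Collect the undirected Delaunay edges from the triangles.
    edges = set()
    for a, b, c in tri.simplices:
        for i, j in ((a, b), (b, c), (a, c)):
            edges.add((min(i, j), max(i, j)))
    rows, cols, weights = [], [], []
    for i, j in edges:
        rows.append(i)
        cols.append(j)
        weights.append(np.linalg.norm(pts[i] - pts[j]))
    graph = csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))
    mst = minimum_spanning_tree(graph).tocoo()
    return [(int(i), int(j), float(w)) for i, j, w in zip(mst.row, mst.col, mst.data)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    P = rng.uniform(0, 10, size=(15, 2))
    for i, j, w in emst_from_delaunay(P):
        print(f"edge ({i}, {j}) length {w:.3f}")
```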

    Geometric Computations on Indecisive Points

    Full text link