
    Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing

    Motivated by the desire to cope with data imprecision, we study methods for taking advantage of preliminary information about point sets in order to speed up the computation of certain structures associated with them. In particular, we study the following problem: given a set $L$ of $n$ lines in the plane, we wish to preprocess $L$ such that later, upon receiving a set $P$ of $n$ points, each of which lies on a distinct line of $L$, we can construct the convex hull of $P$ efficiently. We show that in quadratic time and space it is possible to construct a data structure on $L$ that enables us to compute the convex hull of any such point set $P$ in $O(n \alpha(n) \log^* n)$ expected time. If we further assume that the points are "oblivious" with respect to the data structure, the running time improves to $O(n \alpha(n))$. The analysis applies almost verbatim when $L$ is a set of line segments, and yields similar asymptotic bounds. We present several extensions, including a trade-off between space and query time and an output-sensitive algorithm. We also study the "dual problem", where we show how to efficiently compute the $(\le k)$-level of $n$ lines in the plane, each of which passes through a distinct point (given in advance). We complement our results with $\Omega(n \log n)$ lower bounds under the algebraic computation tree model for several related problems, including sorting a set of points (according to, say, their $x$-order), each of which lies on a given line known in advance. Therefore, the convex hull problem under our setting is easier than sorting, in contrast to the "standard" convex hull and sorting problems, both of which require $\Theta(n \log n)$ steps in the worst case (under the algebraic computation tree model).
    Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at SoCG 201
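    For reference, the bound being beaten is the classic preprocessing-free one. Below is a minimal Python sketch of Andrew's monotone chain algorithm, the standard $\Theta(n \log n)$ convex hull computation; this is only the baseline, not the paper's data structure.

```python
def cross(o, a, b):
    """Signed area of the parallelogram spanned by (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counterclockwise order.

    Points are (x, y) tuples; the sort dominates at O(n log n)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each chain (it repeats in the other chain).
    return lower[:-1] + upper[:-1]
```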

    Unions of Onions: Preprocessing Imprecise Points for Fast Onion Decomposition

    Let $\mathcal{D}$ be a set of $n$ pairwise disjoint unit disks in the plane. We describe how to build a data structure for $\mathcal{D}$ so that for any point set $P$ containing exactly one point from each disk, we can quickly find the onion decomposition (convex layers) of $P$. Our data structure can be built in $O(n \log n)$ time and has linear size. Given $P$, we can find its onion decomposition in $O(n \log k)$ time, where $k$ is the number of layers. We also provide a matching lower bound. Our solution is based on a recursive space decomposition, combined with a fast algorithm to compute the union of two disjoint onions.
    Comment: 10 pages, 5 figures; a preliminary version appeared at WADS 201
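    As a point of comparison, convex layers can always be computed with no preprocessing by repeatedly peeling off the hull. A minimal sketch of that naive baseline, assuming the convex_hull helper from the previous sketch; peeling is roughly quadratic in the worst case, which is what the $O(n \log k)$ query time above avoids:

```python
def onion_decomposition(points):
    """Naive convex layers by repeated hull peeling, outermost first.

    Uses the convex_hull helper sketched above; roughly O(n^2 log n)."""
    layers = []
    remaining = set(points)
    while remaining:
        layer = convex_hull(list(remaining))
        layers.append(layer)
        remaining.difference_update(layer)  # peel the current hull off
    return layers
```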

    Self-improving Algorithms for Coordinate-wise Maxima

    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have $n$ (unknown) independent distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_n$ of planar points. An input point set $(p_1, p_2, \ldots, p_n)$ is generated by taking an independent sample $p_i$ from each $\mathcal{D}_i$, so the input distribution $\mathcal{D}$ is the product $\prod_i \mathcal{D}_i$. A self-improving algorithm repeatedly gets input sets from the distribution $\mathcal{D}$ (which is a priori unknown) and tries to optimize its running time for $\mathcal{D}$. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution $\mathcal{D}$. Let $\mathrm{OPT}_{\mathcal{D}}$ denote the expected depth of an optimal linear comparison tree computing the maxima for distribution $\mathcal{D}$. Our algorithm eventually has an expected running time of $O(\mathrm{OPT}_{\mathcal{D}} + n)$, even though it did not know $\mathcal{D}$ to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution $\mathcal{D}$.
    Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures)
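    For orientation, the standard distribution-oblivious solution sorts once and sweeps. A minimal Python sketch of that classic $O(n \log n)$ staircase computation (the baseline, not the self-improving algorithm itself):

```python
def maxima(points):
    """Coordinate-wise maxima of planar points, by decreasing x.

    A point is maximal iff no other point dominates it in both
    coordinates; sweeping by decreasing x, it is maximal iff its y
    beats every y seen so far."""
    result = []
    best_y = float("-inf")
    for p in sorted(points, key=lambda q: (-q[0], -q[1])):
        if p[1] > best_y:
            result.append(p)
            best_y = p[1]
    return result
```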

    On the expected diameter, width, and complexity of a stochastic convex hull

    We investigate several computational problems related to the stochastic convex hull (SCH). Given a stochastic dataset consisting of $n$ points in $\mathbb{R}^d$, each of which has an existence probability, an SCH refers to the convex hull of a realization of the dataset, i.e., a random sample including each point with its existence probability. We are interested in computing certain expected statistics of an SCH, including diameter, width, and combinatorial complexity. For diameter, we establish the first deterministic 1.633-approximation algorithm with a time complexity polynomial in both $n$ and $d$. For width, two approximation algorithms are provided: a deterministic $O(1)$-approximation running in $O(n^{d+1} \log n)$ time, and a fully polynomial-time randomized approximation scheme (FPRAS). For combinatorial complexity, we propose an exact $O(n^d)$-time algorithm. Our solutions exploit many geometric insights in Euclidean space, some of which might be of independent interest.
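    Expected statistics of an SCH can always be estimated by straightforward Monte Carlo sampling over realizations; the point of the paper is to obtain deterministic, polynomial-time guarantees beyond this. A sketch of such a sampling baseline for the expected diameter, where the function name and parameters are illustrative and not from the paper:

```python
import math
import random
from itertools import combinations

def expected_diameter_mc(points, probs, trials=1000, seed=0):
    """Monte Carlo estimate of the expected SCH diameter.

    points: list of coordinate tuples in R^d; probs: per-point
    existence probabilities. Purely illustrative sampling baseline."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Realize the stochastic dataset: keep each point independently.
        sample = [p for p, q in zip(points, probs) if rng.random() < q]
        if len(sample) >= 2:
            # The hull's diameter equals the point set's diameter,
            # so brute-force pairwise distances suffice here.
            total += max(math.dist(u, v) for u, v in combinations(sample, 2))
    return total / trials
```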

    Self-improving Algorithms for Convex Hulls


    Uncertain Curve Simplification

    We study the problem of polygonal curve simplification under uncertainty, where instead of a sequence of exact points, each uncertain point is represented by a region, which contains the (unknown) true location of the vertex. The regions we consider are disks, line segments, convex polygons, and discrete sets of points. We are interested in finding the shortest subsequence of uncertain points such that no matter what the true location of each uncertain point is, the resulting polygonal curve is a valid simplification of the original polygonal curve under the Hausdorff or the Fréchet distance. For both these distance measures, we present polynomial-time algorithms for this problem.
    Comment: 25 pages, 5 figures
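    For intuition, the exact-point version of this problem is the classic Imai-Iri style minimum-vertex simplification: a shortcut is valid if every skipped vertex lies within distance eps of it, and the shortest subsequence is a shortest path over valid shortcuts. A sketch of that exact-point analogue (the paper's algorithms replace this validity test with one over uncertainty regions):

```python
import math
from collections import deque

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(curve, eps):
    """Shortest exact-point simplification via BFS over valid shortcuts."""
    n = len(curve)
    if n <= 2:
        return list(curve)

    def valid(i, j):
        # Shortcut i -> j is valid if every skipped vertex stays within eps.
        return all(point_segment_dist(curve[k], curve[i], curve[j]) <= eps
                   for k in range(i + 1, j))

    # BFS minimizes the number of shortcuts, i.e. the output length.
    parent = {0: None}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):
            if j not in parent and valid(i, j):
                parent[j] = i
                queue.append(j)
    # Walk back from the last vertex to recover the simplification.
    path, k = [], n - 1
    while k is not None:
        path.append(curve[k])
        k = parent[k]
    return path[::-1]
```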