5 research outputs found

    Self-improving Algorithms for Coordinate-wise Maxima

    Full text link
    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have n (unknown) independent distributions D_1, D_2, ..., D_n of planar points. An input point set (p_1, p_2, ..., p_n) is generated by taking an independent sample p_i from each D_i, so the input distribution D is the product ∏_i D_i. A self-improving algorithm repeatedly gets input sets from the distribution D (which is a priori unknown) and tries to optimize its running time for D. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution D. Let OPT_D denote the expected depth of an optimal linear comparison tree computing the maxima for distribution D. Our algorithm eventually has an expected running time of O(OPT_D + n), even though it did not know D to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution D.
    Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures).
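
    For reference, the baseline this paper improves on in favorable distributions is the textbook O(n log n) sweep for planar coordinate-wise maxima. Below is a minimal Python sketch of that classic algorithm, not the paper's self-improving algorithm: after sorting by x descending, a point is maximal exactly when its y exceeds every y seen so far.

```python
def planar_maxima(points):
    """Return the points not dominated in both coordinates by another point."""
    best_y = float("-inf")
    maxima = []
    # After sorting by x descending (ties broken by y descending), every
    # earlier point has x >= the current x, so the current point is dominated
    # iff some earlier point also has y >= its y, i.e. iff y <= best_y.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima

print(planar_maxima([(1, 5), (2, 3), (3, 4), (0, 6), (3, 1)]))
# -> [(3, 4), (1, 5), (0, 6)]
```

    The self-improving algorithm of the paper beats this sorting-based bound whenever the learned distribution D admits a shallower linear comparison tree, degrading gracefully to O(OPT_D + n).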

    Self-improving Algorithms for Convex Hulls

    Full text link

    Fast Computation of Output-Sensitive Maxima in a Word RAM

    Full text link
    In this paper, we study the problem of computing the maxima of a set of n points in three dimensions with integer coordinates, and show that in a word RAM the maxima can be found in O(n log log_{n/h} n) deterministic time, in which h is the output size. For h = n^{1−α} this is O(n log(1/α)). This improves the previous O(n log log h) time algorithm and can be considered surprising, since it gives a linear time algorithm when α > 0 is a constant, which is faster than the current best deterministic and randomized integer sorting algorithms. We observe that improving this running time is most likely difficult, since it requires breaking a number of important barriers, even if randomization is allowed. Additionally, we show that the same deterministic running time can be achieved for performing n point location queries in an arrangement of size h. Finally, our maxima result can be extended to higher dimensions by paying a log_{n/h} n factor penalty per dimension. This has further interesting consequences: for example, it preserves the linear running time when h ≤ n^{1−α} for a constant α > 0, and thus it shows that for a variety of input distributions the maxima can be computed in linear expected time without knowing the distribution.
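
    For orientation, here is a minimal Python sketch of the classic comparison-based O(n log n) sweep for 3-d maxima (in the style of Kung, Luccio, and Preparata). It is only the baseline being improved on; it does not use word-RAM tricks and does not achieve the paper's output-sensitive bound.

```python
import bisect

def maxima_3d(points):
    """Return the points not dominated in all three coordinates by another."""
    # Sweep by z descending (ties broken so potential dominators come first);
    # maintain the staircase of (x, y) projections already seen, stored as
    # parallel lists with x strictly ascending and hence y strictly descending.
    xs, ys, result = [], [], []
    for x, y, z in sorted(points, key=lambda p: (-p[2], -p[0], -p[1])):
        i = bisect.bisect_left(xs, x)
        # Among staircase points with x' >= x, the leftmost has the largest
        # y', so it alone decides whether (x, y) is dominated.
        if i < len(xs) and ys[i] >= y:
            continue  # dominated by an already-processed (higher-z) point
        result.append((x, y, z))
        # Splice the new projection in, dropping the staircase points it now
        # dominates: a contiguous block with x' <= x and y' <= y.
        hi = bisect.bisect_right(xs, x)
        lo = hi
        while lo > 0 and ys[lo - 1] <= y:
            lo -= 1
        xs[lo:hi] = [x]
        ys[lo:hi] = [y]
    return result

print(maxima_3d([(1, 1, 3), (2, 2, 2), (3, 1, 1), (1, 3, 1), (1, 1, 1)]))
# -> the four mutually incomparable points; (1, 1, 1) is dominated
```

    The paper's contribution is to replace the log n cost per point implicit in this sweep with a log log_{n/h} n term by exploiting integer coordinates and the word RAM.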

    Self-improving Algorithms for Coordinate-wise Maxima [Extended Abstract]

    No full text
    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have n (unknown) independent distributions D_1, D_2, ..., D_n of planar points. An input point set (p_1, p_2, ..., p_n) is generated by taking an independent sample p_i from each D_i, so the input distribution D is the product ∏_i D_i. A self-improving algorithm repeatedly gets input sets from the distribution D (which is a priori unknown) and tries to optimize its running time for D. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution D. Let OPT_D denote the expected depth of an optimal linear comparison tree computing the maxima for distribution D. Our algorithm eventually has an expected running time of O(OPT_D + n), even though it did not know D to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution D.
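
    To make the two-phase structure of the self-improving model concrete, here is a toy Python sketch of a self-improving sorter in the spirit of the model's original sorting application (Ailon et al.); it is emphatically not the maxima algorithm of this paper, and all class and parameter names are illustrative. A learning phase records which bucket each coordinate index i tends to land in; afterwards, each element is routed to its predicted bucket in O(1) time when the prediction holds, with binary search only as a fallback.

```python
import bisect
from collections import Counter

class SelfImprovingSorter:
    """Toy two-phase sorter: learn per-index bucket predictions from the
    first few inputs, then bucket-sort later inputs from the same product
    distribution. Assumes every input has the same fixed length n."""

    def __init__(self, learning_rounds=50, num_buckets=16):
        self.learning_rounds = learning_rounds
        self.num_buckets = num_buckets
        self.samples = []        # inputs seen during the learning phase
        self.boundaries = None   # bucket boundaries, once learned
        self.predicted = None    # likeliest bucket for each index i

    def _learn(self):
        pooled = sorted(x for inp in self.samples for x in inp)
        step = max(1, len(pooled) // self.num_buckets)
        self.boundaries = pooled[step::step][: self.num_buckets - 1]
        n = len(self.samples[0])
        self.predicted = [
            Counter(bisect.bisect(self.boundaries, inp[i])
                    for inp in self.samples).most_common(1)[0][0]
            for i in range(n)
        ]

    def sort(self, inp):
        if self.boundaries is None:
            self.samples.append(list(inp))
            if len(self.samples) == self.learning_rounds:
                self._learn()
            return sorted(inp)   # fall back while still learning
        buckets = [[] for _ in range(self.num_buckets)]
        for i, x in enumerate(inp):
            b = self.predicted[i]
            # O(1) check of the predicted bucket; binary search on a miss.
            lo = self.boundaries[b - 1] if b > 0 else float("-inf")
            hi = (self.boundaries[b] if b < len(self.boundaries)
                  else float("inf"))
            if not (lo <= x < hi):
                b = bisect.bisect(self.boundaries, x)
            buckets[b].append(x)
        out = []
        for bucket in buckets:
            out += sorted(bucket)
        return out
```

    The paper applies the same paradigm to maxima, where the extra difficulty is that the target running time OPT_D is defined against an optimal linear comparison tree rather than against entropy-optimal sorting.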