Delaunay triangulation of imprecise points in linear time after preprocessing
An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what the value of imprecise information about points is. We show how to preprocess a set of disjoint unit disks in the plane in O(n log n) time so that if one point per disk is later specified with precise coordinates, the Delaunay triangulation of these points can be computed in linear time. From the Delaunay triangulation, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay triangulation.
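The containment chain EMST ⊆ Gabriel graph ⊆ Delaunay triangulation is what makes the latter two structures recoverable once the Delaunay triangulation is known. As a definitional illustration only (not the paper's linear-time method; all names are ours), the sketch below computes the Gabriel graph directly from its empty-disk definition and then extracts a Euclidean minimum spanning tree from it with Kruskal's algorithm:

```python
import math
from itertools import combinations

def gabriel_edges(pts):
    """Edge (i, j) is in the Gabriel graph iff the closed disk with
    diameter segment p_i p_j contains no other input point."""
    edges = []
    for i, j in combinations(range(len(pts)), 2):
        cx = (pts[i][0] + pts[j][0]) / 2
        cy = (pts[i][1] + pts[j][1]) / 2
        r2 = ((pts[i][0] - pts[j][0]) ** 2
              + (pts[i][1] - pts[j][1]) ** 2) / 4
        if all((p[0] - cx) ** 2 + (p[1] - cy) ** 2 > r2
               for k, p in enumerate(pts) if k not in (i, j)):
            edges.append((i, j))
    return edges

def emst(pts):
    """Kruskal's algorithm restricted to Gabriel edges: since the EMST
    is a subgraph of the Gabriel graph (itself a subgraph of the
    Delaunay triangulation), this candidate set always suffices."""
    parent = list(range(len(pts)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for i, j in sorted(gabriel_edges(pts),
                       key=lambda e: math.dist(pts[e[0]], pts[e[1]])):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The quadratic-size edge enumeration here is naive; the point of the sketch is only the structural containment the abstract exploits.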
Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing
Motivated by the desire to cope with data imprecision, we study methods for
taking advantage of preliminary information about point sets in order to speed
up the computation of certain structures associated with them.
In particular, we study the following problem: given a set L of n lines in
the plane, we wish to preprocess L such that later, upon receiving a set P of n
points, each of which lies on a distinct line of L, we can construct the convex
hull of P efficiently. We show that in quadratic time and space it is possible
to construct a data structure on L that enables us to compute the convex hull
of any such point set P in O(n alpha(n) log* n) expected time. If we further
assume that the points are "oblivious" with respect to the data structure, the
running time improves to O(n alpha(n)). The analysis applies almost verbatim
when L is a set of line-segments, and yields similar asymptotic bounds. We
present several extensions, including a trade-off between space and query time
and an output-sensitive algorithm. We also study the "dual problem", where we
show how to efficiently compute the (<= k)-level of n lines in the plane, each
of which passes through a distinct point (given in advance).
We complement our results by Omega(n log n) lower bounds under the algebraic
computation tree model for several related problems, including sorting a set of
points (according to, say, their x-order), each of which lies on a given line
known in advance. Therefore, the convex hull problem in our setting is
easier than sorting, in contrast to the standard versions of convex hull and
sorting, where both problems require Theta(n log n) steps in the worst case
(under the algebraic computation tree model).
Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at SoCG 201
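For contrast with the preprocessing-based bounds above, the standard Theta(n log n) baseline is a sort-based hull algorithm such as Andrew's monotone chain; the minimal sketch below (our own naming, not from the paper) is the kind of algorithm whose sorting bottleneck the preprocessing is designed to bypass:

```python
def cross(o, a, b):
    # positive if the turn o -> a -> b is counterclockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain: O(n log n) overall, dominated by the
    sort; returns hull vertices in counterclockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop once
```

Once the sort is "for free" (here: because each point's line is known in advance), the remaining hull work is near-linear, which is the gap the paper's O(n alpha(n)) bound formalizes.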
Unions of Onions: Preprocessing Imprecise Points for Fast Onion Decomposition
Let D be a set of n pairwise disjoint unit disks in the plane.
We describe how to build a data structure for D so that for any
point set P containing exactly one point from each disk, we can quickly find
the onion decomposition (convex layers) of P.
Our data structure can be built in O(n log n) time and has linear size.
Given P, we can find its onion decomposition in O(n log k) time, where k
is the number of layers. We also provide a matching lower bound. Our solution
is based on a recursive space decomposition, combined with a fast algorithm to
compute the union of two disjoint onions.
Comment: 10 pages, 5 figures; a preliminary version appeared at WADS 201
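The onion decomposition itself is easy to state: repeatedly compute the convex hull and peel its vertices off. The following naive peeling sketch (our naming, not the paper's fast algorithm) is for intuition only:

```python
def cross(o, a, b):
    # positive if the turn o -> a -> b is counterclockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    """Andrew's monotone chain; expects sorted, distinct points."""
    if len(pts) <= 2:
        return list(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion(pts):
    """Convex layers by repeated hull peeling: a straightforward
    O(n^2 log n)-style baseline, not the preprocessed query above."""
    pts = sorted(set(pts))
    layers = []
    while pts:
        layer = hull(pts)
        layers.append(layer)
        on_hull = set(layer)
        pts = [p for p in pts if p not in on_hull]
    return layers
```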
Preprocessing Imprecise Points for Delaunay Triangulation: Simplified and Extended
Suppose we want to compute the Delaunay triangulation of a set P whose points are restricted to a collection R of input regions known in advance. Building on recent work by Löffler and Snoeyink, we show how to leverage our knowledge of R for faster Delaunay computation. Our approach needs no fancy machinery and optimally handles a wide variety of inputs, e.g., overlapping disks of different sizes and fat regions.
Keywords: Delaunay triangulation - Data imprecision - Quadtree
Triangulating the Square and Squaring the Triangle: Quadtrees and Delaunay Triangulations are Equivalent
We show that Delaunay triangulations and compressed quadtrees are equivalent
structures. More precisely, we give two algorithms: the first computes a
compressed quadtree for a planar point set, given the Delaunay triangulation;
the second finds the Delaunay triangulation, given a compressed quadtree. Both
algorithms run in deterministic linear time on a pointer machine. Our work
builds on and extends previous results by Krznaric and Levcopoulos and by
Buchin and Mulzer. Our main tool for the second algorithm is the well-separated
pair decomposition (WSPD), a structure that has been used previously to find
Euclidean minimum spanning trees in higher dimensions (Eppstein). We show that
knowing the WSPD (and a quadtree) suffices to compute a planar Euclidean
minimum spanning tree (EMST) in linear time. With the EMST at hand, we can find
the Delaunay triangulation in linear time.
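A compressed quadtree, one side of the claimed equivalence, can be sketched as follows: split the bounding square into quadrants, recurse into non-empty ones, and compress any run of internal nodes that have a single non-empty child. This toy top-down construction (our naming; distinct points assumed; compression done by shrinking the recursion square) is for intuition only, since the paper derives such a tree from the Delaunay triangulation rather than building it from scratch:

```python
class Node:
    def __init__(self, x, y, size, pts):
        self.square = (x, y, size)   # lower-left corner and side length
        self.pts = pts               # points in this subtree
        self.children = []

def build(x, y, size, pts):
    """Compressed quadtree over distinct points inside the given square."""
    if len(pts) <= 1:
        return Node(x, y, size, pts)
    half = size / 2
    quads = [[], [], [], []]
    for p in pts:
        qi = (1 if p[0] >= x + half else 0) + (2 if p[1] >= y + half else 0)
        quads[qi].append(p)
    nonempty = [i for i in range(4) if quads[i]]
    if len(nonempty) == 1:
        # All points fall into one quadrant: compress by recursing
        # directly into it instead of creating a single-child node;
        # repeated, this shrinks to the smallest aligned square that
        # still separates the points.
        i = nonempty[0]
        return build(x + half * (i % 2), y + half * (i // 2), half, pts)
    node = Node(x, y, size, pts)
    for i in nonempty:
        node.children.append(
            build(x + half * (i % 2), y + half * (i // 2), half, quads[i]))
    return node
```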
As a corollary, we obtain deterministic versions of many previous algorithms
related to Delaunay triangulations, such as splitting planar Delaunay
triangulations, preprocessing imprecise points for faster Delaunay computation,
and transdichotomous Delaunay triangulations.
Comment: 37 pages, 13 figures, full version of a paper that appeared in SODA 201
Self-improving Algorithms for Coordinate-wise Maxima
Computing the coordinate-wise maxima of a planar point set is a classic and
well-studied problem in computational geometry. We give an algorithm for this
problem in the "self-improving setting". We have n (unknown) independent
distributions D_1, D_2, ..., D_n of planar points. An input point set is
generated by taking an independent sample from each D_i, so the input
distribution D is the product D_1 x D_2 x ... x D_n. A self-improving
algorithm repeatedly gets input sets from the distribution D (which is a
priori unknown) and tries to optimize its running time for D. Our algorithm
uses the first few inputs to learn salient features of the distribution, and
then becomes an optimal algorithm for distribution D. Let OPT_D denote the
expected depth of an optimal linear comparison tree computing the maxima for
distribution D. Our algorithm eventually achieves an expected running time of
O(OPT_D + n), even though it did not know D to begin with.
Our result requires new tools to understand linear comparison trees for
computing maxima. We show how to convert general linear comparison trees to
very restricted versions, which can then be related to the running time of our
algorithm. An interesting feature of our algorithm is an interleaved search,
where the algorithm tries to determine the likeliest point to be maximal with
minimal computation. This allows the running time to be truly optimal for the
distribution D.
Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures)
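Stripped of the self-improving machinery, the underlying problem is the Pareto staircase: a point is maximal if no other point has both a larger-or-equal x and a larger-or-equal y. A minimal comparison-based sketch (our naming, a textbook sweep rather than the paper's algorithm):

```python
def maxima(pts):
    """Coordinate-wise maxima of a planar point set.
    Sort by x descending (ties broken by y descending) and sweep with
    the running maximum y; a point survives iff its y exceeds every y
    seen so far. O(n log n), dominated by the sort."""
    best_y = float('-inf')
    out = []
    for p in sorted(pts, key=lambda q: (-q[0], -q[1])):
        if p[1] > best_y:
            out.append(p)
            best_y = p[1]
    return out
```

The result is reported right to left along the staircase; duplicates and dominated ties at equal x are filtered by the strict comparison.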
Complexity and Algorithms for the Discrete Fréchet Distance Upper Bound with Imprecise Input
We study the problem of computing the upper bound of the discrete Fréchet
distance for imprecise input, and prove that the problem is NP-hard. This
solves an open problem posed in 2010 by Ahn et al. If shortcuts are
allowed, we show that the upper bound of the discrete Fréchet distance with
shortcuts for imprecise input can be computed in polynomial time, and we
present several efficient algorithms.
Comment: 15 pages, 8 figures
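For reference, the precise-input discrete Fréchet distance (the quantity whose imprecise upper bound is studied here) is computable by the classic quadratic dynamic program of Eiter and Mannila; a compact memoized sketch (our naming):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q via
    the standard O(|P| * |Q|) dynamic program: the coupling cost up to
    (i, j) is the current pair distance, maxed with the cheapest of the
    three predecessor states."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```

The imprecise version asks how large this value can get when each vertex may move within its region, which is where the NP-hardness above kicks in.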
09111 Abstracts Collection -- Computational Geometry
From March 8 to March 13, 2009, the Dagstuhl Seminar 09111 ``Computational Geometry'' was held in Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.