514 research outputs found
Convex Hulls: Complexity and Applications (a Survey)
Computational geometry is, in brief, the study of algorithms for geometric problems. The classical study of geometry and geometric objects, however, is not well suited to efficient algorithmic techniques. Thus, for a given geometric problem, it becomes necessary to identify properties and concepts that lend themselves to efficient computation. The primary focus of this paper is one such geometric problem: the convex hull problem.
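As a concrete illustration of such an algorithm (a minimal sketch, not taken from the survey itself), here is Andrew's monotone chain, a standard O(n log n) method for the planar convex hull:

```python
# Andrew's monotone chain: sort the points, then build the lower and upper
# hull chains with a stack, discarding points that create non-left turns.

def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # chain endpoints are shared; drop them

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# the interior point (1, 1) is discarded; the four corners remain
```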
Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing
Motivated by the desire to cope with data imprecision, we study methods for
taking advantage of preliminary information about point sets in order to speed
up the computation of certain structures associated with them.
In particular, we study the following problem: given a set L of n lines in
the plane, we wish to preprocess L such that later, upon receiving a set P of n
points, each of which lies on a distinct line of L, we can construct the convex
hull of P efficiently. We show that in quadratic time and space it is possible
to construct a data structure on L that enables us to compute the convex hull
of any such point set P in O(n alpha(n) log* n) expected time. If we further
assume that the points are "oblivious" with respect to the data structure, the
running time improves to O(n alpha(n)). The analysis applies almost verbatim
when L is a set of line-segments, and yields similar asymptotic bounds. We
present several extensions, including a trade-off between space and query time
and an output-sensitive algorithm. We also study the "dual problem" where we
show how to efficiently compute the (<= k)-level of n lines in the plane, each
of which lies on a distinct point (given in advance).
We complement our results by Omega(n log n) lower bounds under the algebraic
computation tree model for several related problems, including sorting a set of
points (according to, say, their x-order), each of which lies on a given line
known in advance. Therefore, the convex hull problem under our setting is
easier than sorting, contrary to the "standard" convex hull and sorting
problems, in which the two problems require Theta(n log n) steps in the worst
case (under the algebraic computation tree model).

Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at SoCG 201
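The Theta(n log n) bound for the standard problems rests on the classical reduction from sorting to convex hulls: lift each number x to the point (x, x^2) on a parabola, and walking the lower hull left to right recovers sorted order. A toy sketch of that reduction (an O(n^2) gift-wrapping walk, purely illustrative and not this paper's method):

```python
# Sorting distinct numbers via a convex-hull computation: every lifted point
# (x, x*x) is a vertex of the lower hull of the parabola, so traversing the
# lower hull from the leftmost point yields the numbers in increasing order.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def sort_via_hull(xs):
    """Sort distinct numbers by walking the lower hull of their lifted copies."""
    pts = [(x, x * x) for x in set(xs)]
    cur = min(pts)                     # leftmost lifted point starts the walk
    out = [cur[0]]
    while len(out) < len(pts):
        cand = None
        for p in pts:
            if p[0] <= cur[0]:
                continue
            # gift-wrapping step: keep the most clockwise candidate, i.e. the
            # chord of smallest slope, which is the next lower-hull vertex
            if cand is None or cross(cur, cand, p) < 0:
                cand = p
        cur = cand
        out.append(cur[0])
    return out
```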
Tight bounds for some problems in computational geometry: the complete sub-logarithmic parallel time range
There are a number of fundamental problems in computational geometry for which work-optimal algorithms exist with a parallel running time of O(log n) in the PRAM model. These include problems like two- and three-dimensional convex hulls, trapezoidal decomposition, arrangement construction, and dominance, among others. Further improvements in running time to the sub-logarithmic range were not considered likely because of their close relationship to sorting, for which an Omega(log n / log log n) lower bound is known to hold even with a polynomial number of processors. However, with recent progress in padded-sort algorithms, which circumvent the conventional lower bounds, there arises a natural question about speeding up algorithms for the above-mentioned geometric problems (with appropriate modifications in the output specification). We present randomized parallel algorithms for some fundamental problems like convex hulls and trapezoidal decomposition which execute in sub-logarithmic time on a CRCW PRAM. Our algorithms do not make any assumptions about the input distribution. Our work relies heavily on results on padded sorting and some earlier results of Reif and Sen [28, 27]. We further prove a matching lower bound for these problems in the bounded-degree decision tree model.
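To make the relaxed output specification concrete: a padded sort returns its n keys in sorted order inside an array of size (1 + eps) * n that may contain empty slots. The sequential stand-in below (not a parallel algorithm, and not from the paper) only demonstrates that output format:

```python
# Illustration of the padded-sort output specification: the keys appear in
# sorted order, interleaved with None "padding" slots. Parallel padded-sort
# algorithms exploit this slack to beat the parallel lower bound for exact
# sorting; this sequential code only shows the format of the result.

def padded_output(keys, eps=0.5):
    if not keys:
        return []
    out_len = int((1 + eps) * len(keys))
    out = [None] * out_len
    for rank, key in enumerate(sorted(keys)):
        # spread the keys across the padded array, preserving sorted order;
        # since out_len >= len(keys), no two keys collide
        out[rank * out_len // len(keys)] = key
    return out

print(padded_output([3, 1, 2], eps=1.0))
# → [1, None, 2, None, 3, None]
```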
An Optimal Algorithm for Higher-Order Voronoi Diagrams in the Plane: The Usefulness of Nondeterminism
We present the first optimal randomized algorithm for constructing the
order-k Voronoi diagram of n points in two dimensions. The expected running
time is O(n log n + nk), which improves the previous, two-decades-old result
of Ramos (SoCG'99) by a 2^{O(log* k)} factor. To obtain our result, we (i)
use a recent decision-tree technique of Chan and Zheng (SODA'22) in combination
with Ramos's cutting construction, to reduce the problem to verifying an
order-k Voronoi diagram, and (ii) solve the verification problem by a new
divide-and-conquer algorithm using planar-graph separators.
We also describe a deterministic algorithm for constructing the k-level of
n lines in two dimensions in O(n log n + nk^{1/3}) time, and constructing
the k-level of n planes in three dimensions in O(n log n + nk^{3/2})
time. These time bounds (ignoring the n log n term) match the current best
upper bounds on the combinatorial complexity of the k-level. Previously, the
same time bound in two dimensions was obtained by Chan (1999) but with
randomization.

Comment: To appear in SODA 2024. 16 pages, 1 figure
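For orientation (a hypothetical helper, not part of the paper): the order-k Voronoi diagram partitions the plane into cells over which the *set* of k nearest sites is constant, so the cell containing a query point can be identified by brute force in O(n log n) time per query. The algorithms above construct the whole diagram instead.

```python
# Brute-force point location in the order-k Voronoi diagram: the label of
# the cell containing q is simply the set of the k sites nearest to q
# (for k = 1 this is the ordinary Voronoi diagram).

def order_k_cell(sites, q, k):
    """Return the set of k sites nearest to q, i.e. the label of the
    order-k Voronoi cell containing q (assuming no ties in distance)."""
    def sqdist(s):
        return (s[0] - q[0]) ** 2 + (s[1] - q[1]) ** 2
    return frozenset(sorted(sites, key=sqdist)[:k])

sites = [(0, 0), (10, 0), (0, 10), (10, 10)]
print(order_k_cell(sites, (1, 2), 2))
# the two nearest corners to (1, 2) are (0, 0) and (0, 10)
```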
Theta Bodies for Polynomial Ideals
Inspired by a question of Lov\'asz, we introduce a hierarchy of nested
semidefinite relaxations of the convex hull of real solutions to an arbitrary
polynomial ideal, called theta bodies of the ideal. For the stable set problem
in a graph, the first theta body in this hierarchy is exactly Lov\'asz's theta
body of the graph. We prove that theta bodies are, up to closure, a version of
Lasserre's relaxations for real solutions to ideals, and that they can be
computed explicitly using combinatorial moment matrices. Theta bodies provide a
new canonical set of semidefinite relaxations for the max cut problem. For
vanishing ideals of finite point sets, we give several equivalent
characterizations of when the first theta body equals the convex hull of the
points. We also determine the structure of the first theta body for all ideals.

Comment: 26 pages, 3 figures
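Concretely, in the notation of this line of work (a sketch of the standard definition; details may differ from the paper), the k-th theta body of an ideal I in R[x_1, ..., x_n] can be written as:

```latex
% f is k-sos modulo I if f \equiv \sum_i h_i^2 \pmod{I} for some
% polynomials h_i of degree at most k.
\mathrm{TH}_k(I) \;=\;
  \bigl\{\, x \in \mathbb{R}^n \;:\; f(x) \ge 0
  \text{ for every } f \text{ that is } k\text{-sos modulo } I \,\bigr\},
\qquad
\mathrm{TH}_1(I) \supseteq \mathrm{TH}_2(I) \supseteq \cdots \supseteq
  \operatorname{conv}\bigl(V_{\mathbb{R}}(I)\bigr).
```

Each body in the hierarchy is a semidefinite relaxation of the convex hull of the real variety, and the nesting above is what makes the hierarchy useful.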
The Convex Hull Problem in Practice: Improving the Running Time of the Double Description Method
The double description method is a simple but widely used algorithm for computation of extreme points in polyhedral sets. One key aspect of its implementation is the question of how to efficiently test extreme points for adjacency. In this dissertation, two significant contributions related to adjacency testing are presented. First, the currently used data structures are revisited and various optimizations are proposed. Empirical evidence is provided to demonstrate their competitiveness. Second, a new adjacency test is introduced. It is a refinement of the well-known algebraic test, featuring a technique for avoiding redundant computations. Its correctness is formally proven. Its superiority in multiple degenerate scenarios is demonstrated through experimental results. Parallel computation is one further aspect of the double description method covered in this work. A recently introduced divide-and-conquer technique is revisited and considerable practical limitations are demonstrated.
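For context, the standard *combinatorial* adjacency test used in double description implementations (a simplified illustration, not the dissertation's refined algebraic test) can be sketched as follows: two extreme rays are adjacent iff no third ray is tight on every constraint that is tight on both of them.

```python
# Combinatorial adjacency test for extreme rays of a polyhedral cone.
# Each ray is represented only by its "zero set": the set of indices of
# constraints that hold with equality (are tight) on that ray.

def adjacent(zero_sets, i, j):
    """Rays i and j are adjacent iff no other ray's zero set contains
    the common zero set of i and j."""
    common = zero_sets[i] & zero_sets[j]
    return not any(
        common <= zero_sets[t]          # subset test against every third ray
        for t in range(len(zero_sets))
        if t != i and t != j
    )

# Toy example: the four extreme rays of a cone over a square, where ray t
# is tight exactly on the two facets it lies on.
rays = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
print(adjacent(rays, 0, 1))   # consecutive rays share facet 1
print(adjacent(rays, 0, 2))   # opposite rays share no facet
```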