Polygon Placement Revisited: (Degree of Freedom + 1)-SUM Hardness and an Improvement via Offline Dynamic Rectangle Union
We revisit the classical problem of determining the largest copy of a simple polygon P that can be placed into a simple polygon Q. Despite significant effort, known algorithms require high polynomial running times. Barequet and Har-Peled (2001) give a lower bound of n^{2-o(1)} under the 3SUM conjecture when P and Q are (convex) polygons with Θ(n) vertices each. This leaves open whether we can establish (1) hardness beyond quadratic time and (2) any superlinear bound for constant-sized P or Q. In this paper, we affirmatively answer these questions under the k-SUM conjecture, proving natural hardness results that increase with each degree of freedom (scaling, x-translation, y-translation, rotation): (1) Finding the largest copy of P that can be x-translated into Q requires time n^{2-o(1)} under the 3SUM conjecture. (2) Finding the largest copy of P that can be arbitrarily translated into Q requires time n^{2-o(1)} under the 4SUM conjecture. (3) The above lower bounds are almost tight when one of the polygons is of constant size: we obtain an almost-matching algorithm for orthogonal polygons P and Q with p and q vertices, respectively. (4) Finding the largest copy of P that can be arbitrarily rotated and translated into Q requires time n^{3-o(1)} under the 5SUM conjecture. We are not aware of any other such natural "degree of freedom + 1"-SUM hardness for a geometric optimization problem.
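For concreteness, the underlying optimization problem can be stated as follows (our notation, not the paper's): compute

    \max \{ \lambda \ge 0 : T(\lambda P) \subseteq Q \ \text{for some placement } T \text{ in the allowed family} \},

where the allowed family of placements ranges from x-translations only, to arbitrary translations, to rotations plus translations, matching results (1), (2), and (4) above.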
Approximating the Maximum Overlap of Polygons under Translation
Let P and Q be two simple polygons in the plane of total complexity n, each of which can be decomposed into at most k convex parts. We present a (1-ε)-approximation algorithm for finding the translation of Q which maximizes its area of overlap with P. Our algorithm runs in O(cn) time, where c is a constant that depends only on k and ε. This suggests that for polygons that are "close" to being convex, the problem can be solved (approximately) in near-linear time.
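In symbols (our notation), with t ranging over translations of the plane, the goal is to approximately maximize the overlap function

    f(t) = \mathrm{area}\big(P \cap (Q + t)\big), \qquad t \in \mathbb{R}^2,

i.e., to return a translation t with f(t) ≥ (1-ε) · max_{t'} f(t').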
Translating Hausdorff is Hard: Fine-Grained Lower Bounds for Hausdorff Distance Under Translation
Computing the similarity of two point sets is a ubiquitous task in medical
imaging, geometric shape comparison, trajectory analysis, and many more
settings. Arguably the most basic distance measure for this task is the
Hausdorff distance, which assigns to each point from one set the closest point
in the other set and then evaluates the maximum distance of any assigned pair.
A drawback is that this distance measure is not translation-invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible.
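Formally, for point sets A and B, the Hausdorff distance sketched above is

    d_H(A, B) = \max\Big\{ \max_{a \in A} \min_{b \in B} \|a - b\|, \ \max_{b \in B} \min_{a \in A} \|b - a\| \Big\}.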
Fortunately, there is a canonical translation-invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size n and m, the Hausdorff distance under translation can be computed in time Õ(nm) for the L_1 and L_∞ norms [Chew, Kedem SWAT'92] and Õ(nm(n+m)) for the L_2 norm [Huttenlocher, Kedem, Sharir DCG'93].
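In the same notation, the translation-invariant version minimizes over all translations τ of one of the sets:

    d_H^T(A, B) = \min_{\tau \in \mathbb{R}^2} d_H(A + \tau, B).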
As these bounds have not been improved for over 25 years, in this paper we
approach the Hausdorff distance under translation from the perspective of
fine-grained complexity theory. We show (i) a matching lower bound of
(nm)^{1-o(1)} for L_1 and L_∞ (and all other L_p norms) assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of n^{2-o(1)} for L_2 in the imbalanced case of m = O(1) assuming the 3SUM Hypothesis.
Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)
Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, utilizing polynomial-time reductions to base the hardness of those problems on the famous conjecture P != NP. For problems that already have polynomial-time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem whose O(n^k) running time has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, and (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.
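As a standard illustration of step (2) (a textbook example, not one claimed by this abstract): the Strong Exponential Time Hypothesis (SETH) asserts that for every ε > 0 there is a clause width k such that k-SAT admits no O(2^{(1-ε)n})-time algorithm, and a fine-grained reduction of Williams (2005) then shows

    \text{SETH} \implies \text{Orthogonal Vectors has no } O(n^{2-\delta})\text{-time algorithm for any } \delta > 0,

after which quadratic-time lower bounds for many problems in P follow by further reductions from Orthogonal Vectors.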
Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy
Given a set of n real numbers, the 3SUM problem is to decide whether there are three of them that sum to zero. Until a recent breakthrough by Grønlund and Pettie [FOCS'14], a simple Θ(n^2)-time deterministic algorithm for this problem was conjectured to be optimal. Over the years many algorithmic problems have been shown to be reducible from the 3SUM problem or its variants, including the more generalized forms of the problem, such as k-SUM and k-variate linear degeneracy testing (k-LDT). The conjectured hardness of these problems has become extremely popular for basing conditional lower bounds for numerous algorithmic problems in P.
In this paper, we show that the randomized 4-linear decision tree complexity of 3SUM is O(n^{3/2}), and that the randomized (2k-2)-linear decision tree complexity of k-SUM and k-LDT is O(n^{k/2}), for any odd k ≥ 3. These bounds improve (albeit randomized) the corresponding O(n^{3/2} √(log n)) and O(n^{k/2} √(log n)) decision tree bounds obtained by Grønlund and Pettie. Our technique includes a specialized randomized variant of the fractional cascading data structure. Additionally, we give another deterministic algorithm for 3SUM that runs in O(n^2 log log n / log n) time. The latter bound matches a recent independent bound by Freund [Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use of the word-RAM model.
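For reference, the generalizations mentioned above are defined as follows (standard definitions): k-SUM asks, given a set A of n real numbers, whether some x_1, …, x_k ∈ A satisfy x_1 + ⋯ + x_k = 0, while k-LDT fixes reals α_0, α_1, …, α_k and asks whether

    \alpha_0 + \alpha_1 x_1 + \cdots + \alpha_k x_k = 0 \quad \text{for some } x_1, \ldots, x_k \in A,

so that 3SUM is the special case k = 3, α_0 = 0, α_1 = α_2 = α_3 = 1.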
Threesomes, Degenerates, and Love Triangles
The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies Ω(n^2) lower bounds on numerous problems in computational geometry, and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures.
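For intuition, here is a minimal sketch (ours, not the paper's) of the trivial quadratic algorithm the conjecture refers to: sort once, then run an inward two-pointer scan for each fixed first element.

    def three_sum(nums):
        """Decide 3SUM in O(n^2) time after an O(n log n) sort."""
        a = sorted(nums)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = a[i] + a[lo] + a[hi]
                if s == 0:
                    return True      # witness: a[i], a[lo], a[hi]
                elif s < 0:
                    lo += 1          # sum too small: move left pointer up
                else:
                    hi -= 1          # sum too large: move right pointer down
        return False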
In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is O(n^{3/2} √(log n)) and give two subquadratic 3SUM algorithms, a deterministic one running in O(n^2 / (log n / log log n)^{2/3}) time and a randomized one running in O(n^2 (log log n)^2 / log n) time with high probability. Our results lead directly to improved bounds for k-variate linear degeneracy testing for all odd k ≥ 3. The problem is to decide, given a linear function f(x_1, …, x_k) = α_0 + Σ_i α_i x_i and a set A ⊂ ℝ, whether 0 ∈ f(A^k). We show the decision tree complexity of this problem is O(n^{k/2} √(log n)).
Finally, we give a subcubic algorithm for a generalization of the (min,+)-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-O(n^{5/2} √(log n)) decision tree for this problem, as well as an algorithm running in O(n^3 (log log n)^2 / log n) time.
Solving k-SUM using few linear queries
The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3 n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^{⌈k/2⌉+8}) time and performs O(n^3 log^3 n) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8 in the exponent) to the best known algorithm but, for the first time, with the number of queries on the input bounded by a polynomial that is independent of k. The O(n^3 log^3 n) bound on the number of linear queries is also tighter than the query count of any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem, which is a cornerstone of complexity-within-P.
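For context on the O(n^{⌈k/2⌉}) benchmark: it stems from meet-in-the-middle over ⌈k/2⌉-wise sums. A hedged sketch for k = 4 (our illustration, not this paper's algorithm; roughly O(n^2) expected time, ignoring degenerate inputs with many repeated pairwise sums):

    from itertools import combinations

    def four_sum(a):
        """4SUM via meet-in-the-middle: hash all pairwise sums, then look
        up the negation of each pairwise sum among them."""
        pair_sums = {}
        for i, j in combinations(range(len(a)), 2):
            pair_sums.setdefault(a[i] + a[j], []).append((i, j))
        for i, j in combinations(range(len(a)), 2):
            for p, q in pair_sums.get(-(a[i] + a[j]), ()):
                if len({i, j, p, q}) == 4:   # insist on four distinct indices
                    return True
        return False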
We also consider a range of tradeoffs between the number of terms involved in
the queries and the depth of the decision tree. In particular, we prove that
there exist o(n)-linear decision trees of depth o(n^4).
Geometric Optimization Problem Solving: Matching Sets of Line Segments and Multi-robot Path Planning
We study two geometric optimization problems: line segment pattern matching and multi-robot path planning. We give approximation algorithms for matching two sets of line segments in constant dimension. We consider several versions of the problem: Hausdorff distance, bottleneck distance, and largest common subset. We study these similarity measures under several sets of transformations: translations in arbitrary dimension, rotations about a fixed point, and rigid motions in two dimensions. As opposed to previous theoretical work on this problem, we match segments individually; in other words, we regard our two input sets as sets of segments rather than unions of segments.
Then we consider a multi-robot path planning problem. A collection of square robots needs to move on the integer grid, from their given starting points to their target points, without collisions between robots, or between robots and a set of input obstacles. We designed and implemented three algorithms for this problem. First, we computed a feasible solution by placing middle-points outside of the minimum bounding box of the starting positions, the target positions, and the obstacles, and moving each robot from its starting point to its target point through a middle-point. Second, we applied a simple local search approach where we repeatedly delete and reinsert a random robot through an optimal path; this improves the quality of the solution, as the robots no longer need to go through the middle-points. Finally, we used simulated annealing to further improve this feasible solution.
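A schematic of the delete-and-reinsert local search step described above, as a sketch under assumptions: route_one_robot is a hypothetical helper that computes an optimal collision-free path for a single robot treating the other robots' paths as fixed, and paths maps each robot to its current path.

    import random

    def local_search(paths, robots, route_one_robot, iterations=1000):
        """Repeatedly delete one random robot and reinsert it along an
        optimal path with respect to the remaining robots' fixed paths."""
        for _ in range(iterations):
            r = random.choice(robots)
            old_path = paths.pop(r)               # delete a random robot
            new_path = route_one_robot(r, paths)  # re-route it optimally
            paths[r] = new_path if new_path is not None else old_path
        return paths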
Fine-grained complexity and algorithm engineering of geometric similarity measures
Point sets and sequences are fundamental geometric objects that arise in any application that considers movement data, geometric shapes, and many more. A crucial task on these objects is to measure their similarity. Therefore, this thesis presents results on algorithms, complexity lower bounds, and algorithm engineering of the most important point set and sequence similarity measures, such as the Fréchet distance, the Fréchet distance under translation, and the Hausdorff distance under translation. As an extension to the mere computation of similarity, the approximate near neighbor problem for the continuous Fréchet distance on time series is also considered, and matching upper and lower bounds are shown.
A Nearly Quadratic Bound for the Decision Tree Complexity of k-SUM
We show that the k-SUM problem can be solved by a linear decision tree of depth O(n^2 log^2 n), improving the recent bound of O(n^3 log^3 n) of Cardinal et al. Our bound depends linearly on k, and allows us to conclude that the number of linear queries required to decide the n-dimensional Knapsack or SubsetSum problems is only O(n^3 log n), improving the currently best known bounds by a factor of n. Our algorithm extends to the RAM model, showing that the k-SUM problem can be solved in expected polynomial time, for any fixed k, with the above bound on the number of linear queries. Our approach relies on a new point-location mechanism, exploiting "ε-cuttings" that are based on vertical decompositions in hyperplane arrangements in high dimensions.
A major side result of the analysis in this paper is a sharper bound on the complexity of the vertical decomposition of such an arrangement (in terms of its dependence on the dimension). We hope that this study will reveal further structural properties of vertical decompositions in hyperplane arrangements.
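The connection between k-SUM and point location exploited here is the standard one: an input a = (a_1, …, a_n) is a yes-instance of k-SUM exactly when the point a lies on one of the hyperplanes

    x_{i_1} + x_{i_2} + \cdots + x_{i_k} = 0, \qquad 1 \le i_1 < \cdots < i_k \le n,

so deciding k-SUM amounts to locating a in this arrangement of O(n^k) hyperplanes in ℝ^n, and each comparison performed during point location is a linear query.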