
    Polygon Placement Revisited: (Degree of Freedom + 1)-SUM Hardness and an Improvement via Offline Dynamic Rectangle Union

    We revisit the classical problem of determining the largest copy of a simple polygon $P$ that can be placed into a simple polygon $Q$. Despite significant effort, known algorithms require high polynomial running times. Barequet and Har-Peled (2001) give a lower bound of $n^{2-o(1)}$ under the 3SUM conjecture when $P$ and $Q$ are (convex) polygons with $\Theta(n)$ vertices each. This leaves open whether we can establish (1) hardness beyond quadratic time and (2) any superlinear bound for constant-sized $P$ or $Q$. In this paper, we affirmatively answer these questions under the $k$SUM conjecture, proving natural hardness results that increase with each degree of freedom (scaling, $x$-translation, $y$-translation, rotation): (1) Finding the largest copy of $P$ that can be $x$-translated into $Q$ requires time $n^{2-o(1)}$ under the 3SUM conjecture. (2) Finding the largest copy of $P$ that can be arbitrarily translated into $Q$ requires time $n^{2-o(1)}$ under the 4SUM conjecture. (3) The above lower bounds are almost tight when one of the polygons is of constant size: we obtain an $\tilde O((pq)^{2.5})$-time algorithm for orthogonal polygons $P, Q$ with $p$ and $q$ vertices, respectively. (4) Finding the largest copy of $P$ that can be arbitrarily rotated and translated into $Q$ requires time $n^{3-o(1)}$ under the 5SUM conjecture. We are not aware of any other such natural (degree of freedom $+1$)-SUM hardness for a geometric optimization problem.

    Approximating the Maximum Overlap of Polygons under Translation

    Let $P$ and $Q$ be two simple polygons in the plane of total complexity $n$, each of which can be decomposed into at most $k$ convex parts. We present a $(1-\varepsilon)$-approximation algorithm for finding the translation of $Q$ that maximizes its area of overlap with $P$. Our algorithm runs in $O(cn)$ time, where $c$ is a constant that depends only on $k$ and $\varepsilon$. This suggests that for polygons that are "close" to being convex, the problem can be solved (approximately) in near-linear time.
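
    As a point of reference for the problem statement, the following small Python sketch (using the shapely library) brute-forces the overlap objective over a user-supplied grid of candidate translations. It only illustrates what is being maximized; it is not the $(1-\varepsilon)$-approximation algorithm of the paper, and the grid of candidate shifts is an arbitrary assumption.

        # Illustration only: brute-force the overlap objective on a coarse grid of shifts.
        # Requires the shapely package (pip install shapely).
        from shapely.geometry import Polygon
        from shapely.affinity import translate

        def best_overlap_on_grid(P, Q, xs, ys):
            """Try every translation (dx, dy) in xs x ys and return the largest area of
            overlap of the translated Q with P, together with the shift achieving it."""
            best_area, best_shift = 0.0, (0.0, 0.0)
            for dx in xs:
                for dy in ys:
                    area = P.intersection(translate(Q, xoff=dx, yoff=dy)).area
                    if area > best_area:
                        best_area, best_shift = area, (dx, dy)
            return best_area, best_shift

        if __name__ == "__main__":
            P = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])          # 4x4 square
            Q = Polygon([(10, 10), (12, 10), (12, 12), (10, 12)])  # 2x2 square, far from P
            shifts = [x - 12.0 for x in range(0, 9)]               # candidate shifts -12 .. -4
            print(best_overlap_on_grid(P, Q, shifts, shifts))      # (4.0, (-10.0, -10.0))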

    Translating Hausdorff is Hard: Fine-Grained Lower Bounds for Hausdorff Distance Under Translation

    Computing the similarity of two point sets is a ubiquitous task in medical imaging, geometric shape comparison, trajectory analysis, and many more settings. Arguably the most basic distance measure for this task is the Hausdorff distance, which assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of any assigned pair. A drawback is that this distance measure is not translation invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible. Fortunately, there is a canonical translation-invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size $n$ and $m$, the Hausdorff distance under translation can be computed in time $\tilde O(nm)$ for the $L_1$ and $L_\infty$ norms [Chew, Kedem SWAT'92] and $\tilde O(nm(n+m))$ for the $L_2$ norm [Huttenlocher, Kedem, Sharir DCG'93]. As these bounds have not been improved for over 25 years, in this paper we approach the Hausdorff distance under translation from the perspective of fine-grained complexity theory. We show (i) a matching lower bound of $(nm)^{1-o(1)}$ for $L_1$ and $L_\infty$ (and all other $L_p$ norms) assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of $n^{2-o(1)}$ for $L_2$ in the imbalanced case of $m = O(1)$ assuming the 3SUM Hypothesis.
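
    To make the definition above concrete, here is a minimal Python sketch of the brute-force $O(nm)$ Hausdorff distance under the $L_2$ norm, together with a naive minimization over a finite list of candidate translations. It is meant only to illustrate the definitions; the cited algorithms compute the exact minimum over all translations far more efficiently.

        import math

        def directed_hausdorff(A, B):
            """Assign each point of A its nearest point in B (L2 norm) and take the maximum distance."""
            return max(min(math.dist(a, b) for b in B) for a in A)

        def hausdorff(A, B):
            """Symmetric Hausdorff distance: the larger of the two directed distances."""
            return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

        def hausdorff_under_translation(A, B, candidate_shifts):
            """Minimize the Hausdorff distance over a finite list of translations of B
            (illustration only; the actual problem minimizes over all translations)."""
            return min(hausdorff(A, [(x + dx, y + dy) for (x, y) in B])
                       for (dx, dy) in candidate_shifts)

        if __name__ == "__main__":
            A = [(0.0, 0.0), (1.0, 0.0)]
            B = [(5.0, 0.0), (6.0, 0.0)]
            print(hausdorff(A, B))                                   # 5.0
            print(hausdorff_under_translation(A, B, [(-5.0, 0.0)]))  # 0.0 after shifting B back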

    Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

    Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, using polynomial-time reductions to base their hardness on the famous conjecture P != NP. For problems that already have polynomial-time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem $A$ with a running time of $O(n^k)$ that has not been improved in decades also requires $n^{k-o(1)}$ time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require $T(n)$ time to solve, and (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.

    Improved Bounds for 3SUM, $k$-SUM, and Linear Degeneracy

    Given a set of $n$ real numbers, the 3SUM problem is to decide whether there are three of them that sum to zero. Until a recent breakthrough by Grønlund and Pettie [FOCS'14], a simple $\Theta(n^2)$-time deterministic algorithm for this problem was conjectured to be optimal. Over the years many algorithmic problems have been shown to be reducible from the 3SUM problem or its variants, including the more generalized forms of the problem, such as $k$-SUM and $k$-variate linear degeneracy testing ($k$-LDT). The conjectured hardness of these problems has become extremely popular for basing conditional lower bounds for numerous algorithmic problems in P. In this paper, we show that the randomized $4$-linear decision tree complexity of 3SUM is $O(n^{3/2})$, and that the randomized $(2k-2)$-linear decision tree complexity of $k$-SUM and $k$-LDT is $O(n^{k/2})$, for any odd $k \ge 3$. These bounds improve (albeit randomized) the corresponding $O(n^{3/2}\sqrt{\log n})$ and $O(n^{k/2}\sqrt{\log n})$ decision tree bounds obtained by Grønlund and Pettie. Our technique includes a specialized randomized variant of the fractional cascading data structure. Additionally, we give another deterministic algorithm for 3SUM that runs in $O(n^2 \log\log n / \log n)$ time. The latter bound matches a recent independent bound by Freund [Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use of the word-RAM model.
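
    For context, the following Python sketch is the textbook quadratic-time 3SUM algorithm (sort once, then a two-pointer scan per element), i.e., the kind of baseline that the decision-tree and subquadratic bounds discussed above improve upon. It is not the algorithm of this paper, and the tolerance parameter is an assumption for floating-point input.

        def three_sum(numbers, eps=1e-9):
            """Return a triple of distinct input elements summing to (approximately) zero,
            or None. Classic O(n^2) approach: sort, then for each element run a
            two-pointer scan over the remaining sorted suffix."""
            a = sorted(numbers)
            n = len(a)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = a[i] + a[lo] + a[hi]
                    if abs(s) <= eps:
                        return a[i], a[lo], a[hi]
                    if s < 0:
                        lo += 1        # sum too small: move the lower pointer up
                    else:
                        hi -= 1        # sum too large: move the upper pointer down
            return None

        if __name__ == "__main__":
            print(three_sum([3.5, -1.25, 7.0, -2.25, 4.0]))  # (-2.25, -1.25, 3.5)
            print(three_sum([1.0, 2.0, 3.0]))                # None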

    Threesomes, Degenerates, and Love Triangles

    The 3SUM problem is to decide, given a set of $n$ real numbers, whether any three sum to zero. It is widely conjectured that a trivial $O(n^2)$-time algorithm is optimal and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies $\Omega(n^2)$ lower bounds on numerous problems in computational geometry, and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures. In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is $O(n^{3/2}\sqrt{\log n})$ and give two subquadratic 3SUM algorithms, a deterministic one running in $O(n^2 / (\log n/\log\log n)^{2/3})$ time and a randomized one running in $O(n^2 (\log\log n)^2 / \log n)$ time with high probability. Our results lead directly to improved bounds for $k$-variate linear degeneracy testing for all odd $k \ge 3$. The problem is to decide, given a linear function $f(x_1,\ldots,x_k) = \alpha_0 + \sum_{1\le i\le k} \alpha_i x_i$ and a set $A \subset \mathbb{R}$, whether $0 \in f(A^k)$. We show the decision tree complexity of this problem is $O(n^{k/2}\sqrt{\log n})$. Finally, we give a subcubic algorithm for a generalization of the $(\min,+)$-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-$O(n^{5/2}\sqrt{\log n})$ decision tree for this problem, as well as an algorithm running in time $O(n^3 (\log\log n)^2/\log n)$.
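
    To pin down the $k$-LDT definition used above, here is a small brute-force Python sketch: it tries all tuples in $A^k$, so it runs in $O(n^k)$ time and serves only to illustrate the problem, not the decision-tree bounds of the paper; the tolerance parameter is an assumption for floating-point input.

        from itertools import product

        def k_ldt(alphas, A, eps=1e-9):
            """k-variate linear degeneracy testing: given f(x_1,...,x_k) = alpha_0 + sum_i alpha_i * x_i,
            decide whether 0 lies in f(A^k). alphas = [alpha_0, alpha_1, ..., alpha_k].
            Brute force over all tuples in A^k, for illustration only."""
            alpha0, coeffs = alphas[0], alphas[1:]
            for xs in product(A, repeat=len(coeffs)):
                if abs(alpha0 + sum(c * x for c, x in zip(coeffs, xs))) <= eps:
                    return True
            return False

        if __name__ == "__main__":
            # With alpha_0 = 0 and all alpha_i = 1, k-LDT specializes to k-SUM
            # (with repeated elements of A allowed).
            print(k_ldt([0.0, 1.0, 1.0, 1.0], [-2.0, 0.5, 1.5]))  # True: -2.0 + 0.5 + 1.5 = 0
            print(k_ldt([1.0, 2.0, 2.0], [3.0, 4.0]))             # False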

    Solving $k$-SUM using few linear queries

    The $k$-SUM problem is, given $n$ input real numbers, to determine whether any $k$ of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity $O(n^c)$ with $c < \lceil \frac{k}{2} \rceil$. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth $O(n^3\log^3 n)$ solving $k$-SUM. Furthermore, we show that there exists a randomized algorithm that runs in $\tilde{O}(n^{\lceil \frac{k}{2} \rceil+8})$ time, and performs $O(n^3\log^3 n)$ linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the $+8$) to the best known algorithm but, for the first time, with the number of queries on the input a polynomial that is independent of $k$. The $O(n^3\log^3 n)$ bound on the number of linear queries is also tighter than that of any known algorithm solving $k$-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem, which is a cornerstone of complexity within P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist $o(n)$-linear decision trees of depth $o(n^4)$.
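
    The notion of a linear query can be made concrete with the following small Python sketch: the solver never reads the input directly but only asks for the sign of a linear combination of input entries, and a counter records how many such queries were made. The brute-force solver shown here needs $\Theta(n^k)$ queries, which is exactly the quantity the $O(n^3\log^3 n)$ bound above dramatically reduces; the class and function names are illustrative, not from the paper.

        from itertools import combinations

        class LinearQueryOracle:
            """Answers linear queries: the sign of a fixed linear combination of input entries.
            The solver below never touches the input except through query()."""
            def __init__(self, values):
                self._values = values
                self.queries = 0

            def query(self, indices, coefficients):
                self.queries += 1
                s = sum(c * self._values[i] for i, c in zip(indices, coefficients))
                return (s > 0) - (s < 0)  # sign in {-1, 0, +1}

        def k_sum_brute_force(oracle, n, k):
            """Decide k-SUM using only k-linear sign queries (one per k-subset of indices)."""
            for idx in combinations(range(n), k):
                if oracle.query(idx, [1.0] * k) == 0:
                    return True
            return False

        if __name__ == "__main__":
            oracle = LinearQueryOracle([4.0, -6.5, 1.0, 2.5, 10.0])
            print(k_sum_brute_force(oracle, 5, 3), oracle.queries)  # True 2 (4.0 - 6.5 + 2.5 = 0)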

    Geometric Optimization Problem Solving: Matching Sets of Line Segments and Multi-robot Path Planning

    We study two geometric optimization problems: line segment pattern matching and multi-robot path planning. We give approximation algorithms for matching two sets of line segments in constant dimension. We consider several versions of the problem: Hausdorff distance, bottleneck distance, and largest common subset. We study these similarity measures under several sets of transformations: translations in arbitrary dimension, rotations about a fixed point, and rigid motions in two dimensions. As opposed to previous theoretical work on this problem, we match segments individually; in other words, we regard our two input sets as sets of segments rather than unions of segments. Then we consider a multi-robot path planning problem. A collection of square robots need to move on the integer grid, from their given starting points to their target points, without collision between robots or between robots and a set of input obstacles. We designed and implemented three algorithms for this problem. First, we computed a feasible solution by placing middle points outside of the minimum bounding box of the starting positions, the target positions, and the obstacles, and moving each robot from its starting point to its target point through a middle point. Second, we applied a simple local search approach where we repeatedly delete a random robot and reinsert it along an optimal path; this improves the quality of the solution, as the robots no longer need to go through the middle points. Finally, we used simulated annealing to further improve this feasible solution.
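
    The first (feasible-solution) step can be sketched as follows; this is a heavily simplified, hypothetical Python illustration that assumes unit-size robots moving one at a time along L-shaped grid paths through per-robot middle points placed outside the bounding box, and it omits the obstacle and robot-robot collision checks that a real implementation of the thesis algorithms would need.

        def bounding_box(points):
            xs = [x for x, _ in points]
            ys = [y for _, y in points]
            return min(xs), min(ys), max(xs), max(ys)

        def axis_path(a, b):
            """L-shaped grid path from a to b: walk in x first, then in y (endpoints included)."""
            (ax, ay), (bx, by) = a, b
            path = [(x, ay) for x in range(ax, bx, 1 if bx >= ax else -1)]
            path += [(bx, y) for y in range(ay, by, 1 if by >= ay else -1)]
            path.append((bx, by))
            return path

        def middle_point_schedule(starts, targets, obstacles):
            """Route every robot start -> middle point -> target, one robot at a time.
            Each robot gets its own middle point outside the bounding box of all input
            points; collision checks along the legs are omitted in this sketch."""
            _x0, _y0, x1, y1 = bounding_box(starts + targets + list(obstacles))
            schedule = []
            for i, (s, t) in enumerate(zip(starts, targets)):
                middle = (x1 + 2 + i, y1 + 2)   # distinct point outside the box per robot
                schedule.append(axis_path(s, middle) + axis_path(middle, t)[1:])
            return schedule

        if __name__ == "__main__":
            starts, targets = [(0, 0), (3, 1)], [(3, 0), (0, 2)]
            for i, path in enumerate(middle_point_schedule(starts, targets, obstacles=[])):
                print(f"robot {i}: {path}")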

    Fine-grained complexity and algorithm engineering of geometric similarity measures

    Point sets and sequences are fundamental geometric objects that arise in any application that considers movement data, geometric shapes, and many other kinds of data. A crucial task on these objects is to measure their similarity. This thesis therefore presents results on algorithms, complexity lower bounds, and algorithm engineering for the most important point set and sequence similarity measures, such as the Fréchet distance, the Fréchet distance under translation, and the Hausdorff distance under translation. Going beyond the mere computation of similarity, the approximate near neighbor problem for the continuous Fréchet distance on time series is also considered, and matching upper and lower bounds are shown.

    A Nearly Quadratic Bound for the Decision Tree Complexity of k-SUM

    We show that the $k$-SUM problem can be solved by a linear decision tree of depth $O(n^2 \log^2 n)$, improving the recent bound of $O(n^3 \log^3 n)$ of Cardinal et al. Our bound depends linearly on $k$, and allows us to conclude that the number of linear queries required to decide the $n$-dimensional Knapsack or Subset Sum problems is only $O(n^3 \log n)$, improving the currently best known bounds by a factor of $n$. Our algorithm extends to the RAM model, showing that the $k$-SUM problem can be solved in expected polynomial time, for any fixed $k$, with the above bound on the number of linear queries. Our approach relies on a new point-location mechanism, exploiting "$\varepsilon$-cuttings" that are based on vertical decompositions in hyperplane arrangements in high dimensions. A major side result of the analysis in this paper is a sharper bound on the complexity of the vertical decomposition of such an arrangement (in terms of its dependence on the dimension). We hope that this study will reveal further structural properties of vertical decompositions in hyperplane arrangements.