36 research outputs found

    Subquadratic Algorithms for Algebraic Generalizations of 3SUM


    Solving k-SUM using few linear queries

    The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3 n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in ~O(n^{⌈k/2⌉+8}) time, and performs O(n^3 log^3 n) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of k. The O(n^3 log^3 n) bound on the number of linear queries is also a tighter bound than any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth o(n^4) for the k-SUM problem.
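
    As a point of reference for the ⌈k/2⌉ benchmark mentioned in the abstract, the sketch below shows the classical meet-in-the-middle approach to k-SUM in roughly O(n^{⌈k/2⌉}) time. It is only an illustration of the baseline being compared against, not the paper's decision-tree construction; the function name and the use of a Python dictionary are our own choices.

        from itertools import combinations

        def k_sum_meet_in_the_middle(nums, k):
            """Decide whether any k of the given numbers sum to zero.

            Classical baseline: store the sums of all floor(k/2)-element index
            sets in a hash table, then scan all ceil(k/2)-element index sets
            and look up the negated sum.  Roughly O(n^ceil(k/2)) time, which is
            the runtime benchmark the decision-tree depth bounds are compared to.
            """
            half = k // 2
            left_sums = {}
            for idx in combinations(range(len(nums)), half):
                s = sum(nums[i] for i in idx)
                left_sums.setdefault(s, []).append(set(idx))
            for idx in combinations(range(len(nums)), k - half):
                target = -sum(nums[i] for i in idx)
                # The two halves must use disjoint positions of the input.
                for used in left_sums.get(target, []):
                    if used.isdisjoint(idx):
                        return True
            return False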

    Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms unless SETH fails

    The Frechet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Frechet distance of two polygonal curves with n vertices has a runtime of O(n^2 log n). More than 20 years later, the state-of-the-art algorithms for most variants still take time more than O(n^2 / log n), but no matching lower bounds are known, not even under reasonable complexity-theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2-delta)^N) algorithm for CNF-SAT for any delta > 0. Under this assumption we show that the Frechet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2-delta}) for any delta > 0. This means that finding faster algorithms for the Frechet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Frechet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
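
    For concreteness, the following is a minimal sketch of the standard quadratic dynamic program for the discrete Frechet distance (in the style of Eiter and Mannila); it is exactly the kind of quadratic computation that, by the result above, cannot be made strongly subquadratic unless SETH fails. Names and the use of Euclidean coordinate tuples are illustrative choices.

        from math import dist  # Euclidean distance between coordinate tuples (Python 3.8+)

        def discrete_frechet(P, Q):
            """Discrete Frechet distance between curves P and Q, given as
            lists of points (coordinate tuples).

            c[i][j] is the smallest 'leash length' needed to jointly traverse
            the prefixes P[:i+1] and Q[:j+1], where in each step one or both
            walkers advance by a single vertex.  O(len(P) * len(Q)) time.
            """
            n, m = len(P), len(Q)
            c = [[0.0] * m for _ in range(n)]
            for i in range(n):
                for j in range(m):
                    d = dist(P[i], Q[j])
                    if i == 0 and j == 0:
                        c[i][j] = d
                    elif i == 0:
                        c[i][j] = max(c[i][j - 1], d)
                    elif j == 0:
                        c[i][j] = max(c[i - 1][j], d)
                    else:
                        c[i][j] = max(min(c[i - 1][j], c[i][j - 1], c[i - 1][j - 1]), d)
            return c[n - 1][m - 1]

    For example, discrete_frechet([(0, 0), (1, 0)], [(0, 1), (1, 1)]) evaluates to 1.0.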

    Solving k-SUM Using Few Linear Queries

    The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < d, where d is the ceiling of k/2. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^2 n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in ~O(n^{d+8}) time, and performs O(n^3 log^2 n) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of k. The O(n^3 log^2 n) bound on the number of linear queries is also a tighter bound than any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-a-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth ~O(n^3) for the k-SUM problem.
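
    To make the notion of a 'linear query' concrete, the sketch below instruments the textbook sort-plus-two-pointers 3SUM algorithm so that it touches the input only through sign queries on linear forms with at most three terms, and counts them. This is our own illustration of the query model (the count here is on the order of n^2), not the paper's construction, whose point is that polynomially fewer queries suffice.

        import functools

        def three_sum_with_linear_queries(x):
            """Decide whether three entries of x (at distinct indices) sum to
            zero, accessing the input only through sign(sum of c_i * x[i])
            queries with at most three terms.  Returns (answer, query_count).
            """
            queries = 0

            def sign(coeffs):
                # coeffs is a list of (coefficient, index) pairs; the algorithm
                # only ever learns the sign of the corresponding linear form.
                nonlocal queries
                queries += 1
                s = sum(c * x[i] for c, i in coeffs)
                return (s > 0) - (s < 0)

            n = len(x)
            # Sorting uses 2-linear queries of the form sign(x[a] - x[b]).
            order = sorted(range(n), key=functools.cmp_to_key(
                lambda a, b: sign([(1, a), (-1, b)])))

            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    # A 3-linear query: sign(x[order[i]] + x[order[lo]] + x[order[hi]]).
                    s = sign([(1, order[i]), (1, order[lo]), (1, order[hi])])
                    if s == 0:
                        return True, queries
                    if s < 0:
                        lo += 1
                    else:
                        hi -= 1
            return False, queries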

    A subquadratic algorithm for 3XOR

    Given a set X of n binary words of equal length w, the 3XOR problem asks for three elements a, b, c ∈ X such that a ⊕ b = c, where ⊕ denotes the bitwise XOR operation. The problem can be easily solved on a word RAM with word length w in time O(n^2 log n). Using Han's fast integer sorting algorithm (2002/2004) this can be reduced to O(n^2 log log n). With randomization or a sophisticated deterministic dictionary construction, creating a hash table for X with constant lookup time leads to an algorithm with (expected) running time O(n^2). At present, seemingly no faster algorithms are known. We present a surprisingly simple deterministic, quadratic time algorithm for 3XOR. Its core is a version of the Patricia trie for X, which makes it possible to traverse the set a ⊕ X in ascending order for arbitrary a ∈ {0, 1}^w in linear time. Furthermore, we describe a randomized algorithm for 3XOR with expected running time O(n^2 · min{log^3 w / w, (log log n)^2 / log^2 n}). The algorithm transfers techniques to our setting that were used by Baran, Demaine, and Pătrașcu (2005/2008) for solving the related int3SUM problem (the same problem with integer addition in place of binary XOR) in expected time o(n^2). As suggested by Jafargholi and Viola (2016), linear hash functions are employed. The latter authors also showed that assuming 3XOR needs expected running time n^{2-o(1)} one can prove conditional lower bounds for triangle enumeration just as with 3SUM. We demonstrate that 3XOR can be reduced to other problems as well, treating the examples offline SetDisjointness and offline SetIntersection, which were studied for 3SUM by Kopelowitz, Pettie, and Porat (2016).
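
    A minimal sketch of the hash-table baseline described above (expected O(n^2) time with constant-time lookups), not of the paper's Patricia-trie or randomized algorithms; words are modelled as Python integers of equal bit length, and whether a, b, c must be pairwise distinct varies by convention.

        def three_xor_baseline(words):
            """Decide whether some a, b, c in the input satisfy a ^ b == c.

            Baseline from the abstract: put all words into a hash set, then
            try every pair (a, b) and look up a ^ b.  With constant-time
            lookups this runs in (expected) O(n^2) time.
            """
            table = set(words)        # words are ints encoding w-bit strings
            ws = list(table)
            for i, a in enumerate(ws):
                for b in ws[i:]:      # unordered pairs; a == b is allowed here
                    if a ^ b in table:
                        return True
            return False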

    Fine-Grained Complexity: Exploring Reductions and their Properties

    Algorithmic design has been one of the main subjects of interest in computer science. While very effective in some areas, this approach has run into practical dead ends that have been very problematic for the progress of the field. Classical computational complexity practices have also not been able to bypass these blocks, and understanding the hardness of each problem is not trivial. Fine-Grained Complexity provides new perspectives on classic problems, resulting in solid links between famous conjectures in complexity and algorithmic design. It serves as a tool to prove conditional lower bounds for problems with polynomial time complexity, a field that had seen very little progress until now. Popular conjectures such as SETH, k-OV, 3SUM, and APSP imply many bounds that have yet to be proven using classic techniques, and provide a new understanding of the structure and entropy of problems in general. The aim of this thesis is to contribute towards solidifying the framework for reductions from each conjecture, and to explore the structural differences between the problems in each case.
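
    As a concrete example of the kind of problem these conjectures speak about, the sketch below is a brute-force check for Orthogonal Vectors, the canonical SETH-hard target of many fine-grained reductions. It is an illustration added here, not material from the thesis.

        def orthogonal_vectors(A, B):
            """Given two lists of d-dimensional 0/1 vectors, decide whether
            some a in A and b in B are orthogonal (dot product zero).

            Brute force takes O(n^2 * d) time; the OV conjecture (implied by
            SETH) asserts that no O(n^{2-eps} * poly(d)) algorithm exists, and
            reductions from OV transfer this hardness to many problems that
            are solvable in polynomial time.
            """
            for a in A:
                for b in B:
                    if all(ai * bi == 0 for ai, bi in zip(a, b)):
                        return True
            return False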

    Hardness of Approximation in P via Short Cycle Removal: Cycle Detection, Distance Oracles, and Beyond


    SETH-Based Lower Bounds for Subset Sum and Bicriteria Path

    Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial O*(T)-time algorithm for Subset-Sum on n numbers and target T cannot be improved to time T^{1-ε}·2^{o(n)} for any ε > 0, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of N given instances of Subset-Sum is a YES instance requires time (NT)^{1-o(1)}. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with m edges and edge lengths bounded by L, we show that the O(Lm) pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to ~O(L+m), in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
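
    For reference, the sketch below is a pseudo-polynomial dynamic program in the spirit of Bellman's algorithm cited above, written for nonnegative integers: O(nT) time, i.e. O*(T) with polynomial factors suppressed. The SETH-based lower bound says precisely that this kind of running time cannot be improved to T^{1-ε}·2^{o(n)}.

        def subset_sum(nums, target):
            """Decide whether some subset of nums (nonnegative integers) sums
            to exactly target.

            Classic pseudo-polynomial DP over achievable sums: O(n * T) time
            and O(T) space, where T is the target.
            """
            reachable = [False] * (target + 1)
            reachable[0] = True
            for v in nums:
                # Scan downwards so every number is used at most once.
                for s in range(target, v - 1, -1):
                    if reachable[s - v]:
                        reachable[s] = True
            return reachable[target]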

    Proofs of Work from Worst-Case Assumptions

    We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), that presents PoWs that are based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not. This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al., STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
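
    To give a flavour of 'evaluating low-degree polynomials originating from average-case fine-grained complexity', the sketch below naively evaluates the orthogonal-vectors counting polynomial over a prime field, which is similar in spirit to the polynomials underlying these PoWs. The encoding, field size, and function name are illustrative assumptions; this is not the actual PoW protocol of Ball et al.

        def ov_count_polynomial(U, V, p):
            """Evaluate, modulo a prime p, the polynomial

                f(U, V) = sum_{i,j} prod_k (1 - U[i][k] * V[j][k])

            On 0/1 inputs this counts orthogonal pairs between the two vector
            lists; as a polynomial it has total degree 2d for d-dimensional
            vectors, and that low degree is what makes the value efficiently
            verifiable in PoW constructions of this kind.
            """
            total = 0
            for u in U:
                for v in V:
                    term = 1
                    for uk, vk in zip(u, v):
                        term = term * (1 - uk * vk) % p
                    total = (total + term) % p
            return total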