
    Tighter Connections Between Formula-SAT and Shaving Logs

    A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the $O(n^2)$ dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to $O(n^2/\log^2 n)$ in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time $O(n^2/\log^3 n)$? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an $O(n^2/\log^{1000} n)$ algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way of shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an $O(n^2/\log^{7+\varepsilon} n)$ runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size $s$ over $n$ variables to LCS on sequences of length $N = 2^{n/2} \cdot s^{1+o(1)}$. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with $N = 2^{n/2} \cdot s^c$, for some $c \geq 100$.
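
    For readers who want a concrete reference point, the quadratic dynamic program mentioned in the abstract is the textbook LCS recurrence. The sketch below is my own minimal Python rendering of that classic algorithm (function name lcs_length is illustrative), not the log-shaved variants or the reduction the paper studies.

        def lcs_length(a: str, b: str) -> int:
            """Classic O(n^2) dynamic program for the LCS length of two strings."""
            n, m = len(a), len(b)
            prev = [0] * (m + 1)  # prev[j] = LCS(a[:i-1], b[:j])
            for i in range(1, n + 1):
                cur = [0] * (m + 1)
                for j in range(1, m + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1      # extend a common character
                    else:
                        cur[j] = max(prev[j], cur[j - 1])  # drop one character
                prev = cur
            return prev[m]

    The log-factor improvements discussed above (e.g., the Four Russians technique) speed up this same table computation by processing small blocks of cells at once; the recurrence itself is unchanged.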

    Fine-grained Complexity Meets IP = PSPACE

    In this paper we study the fine-grained complexity of finding exact and approximate solutions to problems in P. Our main contribution is showing reductions from exact to approximate solutions for a host of such problems. As one (notable) example, we show that the Closest-LCS-Pair problem (given two sets of strings $A$ and $B$, compute exactly the maximum $\textsf{LCS}(a, b)$ with $(a, b) \in A \times B$) is equivalent to its approximation version (under near-linear time reductions, and with a constant approximation factor). More generally, we identify a class of problems, which we call BP-Pair-Class, comprising both exact and approximate solutions, and show that they are all equivalent under near-linear time reductions. Exploring this class and its properties, we also show:
    • Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time.
    • Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform $\textsf{NC}^1$.
    • Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS.
    At the heart of these new results is a deep connection between interactive proof systems for bounded-space computations and the fine-grained complexity of exact and approximate solutions to problems in P. In particular, our results build on the proof techniques from the classical IP = PSPACE result.
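
    To make the Closest-LCS-Pair definition above concrete, here is a naive brute-force reference (name closest_lcs_pair is hypothetical), reusing the lcs_length sketch shown earlier. It spends quadratic time per pair, which is exactly the regime the paper's near-linear reductions and quadratic-time lower bounds are about.

        def closest_lcs_pair(A, B):
            """Brute-force Closest-LCS-Pair: max LCS(a, b) over all (a, b) in A x B.

            Illustrative baseline only: |A| * |B| pairs, O(n^2) time per pair via
            lcs_length (defined in the LCS sketch above).
            """
            return max(lcs_length(a, b) for a in A for b in B)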

    Fine-Grained Complexity Theory: Conditional Lower Bounds for Computational Geometry

    Fine-grained complexity theory is the area of theoretical computer science that proves conditional lower bounds based on the Strong Exponential Time Hypothesis and similar conjectures. This area has been thriving in the last decade, leading to conditionally best-possible algorithms for a wide variety of problems on graphs, strings, numbers, etc. This article is an introduction to fine-grained lower bounds in computational geometry, with a focus on lower bounds for polynomial-time problems based on the Orthogonal Vectors Hypothesis. Specifically, we discuss conditional lower bounds for nearest neighbor search under the Euclidean distance and Fréchet distance.
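
    For readers unfamiliar with the Fréchet distance mentioned above, the following is a minimal sketch (my own, not from the article) of the standard quadratic dynamic program for the discrete Fréchet distance between two point sequences; this quadratic running time is what the OV-based conditional lower bounds address.

        from functools import lru_cache
        from math import dist  # Euclidean distance between two points (Python 3.8+)

        def discrete_frechet(P, Q):
            """Standard O(n*m) dynamic program (Eiter-Mannila recurrence) for the
            discrete Frechet distance between point sequences P and Q, given as
            lists of coordinate tuples. Recursive formulation for clarity; long
            curves may require raising the recursion limit."""
            @lru_cache(maxsize=None)
            def c(i, j):
                d = dist(P[i], Q[j])
                if i == 0 and j == 0:
                    return d
                if i == 0:
                    return max(c(0, j - 1), d)
                if j == 0:
                    return max(c(i - 1, 0), d)
                return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
            return c(len(P) - 1, len(Q) - 1)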

    Subset Sum in Time 2^{n/2} / poly(n)

    A major goal in the area of exact exponential algorithms is to give an algorithm for the (worst-case) n-input Subset Sum problem that runs in time 2^{(1/2 - c)n} for some constant c > 0. In this paper we give a Subset Sum algorithm with worst-case running time O(2^{n/2} · n^{-γ}) for a constant γ > 0.5023 in standard word RAM or circuit RAM models. To the best of our knowledge, this is the first improvement on the classical "meet-in-the-middle" algorithm for worst-case Subset Sum, due to Horowitz and Sahni, which can be implemented in time O(2^{n/2}) in these memory models [Horowitz and Sahni, 1974]. Our algorithm combines a number of different techniques, including the "representation method" introduced by Howgrave-Graham and Joux [Howgrave-Graham and Joux, 2010] and subsequent adaptations of the method in Austrin, Kaski, Koivisto, and Nederlof [Austrin et al., 2016], and Nederlof and Węgrzycki [Jesper Nederlof and Karol Węgrzycki, 2021], and "bit-packing" techniques used in the work of Baran, Demaine, and Pătrașcu [Baran et al., 2005] on subquadratic algorithms for 3SUM.
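
    For context, a minimal sketch of the classical Horowitz-Sahni meet-in-the-middle baseline that the paper improves on (function names are illustrative): enumerate all subset sums of each half of the input, sort one side, and search for a complementary pair, giving 2^{n/2} time up to polynomial factors.

        from bisect import bisect_left

        def subset_sums(items):
            """All 2^len(items) subset sums of a list of integers."""
            sums = [0]
            for x in items:
                sums += [s + x for s in sums]
            return sums

        def subset_sum_mitm(items, target):
            """Classical meet-in-the-middle for Subset Sum: split the items in two
            halves, enumerate each half's subset sums, and check whether some sum
            from the left half has a complement in the (sorted) right half."""
            half = len(items) // 2
            left = subset_sums(items[:half])
            right = sorted(subset_sums(items[half:]))
            for s in left:
                i = bisect_left(right, target - s)
                if i < len(right) and right[i] == target - s:
                    return True
            return False

    The paper's contribution is to beat this 2^{n/2} baseline by a polynomial factor in the word RAM and circuit RAM models; the sketch above is only the starting point, not the improved algorithm.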

    Subpath Queries on Compressed Graphs: A Survey

    Text indexing is a classical algorithmic problem that has been studied for over four decades: given a text T, pre-process it off-line so that, later, we can quickly count and locate the occurrences of any string (the query pattern) in T in time proportional to the query's length. The earliest optimal-time solution to the problem, the suffix tree, dates back to 1973 and requires up to two orders of magnitude more space than the plain text just to be stored. In the year 2000, two breakthrough works showed that efficient queries can be achieved without this space overhead: a fast index can be stored in space proportional to the text's entropy. These contributions had an enormous impact in bioinformatics: today, virtually any DNA aligner employs compressed indexes. Recent trends considered more powerful compression schemes (dictionary compressors) and generalizations of the problem to labeled graphs: after all, texts can be viewed as labeled directed paths. In turn, since finite state automata can be considered as a particular case of labeled graphs, these findings created a bridge between the fields of compressed indexing and regular language theory, ultimately making it possible to index regular languages and promising to shed new light on problems such as regular expression matching. This survey is a gentle introduction to the main landmarks of the fascinating journey that took us from suffix trees to today's compressed indexes for labeled graphs and regular languages.
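
    As a deliberately naive illustration of the indexing problem defined above (my own sketch, not a construction from the survey): a plain suffix array answers counting queries by binary search over the sorted suffixes. Note that this toy version needs O(m log n + occ) query time rather than the optimal time proportional to the pattern length, and it uses far more space than the compressed indexes the survey is about.

        from bisect import bisect_left

        def build_suffix_array(text):
            """Naive O(n^2 log n) construction; real indexes build this in linear time."""
            return sorted(range(len(text)), key=lambda i: text[i:])

        def count_occurrences(text, sa, pattern):
            """Count occurrences of pattern in text: suffixes starting with the
            pattern form a contiguous block in the sorted order of suffixes."""
            suffixes = [text[i:] for i in sa]  # materialized only for clarity
            lo = bisect_left(suffixes, pattern)
            hi = lo
            while hi < len(suffixes) and suffixes[hi].startswith(pattern):
                hi += 1
            return hi - lo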

    A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence

    We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length $n$. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15] who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting, only a naive $O(n^{\varepsilon/2})$-approximation algorithm with running time $\tilde{O}(n^{2-\varepsilon})$ has been known, for any constant $0 < \varepsilon \le 1$. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an $O(n^{0.497956})$-approximation in expectation, improving upon the naive $O(\sqrt{n})$-approximation for the first time. In this paper, we provide an algorithm that in time $O(n^{2-\varepsilon})$ computes an $\tilde{O}(n^{2\varepsilon/5})$-approximation with high probability, for any $0 < \varepsilon \le 1$. Our result (1) gives an $\tilde{O}(n^{0.4})$-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation scales with any subquadratic running time $O(n^{2-\varepsilon})$, improving upon the naive bound of $O(n^{\varepsilon/2})$ for any $\varepsilon$, and (3) instead of only in expectation, succeeds with high probability.