Tighter Connections Between Formula-SAT and Shaving Logs
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n^2/log^2 n) in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n^2/log^3 n)? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an O(n^2/log^1000 n) algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way of shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n^2/log^{7+ε} n) runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length 2^{n/2} · s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS, which produced sequences of length 2^{n/2} · s^c for some large constant c.
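For reference, the classical quadratic dynamic program that these log-shaving techniques accelerate can be sketched as follows (a plain-Python illustration of the textbook algorithm, not any of the optimized variants discussed above):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n*m) dynamic program for Longest Common Subsequence.

    prev[j] holds the LCS length of the processed prefix of `a` and b[:j];
    log-shaving techniques (e.g. Four Russians) speed up exactly this table.
    """
    prev = [0] * (len(b) + 1)
    for ch in a:
        curr = [0]
        for j, bj in enumerate(b, start=1):
            if ch == bj:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(b)]
```

The row-by-row sweep keeps only O(n) memory; the hard part, as the paper explains, is beating the n^2 table size by more than polylogarithmic factors.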
Fine-grained Complexity Meets IP = PSPACE
In this paper we study the fine-grained complexity of finding exact and
approximate solutions to problems in P. Our main contribution is showing
reductions from exact to approximate solution for a host of such problems.
As one (notable) example, we show that the Closest-LCS-Pair problem (Given
two sets of strings A and B, compute exactly the maximum LCS(a, b) with (a, b) in A × B) is equivalent to its approximation version
(under near-linear time reductions, and with a constant approximation factor).
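To make the problem concrete, the naive exact algorithm simply evaluates LCS for every pair of strings, which is what the equivalence above shows cannot be meaningfully beaten even approximately (a minimal Python sketch; the function names are ours, not from the paper):

```python
from itertools import product

def lcs(a: str, b: str) -> int:
    # Standard quadratic LCS dynamic program.
    prev = [0] * (len(b) + 1)
    for ch in a:
        curr = [0]
        for j, bj in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if ch == bj else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def closest_lcs_pair(A, B):
    """Exact Closest-LCS-Pair: the maximum LCS(a, b) over all (a, b) in A x B."""
    return max(lcs(a, b) for a, b in product(A, B))
```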
More generally, we identify a class of problems, which we call BP-Pair-Class,
comprising both exact and approximate solutions, and show that they are all
equivalent under near-linear time reductions.
Exploring this class and its properties, we also show:
- Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time.
- Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform NC^1.
- Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS.
At the heart of these new results is a deep connection between interactive
proof systems for bounded-space computations and the fine-grained complexity of
exact and approximate solutions to problems in P. In particular, our results
build on the proof techniques from the classical IP = PSPACE result.
Fine-Grained Complexity Theory: Conditional Lower Bounds for Computational Geometry
Fine-grained complexity theory is the area of theoretical computer science that proves conditional lower bounds based on the Strong Exponential Time Hypothesis and similar conjectures. This area has been thriving in the last decade, leading to conditionally best-possible algorithms for a wide variety of problems on graphs, strings, numbers etc. This article is an introduction to fine-grained lower bounds in computational geometry, with a focus on lower bounds for polynomial-time problems based on the Orthogonal Vectors Hypothesis. Specifically, we discuss conditional lower bounds for nearest neighbor search under the Euclidean distance and Fréchet distance.
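The Orthogonal Vectors problem underlying these lower bounds is easy to state: given two sets of d-dimensional 0/1 vectors, decide whether some pair (one from each set) is orthogonal. A brute-force sketch of the quadratic baseline that the hypothesis conjectures cannot be polynomially improved:

```python
def has_orthogonal_pair(A, B):
    """Return True iff some a in A and b in B satisfy sum(a[i]*b[i]) == 0.

    This is the O(n^2 * d) baseline; the Orthogonal Vectors Hypothesis
    asserts that no O(n^{2-eps}) algorithm exists once d = omega(log n).
    """
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a in A for b in B)
```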
Subset Sum in Time 2^{n/2} / poly(n)
A major goal in the area of exact exponential algorithms is to give an algorithm for the (worst-case) n-input Subset Sum problem that runs in time 2^{(1/2 - c)n} for some constant c > 0. In this paper we give a Subset Sum algorithm with worst-case running time O(2^{n/2} · n^{-γ}) for a constant γ > 0.5023 in standard word RAM or circuit RAM models. To the best of our knowledge, this is the first improvement on the classical "meet-in-the-middle" algorithm for worst-case Subset Sum, due to Horowitz and Sahni, which can be implemented in time O(2^{n/2}) in these memory models [Horowitz and Sahni, 1974].
Our algorithm combines a number of different techniques, including the "representation method" introduced by Howgrave-Graham and Joux [Howgrave-Graham and Joux, 2010] and subsequent adaptations of the method in Austrin, Kaski, Koivisto, and Nederlof [Austrin et al., 2016], and Nederlof and Węgrzycki [Jesper Nederlof and Karol Węgrzycki, 2021], and "bit-packing" techniques used in the work of Baran, Demaine, and Pătrașcu [Baran et al., 2005] on subquadratic algorithms for 3SUM.
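For context, the Horowitz-Sahni meet-in-the-middle baseline that this paper improves on splits the input in half, enumerates the 2^{n/2} subset sums of each half, and searches for a complementary pair. A hedged Python sketch using sorting and binary search rather than the word-RAM bit-packing tricks the paper builds on:

```python
from bisect import bisect_left

def subset_sums(items):
    # Enumerate all 2^k subset sums of `items`.
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def subset_sum_mitm(items, target):
    """Meet-in-the-middle: True iff some subset of `items` sums to `target`.

    Enumerates the 2^{n/2} sums of each half and, for every left sum s,
    binary-searches for target - s among the sorted right sums.
    """
    half = len(items) // 2
    left = subset_sums(items[:half])
    right = sorted(subset_sums(items[half:]))
    for s in left:
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False
```

Note that (as the abstract states) this baseline runs in O(2^{n/2}) time up to polynomial factors; the paper's contribution is shaving a genuine n^{γ} factor off it in the word RAM model.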
Subpath Queries on Compressed Graphs: A Survey
Text indexing is a classical algorithmic problem that has been studied for over four decades: given a text T, pre-process it off-line so that, later, we can quickly count and locate the occurrences of any string (the query pattern) in T in time proportional to the query's length. The earliest optimal-time solution to the problem, the suffix tree, dates back to 1973 and requires up to two orders of magnitude more space than the plain text just to be stored. In the year 2000, two breakthrough works showed that efficient queries can be achieved without this space overhead: a fast index can be stored in space proportional to the text's entropy. These contributions had an enormous impact in bioinformatics: today, virtually any DNA aligner employs compressed indexes. Recent trends considered more powerful compression schemes (dictionary compressors) and generalizations of the problem to labeled graphs: after all, texts can be viewed as labeled directed paths. In turn, since finite state automata can be considered as a particular case of labeled graphs, these findings created a bridge between the fields of compressed indexing and regular language theory, ultimately allowing one to index regular languages and promising to shed new light on problems such as regular expression matching. This survey is a gentle introduction to the main landmarks of the fascinating journey that took us from suffix trees to today's compressed indexes for labeled graphs and regular languages.
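As a concrete (uncompressed) instance of the indexing problem described above, even a plain suffix array supports counting pattern occurrences with two binary searches over the sorted suffixes; compressed indexes achieve comparable query times within entropy-bounded space. A simple Python sketch, deliberately naive and not an FM-index:

```python
def build_suffix_array(text):
    # Naive O(n^2 log n) construction: sort suffix start positions
    # lexicographically. Fine for illustration, far too slow for genomes.
    return sorted(range(len(text)), key=lambda i: text[i:])

def count_occurrences(text, sa, pattern):
    """Count occurrences of `pattern` in `text` via two binary searches
    over the lexicographically sorted suffixes (the classic SA query)."""
    m = len(pattern)
    # Lower bound: first suffix whose m-character prefix is >= pattern.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    first = lo
    # Upper bound: first suffix whose m-character prefix is > pattern.
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo - first
```

Each query costs O(m log n) string comparisons, matching the "time proportional to the query's length" goal up to the log factor; the suffix tree removes that factor at a large space cost, which is exactly the trade-off the survey traces.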
A Linear-Time n^{0.4}-Approximation for Longest Common Subsequence
We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length n. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15] who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting, only a naive O(n^{ε/2})-approximation algorithm with running time O(n^{2-ε}) has been known, for any constant 0 < ε ≤ 1. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an O(n^{0.497956})-approximation in expectation, improving upon the naive O(√n)-approximation for the first time. In this paper, we provide an algorithm that in time O(n^{2-ε}) computes an Õ(n^{2ε/5})-approximation with high probability, for any 0 < ε ≤ 1. Our result (1) gives an Õ(n^{0.4})-approximation in linear time, (2) provides an algorithm whose approximation scales with any subquadratic running time O(n^{2-ε}), improving upon the naive bound of O(n^{ε/2}) for any ε, and (3) instead of only in expectation, succeeds with high probability.