If the Current Clique Algorithms are Optimal, so is Valiant's Parser
The CFG recognition problem is: given a context-free grammar $\mathcal{G}$
and a string $w$ of length $n$, decide if $w$ can be obtained from
$\mathcal{G}$. This is the most basic parsing question and is a core computer
science problem. Valiant's parser from 1975 solves the problem in
$O(n^{\omega})$ time, where $\omega < 2.373$ is the matrix multiplication
exponent. Dozens of parsing algorithms have been proposed over the years, yet
Valiant's upper bound remains unbeaten. The best combinatorial algorithms have
mildly subcubic $O(n^3/\log^3 n)$ complexity.
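
For context on the combinatorial bound, here is a minimal sketch of the
classic cubic-time CYK recognizer; it assumes the grammar is in Chomsky
normal form, and the `unary`/`binary` dictionaries are an illustrative
encoding chosen for this sketch, not notation from the paper.

```python
from itertools import product

def cyk_recognize(w, unary, binary, start="S"):
    """Decide whether string w is derivable from a CNF grammar.

    unary:  dict terminal -> set of nonterminals A with a rule A -> terminal
    binary: dict (B, C)   -> set of nonterminals A with a rule A -> B C
    Runs in O(|G| * n^3) time, the combinatorial regime discussed above.
    """
    n = len(w)
    if n == 0:
        return False
    # table[i][j] holds the nonterminals deriving the substring w[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(w):
        table[i][i] = set(unary.get(ch, ()))
    for span in range(2, n + 1):               # substring length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):              # split point
                for B, C in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Toy CNF grammar: S -> S S | a
unary = {"a": {"S"}}
binary = {("S", "S"): {"S"}}
print(cyk_recognize("aaa", unary, binary))  # True
```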
Lee (JACM'01) provided evidence that fast matrix multiplication is needed for
CFG parsing, and that very efficient and practical algorithms might be hard or
even impossible to obtain. Lee showed that any algorithm for a more general
parsing problem with running time $O(|\mathcal{G}| \cdot n^{3-\varepsilon})$
can be converted into a surprising subcubic algorithm for Boolean Matrix
Multiplication. Unfortunately, Lee's hardness result required that the grammar
size be $|\mathcal{G}| = \Omega(n^6)$. Nothing was known for the more relevant
case of constant size grammars.
In this work, we prove that any improvement on Valiant's algorithm, even for
constant size grammars, either in terms of runtime or by avoiding the
inefficiencies of fast matrix multiplication, would imply a breakthrough
algorithm for the $k$-Clique problem: given a graph on $n$ nodes, decide if
there are $k$ that form a clique.
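
For concreteness, a brute-force check for k-Clique follows; the
adjacency-dictionary representation is an assumption made for this sketch.
The exhaustive search takes roughly $O(n^k)$ time, while the conjecture
referenced here asserts that the known $n^{\omega k/3}$-time algorithms
cannot be substantially improved.

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Naive k-clique test: try all C(n, k) vertex subsets.

    adj maps each vertex to the set of its neighbors, so the check
    below costs O(k^2) per subset and O(n^k * k^2) overall.
    """
    for subset in combinations(list(adj), k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# A 4-cycle 0-1-2-3 plus the chord 1-3 contains the triangle {1, 2, 3}
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(has_k_clique(adj, 3))  # True
```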
Besides classifying the complexity of a fundamental problem, our reduction
has led us to similar lower bounds for more modern and well-studied cubic time
problems for which faster algorithms are highly desirable in practice: RNA
Folding, a central problem in computational biology, and Dyck Language Edit
Distance, answering an open question of Saha (FOCS'14).
Faster Min-Plus Product for Monotone Instances
In this paper, we show that the time complexity of monotone min-plus product
of two $n \times n$ matrices is $\tilde{O}(n^{(3+\omega)/2}) =
\tilde{O}(n^{2.687})$, where $\omega < 2.373$ is the fast matrix
multiplication exponent [Alman and Vassilevska Williams 2021]. That is, when
$A$ is an arbitrary integer matrix and $B$ is either row-monotone or
column-monotone with integer elements bounded by $O(n)$, computing the
min-plus product $C$, where $C_{i,j} = \min_k \{A_{i,k} + B_{k,j}\}$, takes
$\tilde{O}(n^{(3+\omega)/2})$ time, which greatly improves the previous time
bound of $\tilde{O}(n^{(12+\omega)/5}) = \tilde{O}(n^{2.875})$ [Gu, Polak,
Vassilevska Williams and Xu 2021]. Then by simple reductions, this means the
following problems also have $\tilde{O}(n^{(3+\omega)/2})$ time algorithms (a
naive cubic baseline for the product itself is sketched after the list below):
(1) $A$ and $B$ are both bounded-difference, that is, the difference between
any two adjacent entries is a constant. The previous results give time
complexities of $\tilde{O}(n^{2.824})$ [Bringmann, Grandoni, Saha and
Vassilevska Williams 2016] and $\tilde{O}(n^{2.779})$ [Chi, Duan and Xie 2022].
(2) $A$ is arbitrary and the columns or rows of $B$ are bounded-difference.
The previous result gives a time complexity of $\tilde{O}(n^{2.922})$
[Bringmann, Grandoni, Saha and Vassilevska Williams 2016].
(3) The problems reducible to these problems, such as language edit distance,
RNA-folding, and the scored parsing problem on BD grammars [Bringmann,
Grandoni, Saha and Vassilevska Williams 2016].
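
As referenced before the list, the following is a naive cubic baseline for
the min-plus product itself, a sketch only, with none of the structured
speed-ups the paper achieves.

```python
def min_plus_product(A, B):
    """Naive (min, +) product: C[i][j] = min_k (A[i][k] + B[k][j]).

    Cubic time; the paper's point is that the monotone and
    bounded-difference cases admit truly subcubic algorithms.
    """
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0, 2], [1, 0]]
B = [[0, 5], [3, 0]]
print(min_plus_product(A, B))  # [[0, 2], [1, 0]]
```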
Finally, we also consider the problem of min-plus convolution between two
integral sequences which are monotone and bounded by $O(n)$, and achieve a
running time upper bound of $\tilde{O}(n^{1.5})$. Previously, this task
required $\tilde{O}(n^{1.859})$ running time [Chan and Lewenstein 2015].
Comment: 26 pages
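
Likewise, the quadratic baseline for min-plus convolution that the
$\tilde{O}(n^{1.5})$ bound improves on (for monotone, $O(n)$-bounded
sequences) can be sketched as follows.

```python
def min_plus_convolution(a, b):
    """Naive (min, +) convolution: c[k] = min over i+j=k of a[i] + b[j].

    Quadratic time, independent of any monotonicity of the inputs.
    """
    c = [float("inf")] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = min(c[i + j], x + y)
    return c

print(min_plus_convolution([0, 1, 3], [0, 2, 2]))  # [0, 1, 2, 3, 5]
```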
Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve
Can we analyze data without decompressing it? As our data keeps growing,
understanding the time complexity of problems on compressed inputs, rather than
in convenient uncompressed forms, becomes more and more relevant. Suppose we
are given a compression of size $n$ of data that originally has size $N$, and
we want to solve a problem with time complexity $T(\cdot)$. The naive strategy
of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is
time $T(n)$: to analyze the compression as efficiently as if the original data
was small.
We restrict our attention to data in the form of a string (text, files,
genomes, etc.) and study the most ubiquitous tasks. While the challenge might
seem to depend heavily on the specific compression scheme, most methods of
practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be
unified under the elegant notion of Grammar Compressions. A vast literature,
across many disciplines, established this as an influential notion for
Algorithm design.
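
To make the notion concrete, here is a minimal sketch of a grammar
compression (a straight-line program, in which every nonterminal derives
exactly one string); the rule encoding is an illustrative choice for this
sketch, not a scheme from the paper. Note how $k$ doubling rules encode a
string of length $2^k$, which is why $T(N)$ and $T(n)$ can differ so
drastically.

```python
def expand(rules, symbol):
    """Expand a grammar-compressed string (straight-line program).

    rules maps each nonterminal to the right-hand side of its single
    rule; any symbol not in rules is a terminal. Each nonterminal
    derives exactly one string, so the grammar encodes that string.
    """
    out, stack = [], [symbol]
    while stack:
        s = stack.pop()
        if s in rules:
            stack.extend(reversed(rules[s]))  # preserve left-to-right order
        else:
            out.append(s)
    return "".join(out)

# Doubling rules: 3 rules encode a string of length 2^3 = 8.
rules = {"X1": ["a", "b"], "X2": ["X1", "X1"], "X3": ["X2", "X2"]}
print(expand(rules, "X3"))  # abababab
```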
We introduce a framework for proving (conditional) lower bounds in this
field, allowing us to assess whether decompress-and-solve can be improved, and
by how much. Our main results are:
- The $O(nN\sqrt{\log(N/n)})$ bound for LCS and the $O(\min\{N \log N, nM\})$
bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$
factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the
uncompressed length of the compressed pattern.)
- Decompress-and-solve is essentially optimal for Context-Free Grammar
Parsing and RNA Folding, under the $k$-Clique conjecture.
- We give an algorithm showing that decompress-and-solve is not optimal for
Disjointness.
Comment: Presented at FOCS'17. Full version. 63 pages