Fast and Deterministic Constant Factor Approximation Algorithms for LCS Imply New Circuit Lower Bounds
The Longest Common Subsequence (LCS) is one of the most basic similarity measures and it captures important applications in bioinformatics and text analysis. Following the SETH-based nearly-quadratic time lower bounds for LCS from recent years, it is a major open problem to understand the complexity of approximate LCS.
At the last ITCS, [AB17] drew an interesting connection between this problem and the area of circuit complexity:
they proved that approximation algorithms for LCS in deterministic truly-subquadratic time imply new circuit lower bounds (E^NP does not have non-uniform linear-size Valiant Series Parallel circuits).
In this work, we strengthen this connection between approximate LCS and circuit complexity by applying the Distributed PCP framework of [ARW17].
We obtain a reduction that holds against much larger approximation factors (super-constant versus 1+o(1)), yields a lower bound for a larger class of circuits (linear-size NC^1), and is also easier to analyze.
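For concreteness, here is a minimal sketch (not taken from the paper above) of the textbook quadratic-time dynamic program for exact LCS, the baseline that the SETH-based lower bounds and the approximation question refer to; the example strings and the two-row space-saving layout are illustrative choices.

def lcs_length(a: str, b: str) -> int:
    # Classic O(len(a) * len(b)) dynamic program; prev[j] / curr[j] hold the LCS
    # length of the current prefix of a against the length-j prefix of b.
    m = len(b)
    prev = [0] * (m + 1)
    for ch in a:
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            if ch == b[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[m]

print(lcs_length("ACCGGTC", "AGCGTC"))  # prints 5 (e.g. the common subsequence "ACGTC")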
A Casual Tour Around a Circuit Complexity Bound
I will discuss the recent proof that the complexity class NEXP
(nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial
size. The proof will be described from the perspective of someone trying to
discover it.
Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News, September 2011.
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.Comment: This article is currently scheduled to appear in the December 2012
issue of SIGACT New
Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas
We give the best known pseudorandom generators for two touchstone classes in
unconditional derandomization: an $\varepsilon$-PRG for the class of size-$M$
depth-$d$ $\mathrm{AC}^0$ circuits with seed length $\log(M)^{d+O(1)} \cdot \log(1/\varepsilon)$,
and an $\varepsilon$-PRG for the class of $S$-sparse $\mathbb{F}_2$ polynomials with seed
length $2^{O(\sqrt{\log S})} \cdot \log(1/\varepsilon)$. These results bring the state of the art for
unconditional derandomization of these classes into sharp alignment with the
state of the art for computational hardness for all parameter settings:
improving on the seed lengths of either PRG would require breakthrough progress
on longstanding and notorious circuit lower bounds.
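As background (this is the standard definition of a PRG, not text from the abstract): a generator $G$ with seed length $s$ is an $\varepsilon$-PRG for a class $\mathcal{C}$ of functions on $\{0,1\}^n$ if
\[
  G : \{0,1\}^{s} \to \{0,1\}^{n}, \qquad
  \Bigl|\,\Pr_{x \sim \{0,1\}^{n}}[C(x)=1] \;-\; \Pr_{z \sim \{0,1\}^{s}}[C(G(z))=1]\,\Bigr| \;\le\; \varepsilon
  \quad \text{for every } C \in \mathcal{C},
\]
so the seed lengths quoted above measure how few truly random bits suffice to fool every circuit (respectively every sparse polynomial) in the class.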
The key enabling ingredient in our approach is a new \emph{pseudorandom
multi-switching lemma}. We derandomize recently-developed
\emph{multi}-switching lemmas, which are powerful generalizations of
H{\aa}stad's switching lemma that deal with \emph{families} of depth-two
circuits. Our pseudorandom multi-switching lemma---a randomness-efficient
algorithm for sampling restrictions that simultaneously simplify all circuits
in a family---achieves the parameters obtained by the (full randomness)
multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and
H{\aa}stad [H{\aa}s14]. This optimality of our derandomization translates into
the optimality (given current circuit lower bounds) of our PRGs for
$\mathrm{AC}^0$ and sparse polynomials.
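To make the restriction framework concrete, below is a toy sketch (my own illustration under assumed conventions, not the construction from the paper) of a p-random restriction applied to a small DNF: each variable is left free with probability p and otherwise fixed to a uniform bit, and the surviving terms shrink or vanish. The pseudorandom multi-switching lemma above replaces exactly this fully random sampling with a randomness-efficient one; the example DNF, the value of p, and the helper names are hypothetical.

import random

def random_restriction(num_vars: int, p: float) -> dict:
    # Each variable is left free ('*') with probability p, otherwise fixed to a uniform bit.
    return {i: '*' if random.random() < p else random.randint(0, 1)
            for i in range(num_vars)}

def restrict_dnf(dnf, rho):
    # dnf is a list of terms; each term is a list of (variable index, required bit) literals.
    new_terms = []
    for term in dnf:
        surviving = []
        killed = False
        for var, bit in term:
            if rho[var] == '*':
                surviving.append((var, bit))   # literal stays free
            elif rho[var] != bit:
                killed = True                  # literal (hence the whole term) is falsified
                break
            # otherwise the literal is satisfied and drops out of the term
        if not killed:
            if not surviving:
                return True                    # some term became the constant 1
            new_terms.append(surviving)
    return new_terms if new_terms else False   # simplified DNF, or the constant 0

# Example: a width-2 DNF over 6 variables under a p = 1/3 restriction.
dnf = [[(0, 1), (1, 1)], [(2, 0), (3, 1)], [(4, 1), (5, 0)]]
rho = random_restriction(6, p=1/3)
print(rho, restrict_dnf(dnf, rho))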
- …