Immunity and Simplicity for Exact Counting and Other Counting Classes
Ko [RAIRO 24, 1990] and Bruschi [TCS 102, 1992] showed that in some
relativized world, PSPACE (in fact, ParityP) contains a set that is immune to
the polynomial hierarchy (PH). In this paper, we study and settle the question
of (relativized) separations with immunity for PH and the counting classes PP,
C_{=}P, and ParityP in all possible pairwise combinations. Our main result is
that there is an oracle A relative to which C_{=}P contains a set that is
immune to BPP^{ParityP}. In particular, this C_{=}P^A set is immune to PH^{A}
and ParityP^{A}. Strengthening results of Tor\'{a}n [J.ACM 38, 1991] and Green
[IPL 37, 1991], we also show that, in suitable relativizations, NP contains a
C_{=}P-immune set, and ParityP contains a PP^{PH}-immune set. This implies the
existence of a C_{=}P^{B}-simple set for some oracle B, which extends results
of Balc\'{a}zar et al. [SIAM J.Comp. 14, 1985; RAIRO 22, 1988] and provides the
first example of a simple set in a class not known to be contained in PH. Our
proof technique requires a circuit lower bound for ``exact counting'' that is
derived from Razborov's [Mat. Zametki 41, 1987] lower bound for majority.
Comment: 20 pages
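As background for the terminology above, the standard notions of immunity and simplicity relative to a complexity class can be stated as follows; this is a textbook-style sketch in LaTeX notation, not quoted from the paper itself.

% Immunity and simplicity with respect to a complexity class \mathcal{C}.
% Immunity says that \mathcal{C} cannot capture any infinite part of the set.
An infinite set $L$ is \emph{$\mathcal{C}$-immune} if no infinite subset of $L$ belongs to $\mathcal{C}$.
A set $S \in \mathcal{C}$ is \emph{$\mathcal{C}$-simple} if its complement $\overline{S}$ is $\mathcal{C}$-immune.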
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.
Comment: This article is currently scheduled to appear in the December 2012
issue of SIGACT News
Polylogarithmic Cuts in Models of V^0
We study initial cuts of models of weak two-sorted Bounded Arithmetics with
respect to the strength of their theories and show that these theories are
stronger than the original one. More explicitly, we will see that
polylogarithmic cuts of models of V^0 are models of VNC^1 by formalizing a
proof of Nepomnjascij's Theorem in such cuts. This is a
strengthening of a result by Paris and Wilkie. We can then exploit our result
in Proof Complexity to observe that Frege proof systems can be sub-exponentially
simulated by bounded-depth Frege proof systems. This result has
recently been obtained by Filmus, Pitassi and Santhanam in a direct proof. As
an interesting observation we also obtain an average case separation of
Resolution from AC^0-Frege by applying a recent result with Tzameret.
Comment: 16 pages
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations.
From the philosophical point of view, it tries to formalize how statistics
works and why some statistical models are better than others. Once the notion
of a "good model" is introduced, a natural question arises: is it possible that
for some piece of data there is no good model? If so, how often do such bad
("non-stochastic") data appear "in real life"?
Another, more technical motivation comes from algorithmic information theory.
In this theory, a notion of the complexity of a finite object (the amount of
information in this object) is introduced; it assigns to every object a
number, called its algorithmic complexity (or Kolmogorov complexity).
Algorithmic statistics provides a more fine-grained classification: for each
finite object some curve is defined that characterizes its behavior. It turns
out that several different definitions give (approximately) the same curve.
In this survey we try to provide an exposition of the main results in the
field (including full proofs for the most important ones), as well as some
historical comments. We assume that the reader is familiar with the main
notions of algorithmic information (Kolmogorov complexity) theory.
Comment: Missing proofs added
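The curve mentioned in this abstract is, in standard expositions of algorithmic statistics, Kolmogorov's structure function; below is a sketch of the usual definition in LaTeX notation, with C(x) denoting the Kolmogorov complexity of x (standard material, not quoted from the survey itself).

% A "model" of a string x is a finite set A containing x; C(A) is the
% complexity of (a description of) A. The structure function records the best
% log-cardinality achievable by models of complexity at most \alpha.
h_x(\alpha) \;=\; \min \bigl\{ \log_2 |A| \;:\; x \in A,\ A \text{ finite},\ C(A) \le \alpha \bigr\}

Roughly, a model $A$ is good for $x$ when the two-part description length $C(A) + \log_2 |A|$ is close to $C(x)$; objects for which no low-complexity model comes close to this bound are the "non-stochastic" data mentioned above.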
Faster all-pairs shortest paths via circuit complexity
We present a new randomized method for computing the min-plus product
(a.k.a., tropical product) of two n \times n matrices, yielding a faster
algorithm for solving the all-pairs shortest path problem (APSP) in dense
n-node directed graphs with arbitrary edge weights. On the real RAM, where
additions and comparisons of reals are unit cost (but all other operations have
typical logarithmic cost), the algorithm runs in time n^3/2^{\Omega(\log n)^{1/2}}
and is correct with high probability.
On the word RAM, the algorithm runs in n^3/2^{\Omega(\log n)^{1/2}} + n^{2+o(1)}\log M
time for edge weights in ([0,M] \cap \mathbb{Z}) \cup \{\infty\}. Prior algorithms
used either n^3/\log^c n time for various c, or O(M^{\alpha} n^{\beta}) time for
various \alpha and \beta.
The new algorithm applies a tool from circuit complexity, namely the
Razborov-Smolensky polynomials for approximately representing AC^0[2]
circuits, to efficiently reduce a matrix product over the (min,+) algebra to
a relatively small number of rectangular matrix products over F_2,
each of which is computable using a particularly efficient method due to
Coppersmith. We also give a deterministic version of the algorithm running in
n^3/2^{\log^{\delta} n} time for some \delta > 0, which utilizes the
Yao-Beigel-Tarui translation of ACC^0 circuits into "nice" depth-two
circuits.
Comment: 24 pages. Updated version now has slightly faster running time. To
appear in ACM Symposium on Theory of Computing (STOC), 201
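For reference, the min-plus (tropical) product that this abstract speeds up is the operation sketched below, and APSP reduces to repeated min-plus squaring of the weight matrix. The Python sketch shows only the classical cubic baseline (the function names min_plus_product and apsp are illustrative, not from the paper), not the faster algorithm described above.

import math

INF = math.inf

def min_plus_product(A, B):
    # Naive O(n^3) min-plus product: C[i][j] = min over k of A[i][k] + B[k][j].
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            if aik == INF:
                continue
            for j in range(n):
                if aik + B[k][j] < C[i][j]:
                    C[i][j] = aik + B[k][j]
    return C

def apsp(W):
    # All-pairs shortest path distances by repeated min-plus squaring.
    # Assumes W[i][i] == 0 and no negative cycles; O(n^3 log n) time overall.
    n = len(W)
    D = [row[:] for row in W]
    path_len = 1
    while path_len < n - 1:
        D = min_plus_product(D, D)
        path_len *= 2
    return D

# Example: in this 3-node graph the distance from 0 to 2 is 3, via 0 -> 1 -> 2.
W = [[0, 1, 5],
     [INF, 0, 2],
     [INF, INF, 0]]
print(apsp(W))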