Efficient deterministic approximate counting for low-degree polynomial threshold functions
We give a deterministic algorithm for approximately counting satisfying
assignments of a degree-$d$ polynomial threshold function (PTF). Given a
degree-$d$ input polynomial $p(x_1,\dots,x_n)$ over $\{-1,1\}^n$ and a parameter
$\epsilon > 0$, our algorithm approximates $\Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0]$ to within an additive $\pm\epsilon$ in time $O_{d,\epsilon}(1)\cdot\mathrm{poly}(n^d)$. (Any sort of efficient multiplicative approximation is
impossible even for randomized algorithms assuming $NP \neq RP$.) Note that the
running time of our algorithm (as a function of $n^d$, the number of
coefficients of a degree-$d$ PTF) is a \emph{fixed} polynomial. The fastest
previous algorithm for this problem (due to Kane), based on constructions of
unconditional pseudorandom generators for degree-$d$ PTFs, runs in time
$n^{O_{d,c}(1)\cdot\epsilon^{-c}}$ for all $c > 0$.
The key novel contributions of this work are:
(1) A new multivariate central limit theorem, proved using tools from Malliavin calculus and Stein's method. This new CLT shows that any collection of Gaussian polynomials with small eigenvalues must have a joint distribution which is very close to a multidimensional Gaussian distribution.
(2) A new decomposition of low-degree multilinear polynomials over Gaussian inputs. Roughly speaking, we show that (up to some small error) any such polynomial can be decomposed into a bounded number of multilinear polynomials, all of which have extremely small eigenvalues.
We use these new ingredients to give a deterministic algorithm for a Gaussian-space version of the approximate counting problem, and then employ standard techniques for working with low-degree PTFs (invariance principles and regularity lemmas) to reduce the original approximate counting problem over the Boolean hypercube to the Gaussian version.
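As a toy illustration of the quantity this abstract is about (not code from the paper): the algorithm additively approximates the fraction of points of the hypercube on which a degree-$d$ polynomial is non-negative. The polynomial below is a hypothetical degree-2 example; for small $n$ we can compute the fraction exactly by enumeration, which is what an $\epsilon$-approximation must track.

```python
import itertools

def ptf_fraction(p, n):
    """Exactly compute Pr_{x ~ {-1,1}^n}[p(x) >= 0] by enumeration.
    Feasible only for tiny n; the paper's algorithm instead gives an
    additive eps-approximation in time polynomial in n^d."""
    count = sum(1 for x in itertools.product((-1, 1), repeat=n) if p(x) >= 0)
    return count / 2 ** n

# A toy degree-2 polynomial over {-1,1}^5 (our own example, not the paper's).
p = lambda x: x[0] * x[1] + x[2] * x[3] - x[4]
print(ptf_fraction(p, 5))  # → 0.5
```

Here $x_0x_1$, $x_2x_3$, and $x_4$ are independent uniform $\pm 1$ values, so the fraction works out to exactly $1/2$.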
Deterministic polynomial-time approximation algorithms for partition functions and graph polynomials
In this paper we show a new way of constructing deterministic polynomial-time
approximation algorithms for computing complex-valued evaluations of a large
class of graph polynomials on bounded degree graphs. In particular, our
approach works for the Tutte polynomial and independence polynomial, as well as
partition functions of complex-valued spin and edge-coloring models.
More specifically, we define a large class of graph polynomials $\mathcal{C}$
and show that if $p \in \mathcal{C}$ and there is a disk $D$ centered at zero in the
complex plane such that $p(G)$ does not vanish on $D$ for all bounded degree
graphs $G$, then for each $z$ in the interior of $D$ there exists a
deterministic polynomial-time approximation algorithm for evaluating $p(G)$ at
$z$. This gives an explicit connection between the absence of zeros of graph
polynomials and the existence of efficient approximation algorithms, allowing
us to show new relationships between well-known conjectures.
Our work builds on a recent line of work initiated by Barvinok, which
provides a new algorithmic approach besides the existing Markov chain Monte
Carlo method and the correlation decay method for these types of problems.
Comment: 27 pages; some changes have been made based on referee comments. In
particular, a tiny error in Proposition 4.4 has been fixed. The introduction
and concluding remarks have also been rewritten to incorporate the most
recent developments. Accepted for publication in SIAM Journal on Computing
Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
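To give a concrete feel for the hashing techniques the abstract mentions (a classical Sipser/Stockmeyer-style idea, sketched here in randomized form and with parameters of our own choosing, not the paper's derandomized construction): the size of a witness set can be estimated by testing how many random pairwise-independent hash constraints it can survive.

```python
import random

def estimate_log2_size(S, n, trials=60):
    """Rough estimate of log2|S| for a set S of n-bit tuples, using random
    GF(2)-affine constraints (a pairwise-independent hash family).
    k random constraints remain satisfiable by some x in S for most hash
    choices roughly as long as 2^k <= |S|."""
    for k in range(1, n + 1):
        hits = 0
        for _ in range(trials):
            rows = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
            b = [random.randint(0, 1) for _ in range(k)]
            if any(all(sum(r[i] * x[i] for i in range(n)) % 2 == bi
                       for r, bi in zip(rows, b)) for x in S):
                hits += 1
        if hits < trials / 2:      # most hashes miss S: 2^k has overtaken |S|
            return k - 1
    return n

random.seed(0)
n = 8
S = set(random.sample([tuple((v >> i) & 1 for i in range(n))
                       for v in range(2 ** n)], 32))     # |S| = 32 = 2^5
est = estimate_log2_size(S, n)
print(est)   # should land near 5
```

The derandomization question the paper addresses is exactly how to remove the random choice of hash functions in constructions of this flavor.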
Generalised Pattern Matching Revisited
In the problem of Generalised Pattern Matching (GPM)
[STOC'94, Muthukrishnan and Palem], we are given a text $T$ of length $n$ over
an alphabet $\Sigma_T$, a pattern $P$ of length $m$ over an alphabet $\Sigma_P$, and a matching relationship $\mathcal{M} \subseteq \Sigma_T \times \Sigma_P$,
and must return all substrings of $T$ that match $P$ (reporting) or the number
of mismatches between each substring of $T$ of length $m$ and $P$ (counting).
In this work, we improve over all previously known algorithms for this problem
for various parameters describing the input instance:
* $\mathcal{D}$, the maximum number of characters that match a fixed
character,
* $\mathcal{S}$, the number of pairs of matching characters,
* $\mathcal{I}$, the total number of disjoint intervals of characters
that match the characters of the pattern $P$.
At the heart of our new deterministic upper bounds for the parameters $\mathcal{D}$ and
$\mathcal{S}$ lies a faster construction of superimposed codes, which solves
an open problem posed in [FOCS'97, Indyk] and can be of independent interest.
To conclude, we demonstrate the first lower bounds for GPM. We start by
showing an unconditional time lower bound for any deterministic or Monte Carlo
algorithm for GPM, and then proceed to show higher lower bounds
for combinatorial algorithms. These bounds show that our algorithms are almost
optimal, unless a radically new approach is developed.
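The problem definition above can be made concrete with a naive baseline (our own illustration; the paper's algorithms are far more efficient): scan every alignment and consult the matching relationship character by character, covering both the counting and the reporting variants.

```python
def gpm(text, pattern, match):
    """Naive generalised pattern matching: `match` is the set of
    (text_char, pattern_char) pairs considered matching.  Returns, for each
    alignment i, the number of mismatches between text[i:i+m] and the
    pattern (the 'counting' variant); the 'reporting' variant is the set
    of alignments with zero mismatches."""
    n, m = len(text), len(pattern)
    return [sum((text[i + j], pattern[j]) not in match for j in range(m))
            for i in range(n - m + 1)]

# Toy relationship (hypothetical): '?' in the pattern matches everything.
M = {(t, p) for t in "ab" for p in "ab?" if p == "?" or t == p}
print(gpm("abab", "a?", M))            # → [0, 1, 0]
print([i for i, d in enumerate(gpm("abab", "a?", M)) if d == 0])  # → [0, 2]
```

This baseline runs in O(nm) time per query to the relationship; the paper's contribution is beating it in terms of the structural parameters D, S, and I.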
On the Computational Power of Radio Channels
Radio networks can be a challenging platform for which to develop distributed algorithms, because the network nodes must contend for a shared channel. In some cases, though, the shared medium is an advantage rather than a disadvantage: for example, many radio network algorithms cleverly use the shared channel to approximate the degree of a node, or estimate the contention. In this paper we ask how far the inherent power of a shared radio channel goes, and whether it can efficiently compute "classically hard" functions such as Majority, Approximate Sum, and Parity.
Using techniques from circuit complexity, we show that in many cases, the answer is "no". We show that simple radio channels, such as the beeping model or the channel with collision-detection, can be approximated by a low-degree polynomial, which makes them subject to known lower bounds on functions such as Parity and Majority; we obtain round lower bounds of the form Omega(n^{delta}) on these functions, for delta in (0,1). Next, we use the technique of random restrictions, used to prove AC^0 lower bounds, to prove a tight lower bound of Omega(1/epsilon^2) on computing a (1 +/- epsilon)-approximation to the sum of the nodes' inputs. Our techniques are general, and apply to many types of radio channels studied in the literature.
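A small numerical illustration of the random-restrictions technique the abstract invokes (our own demo with made-up parameters, not taken from the paper): under a restriction that fixes most variables at random, OR almost always collapses to a constant, whereas Parity restricted to the surviving free variables is again a full parity. This asymmetry is what restriction-based lower-bound arguments exploit.

```python
import random

def random_restriction(n, p):
    """Independently fix each of n boolean variables to 0 or 1 (probability
    (1-p)/2 each) or leave it free (probability p); returns {var: value}."""
    rho = {}
    for i in range(n):
        if random.random() > p:          # this variable gets fixed
            rho[i] = random.randint(0, 1)
    return rho

# OR of n variables becomes the constant 1 as soon as any variable is fixed
# to 1; parity never simplifies this way.
random.seed(1)
n, p, trials = 64, 0.1, 1000
collapsed = sum(any(v == 1 for v in random_restriction(n, p).values())
                for _ in range(trials))
frac = collapsed / trials
print(frac)    # essentially 1.0: OR almost always trivializes
```

With 64 variables each fixed to 1 with probability 0.45, the chance that no variable is fixed to 1 is about $0.55^{64}$, i.e. negligible.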