302 research outputs found
Efficient deterministic approximate counting for low-degree polynomial threshold functions
We give a deterministic algorithm for approximately counting satisfying
assignments of a degree-$d$ polynomial threshold function (PTF). Given a
degree-$d$ input polynomial $p(x_1,\dots,x_n)$ over $\mathbb{R}^n$ and a parameter
$\epsilon > 0$, our algorithm approximates $\Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0]$
to within an additive $\pm\epsilon$ in time $O_{d,\epsilon}(1) \cdot \mathrm{poly}(n^d)$.
(Any sort of efficient multiplicative approximation is impossible even for
randomized algorithms under standard complexity-theoretic assumptions.) Note that the
running time of our algorithm (as a function of $n^d$, the number of
coefficients of a degree-$d$ PTF) is a \emph{fixed} polynomial. The fastest
previous algorithm for this problem (due to Kane), based on constructions of
unconditional pseudorandom generators for degree-$d$ PTFs, has a running time
whose exponent in $n$ grows as $\epsilon$ decreases.
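For intuition, an additive $\pm\epsilon$ approximation is easy to achieve with randomness by plain Monte Carlo sampling; the point of this work is to match that guarantee deterministically. A minimal sketch of the randomized baseline (not the algorithm of this paper; the callable p and the sample bound are illustrative assumptions):

import math
import random

def additive_estimate(p, n, eps, delta=0.01):
    """Estimate Pr_{x ~ {-1,1}^n}[p(x) >= 0] to within an additive eps,
    with failure probability at most delta, by Monte Carlo sampling.
    Illustrative randomized baseline only."""
    # Hoeffding bound: m >= ln(2/delta) / (2*eps^2) samples suffice.
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = 0
    for _ in range(m):
        x = [random.choice((-1, 1)) for _ in range(n)]
        if p(x) >= 0:
            hits += 1
    return hits / m

# Example with a degree-2 PTF p(x) = x1*x2 + x3 - 0.5:
estimate = additive_estimate(lambda x: x[0] * x[1] + x[2] - 0.5, n=10, eps=0.05)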
The key novel contributions of this work are: (1) A new multivariate central
limit theorem, proved using tools from Malliavin calculus and Stein's Method.
This new CLT shows that any collection of Gaussian polynomials with small
eigenvalues must have a joint distribution which is very close to a
multidimensional Gaussian distribution. (2) A new decomposition of low-degree
multilinear polynomials over Gaussian inputs. Roughly speaking, we show that (up
to some small error) any such polynomial can be decomposed into a bounded
number of multilinear polynomials all of which have extremely small
eigenvalues. We use these new ingredients to give a deterministic algorithm for
a Gaussian-space version of the approximate counting problem, and then employ
standard techniques for working with low-degree PTFs (invariance principles and
regularity lemmas) to reduce the original approximate counting problem over the
Boolean hypercube to the Gaussian version.
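For a flavor of what "small eigenvalues" buys in the simplest case (a single degree-2 polynomial; this informal illustration is not the multivariate statement proved in the paper): for $x \sim N(0, I_n)$ and a symmetric matrix $A$ with eigenvalues $\lambda_1, \dots, \lambda_n$, the centered quadratic form $q(x) = x^\top A x - \operatorname{tr}(A)$ satisfies
\[ \operatorname{Var}[q] \;=\; 2 \sum_i \lambda_i^2 \;=\; 2 \|A\|_F^2 , \]
and $q / \sqrt{\operatorname{Var}[q]}$ is close to a standard Gaussian whenever $\max_i |\lambda_i| \ll \|A\|_F$, i.e. when no single eigenvalue accounts for a constant fraction of the variance.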
Explicit Optimal Hardness via Gaussian stability results
The results of Raghavendra (2008) show that assuming Khot's Unique Games
Conjecture (2002), for every constraint satisfaction problem there exists a
generic semi-definite program that achieves the optimal approximation factor.
This result is existential as it does not provide an explicit optimal rounding
procedure nor does it allow one to calculate exactly the Unique Games hardness of
the problem.
Obtaining an explicit optimal approximation scheme and the corresponding
approximation factor is a difficult challenge for each specific approximation
problem. An approach for determining the exact approximation factor and the
corresponding optimal rounding was established in the analysis of MAX-CUT (KKMO
2004) and the use of the Invariance Principle (MOO 2005). However, this
approach crucially relies on results explicitly proving optimal partitions in
Gaussian space. Until recently, Borell's result (Borell 1985) was the only
non-trivial Gaussian partition result known.
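For reference, Borell's result can be stated informally as follows (our paraphrase, not part of the original abstract): if $X$ and $Y$ are $\rho$-correlated standard Gaussian vectors in $\mathbb{R}^n$ with $\rho > 0$, then among all sets $A, B \subseteq \mathbb{R}^n$ of prescribed Gaussian measures, the quantity
\[ \Pr[\, X \in A, \; Y \in B \,] \]
is maximized when $A$ and $B$ are parallel half-spaces.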
In this paper we derive the first explicit optimal approximation algorithm
and the corresponding approximation factor using a new result on Gaussian
partitions due to Isaksson and Mossel (2012). This Gaussian result allows us to
determine exactly the Unique Games Hardness of MAX-3-EQUAL. In particular, our
results show that Zwick's algorithm for this problem achieves the optimal
approximation factor, and prove that the approximation factor it achieves is
exactly as conjectured by Zwick.
We further use the previously known optimal Gaussian partition results to
obtain a new Unique Games Hardness factor for MAX-k-CSP: using the well-known
fact that jointly normal, pairwise independent random variables are fully
independent, we derive a UGC hardness bound for MAX-k-CSP that improves on the
results of Austrin and Mossel (2009).
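The well-known fact invoked above can be spelled out in one line: if $(Z_1, \dots, Z_k)$ is jointly normal and the $Z_i$ are pairwise independent, then $\operatorname{Cov}(Z_i, Z_j) = 0$ for all $i \neq j$, so the covariance matrix is diagonal,
\[ \Sigma = \operatorname{diag}(\sigma_1^2, \dots, \sigma_k^2), \]
and for a multivariate Gaussian a diagonal covariance matrix forces the joint density to factor into the product of the marginals, i.e. the $Z_i$ are fully independent.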
Majority is Stablest: Discrete and SoS
The Majority is Stablest Theorem has numerous applications in hardness of
approximation and social choice theory. We give a new proof of the Majority is
Stablest Theorem by induction on the dimension of the discrete cube. Unlike the
previous proof, it uses neither the "invariance principle" nor Borell's result
in Gaussian space. The new proof is general enough to include all previous
variants of Majority is Stablest such as "it ain't over until it's over" and
"Majority is most predictable". Moreover, the new proof allows us to derive a
proof of Majority is Stablest at a constant level of the Sum of Squares
hierarchy. This implies in particular that the Khot-Vishnoi instance of Max-Cut
does not provide a gap instance for the Lasserre hierarchy.
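For reference, the statement in question (our informal paraphrase) is: for every $\rho \in [0, 1)$ and $\epsilon > 0$ there is a $\tau > 0$ such that every $f \colon \{-1,1\}^n \to [-1,1]$ with $\mathbb{E}[f] = 0$ and all influences at most $\tau$ satisfies
\[ \operatorname{Stab}_\rho(f) \;=\; \mathbb{E}\bigl[ f(x) f(y) \bigr] \;\le\; 1 - \tfrac{2}{\pi} \arccos \rho + \epsilon , \]
where $(x, y)$ are $\rho$-correlated points of the cube and the right-hand side is the limiting noise stability of the majority function.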
Noisy population recovery in polynomial time
In the noisy population recovery problem of Dvir et al., the goal is to learn
an unknown distribution $f$ on binary strings of length $n$ from noisy samples.
For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping
each coordinate of a sample from $f$ independently with probability
$(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the
distribution, and the goal is to estimate the probability of any string to
within some given error $\epsilon$. It is known that the algorithmic
complexity and sample complexity of this problem are polynomially related to
each other.
We show that for $\mu > 0$, the sample complexity (and hence the algorithmic
complexity) is bounded by a polynomial in $k$, $n$ and $1/\epsilon$,
improving upon the previous best result due to Lovett and Zhang.
Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated}
version of M\"{o}bius inversion. In turn, the latter crucially uses the
construction of a \emph{robust local inverse} due to Moitra and Saks.
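To make the sampling model concrete, here is a minimal sketch of the noise process (illustrative only; the helper names are hypothetical and the flip probability $(1-\mu)/2$ follows the formulation above):

import random

def noisy_sample(support, probs, mu):
    """Draw one noisy sample: pick a string from the unknown support-k
    distribution, then flip each coordinate independently with
    probability (1 - mu) / 2."""
    s = random.choices(support, weights=probs)[0]
    flip_p = (1 - mu) / 2
    return tuple(bit ^ int(random.random() < flip_p) for bit in s)

# Hypothetical example: a distribution supported on k = 2 strings of length n = 4.
support = [(0, 0, 0, 0), (1, 1, 0, 1)]
probs = [0.7, 0.3]
samples = [noisy_sample(support, probs, mu=0.4) for _ in range(1000)]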
Non interactive simulation of correlated distributions is decidable
A basic problem in information theory is the following: Let $P = (X, Y)$ be an
arbitrary distribution where the marginals $X$ and $Y$ are (potentially)
correlated. Let Alice and Bob be two players where Alice gets the samples
$\{x_i\}$, Bob gets the samples $\{y_i\}$, and for all $i$, $(x_i, y_i) \sim P$.
What joint distributions $Q$ can be simulated by Alice and Bob without any
interaction?
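As a toy illustration of this setup (our own sketch; the function names and the example source are hypothetical), Alice and Bob each apply a local map to their own halves of the shared samples and never communicate:

import random

def simulate(draw_pair, alice_map, bob_map, m, trials=10000):
    """Estimate the joint distribution of (alice_map(x_1..x_m), bob_map(y_1..y_m))
    when Alice only ever sees the x-samples and Bob only the y-samples."""
    counts = {}
    for _ in range(trials):
        xs, ys = zip(*(draw_pair() for _ in range(m)))
        key = (alice_map(xs), bob_map(ys))
        counts[key] = counts.get(key, 0) + 1
    return {k: v / trials for k, v in counts.items()}

# Hypothetical source P: a pair of rho-correlated unbiased bits.
def correlated_bits(rho=0.8):
    x = random.randint(0, 1)
    y = x if random.random() < (1 + rho) / 2 else 1 - x
    return x, y

# Outputting the first coordinate on each side trivially reproduces P itself.
print(simulate(correlated_bits, lambda xs: xs[0], lambda ys: ys[0], m=5))

The problem asks which target distributions $Q$ can be approximated arbitrarily well by some choice of such local (possibly randomized) maps.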
Classical works in information theory by G{\'a}cs-K{\"o}rner and Wyner answer
this question when at least one of $P$ or $Q$ is the distribution on
$\{0,1\} \times \{0,1\}$ in which each marginal is an unbiased bit and the two
coordinates are identical. However, other than this special case, the answer to
this question is understood in very few cases. Recently, Ghazi, Kamath and
Sudan showed that this problem is decidable for $Q$ supported on
$\{0,1\} \times \{0,1\}$. We extend their result to $Q$ supported on any finite
alphabet.
We rely on recent results in Gaussian geometry (by the authors) as well as a
new \emph{smoothing argument} inspired by the method of \emph{boosting} from
learning theory and potential function arguments from complexity theory and
additive combinatorics.
Comment: The reduction from non-interactive simulation for a general source
distribution to the Gaussian case was incorrect in the previous version. It
has been rectified now.
- …