
    Efficient deterministic approximate counting for low-degree polynomial threshold functions

    We give a deterministic algorithm for approximately counting satisfying assignments of a degree-$d$ polynomial threshold function (PTF). Given a degree-$d$ input polynomial $p(x_1,\dots,x_n)$ over $\mathbb{R}^n$ and a parameter $\epsilon > 0$, our algorithm approximates $\Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0]$ to within an additive $\pm\epsilon$ in time $O_{d,\epsilon}(1)\cdot \mathrm{poly}(n^d)$. (Any sort of efficient multiplicative approximation is impossible even for randomized algorithms assuming $NP \neq RP$.) Note that the running time of our algorithm (as a function of $n^d$, the number of coefficients of a degree-$d$ PTF) is a \emph{fixed} polynomial. The fastest previous algorithm for this problem (due to Kane), based on constructions of unconditional pseudorandom generators for degree-$d$ PTFs, runs in time $n^{O_{d,c}(1) \cdot \epsilon^{-c}}$ for all $c > 0$. The key novel contributions of this work are: (1) a new multivariate central limit theorem, proved using tools from Malliavin calculus and Stein's method, which shows that any collection of Gaussian polynomials with small eigenvalues must have a joint distribution that is very close to a multidimensional Gaussian distribution; and (2) a new decomposition of low-degree multilinear polynomials over Gaussian inputs: roughly speaking, we show that (up to some small error) any such polynomial can be decomposed into a bounded number of multilinear polynomials, all of which have extremely small eigenvalues. We use these new ingredients to give a deterministic algorithm for a Gaussian-space version of the approximate counting problem, and then employ standard techniques for working with low-degree PTFs (invariance principles and regularity lemmas) to reduce the original approximate counting problem over the Boolean hypercube to the Gaussian version.
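
    To fix notation, here is a minimal sketch (not the paper's algorithm) of the quantity being approximated: a small degree-2 PTF over $\{-1,1\}^n$, its exact acceptance probability by brute force, and a naive Monte Carlo estimator achieving additive error $\pm\epsilon$ with $O(1/\epsilon^2)$ samples. The paper's contribution is to achieve this additive guarantee deterministically; the polynomial below is a made-up example.

```python
import itertools
import random

def p(x):
    # Example degree-2 multilinear polynomial p(x_1, ..., x_4); illustrative only.
    return 2 * x[0] * x[1] - x[2] * x[3] + 0.5 * x[0] - 0.25

def exact_fraction(n=4):
    # Brute-force Pr_{x ~ {-1,1}^n}[p(x) >= 0]; feasible only for tiny n.
    points = list(itertools.product([-1, 1], repeat=n))
    return sum(p(x) >= 0 for x in points) / len(points)

def sampled_fraction(n=4, eps=0.05, seed=0):
    # Naive randomized baseline: O(1/eps^2) samples give additive error ~eps w.h.p.
    # This is NOT the deterministic algorithm from the paper.
    rng = random.Random(seed)
    m = int(4 / eps ** 2)
    hits = sum(p([rng.choice([-1, 1]) for _ in range(n)]) >= 0 for _ in range(m))
    return hits / m

if __name__ == "__main__":
    print("exact  :", exact_fraction())
    print("sampled:", sampled_fraction())
```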

    Fooling intersections of low-weight halfspaces

    A weight-$t$ halfspace is a Boolean function $f(x) = \mathrm{sign}(w_1 x_1 + \cdots + w_n x_n - \theta)$ where each $w_i$ is an integer in $\{-t,\dots,t\}$. We give an explicit pseudorandom generator that $\delta$-fools any intersection of $k$ weight-$t$ halfspaces with seed length $\mathrm{poly}(\log n, \log k, t, 1/\delta)$. In particular, our result gives an explicit PRG that fools any intersection of any $\mathrm{quasipoly}(n)$ number of halfspaces of any $\mathrm{polylog}(n)$ weight to any $1/\mathrm{polylog}(n)$ accuracy using seed length $\mathrm{polylog}(n)$. Prior to this work no explicit PRG with non-trivial seed length was known even for fooling intersections of $n$ weight-$1$ halfspaces to constant accuracy. The analysis of our PRG fuses techniques from two different lines of work on unconditional pseudorandomness for different kinds of Boolean functions. We extend the approach of Harsha, Klivans, and Meka \cite{HKM12} for fooling intersections of regular halfspaces, and combine this approach with results of Bazzi \cite{Bazzi:07} and Razborov \cite{Razborov:09} on bounded-independence fooling of CNF formulas. Our analysis introduces new coupling-based ingredients into the standard Lindeberg method for establishing quantitative central limit theorems and associated pseudorandomness results.
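
    For concreteness, here is a minimal sketch of the objects involved: weight-$t$ halfspaces, their intersection $F$, and the $\delta$-fooling condition $|\mathrm{E}_{\mathrm{uniform}}[F] - \mathrm{E}_{\mathrm{PRG}}[F]| \leq \delta$. The "generator" below is a hypothetical stand-in (a seeded pseudo-random stream), not the explicit construction from the paper.

```python
import itertools
import random

def halfspace(weights, theta):
    # A weight-t halfspace: 1 iff w.x - theta >= 0, with integer weights in {-t,...,t}.
    return lambda x: sum(w * xi for w, xi in zip(weights, x)) - theta >= 0

def intersection(halfspaces):
    return lambda x: all(h(x) for h in halfspaces)

def mean_over_uniform(F, n):
    # Exact E_{x ~ {-1,1}^n}[F(x)] by enumeration (tiny n only).
    pts = list(itertools.product([-1, 1], repeat=n))
    return sum(F(x) for x in pts) / len(pts)

def mean_over_generator(F, n, num_seeds=1000):
    # Placeholder "generator": expand a short seed into n bits; stands in for the PRG.
    total = 0
    for seed in range(num_seeds):
        rng = random.Random(seed)
        total += F([rng.choice([-1, 1]) for _ in range(n)])
    return total / num_seeds

if __name__ == "__main__":
    n = 8  # two weight-2 halfspaces over n = 8 variables (illustrative)
    F = intersection([halfspace([1, -2, 0, 1, 2, -1, 0, 1], 0),
                      halfspace([2, 1, -1, 0, 0, 2, -2, 1], 1)])
    print("E_uniform[F]   =", mean_over_uniform(F, n))
    print("E_generator[F] =", mean_over_generator(F, n))
```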

    Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas

    We give the best known pseudorandom generators for two touchstone classes in unconditional derandomization: an $\varepsilon$-PRG for the class of size-$M$ depth-$d$ $\mathsf{AC}^0$ circuits with seed length $\log(M)^{d+O(1)}\cdot \log(1/\varepsilon)$, and an $\varepsilon$-PRG for the class of $S$-sparse $\mathbb{F}_2$ polynomials with seed length $2^{O(\sqrt{\log S})}\cdot \log(1/\varepsilon)$. These results bring the state of the art for unconditional derandomization of these classes into sharp alignment with the state of the art for computational hardness for all parameter settings: improving on the seed lengths of either PRG would require breakthrough progress on longstanding and notorious circuit lower bounds. The key enabling ingredient in our approach is a new \emph{pseudorandom multi-switching lemma}. We derandomize recently developed \emph{multi}-switching lemmas, which are powerful generalizations of Håstad's switching lemma that deal with \emph{families} of depth-two circuits. Our pseudorandom multi-switching lemma, a randomness-efficient algorithm for sampling restrictions that simultaneously simplify all circuits in a family, achieves the parameters obtained by the (full randomness) multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and Håstad [Hås14]. This optimality of our derandomization translates into the optimality (given current circuit lower bounds) of our PRGs for $\mathsf{AC}^0$ and sparse $\mathbb{F}_2$ polynomials.
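
    To illustrate the kind of object a switching lemma is about, here is a minimal sketch of a $p$-random restriction applied to a CNF (a depth-two circuit): each variable is left free with probability $p$ and otherwise fixed to a random bit. This sketch uses full randomness; the paper's contribution is a randomness-efficient way of sampling such restrictions that simplifies every circuit in a family simultaneously. The CNF encoding and parameters below are illustrative only.

```python
import random

def random_restriction(n, p, rng):
    # Map each variable index to None (left free, w.p. p) or a fixed bit 0/1.
    return {i: (None if rng.random() < p else rng.randint(0, 1)) for i in range(n)}

def restrict_cnf(clauses, rho):
    # clauses: list of clauses, each a list of signed literals (+i or -i, 1-indexed).
    restricted = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            v, want = abs(lit) - 1, (lit > 0)
            if rho[v] is None:
                new_clause.append(lit)   # variable left free: literal survives
            elif (rho[v] == 1) == want:
                satisfied = True         # literal satisfied: the whole clause vanishes
                break
        if not satisfied:
            if not new_clause:
                return [[]]              # empty clause: the restricted formula is False
            restricted.append(new_clause)
    return restricted

if __name__ == "__main__":
    rng = random.Random(1)
    cnf = [[1, -2, 3], [-1, 4], [2, -3, -4]]
    rho = random_restriction(4, p=0.3, rng=rng)
    print("restriction   :", rho)
    print("restricted CNF:", restrict_cnf(cnf, rho))
```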

    Testing probability distributions using conditional samples

    We study a new framework for property testing of probability distributions, by considering distribution testing algorithms that have access to a conditional sampling oracle.* This is an oracle that takes as input a subset $S \subseteq [N]$ of the domain $[N]$ of the unknown probability distribution $D$ and returns a draw from the conditional probability distribution $D$ restricted to $S$. This new model allows considerable flexibility in the design of distribution testing algorithms; in particular, testing algorithms in this model can be adaptive. We study a wide range of natural distribution testing problems in this new framework and some of its variants, giving both upper and lower bounds on query complexity. These problems include testing whether $D$ is the uniform distribution $\mathcal{U}$; testing whether $D = D^\ast$ for an explicitly provided $D^\ast$; testing whether two unknown distributions $D_1$ and $D_2$ are equivalent; and estimating the variation distance between $D$ and the uniform distribution. At a high level our main finding is that the new "conditional sampling" framework we consider is a powerful one: while all the problems mentioned above have $\Omega(\sqrt{N})$ sample complexity in the standard model (and in some cases the complexity must be almost linear in $N$), we give $\mathrm{poly}(\log N, 1/\varepsilon)$-query algorithms (and in some cases $\mathrm{poly}(1/\varepsilon)$-query algorithms independent of $N$) for all these problems in our conditional sampling setting. *Independently from our work, Chakraborty et al. also considered this framework; we discuss their work in Subsection 1.4.
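
    As a minimal sketch of the model, the code below implements a conditional sampling oracle (COND(S) returns a draw from $D$ restricted to $S$) and one adaptive query pattern it enables: conditioning on a two-element set to estimate the relative weight of two points. This is illustrative only and is not the complete tester from the paper; the distribution and parameters are assumptions made for the example.

```python
import random

class CondOracle:
    def __init__(self, weights, seed=0):
        # weights: unnormalized probability mass of each point in the domain [N].
        self.weights = weights
        self.rng = random.Random(seed)

    def cond(self, S):
        # Draw from D conditioned on the subset S (assumes D(S) > 0).
        ws = [self.weights[i] for i in S]
        return self.rng.choices(S, weights=ws, k=1)[0]

def compare_pair(oracle, x, y, trials=2000):
    # Estimate D(x) / (D(x) + D(y)) using conditional draws from the pair {x, y};
    # under the uniform distribution this concentrates around 1/2.
    hits = sum(oracle.cond([x, y]) == x for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    N = 8
    skewed = [2.0 if i == 0 else 1.0 for i in range(N)]  # point 0 carries double mass
    oracle = CondOracle(skewed)
    print("estimated D(0)/(D(0)+D(1)):", compare_pair(oracle, 0, 1))
```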