
    Pseudorandomness and the Minimum Circuit Size Problem


    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. In particular, this shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., to show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption that is weaker than the assumption previously known to suffice.
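    The “hashing techniques” mentioned above are in the spirit of the classical use of pairwise-independent hashing for approximate counting of NP-witnesses. The sketch below is a generic illustration of that idea, not the paper's construction: the witness set S, the parameters, and the estimation rule are chosen for readability only.

```python
# Generic illustration (not the paper's construction): estimating the size of a set S of
# n-bit witnesses with pairwise-independent hashing over GF(2). If a random linear map
# h: {0,1}^n -> {0,1}^m still sends some x in S to 0^m, then |S| is plausibly at least
# about 2^m; taking the largest such m over several trials gives a coarse estimate.
import random

def random_hash(n, m):
    """Pairwise-independent h(x) = Ax + b over GF(2); rows of A are stored as bit masks."""
    A = [random.getrandbits(n) for _ in range(m)]
    b = [random.getrandbits(1) for _ in range(m)]
    def h(x):
        return tuple((bin(A[i] & x).count("1") + b[i]) % 2 for i in range(m))
    return h

def estimate_size(S, n, trials=30):
    """Return 2^m for the largest m at which a zero preimage in S is usually found."""
    best = 0
    for m in range(1, n + 1):
        hits = sum(
            any(h(x) == (0,) * m for x in S)
            for h in (random_hash(n, m) for _ in range(trials))
        )
        if hits > trials // 2:
            best = m
    return 2 ** best

if __name__ == "__main__":
    n = 10
    S = set(random.sample(range(2 ** n), 200))  # a toy "witness set"
    print("true size:", len(S), "rough estimate:", estimate_size(S, n))
```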

    Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)

    Computational pseudorandomness studies the extent to which a random variable Z looks like the uniform distribution according to a class of tests F. Computational entropy generalizes computational pseudorandomness by studying the extent to which a random variable looks like a high-entropy distribution. There are different formal definitions of computational entropy with different advantages for different applications. Because of this, it is of interest to understand when these definitions are equivalent. We consider three notions of computational entropy which are known to be equivalent when the test class F is closed under taking majorities. This equivalence constitutes (essentially) the so-called dense model theorem of Green and Tao (later made explicit by Tao-Ziegler, Reingold et al., and Gowers). The dense model theorem plays a key role in Green and Tao's proof that the primes contain arbitrarily long arithmetic progressions, and has since been connected to a surprisingly wide range of topics in mathematics and computer science, including cryptography, computational complexity, combinatorics and machine learning. We show that, in different situations where F is not closed under majority, this equivalence fails. This in turn provides examples where the dense model theorem is false. Comment: 19 pages; to appear in ITCS 2021.
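    For orientation, the following is a compact, informal rendering of the objects discussed above, assuming standard conventions; it is not the paper's exact formulation, and the parameters are illustrative.

```latex
% Informal rendering, assuming standard conventions; not the paper's exact statements.
% Z is epsilon-pseudorandom for the test class F if no test in F distinguishes Z from uniform U_n:
\[
  \forall f \in \mathcal{F}:\quad
  \bigl|\,\mathbb{E}[f(\mathbf{Z})] - \mathbb{E}[f(U_n)]\,\bigr| \le \varepsilon .
\]
% HILL-style computational entropy: Z has F-entropy at least k if some W with
% information-theoretic min-entropy at least k passes the same tests as Z:
\[
  \exists\, \mathbf{W} \text{ with } H_\infty(\mathbf{W}) \ge k \quad \text{such that} \quad
  \forall f \in \mathcal{F}:\;
  \bigl|\,\mathbb{E}[f(\mathbf{Z})] - \mathbb{E}[f(\mathbf{W})]\,\bigr| \le \varepsilon .
\]
% The dense model theorem says, roughly, that a distribution which is delta-dense inside a
% distribution pseudorandom against a slightly stronger class F' (built from majorities of
% F-tests) has F-entropy close to n - log(1/delta); the paper exhibits settings where this
% fails when F is not closed under majority.
```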

    Algorithms and lower bounds for de Morgan formulas of low-communication leaf gates

    The class $FORMULA[s] \circ \mathcal{G}$ consists of Boolean functions computable by size-$s$ de Morgan formulas whose leaves are any Boolean functions from a class $\mathcal{G}$. We give lower bounds and (SAT, Learning, and PRG) algorithms for $FORMULA[n^{1.99}] \circ \mathcal{G}$, for classes $\mathcal{G}$ of functions with low communication complexity. Let $R^{(k)}(\mathcal{G})$ be the maximum $k$-party NOF randomized communication complexity of $\mathcal{G}$. We show: (1) The Generalized Inner Product function $GIP^k_n$ cannot be computed in $FORMULA[s] \circ \mathcal{G}$ on more than a $1/2+\varepsilon$ fraction of inputs for $s = o\!\left(\frac{n^2}{\left(k \cdot 4^k \cdot R^{(k)}(\mathcal{G}) \cdot \log(n/\varepsilon) \cdot \log(1/\varepsilon)\right)^{2}}\right)$. As a corollary, we get an average-case lower bound for $GIP^k_n$ against $FORMULA[n^{1.99}] \circ PTF^{k-1}$. (2) There is a PRG of seed length $n/2 + O\left(\sqrt{s} \cdot R^{(2)}(\mathcal{G}) \cdot \log(s/\varepsilon) \cdot \log(1/\varepsilon)\right)$ that $\varepsilon$-fools $FORMULA[s] \circ \mathcal{G}$. For $FORMULA[s] \circ LTF$, we get the better seed length $O\left(n^{1/2} \cdot s^{1/4} \cdot \log(n) \cdot \log(n/\varepsilon)\right)$. This gives the first non-trivial PRG (with seed length $o(n)$) for intersections of $n$ half-spaces in the regime where $\varepsilon \leq 1/n$. (3) There is a randomized $2^{n-t}$-time $\#$SAT algorithm for $FORMULA[s] \circ \mathcal{G}$, where $t = \Omega\left(\frac{n}{\sqrt{s} \cdot \log^2(s) \cdot R^{(2)}(\mathcal{G})}\right)^{1/2}$. In particular, this implies a nontrivial $\#$SAT algorithm for $FORMULA[n^{1.99}] \circ LTF$. (4) The Minimum Circuit Size Problem is not in $FORMULA[n^{1.99}] \circ XOR$. On the algorithmic side, we show that $FORMULA[n^{1.99}] \circ XOR$ can be PAC-learned in time $2^{O(n/\log n)}$.
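    As a concrete reference point for statement (1), the sketch below computes the Generalized Inner Product function under the common convention that each of the $k$ parties holds an $n$-bit vector and the output is the parity of the coordinatewise ANDs; conventions differ across papers (some let $n$ denote the total input length), so this parameterization is an assumption made for illustration only.

```python
# Illustration of the Generalized Inner Product function GIP^k_n, under the convention
# that each of k parties holds an n-bit vector and the output is the parity of the
# coordinatewise ANDs (other papers parameterize by the total input length instead).

def gip(vectors):
    """XOR over positions of the AND of all parties' bits at that position."""
    n = len(vectors[0])
    out = 0
    for i in range(n):
        out ^= int(all(v[i] for v in vectors))
    return out

if __name__ == "__main__":
    # k = 3 parties, n = 4 coordinates: positions 0 and 3 are all-ones, so the parity is 0.
    x = [1, 0, 1, 1]
    y = [1, 1, 0, 1]
    z = [1, 0, 1, 1]
    print(gip([x, y, z]))  # -> 0
```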

    Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness

    We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size ≤ s(n). We show: Learning Speedups: If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{ω(1)}, then for every k ≥ 1 and ε > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^ε}). There is an ε > 0 such that C[2^{n^ε}] can be learned in time 2^n/n^{ω(1)} if and only if C[poly(n)] can be learned in time 2^{(log n)^{O(1)}}. Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression, and probabilistic average-case function compression. A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)]. Lower Bounds from Nontrivial Learning: If for each k ≥ 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{ω(1)}, then for each k ≥ 1, BPE is not contained in (depth-d)-C[n^k]. If for some ε > 0 there are P-natural proofs useful against C[2^{n^ε}], then ZPEXP is not contained in C[poly(n)]. Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE ⊆ i.o.Circuit[n^k], then BPEXP ⊆ i.o.EXP/O(log n). If ZPEXP ⊆ i.o.Circuit[2^{n/3}], then ZPEXP ⊆ i.o.ESUBEXP. Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP ∈ TC^0 then NC^1 = TC^0.
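    To make the learning model concrete: a learner with membership queries may evaluate the unknown function on inputs of its choice, and the trivial learner below simply recovers the whole truth table in roughly 2^n queries. "Non-trivial" learning in the statements above means beating this 2^n baseline, e.g. running in time 2^n/n^{ω(1)} while only achieving weak accuracy. This is a generic illustration, not an algorithm from the paper.

```python
# Generic illustration (not from the paper): a membership-query learner that recovers an
# unknown n-bit Boolean function exactly by querying all 2^n inputs. Non-trivial learning
# means beating this 2^n baseline, e.g. running in time 2^n / n^{omega(1)}.
from itertools import product

def trivial_learner(membership_query, n):
    """Exact learning via exhaustive membership queries: returns the full truth table."""
    return {x: membership_query(x) for x in product((0, 1), repeat=n)}

if __name__ == "__main__":
    n = 3
    secret = lambda x: x[0] ^ (x[1] & x[2])          # the unknown target function
    table = trivial_learner(secret, n)
    hypothesis = lambda x: table[x]                  # the learned hypothesis
    assert all(hypothesis(x) == secret(x) for x in product((0, 1), repeat=n))
    print("recovered a truth table with", len(table), "entries")
```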

    Pseudorandom generators and the BQP vs. PH problem

    It is a longstanding open problem to devise an oracle relative to which BQP does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural conjecture about the capacity of the Nisan-Wigderson pseudorandom generator [NW94] to fool AC^0, with MAJORITY as its hard function. Our conjecture is essentially that the loss due to the hybrid argument (which is a component of the standard proof from [NW94]) can be avoided in this setting. This is a question that has been asked previously in the pseudorandomness literature [BSW03]. We then make three main contributions: (1) We show that our conjecture implies the existence of an oracle relative to which BQP is not in the PH. This entails giving an explicit construction of unitary matrices, realizable by small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a simple framework (generalizing the setting of Aaronson [A10]) in which any efficiently quantumly computable unitary gives rise to a distribution that can be distinguished from the uniform distribution by an efficient quantum algorithm. When applied to the unitaries we construct, this framework yields a problem that can be solved quantumly, and which forms the basis for the desired oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our conjecture; our conjecture is thus formally easier to prove. The GLN conjecture was recently proved false for depth greater than 2 [A10a], but it remains open for depth 2. If true, the depth-2 version of either conjecture would imply an oracle relative to which BQP is not in AM, which is itself an outstanding open problem. Taken together, our results have the following interesting interpretation: they give an instantiation of the Nisan-Wigderson generator that can be broken by quantum computers, but not by the relevant modes of classical computation, if our conjecture is true. Comment: Updated in light of the counterexample to the GLN conjecture.
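    For readers unfamiliar with the construction, the toy sketch below shows the shape of the Nisan-Wigderson generator with MAJORITY as the hard function: seed positions are indexed by a combinatorial design of nearly disjoint sets, and each output bit is MAJORITY applied to the seed restricted to one set. The parameters and the polynomial-based design here are illustrative assumptions, not the instantiation analyzed in the paper.

```python
# Toy sketch of the Nisan-Wigderson generator with MAJORITY as the hard function
# (illustrative parameters; not the instantiation analyzed in the paper).
# Design: for a prime P, index the P^2 seed positions by pairs (a, b) in GF(P) x GF(P);
# for each polynomial q of degree < 3 over GF(P), take S_q = {(a, q(a)) : a in GF(P)}.
# Two distinct such polynomials agree on at most 2 points, so the sets are nearly disjoint.
import random
from itertools import product

P = 5  # seed length P*P = 25; each design set has size P; P^3 = 125 output bits

def design():
    """P^3 subsets of [P*P], each of size P, with pairwise intersections of size <= 2."""
    return [
        [a * P + ((c2 * a * a + c1 * a + c0) % P) for a in range(P)]  # flatten (a, q(a))
        for c2, c1, c0 in product(range(P), repeat=3)
    ]

def majority(bits):
    return int(2 * sum(bits) > len(bits))

def nw_generator(seed):
    """One output bit per design set: MAJORITY of the seed restricted to that set."""
    assert len(seed) == P * P
    return [majority([seed[i] for i in S]) for S in design()]

if __name__ == "__main__":
    seed = [random.getrandbits(1) for _ in range(P * P)]
    out = nw_generator(seed)
    print(f"stretched {P * P} seed bits into {len(out)} output bits")
```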

    An average-case lower bound against ACC^0

    In a seminal work, Williams [22] showed that NEXP (nondeterministic exponential time) does not have polynomial-size ACC^0 circuits. Williams' technique inherently gives a worst-case lower bound, and until now, no average-case version of his result was known. We show that there is a language L in NEXP and a function ε(n) = 1/(log n)^{ω(1)} such that no sequence of polynomial-size ACC^0 circuits solves L on more than a 1/2 + ε(n) fraction of inputs of length n for all large enough n. Complementing this result, we give a nontrivial pseudorandom generator against polynomial-size AC^0[6] circuits. We also show that learning algorithms for quasi-polynomial size ACC^0 circuits running in time 2^n/n^{ω(1)} imply lower bounds for the randomised exponential time classes RE (randomized time 2^{O(n)} with one-sided error) and ZPE/1 (zero-error randomized time 2^{O(n)} with 1 bit of advice) against polynomial-size ACC^0 circuits. This strengthens results of Oliveira and Santhanam [15].