433 research outputs found

    Pseudorandomness and Average-Case Complexity Via Uniform Reductions


    Pseudorandomness and the Minimum Circuit Size Problem


    Conspiracies between learning algorithms, circuit lower bounds, and pseudorandomness

    We prove several results giving new and stronger connections between learning theory, circuit complexity, and pseudorandomness. Let C be any typical class of Boolean circuits, and let C[s(n)] denote n-variable C-circuits of size at most s(n). We show:

    Learning Speedups. If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n / n^{ω(1)}, then for every k ≥ 1 and ε > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^ε}). Moreover, there is ε > 0 such that C[2^{n^ε}] can be learned in time 2^n / n^{ω(1)} if and only if C[poly(n)] can be learned in time 2^{(log n)^{O(1)}}.

    Equivalences between Learning Models. We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression, and probabilistic average-case function compression.

    A Dichotomy between Learnability and Pseudorandomness. In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].

    Lower Bounds from Nontrivial Learning. If for each k ≥ 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n / n^{ω(1)}, then for each k ≥ 1, BPE ⊄ (depth-d)-C[n^k]. If for some ε > 0 there are P-natural proofs useful against C[2^{n^ε}], then ZPEXP ⊄ C[poly(n)].

    Karp-Lipton Theorems for Probabilistic Classes. If there is a k > 0 such that BPE ⊆ i.o.Circuit[n^k], then BPEXP ⊆ i.o.EXP/O(log n). If ZPEXP ⊆ i.o.Circuit[2^{n/3}], then ZPEXP ⊆ i.o.ESUBEXP.

    Hardness Results for MCSP. All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem (MCSP) via truth-table reductions computable by TC^0 circuits. In particular, if MCSP ∈ TC^0 then NC^1 = TC^0.
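
    As a concrete, hedged illustration of the object in the last item above: the Minimum Circuit Size Problem asks, given the full truth table of a Boolean function and a size bound s, whether some circuit of size at most s computes it. The sketch below brute-forces this on tiny instances over the basis {AND, OR, NOT}; the basis, the convention that NOT counts as a gate, and the encoding are illustrative choices and are not taken from the paper.

        from itertools import product

        # Illustrative brute-force MCSP decider for tiny instances (not the paper's
        # construction).  tt is a truth table of length 2^n; we ask whether some
        # circuit with at most s gates over {AND, OR, NOT} computes it.
        def mcsp(tt, s):
            n = len(tt).bit_length() - 1
            assert len(tt) == 1 << n, "truth table length must be a power of two"

            # Represent each available "wire" by its truth table over all 2^n inputs.
            inputs = list(product((0, 1), repeat=n))
            base = [tuple(x[i] for x in inputs) for i in range(n)]        # input variables
            base += [tuple([0] * len(inputs)), tuple([1] * len(inputs))]  # constants 0 and 1
            target = tuple(tt)

            def search(wires, gates_left):
                if target in wires:
                    return True
                if gates_left == 0:
                    return False
                # Try adding one more gate fed by existing wires.
                for a in wires:
                    new = tuple(1 - x for x in a)                          # NOT gate
                    if new not in wires and search(wires + [new], gates_left - 1):
                        return True
                    for b in wires:
                        for op in (lambda x, y: x & y, lambda x, y: x | y):  # AND, OR gates
                            new = tuple(op(x, y) for x, y in zip(a, b))
                            if new not in wires and search(wires + [new], gates_left - 1):
                                return True
                return False

            return search(base, s)

        # NAND on two variables needs two gates over this basis, but not one.
        print(mcsp((1, 1, 1, 0), 1))  # False
        print(mcsp((1, 1, 1, 0), 2))  # True: NOT(x AND y)

    The search is exponential in both the size bound and the truth-table length; it is meant only to pin down the decision problem the abstract refers to, not to suggest an efficient algorithm.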

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., to show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives under an assumption that is no stronger than the one used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption weaker than the one previously known to suffice.
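
    The hashing step mentioned above is, at heart, the classical use of pairwise-independent hash functions to estimate the number of NP-witnesses: add k random parity constraints and ask an NP oracle whether any witness survives; the largest such k estimates log2 of the count. The sketch below is a minimal illustration of that idea, not the paper's construction: the NP-oracle query is replaced by brute-force enumeration, and the witness predicate, trial count, and majority vote are assumptions made for the demo.

        import random
        from itertools import product

        def survives(check, m, k, trials=9):
            """Majority vote over `trials` runs: do k random parity constraints
            still leave some witness of `check` alive?"""
            votes = 0
            for _ in range(trials):
                # Each constraint is <a, x> = b over GF(2), with a and b uniformly random.
                constraints = [([random.randint(0, 1) for _ in range(m)], random.randint(0, 1))
                               for _ in range(k)]
                found = any(
                    check(x) and all(sum(ai * xi for ai, xi in zip(a, x)) % 2 == b
                                     for a, b in constraints)
                    for x in product((0, 1), repeat=m)   # stand-in for an NP-oracle query
                )
                votes += found
            return votes > trials // 2

        def approx_log_count(check, m):
            """Estimate log2 of |{x in {0,1}^m : check(x)}| as the largest k
            at which random parity constraints still leave a witness."""
            k = 0
            while k <= m and survives(check, m, k):
                k += 1
            return k - 1   # -1 signals that no witness was found at all

        # Example: witnesses are the 5-bit strings of even parity; there are 2^4 of them,
        # so the estimate is typically close to 4.
        print(approx_log_count(lambda x: sum(x) % 2 == 0, 5))

    The constructions in this line of work make such estimators deterministic under hardness assumptions; the sketch is only meant to show where the hashing enters.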