524 research outputs found

    Quantified Derandomization of Linear Threshold Circuits

    One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for $TC^0$, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for $TC^0$. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of $TC^0$ circuits of depth $d>2$. Our first main result is a quantified derandomization algorithm for $TC^0$ circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a $TC^0$ circuit $C$ over $n$ input bits with depth $d$ and $n^{1+\exp(-d)}$ wires, runs in almost-polynomial time, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs. In fact, our algorithm works even when the circuit $C$ is a linear threshold circuit, rather than just a $TC^0$ circuit (i.e., $C$ is a circuit with linear threshold gates, which are stronger than majority gates). Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of $TC^0$, and would consequently imply that $NEXP \not\subseteq TC^0$. Specifically, if there exists a quantified derandomization algorithm that gets as input a $TC^0$ circuit $C$ with depth $d$ and $n^{1+O(1/d)}$ wires (rather than $n^{1+\exp(-d)}$ wires), runs in time at most $2^{n^{\exp(-d)}}$, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs, then there exists an algorithm with running time $2^{n^{1-\Omega(1)}}$ for standard derandomization of $TC^0$.
    Comment: Changes in this revision: an additional result (a PRG for quantified derandomization of depth-2 LTF circuits); rewrite of some of the exposition; minor corrections.
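
    For concreteness, the quantified derandomization task described above can be phrased as the following promise problem (a restatement of the abstract's parameters, not text from the paper), with exceptional-input bound $B(n) = 2^{n^{1-1/5d}}$:

    \[
    \Pi_{\mathrm{YES}} = \{ C : |C^{-1}(0)| \le B(n) \}, \qquad
    \Pi_{\mathrm{NO}} = \{ C : |C^{-1}(1)| \le B(n) \},
    \]

    where $C$ ranges over depth-$d$ linear threshold circuits on $n$ inputs with $n^{1+\exp(-d)}$ wires. A quantified derandomization algorithm must accept every circuit in $\Pi_{\mathrm{YES}}$, reject every circuit in $\Pi_{\mathrm{NO}}$, and may behave arbitrarily otherwise; standard derandomization corresponds to the much larger exceptional-input bound $B(n) = 2^n/3$.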

    Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas

    We give the best known pseudorandom generators for two touchstone classes in unconditional derandomization: an $\varepsilon$-PRG for the class of size-$M$ depth-$d$ $\mathsf{AC}^0$ circuits with seed length $\log(M)^{d+O(1)}\cdot \log(1/\varepsilon)$, and an $\varepsilon$-PRG for the class of $S$-sparse $\mathbb{F}_2$ polynomials with seed length $2^{O(\sqrt{\log S})}\cdot \log(1/\varepsilon)$. These results bring the state of the art for unconditional derandomization of these classes into sharp alignment with the state of the art for computational hardness for all parameter settings: improving on the seed lengths of either PRG would require breakthrough progress on longstanding and notorious circuit lower bounds. The key enabling ingredient in our approach is a new \emph{pseudorandom multi-switching lemma}. We derandomize recently developed \emph{multi}-switching lemmas, which are powerful generalizations of H{\aa}stad's switching lemma that deal with \emph{families} of depth-two circuits. Our pseudorandom multi-switching lemma---a randomness-efficient algorithm for sampling restrictions that simultaneously simplify all circuits in a family---achieves the parameters obtained by the (full-randomness) multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and H{\aa}stad [H{\aa}s14]. This optimality of our derandomization translates into the optimality (given current circuit lower bounds) of our PRGs for $\mathsf{AC}^0$ and sparse $\mathbb{F}_2$ polynomials.
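
    As a reminder of the object being constructed (this is the standard definition, not text from the paper): a generator $G \colon \{0,1\}^s \to \{0,1\}^n$ is an $\varepsilon$-PRG for a class $\mathcal{C}$ of functions if it fools every function in the class,

    \[
    \Big| \Pr_{x \sim \{0,1\}^n}[C(x)=1] - \Pr_{z \sim \{0,1\}^s}[C(G(z))=1] \Big| \le \varepsilon \quad \text{for every } C \in \mathcal{C},
    \]

    and $s$ is called the seed length. The results above achieve $s = \log(M)^{d+O(1)} \cdot \log(1/\varepsilon)$ for size-$M$ depth-$d$ $\mathsf{AC}^0$ circuits and $s = 2^{O(\sqrt{\log S})} \cdot \log(1/\varepsilon)$ for $S$-sparse $\mathbb{F}_2$ polynomials.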

    New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method

    In this paper, we obtain several new results on lower bounds and derandomization for ACC^0 circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity): 1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC^0 verifier (denoted by MA_{ACC^0}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and proof length. 2) We show that MA_{ACC^0} cannot be computed by fixed-polynomial-size ACC^0 circuits, and our hard languages are hard on a sufficiently dense set of input lengths. 3) We show that NEXP (nondeterministic exponential time) does not have ACC^0 circuits of sub-half-exponential size, improving the previous sub-third-exponential size lower bound for NEXP against ACC^0 by [Williams, J. ACM 2014]. Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^polylog(n)] ⊄ ACC^0 by [Murray and Williams, SICOMP 2020]: instead of going through an easy witness lemma as they did, we first prove an ACC^0 lower bound for a subclass of MA, and then derandomize that subclass into NQP, while retaining its hardness against ACC^0. Moreover, since our derandomization of MA_{ACC^0} achieves a polynomial proof length, we indeed prove that nondeterministic quasi-polynomial time with n^{O(1)} nondeterminism bits (denoted as NTIMEGUESS[2^polylog(n), n^{O(1)}]) has no poly(n)-size ACC^0 circuits, giving a new proof of a result by Vyas. Combining with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^polylog(n), n^{O(1)}] cannot be (1/2+1/poly(n))-approximated by poly(n)-size ACC^0 circuits, improving the recent strongly average-case lower bounds for NQP against ACC^0 by [Chen and Ren, STOC 2020]. One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC^0[2] non-adaptive oracle circuits. Our construction builds on and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], which all require at least TC^0 oracle circuits for implementing these properties.
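
    For orientation, MA_{ACC^0} denotes Merlin-Arthur protocols whose probabilistic verification is computable by (uniform) ACC^0 circuits; the acceptance condition is the standard one for MA (a standard definition, not text from the paper):

    \[
    x \in L \;\Rightarrow\; \exists y \; \Pr_r[V(x,y,r)=1] \ge 2/3, \qquad
    x \notin L \;\Rightarrow\; \forall y \; \Pr_r[V(x,y,r)=1] \le 1/3,
    \]

    where the proof $y$ and the randomness $r$ have length poly(|x|), and for MA_{ACC^0} the verifier $V$ is restricted to ACC^0. Derandomizing such a protocol replaces Arthur's random choices by a nondeterministic simulation; since the Merlin proof stays polynomially long, this is what places MA_{ACC^0} (on infinitely many input lengths) inside NTIMEGUESS[2^polylog(n), n^{O(1)}], the class for which the lower bounds above are then derived.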

    Pseudo-random graphs and bit probe schemes with one-sided error

    We study probabilistic bit-probe schemes for the membership problem. Given a set A of at most n elements from a universe of size m, we organize a data structure such that queries of the form "Is x in A?" can be answered very quickly. H. Buhrman, P. B. Miltersen, J. Radhakrishnan, and S. Venkatesh proposed a bit-probe scheme based on expanders. Their scheme needs space of $O(n\log m)$ bits and requires reading only one randomly chosen bit from the memory to answer a query. The answer is correct with high probability, with two-sided error. In this paper we show that for the same problem there exists a bit-probe scheme with one-sided error that needs space of $O(n\log^2 m + \mathrm{poly}(\log m))$ bits. The difference with the model of Buhrman, Miltersen, Radhakrishnan, and Venkatesh is that we consider a bit-probe scheme with an auxiliary word. This means that in our scheme the memory is split into two parts of different size: the main storage of $O(n\log^2 m)$ bits and a short word of $\log^{O(1)} m$ bits that is pre-computed once for the stored set A and 'cached'. To answer a query "Is x in A?" we are allowed to read the whole cached word and only one bit from the main storage. For some reasonable values of the parameters, our space bound is better than what can be achieved by any scheme without cached data.
    Comment: 19 pages.
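
    To make the one-probe, one-sided-error model concrete, here is a toy Bloom-filter-style sketch in Python; it is not the paper's expander-based construction, does not meet its space bounds, and omits the auxiliary cached word, and every identifier in it is made up for illustration. A query reads a single randomly chosen bit of the main storage; elements of A are never rejected, and a non-member is wrongly accepted only if its probed bit happens to be set.

    import hashlib
    import random

    class OneProbeMembership:
        """Toy one-probe membership structure with one-sided error.

        store() sets k hashed positions per element of A; query(x) reads ONE
        randomly chosen position out of x's k positions and answers "yes" iff
        that bit is 1.  Members are always accepted; a non-member is accepted
        only if its probed position is set, i.e. with probability at most
        (number of set bits) / m.
        """

        def __init__(self, m_bits: int, k_hashes: int, seed: int = 0):
            self.m, self.k, self.seed = m_bits, k_hashes, seed
            self.bits = bytearray(m_bits)   # "main storage": m bits (one byte per bit, for simplicity)

        def _positions(self, x: str):
            # k pseudo-random positions for x, derived from a keyed hash
            for i in range(self.k):
                digest = hashlib.sha256(f"{self.seed}:{i}:{x}".encode()).hexdigest()
                yield int(digest, 16) % self.m

        def store(self, elements):
            for x in elements:
                for p in self._positions(x):
                    self.bits[p] = 1

        def query(self, x: str) -> bool:
            p = random.choice(list(self._positions(x)))   # the single probe into main storage
            return self.bits[p] == 1

    # n = 100 stored elements, k = 5 hashes, m = 20*k*n bits => false-positive rate at most ~5%
    ds = OneProbeMembership(m_bits=20 * 5 * 100, k_hashes=5)
    ds.store([f"elem{i}" for i in range(100)])
    assert ds.query("elem7")            # members are always accepted
    print(ds.query("not-in-the-set"))   # usually False; True with small probability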

    Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

    What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH: - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}]. - Σ_2TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even average-case hardness of Σ_3SAT implies the average-case hardness of Σ_2TIME[n]. - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0. Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
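
    For reference, the Exponential-Time Hypothesis used in the third item is the standard assumption of Impagliazzo and Paturi (the usual formulation, not text from the paper), and Σ_kSAT is the canonical complete problem for the k-th level of PH (deciding a quantified Boolean formula with k alternating quantifier blocks, starting with an existential one):

    \[
    \textbf{ETH:}\quad \exists\, \varepsilon > 0 \text{ such that no deterministic algorithm decides 3-SAT on } n \text{ variables in time } 2^{\varepsilon n}.
    \]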

    Randomness in completeness and space-bounded computations

    The study of computational complexity investigates the role of various computational resources such as processing time, memory requirements, nondeterminism, randomness, nonuniformity, etc. in solving different types of computational problems. In this dissertation, we study the role of randomness in two fundamental areas of computational complexity: NP-completeness and space-bounded computations. The concept of completeness plays an important role in defining the notion of 'hard' problems in Computer Science. Intuitively, an NP-complete problem captures the difficulty of solving any problem in NP. Polynomial-time reductions are at the heart of defining completeness. However, there is no single notion of reduction; researchers have identified various polynomial-time reductions such as many-one reduction, truth-table reduction, Turing reduction, etc. Each such notion of reduction induces a notion of completeness. Finding the relationships among the various NP-completeness notions is a significant open problem. Our first result is about the separation of two such polynomial-time completeness notions for NP, namely, Turing completeness and many-one completeness. This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis. Our next result involves a conjecture by Even, Selman, and Yacobi [ESY84, SY82] which states that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. If true, this conjecture implies that a certain kind of probabilistic public-key cryptosystem is not secure. The conjecture has been open for 30 years. We provide evidence in support of a variant of this conjecture. We show that if there exist certain secure one-way functions, then the ESY conjecture for bounded-truth-table reductions holds. Next we turn our attention to space-bounded computations. We investigate probabilistic space-bounded machines that are allowed to access their random bits multiple times. Our main conceptual contribution here is to establish an interesting connection between the derandomization of such probabilistic space-bounded machines and the derandomization of probabilistic time-bounded machines. In particular, we show that if we can derandomize a multipass machine that makes even a small number of passes over its random tape and uses only O(log^2 n) random bits into deterministic polynomial time, then BPTIME(n) ⊆ DTIME(2^{o(n)}). Note that if we restrict the number of random bits to O(log n), then we can trivially derandomize the machine to polynomial time. Furthermore, it can be shown that if we restrict the number of passes to O(1), we can still derandomize the machine to polynomial time. Thus our result implies that any extension beyond these trivialities would lead to a currently unknown derandomization of BPTIME(n). Our final contribution concerns the derandomization of probabilistic time-bounded machines under branching program lower bounds. The standard method of derandomizing time-bounded probabilistic machines depends on various circuit lower bounds, which are notoriously hard to prove. We show that the derandomization of low-degree polynomial identity testing, a well-known problem in co-RP, can be obtained under certain branching program lower bounds. Note that branching programs are considered a weaker model of computation than Boolean circuits.
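
    Since the final contribution turns on polynomial identity testing being in co-RP, the following sketch may help fix ideas: it is the textbook Schwartz-Zippel randomized test for a black-box polynomial (the standard randomized algorithm, not the dissertation's derandomization), and the example polynomials at the bottom are made up for illustration.

    import random

    def is_identically_zero(poly, num_vars, degree, trials=20):
        """Randomized polynomial identity test (Schwartz-Zippel).

        `poly` is a black box evaluating a polynomial in `num_vars` variables
        of total degree at most `degree`.  Each trial evaluates it at a
        uniformly random point of S^n with |S| = 100*degree; by the
        Schwartz-Zippel lemma, a nonzero polynomial vanishes there with
        probability at most degree/|S| = 1/100.  The error is one-sided: the
        zero polynomial is never rejected.
        """
        S = range(100 * degree)
        for _ in range(trials):
            point = [random.choice(S) for _ in range(num_vars)]
            if poly(*point) != 0:
                return False   # witnessed a nonzero value: definitely not the zero polynomial
        return True            # declared zero; wrong with probability <= (1/100)^trials

    # (x + y)^2 - (x^2 + 2xy + y^2) is identically zero:
    print(is_identically_zero(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2), num_vars=2, degree=2))
    # x*y - x is not identically zero:
    print(is_identically_zero(lambda x, y: x*y - x, num_vars=2, degree=2))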