    Satisfiability and Derandomization for Small Polynomial Threshold Circuits

    A polynomial threshold function (PTF) is defined as the sign of a polynomial p : {0,1}^n -> R. A PTF circuit is a Boolean circuit whose gates are PTFs. We study the problems of exact and (promise) approximate counting for PTF circuits of constant depth.
    - Satisfiability (#SAT). We give the first zero-error randomized algorithm faster than exhaustive search that counts the number of satisfying assignments of a given constant-depth circuit with a super-linear number of wires whose gates are s-sparse PTFs, for s almost quadratic in the input size of the circuit; here a PTF is called s-sparse if its underlying polynomial has at most s monomials. More specifically, we show that, for any large enough constant c, given a depth-d circuit with (n^{2-1/c})-sparse PTF gates that has at most n^{1+epsilon_d} wires, where epsilon_d depends only on c and d, the number of satisfying assignments of the circuit can be computed in randomized time 2^{n-n^{epsilon_d}} with zero error. This generalizes the result of Chen, Santhanam, and Srinivasan (CCC, 2016), who gave a SAT algorithm for constant-depth circuits of super-linear wire complexity with linear threshold function (LTF) gates only.
    - Quantified derandomization. The quantified derandomization problem, introduced by Goldreich and Wigderson (STOC, 2014), asks to compute the majority value of a given Boolean circuit, under the promise that the minority-value inputs to the circuit are very few. We give a quantified derandomization algorithm for constant-depth PTF circuits with a super-linear number of wires that runs in quasi-polynomial time. More specifically, we show that for any sufficiently large constant c, there is an algorithm that, given a degree-Delta PTF circuit C of depth d with n^{1+1/c^d} wires such that C has at most 2^{n^{1-1/c}} minority-value inputs, runs in quasi-polynomial time exp((log n)^{O(Delta^2)}) and determines the majority value of C. (We obtain a similar quantified derandomization result for PTF circuits with n^{Delta}-sparse PTF gates.) This extends the recent result of Tell (STOC, 2018) for constant-depth LTF circuits of super-linear wire complexity.
    - Pseudorandom generators. We show how the classical Nisan-Wigderson (NW) generator (JCSS, 1994) yields a nontrivial pseudorandom generator for PTF circuits (of unrestricted depth) with sub-linearly many gates. As a corollary, we get a PRG for degree-Delta PTFs with seed length exp(sqrt{Delta * log n}) * log^2(1/epsilon).
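    To make the objects above concrete, here is a minimal Python sketch (illustrative, not from the paper): an s-sparse PTF represented as a list of (coefficient, tuple of variable indices) monomials over inputs in {0,1}^n, together with the trivial 2^n-time exhaustive #SAT count that the zero-error 2^{n-n^{epsilon_d}} algorithm improves upon. The representation, the sign convention at p(x) = 0, and the example gate are assumptions made for illustration.

        # Sparse-PTF evaluation and the exhaustive-search #SAT baseline.
        # Illustrative sketch only; representation and sign convention are assumed.
        from itertools import product

        def eval_sparse_ptf(monomials, x):
            """0/1 value of sign(p(x)) for p(x) = sum_j c_j * prod_{i in S_j} x_i,
            with monomials given as (c_j, S_j) pairs and x a 0/1 tuple."""
            value = sum(c * all(x[i] for i in s) for c, s in monomials)
            return 1 if value >= 0 else 0  # assumed convention: p(x) = 0 counts as 1

        def brute_force_count_sat(circuit, n):
            """Exhaustive #SAT: count satisfying assignments of circuit : {0,1}^n -> {0,1}."""
            return sum(circuit(x) for x in product((0, 1), repeat=n))

        # Example: a single 3-sparse PTF gate on 4 variables, sign(2*x0*x1 - 3*x2 + 1).
        gate = [(2, (0, 1)), (-3, (2,)), (1, ())]
        circuit = lambda x: eval_sparse_ptf(gate, x)
        print(brute_force_count_sat(circuit, n=4))  # 10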

    Algorithms and Lower Bounds in Circuit Complexity

    Computational complexity theory aims to understand what problems can be efficiently solved by computation. This thesis studies computational complexity in the model of Boolean circuits. Boolean circuits provide a basic mathematical model for computation and play a central role in complexity theory, with important applications in separations of complexity classes, algorithm design, and pseudorandom constructions. In this thesis, we investigate various types of circuit models such as threshold circuits, Boolean formulas, and their extensions, focusing on obtaining complexity-theoretic lower bounds and algorithmic upper bounds for these circuits.
    (1) Algorithms and lower bounds for generalized threshold circuits: We extend the study of linear threshold circuits, circuits with gates computing linear threshold functions, to the more powerful model of polynomial threshold circuits, where the gates can compute polynomial threshold functions. We obtain hardness and meta-algorithmic results for this circuit model, including strong average-case lower bounds, satisfiability algorithms, and derandomization algorithms for constant-depth polynomial threshold circuits with super-linear wire complexity.
    (2) Algorithms and lower bounds for enhanced formulas: We investigate the model of Boolean formulas whose leaf gates can compute complex functions. In particular, we study De Morgan formulas whose leaf gates are functions with "low communication complexity". Such gates can capture a broad class of functions, including symmetric functions and polynomial threshold functions. We obtain new and improved results in terms of lower bounds and meta-algorithms (satisfiability, derandomization, and learning) for such enhanced formulas.
    (3) Circuit lower bounds for MCSP: We study circuit lower bounds for the Minimum Circuit Size Problem (MCSP), the fundamental problem of deciding whether a given function (in the form of a truth table) can be computed by small circuits. We get new and improved lower bounds for MCSP that nearly match the best-known lower bounds against several well-studied circuit models, such as Boolean formulas and constant-depth circuits.
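    For item (3), a toy illustration (not an algorithm from the thesis, which proves lower bounds for this problem): a brute-force decision procedure for MCSP over an assumed gate basis {AND, OR, NOT}, with a circuit modeled as a straight-line program whose gates read inputs or earlier gates and whose size is the number of gates.

        # Brute-force MCSP check: does some circuit with at most `size` AND/OR/NOT
        # gates compute the k-input function with the given truth table?
        # Illustrative sketch; the basis and size convention are assumptions.
        from itertools import product

        def mcsp_brute_force(truth_table, k, size):
            inputs = list(product((0, 1), repeat=k))

            def output_column(gates):
                # Evaluate the straight-line program on every input; the last wire is the output.
                col = []
                for x in inputs:
                    vals = list(x)
                    for op, a, b in gates:
                        if op == "NOT":
                            vals.append(1 - vals[a])
                        elif op == "AND":
                            vals.append(vals[a] & vals[b])
                        else:  # "OR"
                            vals.append(vals[a] | vals[b])
                    col.append(vals[-1])
                return col

            # Size-0 circuits: a single input wire.
            if any([x[i] for x in inputs] == list(truth_table) for i in range(k)):
                return True
            for g in range(1, size + 1):
                per_gate = []
                for i in range(g):
                    fan_in = k + i  # a gate may read the inputs and all earlier gates
                    per_gate.append([("NOT", a, 0) for a in range(fan_in)] +
                                    [(op, a, b) for op in ("AND", "OR")
                                     for a in range(fan_in) for b in range(fan_in)])
                for gates in product(*per_gate):
                    if output_column(gates) == list(truth_table):
                        return True
            return False

        # 2-bit XOR: not computable with 3 AND/OR/NOT gates, computable with 4.
        print(mcsp_brute_force([0, 1, 1, 0], k=2, size=3))  # False
        print(mcsp_brute_force([0, 1, 1, 0], k=2, size=4))  # True (enumerates a few hundred thousand circuits)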

    Quantified Derandomization of Linear Threshold Circuits

    One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for TC^0, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for TC^0. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of TC^0 circuits of depth d > 2. Our first main result is a quantified derandomization algorithm for TC^0 circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a TC^0 circuit C over n input bits with depth d and n^{1+exp(-d)} wires, runs in almost-polynomial time, and distinguishes between the case that C rejects at most 2^{n^{1-1/5d}} inputs and the case that C accepts at most 2^{n^{1-1/5d}} inputs. In fact, our algorithm works even when the circuit C is a linear threshold circuit, rather than just a TC^0 circuit (i.e., C is a circuit with linear threshold gates, which are stronger than majority gates). Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of TC^0, and would consequently imply that NEXP is not contained in TC^0. Specifically, if there exists a quantified derandomization algorithm that gets as input a TC^0 circuit with depth d and n^{1+O(1/d)} wires (rather than n^{1+exp(-d)} wires), runs in time at most 2^{n^{exp(-d)}}, and distinguishes between the case that C rejects at most 2^{n^{1-1/5d}} inputs and the case that C accepts at most 2^{n^{1-1/5d}} inputs, then there exists an algorithm with running time 2^{n^{1-Omega(1)}} for standard derandomization of TC^0.
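    Spelled out as a promise problem, directly from the parameters above, the first main result gives an almost-polynomial-time algorithm for the following task (a LaTeX rendering of the same statement):

        Given a linear threshold circuit $C \colon \{0,1\}^n \to \{0,1\}$ of depth $d$ with
        $n^{1+\exp(-d)}$ wires, under the promise that
        \[
          \bigl|\{x : C(x)=0\}\bigr| \le 2^{n^{1-1/5d}}
          \quad\text{or}\quad
          \bigl|\{x : C(x)=1\}\bigr| \le 2^{n^{1-1/5d}},
        \]
        decide which of the two cases holds.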

    An Algorithmic Approach to Uniform Lower Bounds


    Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds

    We give a deterministic algorithm for counting the number of satisfying assignments of any AC^0[oplus] circuit C of size s and depth d over n variables in time 2^(n-f(n,s,d)), where f(n,s,d) = n/(O(log s))^(d-1), whenever s = 2^o(n^(1/d)). As a consequence, we get that for each d, there is a language in E^{NP} that does not have AC^0[oplus] circuits of size 2^o(n^(1/(d+1))). This is the first lower bound in E^{NP} against AC^0[oplus] circuits that beats the lower bound of 2^Omega(n^(1/(2(d-1)))) due to Razborov and Smolensky for large d. Both our algorithm and our lower bounds extend to AC^0[p] circuits for any prime p.
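    To make the savings concrete, specializing the bound above to polynomial-size circuits (a direct substitution, not an additional claim of the paper):

        For $s = n^{O(1)}$ we have $\log s = O(\log n)$, so
        \[
          f(n,s,d) \;=\; \frac{n}{O(\log n)^{d-1}}
          \qquad\text{and the running time is}\qquad
          2^{\,n - n/O(\log n)^{d-1}}.
        \]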

    Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity

    We considerably sharpen the known connections between circuit-analysis algorithms and circuit lower bounds, show intriguing equivalences between the analysis of weak circuits and (apparently difficult) circuits, and provide strong new lower bounds for approximately computing Boolean functions with depth-two neural networks and related models.
    - We develop approaches to proving THR o THR lower bounds (a notorious open problem), by connecting algorithmic analysis of THR o THR to the provably weaker circuit classes THR o MAJ and MAJ o MAJ, where exponential lower bounds have long been known. More precisely, we show equivalences between algorithmic analysis of THR o THR and these weaker classes. The epsilon-error CAPP problem asks to approximate the acceptance probability of a given circuit to within additive error epsilon; it is the "canonical" derandomization problem. We show:
      - There is a non-trivial (2^n/n^{omega(1)} time) 1/poly(n)-error CAPP algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size MAJ o MAJ circuits.
      - There is a delta > 0 and a non-trivial SAT (delta-error CAPP) algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size THR o MAJ circuits.
      Similar results hold for depth-d linear threshold circuits and depth-d MAJORITY circuits. These equivalences are proved via new simulations of THR circuits by circuits with MAJ gates.
    - We strengthen the connection between non-trivial derandomization (non-trivial CAPP algorithms) for a circuit class C, and circuit lower bounds against C. Previously, [Ben-Sasson and Viola, ICALP 2014] (following [Williams, STOC 2010]) showed that for any polynomial-size class C closed under projections, non-trivial (2^{n}/n^{omega(1)} time) CAPP for OR_{poly(n)} o AND_{3} o C yields "NEXP does not have C circuits". We apply Probabilistically Checkable Proofs of Proximity in a new way to show it would suffice to have a non-trivial CAPP algorithm for either XOR_2 o C, AND_2 o C, or OR_2 o C.
    - A direct corollary of the first two bullets is that "NEXP does not have THR o THR circuits" would follow from either:
      - a non-trivial delta-error CAPP (or SAT) algorithm for poly(n)-size THR o MAJ circuits, or
      - a non-trivial 1/poly(n)-error CAPP algorithm for poly(n)-size MAJ o MAJ circuits.
    - Applying the above machinery, we extend lower bounds for depth-two neural networks and related models [R. Williams, CCC 2018] to weak approximate computations of Boolean functions. For example, for arbitrarily small epsilon > 0, we prove there are Boolean functions f computable in nondeterministic n^{log n} time such that (for infinitely many n) every polynomial-size depth-two neural network N on n inputs (with sign or ReLU activation) must satisfy max_{x in {0,1}^n} |N(x) - f(x)| > 1/2 - epsilon. That is, short linear combinations of ReLU gates fail miserably at computing f to within close precision. Similar results are proved for linear combinations of ACC o THR circuits, and linear combinations of low-degree F_p polynomials. These results constitute further progress towards THR o THR lower bounds.
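    To make the approximation measure in the last bullet concrete, here is a small Python sketch (illustrative only): the network form N(x) = sum_i w_i * ReLU(a_i . x + b_i) is one way to write the "linear combinations of ReLU gates" mentioned above, and the weights in the example are arbitrary placeholders.

        # Max pointwise error of a depth-two ReLU network against a Boolean function,
        # i.e. the quantity max_{x in {0,1}^n} |N(x) - f(x)| from the lower bound above.
        # Illustrative sketch; the network parameterization and weights are assumed.
        from itertools import product

        def relu(z):
            return max(z, 0.0)

        def depth_two_relu(x, hidden, top):
            """N(x) = sum_i top[i] * relu(a_i . x + b_i), with hidden[i] = (a_i, b_i)."""
            return sum(w * relu(sum(a * xi for a, xi in zip(av, x)) + b)
                       for w, (av, b) in zip(top, hidden))

        def max_pointwise_error(net, f, n):
            return max(abs(net(x) - f(x)) for x in product((0, 1), repeat=n))

        # Toy check: a 2-gate network against 3-bit parity (weights chosen arbitrarily).
        parity = lambda x: sum(x) % 2
        hidden = [((1.0, 1.0, 1.0), -1.5), ((-1.0, -1.0, -1.0), 0.5)]
        top = [0.7, 0.7]
        net = lambda x: depth_two_relu(x, hidden, top)
        print(max_pointwise_error(net, parity, n=3))  # 1.0 for these weights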

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^{NP} and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
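    For concreteness, the approximate-counting primitive for NP-witnesses referred to above can be stated as follows; the multiplicative-error convention is an assumption of this rendering, not a quotation from the paper:

        Given $x$ and an NP relation $R$, output $\tilde{N}$ such that
        \[
          (1-\varepsilon)\,\bigl|\{w : (x,w)\in R\}\bigr|
          \;\le\; \tilde{N} \;\le\;
          (1+\varepsilon)\,\bigl|\{w : (x,w)\in R\}\bigr|.
        \]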