
    Consistency of circuit lower bounds with bounded theories

    Proving that there are problems in $\mathsf{P}^{\mathsf{NP}}$ that require Boolean circuits of super-linear size is a major frontier in complexity theory. While such lower bounds are known for larger complexity classes, existing results only show that the corresponding problems are hard on infinitely many input lengths. For instance, proving almost-everywhere circuit lower bounds is open even for problems in $\mathsf{MAEXP}$. Given the notorious difficulty of proving lower bounds that hold for all large input lengths, we ask the following question: Can we show that a large set of techniques cannot prove that $\mathsf{NP}$ is easy infinitely often? Motivated by this and related questions about the interaction between mathematical proofs and computations, we investigate circuit complexity from the perspective of logic. Among other results, we prove that for any parameter $k \geq 1$ it is consistent with theory $T$ that computational class ${\mathcal C} \not\subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$, where $(T, \mathcal{C})$ is one of the pairs: $T = \mathsf{T}^1_2$ and ${\mathcal C} = \mathsf{P}^{\mathsf{NP}}$; $T = \mathsf{S}^1_2$ and ${\mathcal C} = \mathsf{NP}$; $T = \mathsf{PV}$ and ${\mathcal C} = \mathsf{P}$. In other words, these theories cannot establish infinitely-often circuit upper bounds for the corresponding problems. This is of interest because the weaker theory $\mathsf{PV}$ already formalizes sophisticated arguments, such as a proof of the PCP Theorem. These consistency statements are unconditional and improve on earlier theorems of [KO17] and [BM18] on the consistency of lower bounds with $\mathsf{PV}$.
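    Read as unprovability statements (this is just the abstract's "in other words" sentence put in symbols; the rendering is mine), the three results say that for each fixed $k \geq 1$:

    $$\mathsf{T}^1_2 \nvdash \mathsf{P}^{\mathsf{NP}} \subseteq \textit{i.o.}\mathrm{SIZE}(n^k), \qquad \mathsf{S}^1_2 \nvdash \mathsf{NP} \subseteq \textit{i.o.}\mathrm{SIZE}(n^k), \qquad \mathsf{PV} \nvdash \mathsf{P} \subseteq \textit{i.o.}\mathrm{SIZE}(n^k).$$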

    Hardness magnification for natural problems

    We show that for several natural problems of interest, complexity lower bounds that are barely non-trivial imply super-polynomial or even exponential lower bounds in strong computational models. We term this phenomenon "hardness magnification". Our examples of hardness magnification include:
    1. Let MCSP[s] be the decision problem whose YES instances are truth tables of functions with circuit complexity at most s(n) (a brute-force toy illustration of this problem appears after this abstract). We show that if MCSP[2^√n] cannot be solved on average with zero error by formulas of linear (or even sub-linear) size, then NP does not have polynomial-size formulas. In contrast, Hirahara and Santhanam (2017) recently showed that MCSP[2^√n] cannot be solved in the worst case by formulas of nearly quadratic size.
    2. If there is a c > 0 such that for each positive integer d there is an ε > 0 such that the problem of checking if an n-vertex graph in the adjacency matrix representation has a vertex cover of size (log n)^c cannot be solved by depth-d AC^0 circuits of size m^(1+ε), where m = Θ(n^2), then NP does not have polynomial-size formulas.
    3. Let (α, β)-MCSP[s] be the promise problem whose YES instances are truth tables of functions that are α-approximable by a circuit of size s(n), and whose NO instances are truth tables of functions that are not β-approximable by a circuit of size s(n). We show that for arbitrary 1/2 < β < α ≤ 1, if (α, β)-MCSP[2^√n] cannot be solved by randomized algorithms with random access to the input running in sublinear time, then NP is not contained in BPP.
    4. If for each probabilistic quasi-linear time machine M using poly-logarithmically many random bits that is claimed to solve Satisfiability, there is a deterministic polynomial-time machine that on infinitely many input lengths n either identifies a satisfiable instance of bit-length n on which M does not accept with high probability or an unsatisfiable instance of bit-length n on which M does not reject with high probability, then NEXP is not contained in BPP.
    5. Given functions s, c : N → N where s > c, let MKtP[c, s] be the promise problem whose YES instances are strings of Kt complexity at most c(N) and NO instances are strings of Kt complexity greater than s(N). We show that if there is a δ > 0 such that for each ε > 0, MKtP[N^ε, N^ε + 5 log(N)] requires Boolean circuits of size N^(1+δ), then EXP is not contained in SIZE(poly).
    For each of the cases of magnification above, we observe that standard hardness assumptions imply much stronger lower bounds for these problems than we require for magnification. We further explore magnification as an avenue to proving strong lower bounds, and argue that magnification circumvents the "natural proofs" barrier of Razborov and Rudich (1997). Examining some standard proof techniques, we find that they fall just short of proving lower bounds via magnification. As one of our main open problems, we ask whether there are other meta-mathematical barriers to proving lower bounds that rule out approaches such as ours.
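    To make the MCSP instances in item 1 concrete: a YES instance is the 2^n-bit truth table of a function computable by a circuit with few gates. The sketch below (my own illustration, not from the paper) decides this by exhaustive search over AND/OR/NOT circuits, encoding a truth table as an integer whose j-th bit is the function's value on input j; it is feasible only for tiny n and s.

```python
def mcsp_brute_force(tt, n, s):
    """Decide whether the n-input Boolean function with truth table `tt`
    has a circuit of at most s AND/OR/NOT gates, by dynamic programming
    over the set of truth tables reachable with exactly k gates.
    Exponential time -- purely illustrative."""
    mask = (1 << (1 << n)) - 1              # all-ones truth table
    xs = []
    for i in range(n):                      # truth table of variable x_i
        xi = 0
        for j in range(1 << n):
            if (j >> i) & 1:
                xi |= 1 << j
        xs.append(xi)
    by_cost = [set(xs) | {0, mask}]         # cost 0: variables and constants
    for k in range(1, s + 1):
        new = {mask ^ a for a in by_cost[k - 1]}   # NOT gates
        for k1 in range(k):                         # binary gates
            k2 = k - 1 - k1
            for a in by_cost[k1]:
                for b in by_cost[k2]:
                    new.add(a & b)                   # AND
                    new.add(a | b)                   # OR
        by_cost.append(new)
    return any(tt in level for level in by_cost)

# XOR on two inputs (truth table 0110) needs more than 3 such gates:
print(mcsp_brute_force(0b0110, 2, 3))   # False
print(mcsp_brute_force(0b0110, 2, 4))   # True, e.g. (x OR y) AND NOT(x AND y)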

    A PCP Characterization of AM

    We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a "PCP characterization" of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some "Randomized Optimization Hypotheses" related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size 2^{Omega(n)}, then there exist hard-on-average optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
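    Schematically (my rendering, not the paper's formal definition), a 2-round stochastic CSP instance Φ is evaluated with Arthur's random string w drawn first and Merlin's reply z chosen second, and the quantity to approximate is

    $$\mathrm{val}(\Phi) \;=\; \mathbb{E}_{w}\Big[\,\max_{z}\ \Pr_{C \sim \Phi}\big[C(w, z)\ \text{is satisfied}\big]\Big];$$

    distinguishing high from low val(Φ) is then complete for promise-AM, in analogy with the NP-hardness of approximating Max-CSP given by the PCP Theorem.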

    Easiness Amplification and Uniform Circuit Lower Bounds

    We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well (this instantiation is restated symbolically after this abstract). This amplification has several consequences:
    * An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
    * Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d, we give another natural circuit composition problem computable in tilde{O}(n^{1+e}) time, or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
    * A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits.
    As a corollary, if n^{1.1}-time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
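    In symbols, the quoted instantiation of the easiness-amplification lemma reads as follows, writing TISP[t, s] for problems solvable in time t and space s simultaneously, and SIZE[z] for non-uniform circuit size z (the notation, though standard, is mine):

    $$\Big(\exists e > 0:\ \mathsf{TISP}\big[n^{1+e}, \tilde{O}(n)\big] \subseteq \mathsf{SIZE}\big[n^{1+o(1)}\big]\Big) \;\Longrightarrow\; \mathsf{TISP}\big[n^{O(1)}, \tilde{O}(n)\big] \subseteq \mathsf{SIZE}\big[n^{1+o(1)}\big].$$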

    Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds

    We give a deterministic algorithm for counting the number of satisfying assignments of any AC^0[oplus] circuit C of size s and depth d over n variables in time 2^(n-f(n,s,d)), where f(n,s,d) = n/O(log(s))^(d-1), whenever s = 2^o(n^(1/d)). As a consequence, we get that for each d, there is a language in E^{NP} that does not have AC^0[oplus] circuits of size 2^o(n^(1/(d+1))). This is the first lower bound in E^{NP} against AC^0[oplus] circuits that beats the lower bound of 2^Omega(n^(1/2(d-1))) due to Razborov and Smolensky for large d. Both our algorithm and our lower bounds extend to AC^0[p] circuits for any prime p.
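    For a sense of scale (a routine calculation from the stated savings function, not a claim taken from the paper): taking s = 2^{n^{1/(d+1)}}, which satisfies s = 2^{o(n^{1/d})}, gives

    $$f(n, s, d) \;=\; \frac{n}{O\big((\log s)^{d-1}\big)} \;=\; \frac{n}{O\big(n^{(d-1)/(d+1)}\big)} \;=\; \Omega\big(n^{2/(d+1)}\big),$$

    so the algorithm counts satisfying assignments in time 2^{n - Ω(n^{2/(d+1)})}, well below exhaustive search; this is the regime behind the E^{NP} lower bound against size 2^{o(n^{1/(d+1)})}.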

    Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials

    We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over $\mathbb{R}$) of functions from some "simple" class ${\cal C}$. In particular, given ${\cal C}$ we are interested in finding low-complexity functions lacking sparse representations. When ${\cal C}$ is the set of PARITY functions or the set of conjunctions, this sort of problem has a well-understood answer; the problem becomes interesting when ${\cal C}$ is "overcomplete" and the set of functions is not linearly independent (a toy brute-force illustration appears after this abstract). We focus on the cases where ${\cal C}$ is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts. We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. For example, we show there are functions in nondeterministic quasi-polynomial time that require super-polynomial size:
    * Depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds.
    * Depth-two neural networks with ReLU activation function.
    * $\mathbb{R}$-linear combinations of $O(1)$-degree $\mathbb{F}_p$-polynomials, for every prime $p$ (related to problems regarding Higher-Order "Uncertainty Principles"). We also obtain a function in $\mathsf{E}^{\mathsf{NP}}$ requiring $2^{\Omega(n)}$ linear combinations.
    * $\mathbb{R}$-linear combinations of $\mathrm{ACC} \circ \mathrm{THR}$ circuits of polynomial size (further generalizing the recent lower bounds of Murray and the author).
    (The above is a shortened abstract. For the full abstract, see the paper.)
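    As a toy illustration of the representation question (my own sketch, not the paper's machinery): for tiny n one can brute-force the smallest number of linear threshold functions whose real span contains a target function. The weight and threshold ranges below are arbitrary illustrative choices that happen to make the family overcomplete.

```python
import itertools
import numpy as np

def ltf_table(w, theta, n):
    """0/1 truth table (over all 2^n inputs) of the gate [<w, x> >= theta]."""
    return np.array([1 if np.dot(w, x) >= theta else 0
                     for x in itertools.product([0, 1], repeat=n)])

def min_sparsity(target, basis, max_k=4):
    """Smallest k such that `target` is an exact R-linear combination of
    k functions from `basis`, by brute force over subsets (tiny cases
    only).  Returns None if no combination of size <= max_k works."""
    for k in range(1, max_k + 1):
        for cols in itertools.combinations(range(len(basis)), k):
            A = np.stack([basis[c] for c in cols], axis=1).astype(float)
            coef, *_ = np.linalg.lstsq(A, target.astype(float), rcond=None)
            if np.allclose(A @ coef, target):
                return k
    return None

n = 2
basis, seen = [], set()
for w in itertools.product([-1, 0, 1], repeat=n):       # small-weight LTFs
    for theta in [-1, 0, 1, 2]:
        t = ltf_table(np.array(w), theta, n)
        if tuple(t) not in seen:
            seen.add(tuple(t))
            basis.append(t)

parity = np.array([bin(i).count("1") % 2 for i in range(2 ** n)])
# PARITY is not an LTF, but [x0+x1 >= 1] - [x0+x1 >= 2] represents it:
print(min_sparsity(parity, basis))   # 2
```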

    Super-Linear Gate and Super-Quadratic Wire Lower Bounds for Depth-Two and Depth-Three Threshold Circuits

    In order to formally understand the power of neural computing, we first need to crack the frontier of threshold circuits with two and three layers, a regime that has been surprisingly intractable to analyze. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits, computing an explicit function (a minimal evaluator for the depth-two model follows this abstract).
    * We prove that for all $\epsilon \gg \sqrt{\log(n)/n}$, the linear-time computable Andreev's function cannot be computed on a $(1/2+\epsilon)$-fraction of $n$-bit inputs by depth-two linear threshold circuits of $o(\epsilon^3 n^{3/2}/\log^3 n)$ gates, nor can it be computed with $o(\epsilon^{3} n^{5/2}/\log^{7/2} n)$ wires. This establishes an average-case "size hierarchy" for threshold circuits, as Andreev's function is computable by uniform depth-two circuits of $o(n^3)$ linear threshold gates, and by uniform depth-three circuits of $O(n)$ majority gates.
    * We present a new function in $\mathsf{P}$ based on small-biased sets, which we prove cannot be computed by a majority vote of depth-two linear threshold circuits with $o(n^{3/2}/\log^3 n)$ gates, nor with $o(n^{5/2}/\log^{7/2} n)$ wires.
    * We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits.
    The key is a new random restriction lemma for linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.
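    For readers less familiar with the model, here is a minimal sketch (my illustration) of a depth-two linear threshold circuit with arbitrary real weights: the gates are the threshold units, and the wires correspond to nonzero weights. The XOR construction in the usage example is the standard two-gate trick, not the paper's construction.

```python
def lt_gate(weights, theta, x):
    """Linear threshold gate: outputs 1 iff <weights, x> >= theta."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def depth_two_ltf(bottom, top, x):
    """Evaluate a depth-two linear threshold circuit.
    bottom: list of (weights, theta) gates reading the input bits;
    top:    one (weights, theta) gate reading the bottom-gate outputs.
    Gate count = len(bottom) + 1; wire count = number of nonzero weights."""
    hidden = [lt_gate(w, t, x) for (w, t) in bottom]
    return lt_gate(top[0], top[1], hidden)

# XOR(x0, x1) via bottom gates [x0+x1 >= 1] and [x0+x1 >= 2],
# combined by the top gate [h0 - h1 >= 1]:
bottom = [((1, 1), 1), ((1, 1), 2)]
top = ((1, -1), 1)
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, depth_two_ltf(bottom, top, x))   # 0, 1, 1, 0
```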

    Black-Box Constructive Proofs Are Unavoidable

    Following Razborov and Rudich, a "natural property" for proving a circuit lower bound satisfies three axioms: constructivity, largeness, and usefulness. In 2013, Williams proved that for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a constructive property useful against C. Here, a property is constructive if it can be decided in poly(N) time, where N = 2^n is the length of the truth-table of the given n-input function. Recently, Fan, Li, and Yang initiated the study of black-box natural properties, which require a much stronger notion of constructivity, called black-box constructivity: the property should be decidable in randomized polylog(N) time, given oracle access to the n-input function. They showed that most proofs based on random restrictions yield black-box natural properties, and demonstrated limitations on what black-box natural properties can prove. In this paper, perhaps surprisingly, we prove that the equivalence of Williams holds even with this stronger notion of black-box constructivity: for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a black-box constructive property useful against C. The main technical ingredient in proving this equivalence is a smooth, strong, and locally-decodable probabilistically checkable proof (PCP), which we construct based on a recent work by Paradise. As a by-product, we show that average-case witness lower bounds for PCP verifiers follow from NEXP lower bounds. We also show that randomness is essential in the definition of black-box constructivity: we unconditionally prove that there is no deterministic polylog(N)-time constructive property that is useful against even polynomial-size AC^0 circuits.
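    A toy example of the resource bound that black-box constructivity demands (my illustration only; it says nothing about the largeness or usefulness axioms): with oracle access to an n-input function, a property based on estimating distance from PARITY can be decided with high probability using a handful of queries, each costing O(n) = O(log N) time.

```python
import random

def far_from_parity(f, n, trials=200):
    """Toy black-box test: estimate the distance of the n-input oracle f
    from PARITY by random sampling.  Each trial takes O(n) = O(log N)
    time, so the whole test runs in randomized polylog(N) time -- the
    kind of bound black-box constructivity requires.  Illustrative only."""
    disagreements = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if f(x) != sum(x) % 2:
            disagreements += 1
    return disagreements / trials >= 0.25   # declare "far from parity"

# Example oracle: AND, which disagrees with PARITY on about half of all inputs.
print(far_from_parity(lambda x: int(all(x)), n=16))   # True (w.h.p.)
```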

    Quantified Derandomization of Linear Threshold Circuits

    One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for $TC^0$, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for $TC^0$. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of $TC^0$ circuits of depth $d > 2$. Our first main result is a quantified derandomization algorithm for $TC^0$ circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a $TC^0$ circuit $C$ over $n$ input bits with depth $d$ and $n^{1+\exp(-d)}$ wires, runs in almost-polynomial-time, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs. In fact, our algorithm works even when the circuit $C$ is a linear threshold circuit, rather than just a $TC^0$ circuit (i.e., $C$ is a circuit with linear threshold gates, which are stronger than majority gates). Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of $TC^0$, and would consequently imply that $NEXP \not\subseteq TC^0$. Specifically, if there exists a quantified derandomization algorithm that gets as input a $TC^0$ circuit with depth $d$ and $n^{1+O(1/d)}$ wires (rather than $n^{1+\exp(-d)}$ wires), runs in time at most $2^{n^{\exp(-d)}}$, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs, then there exists an algorithm with running time $2^{n^{1-\Omega(1)}}$ for standard derandomization of $TC^0$.
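    To make the promise concrete: if $C$ either rejects at most $B$ inputs or accepts at most $B$ inputs, the majority answer over any $2B+1$ distinct inputs already distinguishes the two cases. The trivial baseline below (my illustration; the paper's algorithm must be far faster, since here $B = 2^{n^{1-1/5d}}$ is enormous) just does that.

```python
def quantified_derand_baseline(C, n, B):
    """Distinguish 'C rejects at most B inputs' from 'C accepts at most
    B inputs' by evaluating C on the first 2B+1 inputs in lexicographic
    order and taking a majority vote: if at most B inputs are rejected
    overall, more than B of these must accept, and vice versa.  Costs
    ~B circuit evaluations -- the trivial baseline that quantified
    derandomization algorithms aim to beat."""
    assert 2 * B + 1 <= 2 ** n
    accepts = sum(C(format(i, f"0{n}b")) for i in range(2 * B + 1))
    return accepts > B   # True: C accepts all but at most B inputs

# Toy usage: a "circuit" that rejects only the all-zeros input (so B = 1).
C = lambda x: int(x != "0" * 4)
print(quantified_derand_baseline(C, n=4, B=1))   # True
```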