
    Easiness Amplification and Uniform Circuit Lower Bounds

    We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+ε}-time, Õ(n)-space computations have n^{1+o(1)}-size (non-uniform) circuits for some ε > 0, then every problem solvable in polynomial time and Õ(n) space has n^{1+o(1)}-size (non-uniform) circuits as well. This amplification has several consequences:
    * An easy problem without small LOGSPACE-uniform circuits. For all ε > 0, we give a natural decision problem, General Circuit n^ε-Composition, that is solvable in about n^{1+ε} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+ε} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
    * Problems without low-depth LOGSPACE-uniform circuits. For all ε > 0, 1 < d < 2, and ε < d, we give another natural circuit composition problem, computable in Õ(n^{1+ε}) time or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^ε]-uniform circuits of Õ(n) size and O((log n)^ε) depth. We also show SAT does not have circuits of Õ(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
    * A strong circuit complexity amplification. For every ε > 0, we give a natural circuit composition problem and show that if it has Õ(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(√(n log n))} space (simultaneously) has 2^{O(√(n log n))}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has Õ(n)-size circuits. As a corollary, if n^{1.1}-time computations (or O(n) nondeterministic-time computations) have Õ(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
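
    In the notation of the abstract, the amplification step can be summarized as a single implication; this is a schematic restatement only, where TISP[t, s] is our shorthand (not used in the abstract) for problems solvable simultaneously in time t and space s:

    \[
      \exists\, \varepsilon > 0:\ \mathsf{TISP}\bigl[n^{1+\varepsilon},\, \tilde{O}(n)\bigr] \subseteq \mathsf{SIZE}\bigl[n^{1+o(1)}\bigr]
      \;\Longrightarrow\;
      \mathsf{TISP}\bigl[n^{O(1)},\, \tilde{O}(n)\bigr] \subseteq \mathsf{SIZE}\bigl[n^{1+o(1)}\bigr].
    \]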

    Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

    What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
    - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
    - Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].
    - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
    Our results are obtained by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
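
    Schematically, the first item is a single worst-case-to-average-case implication, where AvgQL is our shorthand (not the abstract's) for "solvable in quasi-linear time on average":

    \[
      \mathsf{UP} \not\subseteq \mathsf{DTIME}\bigl[2^{\tilde{O}(\sqrt{n})}\bigr]
      \;\Longrightarrow\;
      \mathsf{NTIME}[n] \not\subseteq \mathsf{AvgQL}.
    \]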

    Pseudodeterministic constructions in subexponential time

    We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n}_{n∈ℕ} of increasing primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^{|p_n|}, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a much more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ γ2^n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial-time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H'_n} of sets, H'_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H'_n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
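
    The general dichotomy can be compressed as follows (notation ours: ∀^∞ n means "for all sufficiently large n" and ∃^∞ n means "for infinitely many n"; Q ranges over (1/n^c)-dense properties in DTIME(n^c)):

    \[
      \forall c > 0:\quad
      \bigl(\exists\, \{H_n\}\ \text{pseudodet.\ poly-time}:\ \forall Q\ \forall^{\infty} n:\ H_n \cap Q \neq \emptyset\bigr)
      \;\lor\;
      \bigl(\exists\, \{H'_n\}\ \text{det.\ subexp-time}:\ \forall Q\ \exists^{\infty} n:\ H'_n \cap Q \neq \emptyset\bigr).
    \]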

    Some Results on Average-Case Hardness Within the Polynomial Hierarchy

    We prove several results about the average-case complexity of problems in the Polynomial Hierarchy (PH). We give a connection among average-case, worst-case, and non-uniform complexity of optimization problems. Specifically, we show that if P^NP is hard in the worst case, then it is either hard on average (in the sense of Levin) or it is non-uniformly hard (i.e., it does not have small circuits). Recently, Gutfreund, Shaltiel and Ta-Shma (IEEE Conference on Computational Complexity, 2005) showed an interesting worst-case to average-case connection for languages in NP, under a notion of average-case hardness defined using uniform adversaries. We show that extending their connection to hardness against quasi-polynomial time would imply that NEXP does not have polynomial-size circuits. Finally, we prove an unconditional average-case hardness result. We show that for each k, there is an explicit language in P^{Σ₂} which is hard on average for circuits of size n^k.
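
    The two main statements, written schematically (reading "hard in the worst case" as P^NP ⊄ P, which is one natural formalization and is not spelled out in the abstract):

    \[
      \mathsf{P^{NP}} \not\subseteq \mathsf{P}
      \;\Longrightarrow\;
      \bigl(\mathsf{P^{NP}}\ \text{is hard on average in Levin's sense}\bigr) \;\lor\; \bigl(\mathsf{P^{NP}} \not\subseteq \mathsf{SIZE}[\mathrm{poly}]\bigr),
    \]
    \[
      \forall k\ \exists\, L \in \mathsf{P}^{\Sigma_2}:\ L\ \text{is hard on average for circuits of size}\ n^k.
    \]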

    Does Looking Inside a Circuit Help?

    The Black-Box Hypothesis states that any property of Boolean functions decided efficiently (e.g., in BPP) with inputs represented by circuits can also be decided efficiently in the black-box setting, where an algorithm is given oracle access to the input function and an upper bound on its circuit size. If this hypothesis is true, then P ≠ NP. We focus on the consequences of the hypothesis being false, showing that (under general conditions on the structure of a counterexample) it implies a non-trivial algorithm for CSAT. More specifically, we show that if there is a property F of Boolean functions such that F has high sensitivity on some input function f of subexponential circuit complexity (which is a sufficient condition for F being a counterexample to the Black-Box Hypothesis), then CSAT is solvable by a subexponential-size circuit family. Moreover, if such a counterexample F is symmetric, then CSAT is in P/poly. These results provide some evidence towards the conjecture (made in this paper) that the Black-Box Hypothesis is false if and only if CSAT is easy.
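
    The main implication in schematic form (keeping "high sensitivity" and "subexponential" as qualitative as they are in the abstract):

    \[
      \bigl(\exists\, F, f:\ F\ \text{has high sensitivity at}\ f \;\wedge\; f\ \text{has subexponential circuit complexity}\bigr)
      \;\Longrightarrow\;
      \mathsf{CSAT}\ \text{has subexponential-size circuits},
    \]

    and if moreover such an F is symmetric, then CSAT ∈ P/poly.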

    Hardness magnification for natural problems

    We show that for several natural problems of interest, complexity lower bounds that are barely non-trivial imply super-polynomial or even exponential lower bounds in strong computational models. We term this phenomenon "hardness magnification". Our examples of hardness magnification include:
    1. Let MCSP[s(n)] be the decision problem whose YES instances are truth tables of functions with circuit complexity at most s(n). We show that if MCSP[2^√n] cannot be solved on average with zero error by formulas of linear (or even sub-linear) size, then NP does not have polynomial-size formulas. In contrast, Hirahara and Santhanam (2017) recently showed that MCSP[2^√n] cannot be solved in the worst case by formulas of nearly quadratic size.
    2. If there is a c > 0 such that for each positive integer d there is an ε > 0 such that the problem of checking if an n-vertex graph in the adjacency matrix representation has a vertex cover of size (log n)^c cannot be solved by depth-d AC^0 circuits of size m^{1+ε}, where m = Θ(n^2), then NP does not have polynomial-size formulas.
    3. Let (α, β)-MCSP[s] be the promise problem whose YES instances are truth tables of functions that are α-approximable by a circuit of size s(n), and whose NO instances are truth tables of functions that are not β-approximable by a circuit of size s(n). We show that for arbitrary 1/2 < β < α ≤ 1, if (α, β)-MCSP[2^√n] cannot be solved by randomized algorithms with random access to the input running in sublinear time, then NP is not contained in BPP.
    4. If for each probabilistic quasi-linear time machine M using poly-logarithmically many random bits that is claimed to solve Satisfiability, there is a deterministic polynomial-time machine that on infinitely many input lengths n either identifies a satisfiable instance of bit-length n on which M does not accept with high probability or an unsatisfiable instance of bit-length n on which M does not reject with high probability, then NEXP is not contained in BPP.
    5. Given functions s, c : ℕ → ℕ where s > c, let MKtP[c, s] be the promise problem whose YES instances are strings of Kt complexity at most c(N) and NO instances are strings of Kt complexity greater than s(N). We show that if there is a δ > 0 such that for each ε > 0, MKtP[N^ε, N^ε + 5 log(N)] requires Boolean circuits of size N^{1+δ}, then EXP is not contained in SIZE(poly).
    For each of the cases of magnification above, we observe that standard hardness assumptions imply much stronger lower bounds for these problems than we require for magnification. We further explore magnification as an avenue to proving strong lower bounds, and argue that magnification circumvents the "natural proofs" barrier of Razborov and Rudich (1997). Examining some standard proof techniques, we find that they fall just short of proving lower bounds via magnification. As one of our main open problems, we ask whether there are other meta-mathematical barriers to proving lower bounds that rule out approaches of this kind.
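
    Item 5, for example, can be written as a single implication (schematic; SIZE[·] denotes general Boolean circuit size, as in the abstract):

    \[
      \exists\, \delta > 0\ \forall\, \varepsilon > 0:\ \mathrm{MKtP}\bigl[N^{\varepsilon},\, N^{\varepsilon} + 5\log N\bigr] \notin \mathsf{SIZE}\bigl[N^{1+\delta}\bigr]
      \;\Longrightarrow\;
      \mathsf{EXP} \not\subseteq \mathsf{SIZE}[\mathrm{poly}].
    \]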

    Improved Learning from Kolmogorov Complexity

    Carmosino, Impagliazzo, Kabanets, and Kolokolova (CCC, 2016) showed that the existence of natural properties in the sense of Razborov and Rudich (JCSS, 1997) implies PAC learning algorithms in the sense of Valiant (Comm. ACM, 1984) for Boolean functions in P/poly, under the uniform distribution and with membership queries. It is still an open problem to obtain from natural properties learning algorithms that do not rely on membership queries but rather use randomly drawn labeled examples. Natural properties may be understood as an average-case version of MCSP, the problem of deciding the minimum size of a circuit computing a given truth table. Problems related to MCSP include those concerning time-bounded Kolmogorov complexity. MKTP, for example, asks for the KT-complexity of a given string. KT-complexity is a relaxation of circuit size, as it does away with the requirement that a short description of a string be interpreted as a Boolean circuit. In this work, under the assumption that MKTP and the related problem MK^tP are easy on average, we obtain learning algorithms for Boolean functions in P/poly that
    - work over any distribution D samplable by a family of polynomial-size circuits (given explicitly in the case of MKTP),
    - only use randomly drawn labeled examples from D, and
    - are agnostic (do not require the target function to belong to the hypothesis class).
    Our results build upon the recent work of Hirahara and Nanashima (FOCS, 2021), who showed similar learning consequences but under the stronger assumption that NP is easy on average.
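
    The main result can be read as one conditional statement (schematic; the precise average-case and learning notions are those of the paper and are not spelled out here):

    \[
      \mathrm{MKTP}\ (\text{or }\mathrm{MK}^{t}\mathrm{P})\ \text{is easy on average}
      \;\Longrightarrow\;
      \mathsf{P/poly}\ \text{is agnostically learnable from random labeled examples over any}\ \mathsf{P/poly}\text{-samplable distribution}\ D.
    \]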

    Conspiracies between learning algorithms, circuit lower bounds, and pseudorandomness

    We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size ≤ s(n). We show:
    Learning Speedups. If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{ω(1)}, then for every k ≥ 1 and ε > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^ε}). There is ε > 0 such that C[2^{n^ε}] can be learned in time 2^n/n^{ω(1)} if and only if C[poly(n)] can be learned in time 2^{(log n)^{O(1)}}.
    Equivalences between Learning Models. We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression.
    A Dichotomy between Learnability and Pseudorandomness. In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].
    Lower Bounds from Nontrivial Learning. If for each k ≥ 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{ω(1)}, then for each k ≥ 1, BPE ⊄ (depth-d)-C[n^k]. If for some ε > 0 there are P-natural proofs useful against C[2^{n^ε}], then ZPEXP ⊄ C[poly(n)].
    Karp-Lipton Theorems for Probabilistic Classes. If there is a k > 0 such that BPE ⊆ i.o.Circuit[n^k], then BPEXP ⊆ i.o.EXP/O(log n). If ZPEXP ⊆ i.o.Circuit[2^{n/3}], then ZPEXP ⊆ i.o.ESUBEXP.
    Hardness Results for MCSP. All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP ∈ TC^0 then NC^1 = TC^0.
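
    The first speedup, written as one implication (schematic; "MQ" abbreviates membership queries and "uniform" refers to the uniform distribution):

    \[
      \mathcal{C}[\mathrm{poly}(n)]\ \text{weakly learnable (MQ, uniform) in time}\ 2^{n}/n^{\omega(1)}
      \;\Longrightarrow\;
      \forall k \ge 1,\ \varepsilon > 0:\ \mathcal{C}[n^{k}]\ \text{learnable to high accuracy in time}\ O\bigl(2^{n^{\varepsilon}}\bigr).
    \]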