
    The weak pigeonhole principle for function classes in S^1_2

    It is well known that S^1_2 cannot prove the injective weak pigeonhole principle for polynomial time functions unless RSA is insecure. In this note we investigate the provability of the surjective (dual) weak pigeonhole principle in S^1_2 for provably weaker function classes.
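
    For reference, the two principles contrasted above are usually stated for a function f and a length parameter a; the following is a standard textbook-style formulation (hedged, since the paper's exact schema and stretch, e.g. a to 2a versus a to a^2, may differ):

    % Injective WPHP for f: f is not an injection from 2a into a.
    \mathrm{iWPHP}(f):\quad \forall a>0\ \bigl(\exists x<2a\; f(x)\geq a \;\lor\; \exists x<y<2a\; f(x)=f(y)\bigr)
    % Surjective (dual) WPHP for f: f does not map a onto 2a.
    \mathrm{dWPHP}(f):\quad \forall a>0\ \exists y<2a\ \forall x<a\; f(x)\neq y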

    Unprovability of circuit upper bounds in Cook's theory PV

    We establish unconditionally that for every integer k \geq 1 there is a language L \in \mbox{P} such that it is consistent with Cook's theory PV that L \notin Size(n^k). Our argument is non-constructive and does not provide an explicit description of this language.
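
    Spelled out, the statement above can be read as follows (a hedged paraphrase; the precise arithmetic formalization of the size bound is as in the paper):

    % Consistency of the lower bound with PV equals unprovability of the matching upper bound.
    \forall k \geq 1\ \exists L \in \mathrm{P}:\quad \mathrm{PV} + \text{``}L \notin \mathrm{Size}(n^k)\text{''}\ \text{is consistent},
    \quad\text{equivalently}\quad \mathrm{PV} \nvdash \text{``}L \in \mathrm{Size}(n^k)\text{''}.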

    Feasibly constructive proofs of succinct weak circuit lower bounds

    We ask for feasibly constructive proofs of known circuit lower bounds for explicit functions on bit strings of length n. In 1995 Razborov showed that many of them can be proved in PV_1, a bounded arithmetic formalizing polynomial time reasoning. He formalized circuit lower bound statements for small n of doubly logarithmic order. It is open whether PV_1 proves known lower bounds in succinct formalizations for n of logarithmic order. We give such proofs in APC_1, an extension of PV_1 formalizing probabilistic polynomial time reasoning: for parity and AC^0, for mod q and AC^0[p] (only for n slightly smaller than logarithmic), and for k-clique and monotone circuits. We also formalize Razborov and Rudich’s natural proof barrier. We ask for short propositional proofs of circuit lower bounds expressed succinctly by propositional formulas of size n^{O(1)}, or at least much smaller than the 2^{O(n)} size of the common “truth table” formula. We discuss two such expressions: one via feasible functions witnessing errors of circuits, and one via the anticheckers of Lipton and Young 1994. Our APC_1 formalizations yield conditional upper bounds for the succinct formulas obtained by witnessing: we get short Extended Frege proofs from general circuit lower bounds expressed by the common “truth-table” formulas. We also show how to construct in quasipolynomial time propositional proofs of quasipolynomial size tautologies expressing AC^0[p] quasipolynomial size lower bounds; these proofs are in Jeřábek’s system WF.
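
    For orientation, the common “truth table” formula mentioned above is usually set up roughly as follows (a hedged sketch; the name Eval_s is illustrative, and the succinct encodings via witnessing functions or anticheckers replace the exhaustive disjunction):

    % Truth-table tautology expressing "no circuit of size s computes f on n-bit inputs".
    % The variables \bar{c} describe a candidate size-s circuit; Eval_s(\bar{c},x) is a
    % polynomial-size formula evaluating that circuit on the fixed input x.
    \mathrm{tt}(f,s)\ :=\ \bigvee_{x\in\{0,1\}^n} \bigl(\mathrm{Eval}_s(\bar{c},x) \neq f(x)\bigr)
    % The disjunction ranges over all 2^n inputs, giving size 2^{O(n)}; the succinct
    % formulations above aim instead for propositional expressions of size n^{O(1)}.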

    On the existence of strong proof complexity generators

    Cook and Reckhow 1979 pointed out that NP is not closed under complementation iff there is no propositional proof system that admits polynomial size proofs of all tautologies. The theory of proof complexity generators aims at constructing sets of tautologies hard for strong and possibly for all proof systems. We focus on a conjecture from K.2004 in the foundations of the theory that there is a proof complexity generator hard for all proof systems. This can be equivalently formulated (for p-time generators) without a reference to proof complexity notions as follows:
    * There exists a p-time function g stretching each input by one bit such that its range intersects all infinite NP sets.
    We consider several facets of this conjecture, including its links to bounded arithmetic (witnessing and independence results), to time-bounded Kolmogorov complexity, to the feasible disjunction property of propositional proof systems and to the complexity of proof search. We argue that a specific gadget generator from K.2009 is a good candidate for g. We define a new hardness property of generators, the \bigvee-hardness, and show that one specific gadget generator is the \bigvee-hardest (w.r.t. any sufficiently strong proof system). We define the class of feasibly infinite NP sets and show, assuming a hypothesis from circuit complexity, that the conjecture holds for all feasibly infinite NP sets.
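
    The conjecture displayed above can be written more formally as follows (a hedged rendering of the prose statement; Rng(g) denotes the range of g and FP the p-time functions):

    % A p-time function stretching each input by one bit whose range meets every infinite NP set.
    \exists g \in \mathrm{FP}\ \bigl(\forall x\ |g(x)| = |x|+1\bigr)\ \text{such that}\
    \forall S \in \mathrm{NP}\ \bigl(S\ \text{infinite}\ \Rightarrow\ \mathrm{Rng}(g) \cap S \neq \emptyset\bigr).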

    Hardness magnification near state-of-the-art lower bounds

    This work continues the development of hardness magnification. The latter proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful. We consider gap versions of the meta-computational problems MKtP and MCSP, where one needs to distinguish instances (strings or truth-tables) of complexity <= s_1(N) from instances of complexity >= s_2(N), where N = 2^n denotes the input length. In MCSP, complexity is measured by circuit size, while in MKtP one considers Levin's notion of time-bounded Kolmogorov complexity. (In our results, the parameters s_1(N) and s_2(N) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for Gap-MKtP[s_1,s_2] and Gap-MCSP[s_1,s_2], a marginal improvement over the state-of-the-art in unconditional lower bounds in a variety of computational models would imply explicit super-polynomial lower bounds.
    Theorem. There exists a universal constant c >= 1 for which the following hold. If there exists epsilon > 0 such that for every small enough beta > 0
    (1) Gap-MCSP[2^{beta n}/c n, 2^{beta n}] !in Circuit[N^{1 + epsilon}], then NP !subseteq Circuit[poly].
    (2) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in TC^0[N^{1 + epsilon}], then EXP !subseteq TC^0[poly].
    (3) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in B_2-Formula[N^{2 + epsilon}], then EXP !subseteq Formula[poly].
    (4) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in U_2-Formula[N^{3 + epsilon}], then EXP !subseteq Formula[poly].
    (5) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in BP[N^{2 + epsilon}], then EXP !subseteq BP[poly].
    (6) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in (AC^0[6])[N^{1 + epsilon}], then EXP !subseteq AC^0[6].
    These results are complemented by lower bounds for Gap-MCSP and Gap-MKtP against different models. For instance, the lower bound assumed in (1) holds for U_2-formulas of near-quadratic size, and lower bounds similar to (3)-(5) hold for various regimes of parameters. We also identify a natural computational model under which the hardness magnification threshold for Gap-MKtP lies below existing lower bounds: U_2-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with Gap-MKtP, then EXP !subseteq NC^1 would follow via hardness magnification.
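
    To fix notation, the gap problems above are promise problems of roughly the following form (a hedged restatement; tt(f) denotes the length-N truth table of f and Kt is Levin's time-bounded Kolmogorov complexity):

    % Promise problems with thresholds s_1(N) < s_2(N), N = 2^n.
    \mathrm{Gap\text{-}MCSP}[s_1,s_2]:\quad \mathrm{YES} = \{\,\mathrm{tt}(f) : \mathrm{size}(f) \leq s_1(N)\,\},\quad \mathrm{NO} = \{\,\mathrm{tt}(f) : \mathrm{size}(f) \geq s_2(N)\,\}
    % Gap-MKtP is analogous, with Kt in place of circuit size, over strings of length N.
    \mathrm{Gap\text{-}MKtP}[s_1,s_2]:\quad \mathrm{YES} = \{\,x \in \{0,1\}^N : \mathrm{Kt}(x) \leq s_1(N)\,\},\quad \mathrm{NO} = \{\,x : \mathrm{Kt}(x) \geq s_2(N)\,\}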

    The Journey from NP to TFNP Hardness

    The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses, and at the same time to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g. program obfuscation or functional encryption). Several works have explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that hardness of TFNP cannot be based on worst-case NP hardness, unless NP=coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this assumption is implied by the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
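
    For reference, TFNP is the class of total NP search problems; a standard definition (hedged, since the abstract does not spell it out) reads:

    % R is a polynomial-time decidable relation; totality says every instance has a short solution.
    R \in \mathrm{TFNP}\ \iff\ R \in \mathrm{P}\ \wedge\ \exists\,\text{polynomial } p\ \forall x\ \exists y\ \bigl(|y| \leq p(|x|)\ \wedge\ (x,y) \in R\bigr).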