
    Consistency of circuit lower bounds with bounded theories

    Proving that there are problems in $\mathsf{P}^{\mathsf{NP}}$ that require Boolean circuits of super-linear size is a major frontier in complexity theory. While such lower bounds are known for larger complexity classes, existing results only show that the corresponding problems are hard on infinitely many input lengths. For instance, proving almost-everywhere circuit lower bounds is open even for problems in $\mathsf{MAEXP}$. Given the notorious difficulty of proving lower bounds that hold for all large input lengths, we ask the following question: Can we show that a large set of techniques cannot prove that $\mathsf{NP}$ is easy infinitely often? Motivated by this and related questions about the interaction between mathematical proofs and computations, we investigate circuit complexity from the perspective of logic. Among other results, we prove that for any parameter $k \geq 1$ it is consistent with the theory $T$ that $\mathcal{C} \not\subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$, where $(T, \mathcal{C})$ is one of the pairs: $T = \mathsf{T}^1_2$ and $\mathcal{C} = \mathsf{P}^{\mathsf{NP}}$; $T = \mathsf{S}^1_2$ and $\mathcal{C} = \mathsf{NP}$; $T = \mathsf{PV}$ and $\mathcal{C} = \mathsf{P}$. In other words, these theories cannot establish infinitely-often circuit upper bounds for the corresponding problems. This is of interest because the weaker theory $\mathsf{PV}$ already formalizes sophisticated arguments, such as a proof of the PCP Theorem. These consistency statements are unconditional and improve on earlier theorems of [KO17] and [BM18] on the consistency of lower bounds with $\mathsf{PV}$.
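    Unpacking the claim: consistency of the lower bound with $T$ is the same as $T$ not proving the corresponding infinitely-often upper bound. In the abstract's own notation (a paraphrase, not the paper's wording):

    \[
    \text{for every } k \geq 1: \qquad T \nvdash\; \mathcal{C} \subseteq \textit{i.o.}\mathrm{SIZE}(n^k),
    \]

    for each of the three listed pairs $(T, \mathcal{C})$.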

    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses, and actively combines inductive and deductive reasoning: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software; (ii) synthesis of loop-free programs; and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
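    The interplay described above, where deduction supplies counterexamples for an inductive learner over a structure-hypothesis class, can be sketched in a few lines. Below is a minimal, hypothetical Python sketch (the names `synthesize`, `candidates`, and `spec` are illustrative, not from the paper): the candidate list plays the role of a structure hypothesis, the consistency filter is the inductive step, and the brute-force counterexample search stands in for a deductive engine such as a constraint solver.

```python
# Hedged sketch of an inductive-deductive loop in the spirit of sciduction.
# The candidate list encodes a structure hypothesis (a finite artifact class);
# the counterexample search is a stand-in for a real deductive engine.

def synthesize(candidates, spec, inputs):
    examples = []
    while True:
        # Inductive step: keep candidates consistent with all examples so far.
        consistent = [c for c in candidates
                      if all(c(x) == y for x, y in examples)]
        if not consistent:
            return None  # the structure hypothesis excludes every solution
        guess = consistent[0]
        # Deductive step: look for an input where the guess violates the spec.
        cex = next((x for x in inputs if guess(x) != spec(x)), None)
        if cex is None:
            return guess  # no counterexample: the guess is verified
        examples.append((cex, spec(cex)))  # deduction feeds induction

# Tiny usage example: recover f(x) = 2x from a three-element candidate class.
cands = [lambda x: x, lambda x: x + x, lambda x: x * x]
f = synthesize(cands, lambda x: 2 * x, range(-5, 6))
print(f(7))  # 14
```

    Each round either verifies the current guess or strictly shrinks the consistent set, so the loop terminates on a finite candidate class.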

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
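    In symbols, the boosting step is the implication below, where $\mathsf{NSIZE}(\mathrm{poly})$ and $\mathsf{SIZE}^{\mathsf{NP}}_{\parallel}(\mathrm{poly})$ are shorthand (mine, not necessarily the paper's) for poly-size nondeterministic circuits and poly-size circuits with non-adaptive NP oracle queries:

    \[
    \mathsf{EXP} \not\subseteq \mathsf{NSIZE}(\mathrm{poly})
    \;\Longrightarrow\;
    \mathsf{EXP} \not\subseteq \mathsf{SIZE}^{\mathsf{NP}}_{\parallel}(\mathrm{poly}).
    \]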

    Bit-Vector Model Counting using Statistical Estimation

    Approximate model counting for bit-vector SMT formulas (generalizing #SAT) has many applications, such as probabilistic inference and quantitative information-flow security, but it is computationally difficult. Adding random parity constraints (XOR streamlining) and then checking satisfiability is an effective approximation technique, but it requires a prior hypothesis about the model count to produce useful results. We propose an approach inspired by statistical estimation to continually refine a probabilistic estimate of the model count for a formula, so that each XOR-streamlined query yields as much information as possible. We implement this approach, with an approximate probability model, as a wrapper around an off-the-shelf SMT solver or SAT solver. Experimental results show that the implementation is faster than the most similar previous approaches, which used simpler refinement strategies. The technique also lets us model count formulas over floating-point constraints, which we demonstrate with an application to a vulnerability in differential privacy mechanisms.
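    The core primitive, adding random parity constraints and re-checking satisfiability, is easy to show concretely. The following is a minimal sketch using the Z3 Python API (`z3-solver`); the formula, bit-width, and survive/fail decision rule are illustrative assumptions, and the paper's probabilistic refinement of the estimate is far more careful than this.

```python
# Minimal sketch of XOR streamlining over a bit-vector formula with Z3.
# Each random parity constraint cuts the solution set roughly in half,
# so surviving k constraints suggests a model count around 2^k or more.
import random
from z3 import BitVec, BitVecVal, Extract, Solver, ULT, sat

def random_parity_constraint(x, nbits):
    """XOR of a random subset of x's bits, forced to equal a random bit."""
    acc = BitVecVal(0, 1)
    for i in range(nbits):
        if random.getrandbits(1):
            acc = acc ^ Extract(i, i, x)  # Extract(i, i, x) is bit i of x
    return acc == BitVecVal(random.getrandbits(1), 1)

def survives_k_xors(formula, x, nbits, k):
    """Check satisfiability of the formula streamlined with k random XORs."""
    s = Solver()
    s.add(formula)
    for _ in range(k):
        s.add(random_parity_constraint(x, nbits))
    return s.check() == sat

# Illustrative formula: x < 100 over an 8-bit vector (exactly 100 models),
# so streamlined queries should start failing around k = 7.
x = BitVec('x', 8)
formula = ULT(x, 100)  # unsigned comparison
for k in range(1, 9):
    print(k, survives_k_xors(formula, x, 8, k))
```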

    Parameterized complexity of DPLL search procedures

    We study the performance of DPLL algorithms on parameterized problems. In particular, we investigate how difficult it is to decide whether small solutions exist for satisfiability and other combinatorial problems. For this purpose we develop a Prover-Delayer game which models the running time of DPLL procedures, and we establish an information-theoretic method to obtain lower bounds on the running time of parameterized DPLL procedures. We illustrate this technique by showing lower bounds for the parameterized pigeonhole principle and the ordering principle. As our main application we study the DPLL procedure for the problem of deciding whether a graph has a small clique. We show that proving the absence of a k-clique requires n^{Ω(k)} steps for a non-trivial distribution of graphs close to the critical threshold. For the restricted case of tree-like Parameterized Resolution, this result answers a question posed in [11] about the Resolution complexity of this family of formulas.
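    For readers who want the object of study pinned down, here is a minimal, generic DPLL procedure in Python (unit propagation plus branching); it is only an illustration of the algorithm family, with none of the parameterized setting or heuristics above, and its branching steps are what the Prover's variable queries model in the game.

```python
# A bare-bones DPLL procedure: simplify, unit-propagate, branch.
# Clauses are frozensets of nonzero integers; -v is the negation of v.
# Returns a satisfying set of literals, or None if unsatisfiable.

def dpll(clauses, assignment=frozenset()):
    simplified = []
    for clause in clauses:
        if clause & assignment:
            continue  # clause already satisfied
        reduced = clause - {-lit for lit in assignment}
        if not reduced:
            return None  # clause falsified: conflict
        simplified.append(reduced)
    if not simplified:
        return assignment  # every clause satisfied
    for clause in simplified:
        if len(clause) == 1:  # unit clause: its literal is forced
            (lit,) = clause
            return dpll(simplified, assignment | {lit})
    lit = next(iter(simplified[0]))  # branch on an unassigned literal
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]))
```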

    Oracle Complexity Classes and Local Measurements on Physical Hamiltonians

    The canonical problem for the class Quantum Merlin-Arthur (QMA) is that of estimating ground state energies of local Hamiltonians. Perhaps surprisingly, [Ambainis, CCC 2014] showed that the related, but arguably more natural, problem of simulating local measurements on ground states of local Hamiltonians (APX-SIM) is likely harder than QMA. Indeed, [Ambainis, CCC 2014] showed that APX-SIM is P^QMA[log]-complete, for P^QMA[log] the class of languages decidable by a P machine making a logarithmic number of adaptive queries to a QMA oracle. In this work, we show that APX-SIM is P^QMA[log]-complete even when restricted to more physical Hamiltonians, obtaining as intermediate steps a variety of related complexity-theoretic results. We first give a sequence of results which together yield P^QMA[log]-hardness for APX-SIM on well-motivated Hamiltonians: (1) We show that for NP, StoqMA, and QMA oracles, a logarithmic number of adaptive queries is equivalent to polynomially many parallel queries. These equalities simplify the proofs of our subsequent results. (2) Next, we show that the hardness of APX-SIM is preserved under Hamiltonian simulations (à la [Cubitt, Montanaro, Piddock, 2017]). As a byproduct, we obtain a full complexity classification of APX-SIM, showing it is complete for P, P^{||,NP}, P^{||,StoqMA}, or P^{||,QMA} depending on the Hamiltonians employed. (3) Leveraging the above, we show that APX-SIM is P^QMA[log]-complete for any family of Hamiltonians which can efficiently simulate spatially sparse Hamiltonians, including physically motivated models such as the 2D Heisenberg model. Our second focus considers 1D systems: We show that APX-SIM remains P^QMA[log]-complete even for local Hamiltonians on a 1D line of 8-dimensional qudits. This uses a number of ideas from above, along with replacing the "query Hamiltonian" of [Ambainis, CCC 2014] with a new "sifter" construction.
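    Result (1) in symbols, writing $\mathcal{C}[\log]$ for logarithmically many adaptive queries and $||,\mathcal{C}$ for polynomially many parallel queries (matching the classification in (2)):

    \[
    \mathsf{P}^{\mathcal{C}[\log]} \;=\; \mathsf{P}^{||,\mathcal{C}}
    \qquad \text{for } \mathcal{C} \in \{\mathsf{NP}, \mathsf{StoqMA}, \mathsf{QMA}\}.
    \]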