124 research outputs found

    On the pigeonhole and related principles in deep inference and monotone systems

    We construct quasipolynomial-size proofs of the propositional pigeonhole principle in the deep inference system KS, addressing an open problem raised in previous works and matching the best known upper bound for the more general class of monotone proofs. We make significant use of monotone formulae computing Boolean threshold functions, an idea previously considered in works of Atserias et al. The main construction, monotone proofs witnessing the symmetry of such functions, involves an implementation of merge-sort in the design of proofs in order to tame the structural behaviour of atoms, and so the complexity of normalization. Proof transformations from previous work on atomic flows are then employed to yield appropriate KS proofs. As further results we show that our constructions can be applied to provide quasipolynomial-size KS proofs of the parity principle and the generalized pigeonhole principle. These bounds are inherited for the class of monotone proofs, and we are further able to construct n^{O(log log n)}-size monotone proofs of the weak pigeonhole principle with (1 + ε)n pigeons and n holes for ε = 1/polylog(n), thereby also improving the best known bounds for monotone proofs.
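
    The threshold idea is easy to see in miniature: sorting bits with AND/OR comparators is a monotone computation, and its k-th output wire computes exactly the threshold function TH_k. The Python sketch below is ours, for illustration only; it uses a simple quadratic-size odd-even transposition network rather than the quasipolynomial-size merge-sort formulae of the paper.

    # Monotone computation of the Boolean threshold function TH_k.
    # A comparator maps (a, b) to (a OR b, a AND b), i.e. (max, min),
    # so a sorting network built from comparators uses only AND/OR.

    def comparator(a, b):
        return (a | b, a & b)  # (max, min) on 0/1 values; monotone

    def sort_bits(bits):
        """Odd-even transposition sort: n alternating rounds of adjacent
        comparators sort the bits into descending order (all 1s first)."""
        bits = list(bits)
        n = len(bits)
        for rnd in range(n):
            for i in range(rnd % 2, n - 1, 2):
                bits[i], bits[i + 1] = comparator(bits[i], bits[i + 1])
        return bits

    def threshold(k, bits):
        # TH_k(x) = 1 iff at least k inputs are 1: read the k-th wire.
        return sort_bits(bits)[k - 1]

    print(threshold(2, [1, 0, 1, 0]))  # 1: at least two inputs are set
    print(threshold(3, [1, 0, 1, 0]))  # 0: fewer than three inputs are set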

    On the relative proof complexity of deep inference via atomic flows

    We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. We obtain a polynomial simulation of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS.
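
    For concreteness, here is a minimal sketch (our own encoding, not the paper's) of the two objects just named: the Resolution rule that KS is shown to polynomially simulate, and a standard clause form of the pigeonhole principle.

    from itertools import combinations

    def resolve(c1, c2, pivot):
        """The Resolution rule: from c1 containing pivot and c2 containing
        -pivot, derive the resolvent on the remaining literals."""
        assert pivot in c1 and -pivot in c2
        return (c1 - {pivot}) | (c2 - {-pivot})

    def php_clauses(m, n):
        """PHP(m, n): variable 1 + i*n + j means 'pigeon i sits in hole j';
        the clause set is unsatisfiable whenever m > n."""
        var = lambda i, j: 1 + i * n + j
        pigeon = [frozenset(var(i, j) for j in range(n)) for i in range(m)]
        hole = [frozenset({-var(i, k), -var(j, k)})
                for k in range(n) for i, j in combinations(range(m), 2)]
        return pigeon + hole

    a, b, c = php_clauses(2, 1)     # {p00}, {p10}, {-p00, -p10}
    d = resolve(a, c, 1)            # {-p10}
    print(resolve(b, d, 2))         # frozenset(): the empty clause, a refutation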

    From proof complexity to circuit complexity via interactive protocols

    It is folklore in complexity theory that circuit lower bounds against NC1 or P/poly, currently out of reach, are a necessary step towards proving strong proof complexity lower bounds for systems like Frege or Extended Frege. Establishing such a connection formally, however, is already daunting, as it would imply the breakthrough separation NEXP ⊈ P/poly, as recently observed by Pich and Santhanam [Pich and Santhanam, 2023]. We show such a connection conditionally for the Implicit Extended Frege proof system (iEF) introduced by Krajíček [Krajíček, 2004], capable of formalizing most of contemporary complexity theory. In particular, we show that if iEF efficiently proves the standard derandomization assumption that a concrete Boolean function is hard on average for subexponential-size circuits, then any superpolynomial lower bound on the length of iEF proofs implies #P ⊈ FP/poly (which would in turn imply, for example, PSPACE ⊈ P/poly). Our proof exploits the formalization inside iEF of the soundness of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan [Lund et al., 1992]. This has consequences for the self-provability of circuit upper bounds in iEF. Interestingly, further improving our result seems to require progress in constructing interactive proof systems with more efficient provers.
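
    Since the sum-check protocol carries the technical weight here, a toy honest-prover rendition may help; the field, the degree bound, and all names below are our own choices, and this is of course not the formalization carried out inside iEF.

    import itertools, random

    P = 2**31 - 1        # prime modulus; all arithmetic is over F_P
    DEG = 2              # assumed per-variable degree bound on f

    def lagrange_eval(points, x):
        """Evaluate at x the unique polynomial through (i, points[i])."""
        total = 0
        for i, yi in enumerate(points):
            num, den = 1, 1
            for j in range(len(points)):
                if j != i:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def sumcheck(f, n):
        """Interactively verify claim = sum of f over {0,1}^n."""
        claim = sum(f(*bits) for bits in itertools.product((0, 1), repeat=n)) % P
        prefix = []                               # challenges r_1, ..., r_i
        for i in range(n):
            # Prover sends g_i(X) = sum of f(prefix, X, suffix) over Boolean
            # suffixes, represented by its values at X = 0, ..., DEG.
            g = [sum(f(*prefix, x, *tail)
                     for tail in itertools.product((0, 1), repeat=n - i - 1)) % P
                 for x in range(DEG + 1)]
            if (g[0] + g[1]) % P != claim:        # consistency check
                return False
            r = random.randrange(P)               # fresh random challenge
            claim = lagrange_eval(g, r)
            prefix.append(r)
        return claim == f(*prefix) % P            # one oracle query to f

    # f(x, y, z) = x*y + 2z: an honest prover always convinces the verifier.
    print(sumcheck(lambda x, y, z: (x * y + 2 * z) % P, 3))   # True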

    Extremely Deep Proofs

    We further the study of supercritical tradeoffs in proof and circuit complexity, a type of tradeoff between complexity parameters in which restricting one parameter forces another to exceed its worst-case upper bound. In particular, we prove a new family of supercritical tradeoffs between depth and size for Resolution, Res(k), and Cutting Planes proofs. For each of these proof systems we construct, for each c ≤ n^{1-ε}, a formula with n^{O(c)} clauses and n variables that has a proof of size n^{O(c)}, but in which any proof of size no more than roughly exponential in n^{1-ε}/c must necessarily have depth roughly n^c. By setting c = o(n^{1-ε}) we therefore obtain exponential lower bounds on proof depth; this far exceeds the trivial worst-case upper bound of n. In doing so we give a simplified proof of a supercritical depth/width tradeoff for tree-like Resolution from [Alexander A. Razborov, 2016]. Finally, we outline several conjectures that would imply similar supercritical tradeoffs between size and depth in circuit complexity via lifting theorems.
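
    To see why the tradeoff is called supercritical, it helps to instantiate the parameters; the following choice of values is ours, for illustration only.

    Fix $\varepsilon = 1/2$ and take $c = n^{1/4} \le n^{1-\varepsilon}$: the formula has $n^{O(n^{1/4})}$ clauses and a proof of size $n^{O(n^{1/4})}$, yet
    \[
      \text{any proof of size at most } \exp\big(n^{1-\varepsilon}/c\big) = \exp\big(n^{1/4}\big)
      \text{ must have depth roughly } n^{c} = n^{n^{1/4}} = 2^{n^{1/4}\log_2 n},
    \]
    exponential in $n^{1/4}$ and far beyond the trivial worst-case depth bound of $n$.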

    On linear rewriting systems for Boolean logic and some applications to proof theory

    Linear rules have played an increasing role in structural proof theory in recent years. It has been observed that the set of all sound linear inference rules in Boolean logic is already coNP-complete, i.e. that every Boolean tautology can be written as a (left- and right-)linear rewrite rule. In this paper we study properties of systems consisting only of linear inferences. Our main result is that the length of any 'nontrivial' derivation in such a system is bounded by a polynomial. As a consequence there is no polynomial-time decidable sound and complete system of linear inferences, unless coNP = NP. We draw tools and concepts from term rewriting, Boolean function theory and graph theory in order to access some required intermediate results. At the same time we make several connections between these areas that, to our knowledge, have not yet been presented, and which constitute a rich theoretical framework for reasoning about linear TRSs for Boolean logic.
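
    Soundness of an individual linear inference can be checked by brute force over assignments (which, given the coNP-completeness above, is essentially the best one can hope for in general). A small sketch of such a check, in our own encoding, using two standard deep-inference rules as examples:

    from itertools import product

    def sound(lhs, rhs, nvars):
        """A rule lhs -> rhs is sound iff lhs implies rhs pointwise."""
        return all(lhs(*v) <= rhs(*v)
                   for v in product((False, True), repeat=nvars))

    # switch: x and (y or z)  ->  (x and y) or z
    print(sound(lambda x, y, z: x and (y or z),
                lambda x, y, z: (x and y) or z, 3))           # True

    # medial: (w and x) or (y and z)  ->  (w or y) and (x or z)
    print(sound(lambda w, x, y, z: (w and x) or (y and z),
                lambda w, x, y, z: (w or y) and (x or z), 4)) # True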

    Cancer evolution: mathematical models and computational inference.

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.
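
    As a flavor of the first class of models (population dynamics of tumor initiation and progression), the sketch below simulates a minimal birth-death process in which clones accumulate driver mutations that raise their division rate. All rates, the clone encoding, and the parameter values are arbitrary choices of ours for the demo, not a model taken from the review.

    import random

    BIRTH, DEATH, MUT_RATE, ADV = 0.5, 0.4, 1e-3, 0.02

    def simulate(generations=60, seed=1):
        random.seed(seed)
        pop = {0: 10}            # clone id (= #driver mutations) -> cell count
        for _ in range(generations):
            nxt = {}
            for drivers, count in pop.items():
                b = min(BIRTH + ADV * drivers, 0.6)   # fitter clones divide more
                for _ in range(count):
                    r = random.random()
                    if r < b:                         # division; child may mutate
                        child = drivers + (random.random() < MUT_RATE)
                        nxt[drivers] = nxt.get(drivers, 0) + 1
                        nxt[child] = nxt.get(child, 0) + 1
                    elif r < b + DEATH:               # cell death
                        pass
                    else:                             # quiescent survival
                        nxt[drivers] = nxt.get(drivers, 0) + 1
            pop = nxt
        return pop

    print(simulate())   # clone-size distribution after 60 generations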

    Complete Randomized Cutting Plane Algorithms for Propositional Satisfiability

    The propositional satisfiability problem (SAT) is a fundamental problem in computer science and combinatorial optimization. SAT has been investigated extensively, and much is already known concerning the limitations of existing algorithms; in particular, necessary conditions have been identified that any efficient algorithm must meet. This paper reports research to develop and test a new algorithm that meets the currently known necessary conditions. In chapter three, we give a new characterization of the convex integer hull of SAT, and two new algorithms for finding strong cutting planes. We also show the importance of choosing which vertex to cut, and present heuristics to find a vertex that allows a strong cutting plane. In chapter four, we describe an experiment to implement a SAT solving algorithm using the new algorithms and heuristics, and to examine their effectiveness on a set of problems. In chapter five, we describe the implementation of the algorithms and present computational results. For an input SAT problem, the implemented program outputs either a witness of satisfiability or a complete cutting plane proof of unsatisfiability. The description, implementation, and testing of these algorithms yield both empirical data to characterize the performance of the new algorithms and additional insight to further advance the theory. We conclude from the computational study that cutting plane algorithms are efficient for the solution of a large class of SAT problems.
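
    The two basic Cutting Planes operations that such algorithms build proofs from can be sketched compactly; the encoding of inequalities below is our own, for illustration.

    from math import ceil

    def clause_to_ineq(clause):
        """Clause (set of +/- ints) as sum(coeff * x) >= rhs over 0/1 vars;
        a negative literal -v contributes (1 - x_v)."""
        coeffs, rhs = {}, 1
        for lit in clause:
            v = abs(lit)
            coeffs[v] = coeffs.get(v, 0) + (1 if lit > 0 else -1)
            if lit < 0:
                rhs -= 1
        return coeffs, rhs

    def add(a, b):
        """Addition: the sum of two valid inequalities is valid."""
        coeffs = dict(a[0])
        for v, c in b[0].items():
            coeffs[v] = coeffs.get(v, 0) + c
        return {v: c for v, c in coeffs.items() if c}, a[1] + b[1]

    def divide(ineq, d):
        """Division with rounding (Chvatal-Gomory cut): if d divides every
        coefficient, divide through and round the right-hand side up."""
        coeffs, rhs = ineq
        assert all(c % d == 0 for c in coeffs.values())
        return {v: c // d for v, c in coeffs.items()}, ceil(rhs / d)

    i1 = clause_to_ineq({1, 2})     # (x1 or x2):      x1 + x2 >= 1
    i2 = clause_to_ineq({1, -2})    # (x1 or not x2):  x1 - x2 >= 0
    print(divide(add(i1, i2), 2))   # 2*x1 >= 1 becomes x1 >= 1: ({1: 1}, 1)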