
    Circuit complexity, proof complexity, and polynomial identity testing

    We introduce a new algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP is not equal to VP). As a corollary to the proof, we also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs (as opposed to the usual measure of number of monomials) imply the Permanent versus Determinant Conjecture. Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied any computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity. More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. We use these PIT axioms to shed light on AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. We show that either: a) Proving super-polynomial lower bounds on AC^0[p]-Frege implies VNP does not have polynomial-size circuits of depth d - a notoriously open question for d at least 4 - thus explaining the difficulty of lower bounds on AC^0[p]-Frege, or b) AC^0[p]-Frege cannot efficiently prove the depth d PIT axioms, and hence we have a lower bound on AC^0[p]-Frege. Using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity.
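
    To make the role of PIT concrete, here is a minimal sketch of the standard randomized identity test based on the Schwartz-Zippel lemma; it illustrates the decision problem the PIT axioms talk about, not the paper's proof system, and the function names are ours.

```python
import random

def is_identically_zero(poly, num_vars, degree_bound, trials=20, prime=2**61 - 1):
    """Randomized PIT via the Schwartz-Zippel lemma.

    `poly` is any black-box evaluator of the polynomial at a point; we
    evaluate modulo a large prime as a stand-in for a big finite field.
    If the polynomial is nonzero of total degree <= degree_bound, one
    random evaluation is 0 with probability <= degree_bound / prime.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if poly(point) % prime != 0:
            return False  # nonzero value witnessed: definitely not the zero polynomial
    return True  # zero at all sampled points: identically zero with high probability

# Example: (x + y)^2 - (x^2 + 2xy + y^2) is the zero polynomial.
f = lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2)
print(is_identically_zero(f, num_vars=2, degree_bound=2))  # True
```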

    Combined decision procedures for nonlinear arithmetics, real and complex

    We describe contributions to algorithmic proof techniques for deciding the satisfiability of Boolean combinations of many-variable nonlinear polynomial equations and inequalities over the real and complex numbers. In the first half, we present an abstract theory of Gröbner basis construction algorithms for algebraically closed fields of characteristic zero and use it to introduce and prove the correctness of Gröbner basis methods tailored to the needs of modern satisfiability modulo theories (SMT) solvers. In the process, we use the technique of proof orders to derive a generalisation of S-polynomial superfluousness in terms of transfinite induction along an ordinal parameterised by a monomial order. We use this generalisation to prove the abstract (“strategy-independent”) admissibility of a number of superfluous S-polynomial criteria important for efficient basis construction. Finally, we consider local notions of proof minimality for weak Nullstellensatz proofs and give ideal-theoretic methods for computing complex “unsatisfiable cores” which contribute to efficient SMT solving in the context of nonlinear complex arithmetic. In the second half, we consider the problem of effectively combining a heterogeneous collection of decision techniques for fragments of the existential theory of real closed fields. We propose and investigate a number of novel combined decision methods and implement them in our proof tool RAHD (Real Algebra in High Dimensions). We build a hierarchy of increasingly powerful combined decision methods, culminating in a generalisation of partial cylindrical algebraic decomposition (CAD) which we call Abstract Partial CAD. This generalisation incorporates the use of arbitrary sound but possibly incomplete proof procedures for the existential theory of real closed fields as first-class functional parameters for “short-circuiting” expensive computations during the lifting phase of CAD. Identifying these proof procedure parameters formally with RAHD proof strategies, we implement the method in RAHD for the case of full-dimensional cell decompositions and investigate its efficacy with respect to the Brown-McCallum projection operator. We end with some wishes for the future.
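
    As a concrete illustration of the weak Nullstellensatz check behind the “unsatisfiable cores” above: a system of polynomial equations has no common complex solution exactly when 1 lies in the ideal the polynomials generate, which a reduced Gröbner basis detects. The sketch below uses off-the-shelf SymPy, not the thesis's tailored construction algorithms or RAHD.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# {x^2 + y^2 - 1 = 0, x^2 + y^2 - 2 = 0} has no common complex solution.
# By the weak Nullstellensatz this holds iff 1 lies in the generated ideal,
# iff the reduced Groebner basis of the ideal is just {1}.
G = groebner([x**2 + y**2 - 1, x**2 + y**2 - 2], x, y, order='lex')
print(G.exprs)  # [1] -> unsatisfiable over the complex numbers

# A satisfiable system yields a nontrivial basis instead.
H = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(H.exprs)  # e.g. [x - y, y**2 - 1/2]
```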

    Satisfiability Modulo Finite Fields

    We study satisfiability modulo the theory of finite fields and give a decision procedure for this theory. We implement our procedure for prime fields inside the cvc5 SMT solver. Using this theory, we construct SMT queries that encode translation validation for various zero knowledge proof compilers applied to Boolean computations. We evaluate our procedure on these benchmarks. Our experiments show that our implementation is superior to previous approaches (which encode field arithmetic using integers or bit-vectors).
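
    A toy illustration of the decision problem (not the cvc5 procedure): for a small prime p, satisfiability of a conjunction of polynomial equations over F_p can be decided by exhaustive search. The Boolean encoding b*(b-1) = 0 mirrors the kind of constraints zero-knowledge compilers emit; the helper names are ours.

```python
from itertools import product

def ff_sat(equations, num_vars, p):
    """Naive decision procedure for conjunctions of polynomial equations
    over the prime field F_p: try every assignment. Exponential in
    num_vars, so purely a toy, not the cvc5 decision procedure."""
    for assignment in product(range(p), repeat=num_vars):
        if all(eq(assignment, p) == 0 for eq in equations):
            return assignment  # a satisfying model
    return None  # unsatisfiable

# Booleans are commonly encoded in a field by b*(b-1) = 0. Here we check
# that x AND y = z together with x = 1, y = 1, z = 0 is contradictory:
eqs = [
    lambda a, p: (a[0] * (a[0] - 1)) % p,   # x is Boolean
    lambda a, p: (a[1] * (a[1] - 1)) % p,   # y is Boolean
    lambda a, p: (a[0] * a[1] - a[2]) % p,  # z = x AND y as field arithmetic
    lambda a, p: (a[0] - 1) % p,            # x = 1
    lambda a, p: (a[1] - 1) % p,            # y = 1
    lambda a, p: a[2] % p,                  # z = 0 (inconsistent with the above)
]
print(ff_sat(eqs, num_vars=3, p=17))  # None: the encoding forces z = 1
```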

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.

    Ontology and text mining: methods and applications for hypertrophic cardiomyopathy and beyond

    In this thesis we describe a number of contributions across the deeply interlinked domains of ontology, text mining, and prognostic modelling. We explore and evaluate ontology interoperability, and develop new methods for synonym expansion and negation detection in biomedical text. In addition to evaluating these pieces of work individually, we use them to form the basis of a text mining pipeline that can identify and phenotype patients across a clinical text record, which is used to reveal hundreds of University Hospitals Birmingham patients diagnosed with hypertrophic cardiomyopathy who are unknown to the specialist clinic. The work culminates in the text mining results being used to enable prognostic modelling of complication development in patients with hypertrophic cardiomyopathy, finding that routine blood markers, in addition to already well-known variables, are powerful predictors.
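
    For flavour, a minimal NegEx-style negation detector of the kind such pipelines build on: a clinical mention counts as negated when a trigger phrase occurs shortly before it. The trigger list and window size below are illustrative stand-ins, not the thesis's method.

```python
import re

# Illustrative trigger phrases; real systems use curated lists.
NEGATION_TRIGGERS = ["no", "denies", "without", "no evidence of", "negative for"]

def is_negated(text, mention, window=5):
    """True if a negation trigger appears within `window` tokens before
    the first occurrence of `mention` in `text`."""
    text, mention = text.lower(), mention.lower()
    idx = text.find(mention)
    if idx == -1:
        return False
    context = " ".join(text[:idx].split()[-window:])  # tokens before the mention
    return any(re.search(r"\b" + re.escape(t) + r"\b", context)
               for t in NEGATION_TRIGGERS)

print(is_negated("Patient denies chest pain.", "chest pain"))  # True
print(is_negated("Echo shows hypertrophic cardiomyopathy.",
                 "hypertrophic cardiomyopathy"))                # False
```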

    SAT-based preimage attacks on SHA-1

    Hash functions are important cryptographic primitives which map arbitrarily long messages to fixed-length message digests in such a way that: (1) it is easy to compute the message digest given a message, while (2) inverting the hashing process (e.g. finding a message that maps to a specific message digest) is hard. One attack against a hash function is an algorithm that nevertheless manages to invert the hashing process. Hash functions are used in e.g. authentication, digital signatures, and key exchange. A popular hash function used in many practical application scenarios is the Secure Hash Algorithm (SHA-1). In this thesis we investigate the current state of the art in carrying out preimage attacks against SHA-1 using SAT solvers, and we attempt to find out if there is any room for improvement in either the encoding or the solving processes. We run a series of experiments using SAT solvers on encodings of reduced-difficulty versions of SHA-1. Each experiment tests one aspect of the encoding or solving process, such as determining whether there exists an optimal restart interval or determining which branching heuristic leads to the best average solving time. An important part of our work is to use statistically sound methods, i.e. hypothesis tests which take sample size and variation into account. Our most important result is a new encoding of 32-bit modular addition which significantly reduces the time it takes the SAT solver to find a solution compared to previously known encodings. Other results include the fact that reducing the absolute size of the search space by fixing bits of the message up to a certain point actually results in an instance that is harder for the SAT solver to solve. We have also identified some slight improvements to the parameters used by the heuristics of the solver MiniSat; for example, contrary to assertions made in the literature, we find that using longer restart intervals improves the running time of the solver.
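
    For reference, the sketch below generates a standard ripple-carry CNF baseline for n-bit modular addition of the kind such encodings compete against; it blocks every violating truth-table row of each full adder (24 clauses per bit), which is correct but deliberately naive, and is not the improved encoding contributed by the thesis.

```python
from itertools import product

def full_adder_cnf(a, b, cin, s, cout, clauses):
    """CNF for one full adder: block every truth-table row that violates
    s = a XOR b XOR cin  or  cout = MAJ(a, b, cin)."""
    for va, vb, vc, vs, vco in product((0, 1), repeat=5):
        if vs != va ^ vb ^ vc or vco != int(va + vb + vc >= 2):
            # The blocking clause is falsified exactly by this assignment.
            clauses.append([v if bit == 0 else -v
                            for v, bit in zip((a, b, cin, s, cout),
                                              (va, vb, vc, vs, vco))])

def modular_adder_cnf(n=32):
    """Ripple-carry encoding of s = a + b (mod 2^n) in DIMACS numbering.
    Bit i of a is variable i+1, of b is n+i+1, of s is 2n+i+1; carries
    c_0..c_n follow, with c_0 fixed to 0 and the final carry c_n simply
    discarded, which is what makes the addition modular."""
    clauses = []
    a = [i + 1 for i in range(n)]
    b = [n + i + 1 for i in range(n)]
    s = [2 * n + i + 1 for i in range(n)]
    c = [3 * n + i + 1 for i in range(n + 1)]
    clauses.append([-c[0]])  # carry into bit 0 is 0
    for i in range(n):
        full_adder_cnf(a[i], b[i], c[i], s[i], c[i + 1], clauses)
    return clauses, 4 * n + 1

clauses, num_vars = modular_adder_cnf(32)
print(num_vars, len(clauses))  # 129 variables, 769 clauses (1 + 32 * 24)
```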

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
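
    A toy propositional example of the contrast between classical deletion and a gentle repair: given KB = {p, p -> q} and unwanted consequence q, deleting p -> q removes the entailment but loses information, while weakening it to p -> (q or r) also removes the entailment yet still entails q or r. The brute-force entailment checker below is ours, purely for illustration.

```python
from itertools import product

ATOMS = ["p", "q", "r"]

def entails(kb, sentence):
    """kb |= sentence iff sentence holds in every model of kb
    (brute force over all truth assignments to ATOMS)."""
    for values in product((False, True), repeat=len(ATOMS)):
        m = dict(zip(ATOMS, values))
        if all(s(m) for s in kb) and not sentence(m):
            return False
    return True

p      = lambda m: m["p"]
q      = lambda m: m["q"]
q_or_r = lambda m: m["q"] or m["r"]
p_impl_q      = lambda m: (not m["p"]) or m["q"]
p_impl_q_or_r = lambda m: (not m["p"]) or m["q"] or m["r"]  # strictly weaker than p -> q

kb = [p, p_impl_q]
print(entails(kb, q))  # True: q is the unwanted consequence

deleted  = [p]                 # classical repair: drop p -> q outright
weakened = [p, p_impl_q_or_r]  # gentle repair: weaken it instead
print(entails(deleted, q), entails(weakened, q))            # False False
print(entails(deleted, q_or_r), entails(weakened, q_or_r))  # False True: weakening preserves more
```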