4,420 research outputs found

    ENIGMA: Efficient Learning-based Inference Guiding Machine

    Full text link
    ENIGMA is a learning-based method for guiding given clause selection in saturation-based theorem provers. Clauses from many proof searches are classified as positive or negative based on their participation in the proofs. An efficient classification model is trained on this data, using a fast feature-based characterization of the clauses. The learned model is then tightly linked with the core prover and used as the basis of a new parameterized evaluation heuristic that provides fast ranking of all generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase in E's performance. Comment: Submitted to LPAR 201
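
    A minimal sketch of the idea described above: clauses from earlier proof searches, labelled by whether they appeared in a proof, are mapped to fast symbol-count features, a classifier is trained on them, and its score is then used to rank newly generated clauses. The feature map, the toy clauses, and the choice of logistic regression are illustrative assumptions, not ENIGMA's actual representation or model.

# Sketch of feature-based clause classification for guiding clause selection.
# Not the actual ENIGMA code; the feature map and classifier are assumptions.

from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def clause_features(clause: str) -> dict:
    """Very rough feature map: counts of the symbols occurring in the clause."""
    return Counter(clause.replace("(", " ").replace(")", " ").replace(",", " ").split())

# Training data: clauses from previous proof searches, labelled 1 if they
# participated in a proof and 0 otherwise (toy examples).
training = [
    ("p(X) | q(X)", 1),
    ("~p(a)", 1),
    ("r(X,Y) | ~r(Y,X)", 0),
    ("s(b) | t(b)", 0),
]

vec = DictVectorizer()
X = vec.fit_transform([clause_features(c) for c, _ in training])
y = [label for _, label in training]
model = LogisticRegression().fit(X, y)

def rank_generated_clauses(clauses):
    """Order newly generated clauses by the learned 'likely useful' score,
    standing in for the parameterized evaluation heuristic inside the prover."""
    scores = model.predict_proba(vec.transform([clause_features(c) for c in clauses]))[:, 1]
    return [c for _, c in sorted(zip(scores, clauses), reverse=True)]

print(rank_generated_clauses(["q(a)", "r(a,b)"]))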

    Planning and Proof Planning

    Get PDF
    The paper addresses proof planning as a specific kind of AI planning. It describes some peculiarities of proof planning and discusses possible cross-fertilization between planning and proof planning. 1 Introduction Planning is an established area of Artificial Intelligence (AI), whereas proof planning, introduced by Bundy in [2], is still in its infancy. This means that the development of proof planning needs maturing impulses, and the natural questions arise: 'What can proof planning learn from its Big Brother planning?' and 'What are the specific characteristics of the proof planning domain that determine the answer?'. In turn, for planning, the analysis of existing approaches points to a need for mature techniques for practical planning. Drummond [8], e.g., analyzed approaches with the conclusion that the success of Nonlin, SIPE, and O-Plan in practical planning can be attributed to hierarchical action expansion, the explicit representation of a plan's causal structure, and a very simple form of propo..

    An Overview of Backtrack Search Satisfiability Algorithms

    No full text
    Propositional Satisfiability (SAT) is often used as the underlying model for a significan
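
    For context, the sketch below shows a basic backtrack (DPLL-style) satisfiability search of the kind such overviews cover: clauses are simplified against the current partial assignment, an empty clause triggers backtracking, and otherwise the search branches on a literal. The integer-literal CNF encoding and the naive branching rule are assumptions made for this illustration.

# Minimal DPLL-style backtrack search over CNF formulas given as lists of
# integer literals (a positive integer is a variable, a negative its negation).

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None  # empty clause: conflict, backtrack
        simplified.append(reduced)
    if not simplified:
        return assignment  # all clauses satisfied
    lit = simplified[0][0]  # branch on a literal from the first open clause
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))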

    Premise Selection and External Provers for HOL4

    Full text link
    Learning-assisted automated reasoning has recently gained popularity among the users of Isabelle/HOL, HOL Light, and Mizar. In this paper, we present an add-on to the HOL4 proof assistant and an adaptation of the HOLyHammer system that provides machine learning-based premise selection and automated reasoning also for HOL4. We efficiently record the HOL4 dependencies and extract features from the theorem statements, which form a basis for premise selection. HOLyHammer translates the HOL4 statements into the various TPTP ATP proof formats, which are then processed by the ATPs. We discuss the different evaluation settings: ATPs, accessible lemmas, and premise numbers. We measure the performance of HOLyHammer on the HOL4 standard library. The results are combined accordingly and compared with the HOL Light experiments, showing a comparably high quality of predictions. The system directly benefits HOL4 users by automatically finding proof dependencies that can be reconstructed by Metis.
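
    A minimal sketch of feature-based premise selection as described above: theorem statements are mapped to symbol features, and for a new goal the most similar previously proved lemmas are selected as premises to hand to an external ATP. The toy lemma names, the feature map, and the Jaccard similarity used here are illustrative assumptions rather than HOLyHammer's actual features or learners.

# Sketch of premise selection over a small library of lemma statements.
# The feature extraction and similarity measure are assumptions for this example.

def features(statement: str) -> set:
    """Crude feature extraction: the set of identifiers in the statement."""
    return set(statement.replace("(", " ").replace(")", " ").split())

def select_premises(goal: str, lemmas: dict, k: int = 2) -> list:
    """Rank accessible lemmas by Jaccard similarity to the goal and keep the top k."""
    g = features(goal)
    def score(name):
        f = features(lemmas[name])
        return len(g & f) / len(g | f)
    return sorted(lemmas, key=score, reverse=True)[:k]

# Toy library of previously proved lemmas (names and statements are made up).
library = {
    "ADD_COMM": "add m n = add n m",
    "ADD_ASSOC": "add (add m n) p = add m (add n p)",
    "LENGTH_APPEND": "length (append l1 l2) = add (length l1) (length l2)",
}

# The selected premises would then be translated to a TPTP problem for an ATP.
print(select_premises("add (length l) n = add n (length l)", library))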