43 research outputs found

    Decidability of the Monadic Shallow Linear First-Order Fragment with Straight Dismatching Constraints

    Get PDF
    The monadic shallow linear Horn fragment is well known to be decidable and has many applications, e.g., in security protocol analysis, tree automata, or abstraction refinement. It was a long-standing open problem how to extend the fragment to the non-Horn case while preserving decidability, which would, e.g., make it possible to express non-determinism in protocols. We prove decidability of the non-Horn monadic shallow linear fragment via ordered resolution further extended with dismatching constraints, and discuss some applications of the new decidable fragment. Comment: 29 pages, long version of CADE-26 paper
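
    As a rough illustration (not taken from the paper, and assuming the standard definitions of the fragment), such clauses are monadic because every predicate is unary, and shallow and linear because the argument of each positive literal is either a variable or a function symbol applied to pairwise distinct variables:

    $S(f(x, y)) \lor \lnot Q(x) \lor \lnot R(y)$   (Horn)
    $S(x) \lor T(x) \lor \lnot Q(x)$               (non-Horn, e.g., a non-deterministic protocol step)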

    gym-saturation: Gymnasium environments for saturation provers (System description)

    Full text link
    This work describes a new version of a previously published Python package, gym-saturation: a collection of OpenAI Gym environments for guiding saturation-style provers based on the given clause algorithm with reinforcement learning. We contribute usage examples with two different provers: Vampire and iProver. We have also decoupled the proof state representation from reinforcement learning per se and provide examples of using the known ast2vec Python code embedding model as a first-order logic representation. In addition, we demonstrate how environment wrappers can transform a prover into a problem similar to a multi-armed bandit. We applied two reinforcement learning algorithms (Thompson sampling and Proximal Policy Optimisation) implemented in Ray RLlib to show the ease of experimentation with the new release of our package. Comment: 13 pages, 3 figures. This version of the contribution has been accepted for publication, after peer review, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-031-43513-3_1
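
    As a hedged sketch of how such environments are typically driven, the loop below follows the standard Gymnasium protocol; the environment id "Vampire-v0" and the action semantics (an index selecting the next given clause) are assumptions made for illustration, not the package's documented API.

    # Minimal sketch of a Gymnasium-style given-clause loop.
    # NOTE: the environment id and the action semantics are assumed for
    # illustration; consult the gym-saturation documentation for the real ids.
    import gymnasium as gym

    env = gym.make("Vampire-v0")            # hypothetical environment id
    observation, info = env.reset()          # observation describes the current proof state
    terminated = truncated = False
    total_reward = 0.0
    while not (terminated or truncated):
        action = env.action_space.sample()   # pick a given clause (here: at random)
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward               # reward signals proof progress/success
    env.close()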

    Learning Instantiation in First-Order Logic

    Get PDF

    Graph Sequence Learning for Premise Selection

    Full text link
    Premise selection is crucial for large-theory reasoning, as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluated different configurations of our method and achieved a 17.7% improvement over the baseline. Comment: 17 pages
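
    The combination can be pictured as an encoder-decoder: a graph encoder embeds the conjecture's formula graph, and a sequence decoder emits axiom identifiers one at a time. The following is a minimal, self-contained sketch of that idea (toy mean-aggregation message passing plus a greedy GRU decoder); it illustrates the general architecture, not the authors' pre-trained model.

    import torch
    import torch.nn as nn

    class GraphEncoder(nn.Module):
        """Toy message-passing encoder over a formula graph."""
        def __init__(self, in_dim, hid_dim, num_layers=2):
            super().__init__()
            dims = [in_dim] + [hid_dim] * num_layers
            self.layers = nn.ModuleList(
                [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)])

        def forward(self, node_feats, adj):
            # node_feats: (num_nodes, in_dim); adj: (num_nodes, num_nodes), row-normalised
            h = node_feats
            for layer in self.layers:
                h = torch.relu(layer(adj @ h))   # aggregate neighbours, then transform
            return h.mean(dim=0)                 # graph-level embedding of the conjecture

    class AxiomDecoder(nn.Module):
        """Greedy GRU decoder emitting axiom ids conditioned on the graph embedding."""
        def __init__(self, hid_dim, num_axioms):
            super().__init__()
            self.embed = nn.Embedding(num_axioms, hid_dim)
            self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, num_axioms)

        def forward(self, graph_emb, max_len=8, bos_id=0):
            hidden = graph_emb.view(1, 1, -1)    # conjecture embedding as initial state
            token = torch.tensor([[bos_id]])
            axioms = []
            for _ in range(max_len):
                step, hidden = self.gru(self.embed(token), hidden)
                token = self.out(step[:, -1]).argmax(dim=-1, keepdim=True)
                axioms.append(token.item())
            return axioms

    # Toy usage: 5 graph nodes with 8-dimensional features, a vocabulary of 100 axiom names.
    encoder = GraphEncoder(in_dim=8, hid_dim=32)
    decoder = AxiomDecoder(hid_dim=32, num_axioms=100)
    node_feats = torch.randn(5, 8)
    adj = torch.full((5, 5), 0.2)                # fully connected, row-normalised
    print(decoder(encoder(node_feats, adj)))     # predicted sequence of axiom ids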

    Defining the meaning of TPTP formatted proofs

    Get PDF
    The TPTP library is one of the leading problem libraries in the automated theorem proving community. Over time, support was added for problems beyond those in first-order clausal form. TPTP has also been augmented with support for various proof formats output by theorem provers. Such proofs can also be maintained in the TSTP proof library. In this paper we propose an extension of this framework to support the semantic specification of the inference rules used in proofs.
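
    For concreteness, a generic TSTP-style derivation step (not taken from the paper) records a derived formula together with the inference rule and parent clauses that produced it; it is the meaning of annotations such as the inference(...) term below that the proposed extension aims to pin down:

    cnf(c_5, plain,
        ( p(X) | r(X) ),
        inference(resolution, [status(thm)], [c_2, c_3])).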