
    A Vernacular for Coherent Logic

    We propose a simple, yet expressive proof representation from which proofs for different proof assistants can easily be generated. The representation uses only a few inference rules and is based on a fragment of first-order logic called coherent logic. Coherent logic has been recognized by a number of researchers as a suitable logic for many everyday mathematical developments. The proposed proof representation is accompanied by a corresponding XML format and by a suite of XSL transformations for generating formal proofs for Isabelle/Isar and Coq, as well as proofs expressed in a natural language form (formatted in LaTeX or in HTML). Also, our automated theorem prover for coherent logic exports proofs in the proposed XML format. All tools are publicly available, along with a set of sample theorems. (Comment: CICM 2014 - Conferences on Intelligent Computer Mathematics, 2014.)
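
    For context, the coherent fragment the representation targets has a simple standard shape; the following LaTeX sketch states the general form of a coherent formula (a known property of the logic, not something taken from the paper's XML format):

        % A coherent formula: a universally closed implication whose
        % antecedent is a conjunction of atoms and whose consequent is a
        % disjunction of existentially quantified conjunctions of atoms.
        \forall \vec{x}\, \bigl( A_1(\vec{x}) \land \dots \land A_n(\vec{x})
          \;\rightarrow\;
          \exists \vec{y}_1\, B_1(\vec{x},\vec{y}_1) \lor \dots \lor
          \exists \vec{y}_m\, B_m(\vec{x},\vec{y}_m) \bigr)

    Here each B_j is a conjunction of atoms; the small number of inference rules mentioned in the abstract works directly on formulas of this shape.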

    Compensation methods to support cooperative applications: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work and workflow systems. A schema designer is typically required to supply, for each transaction, another transaction that semantically undoes its effects. Little attention has been paid to the verification of the desirable properties of such operations, however. This paper demonstrates the use of a higher-order logic theorem prover for verifying that compensating transactions return a database to its original state. It is shown how an OODB schema is translated to the language of the theorem prover so that proofs can be performed on the compensating transactions.
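
    As an illustration of the property being verified (not the paper's HOL encoding, which works over a translated OODB schema), a minimal Python sketch of the "compensation restores the original state" check might look like this; the names State, deposit and withdraw are invented for the example:

        # Sketch: a transaction, its compensator, and a check that running
        # one after the other restores the database state.
        from dataclasses import dataclass, replace

        @dataclass(frozen=True)
        class State:
            balance: int

        def deposit(s: State, amount: int) -> State:
            return replace(s, balance=s.balance + amount)

        def withdraw(s: State, amount: int) -> State:
            return replace(s, balance=s.balance - amount)

        def compensates(t, t_inv, s: State) -> bool:
            # The property proved in the paper (there, in higher-order
            # logic): applying t and then its compensator yields the
            # initial state.
            return t_inv(t(s)) == s

        s0 = State(balance=100)
        assert compensates(lambda s: deposit(s, 10),
                           lambda s: withdraw(s, 10), s0)

    The paper's contribution is proving this kind of equation formally for all states, rather than testing it on sample states as the sketch does.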

    ConSUS: A light-weight program conditioner

    Get PDF
    Program conditioning consists of identifying and removing a set of statements which cannot be executed when a condition of interest holds at some point in a program. It has been applied to problems in maintenance, testing, re-use and re-engineering. All current approaches to program conditioning rely upon both symbolic execution and reasoning about symbolic predicates. The reasoning can be performed by a ‘heavy duty’ theorem prover, but this may impose unrealistic performance constraints. This paper reports on a lightweight approach to theorem proving using the FermaT Simplify decision procedure, which is used as a component of ConSUS, a program conditioning system for the Wide Spectrum Language WSL. The paper describes the symbolic execution algorithm used by ConSUS, which prunes as it conditions. The paper also provides empirical evidence that conditioning produces a significant reduction in program size and that, although exponential in the worst case, the conditioning system has low-degree polynomial behaviour in many cases, making it scalable to unit-level applications of program conditioning.
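
    To make the pruning idea concrete, here is a hedged Python sketch (not ConSUS itself, which works on WSL and calls the FermaT Simplify decision procedure): statements guarded by a predicate that contradicts the condition of interest can never execute and are removed. The toy decision procedure only understands variable-equality facts:

        # Illustrative conditioning of a tiny program representation.
        def contradicts(guard, condition):
            # Toy decision procedure: guards and conditions are
            # ("var", value) equality facts; they contradict iff they
            # bind the same variable to different values.
            return guard[0] == condition[0] and guard[1] != condition[1]

        def condition_program(stmts, condition):
            kept = []
            for stmt in stmts:
                if stmt[0] == "if":
                    _, guard, body = stmt
                    if contradicts(guard, condition):
                        continue  # branch can never execute: prune it
                    kept.append(("if", guard,
                                 condition_program(body, condition)))
                else:
                    kept.append(stmt)
            return kept

        prog = [("assign", "y", 1),
                ("if", ("x", 0), [("assign", "y", 2)]),  # dead when x == 1
                ("if", ("x", 1), [("assign", "y", 3)])]
        print(condition_program(prog, ("x", 1)))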

    The use of data-mining for the automatic formation of tactics

    Get PDF
    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
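
    As a hedged illustration of the mining step (the paper's actual pipeline and proof data are not shown here), one simple way to surface commonly occurring patterns is to count frequent n-grams of tactic names across proofs; the proofs below are invented:

        # Sketch: frequent-pattern mining over proof scripts, treating
        # each proof as a sequence of tactic names.
        from collections import Counter

        proofs = [
            ["intro", "rewrite", "simp", "auto"],
            ["intro", "rewrite", "simp", "blast"],
            ["case_split", "intro", "rewrite", "simp"],
        ]

        def frequent_ngrams(proofs, n=3, min_count=2):
            counts = Counter()
            for proof in proofs:
                for i in range(len(proof) - n + 1):
                    counts[tuple(proof[i:i + n])] += 1
            return [(p, c) for p, c in counts.most_common()
                    if c >= min_count]

        # Each surviving pattern is a candidate seed for a composite
        # tactic, which the paper then evolves with genetic programming.
        print(frequent_ngrams(proofs))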

    Learning-Assisted Automated Reasoning with Flyspeck

    Full text link
    The considerable mathematical knowledge encoded by the Flyspeck project is combined with external automated theorem provers (ATPs) and machine-learning premise selection methods trained on the proofs, producing an AI system capable of answering a wide range of mathematical queries automatically. The performance of this architecture is evaluated in a bootstrapping scenario emulating the development of Flyspeck from axioms to the last theorem, each time using only the previous theorems and proofs. It is shown that 39% of the 14185 theorems could be proved in a push-button mode (without any high-level advice and user interaction) in 30 seconds of real time on a fourteen-CPU workstation. The necessary work involves: (i) an implementation of sound translations of the HOL Light logic to ATP formalisms: untyped first-order, polymorphic typed first-order, and typed higher-order, (ii) export of the dependency information from HOL Light and ATP proofs for the machine learners, and (iii) choice of suitable representations and methods for learning from previous proofs, and their integration as advisors with HOL Light. This work is described and discussed here, and an initial analysis of the body of proofs that were found fully automatically is provided.
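
    As a hedged sketch of what premise selection can look like (the Flyspeck system uses its own learners and feature extraction over HOL Light terms; the features and the tiny corpus below are invented), a k-nearest-neighbour selector over symbol-occurrence features might be:

        # Sketch of k-NN premise selection: rank previously proved facts
        # by the similarity of their symbol features to the conjecture's
        # features, and suggest the premises used by the nearest proofs.
        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        # (theorem name, symbol features, premises used in its proof)
        corpus = [
            ("ADD_COMM", {"+", "num"},  ["ADD_SUC", "ADD_0"]),
            ("MUL_COMM", {"*", "num"},  ["MUL_SUC", "ADD_COMM"]),
            ("REAL_ADD", {"+", "real"}, ["REAL_ADD_SYM"]),
        ]

        def select_premises(conjecture_features, k=2, limit=3):
            ranked = sorted(corpus,
                            key=lambda t: jaccard(conjecture_features,
                                                  t[1]),
                            reverse=True)
            premises = []
            for _, _, used in ranked[:k]:
                for p in used:
                    if p not in premises:
                        premises.append(p)
            return premises[:limit]

        # Facts handed to the ATP alongside the conjecture.
        print(select_premises({"+", "num"}))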

    Asynchronous processing of Coq documents: from the kernel up to the user interface

    Get PDF
    The work described in this paper improves the reactivity of the Coq system by completely redesigning the way it processes a formal document. By subdividing this work into independent tasks, the system can give precedence to the ones of immediate interest to the user and postpone the others. On the user side, a modern interface based on the PIDE middleware aggregates and presents the output of the prover in a consistent way. Finally, postponed tasks are processed by exploiting modern parallel hardware to offer better scalability. (Comment: in Proceedings of ITP, Aug 2015, Nanjing, China.)
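
    A hedged sketch of the scheduling idea (not Coq's actual implementation, which splits the document into proof tasks inside its state transaction machine): process independent spans of a document as parallel tasks, starting with the one nearest the user's cursor. The spans and timings are invented:

        # Sketch: out-of-order document processing with prioritisation.
        from concurrent.futures import ProcessPoolExecutor
        import time

        def check(span):
            time.sleep(0.1)  # stand-in for actually checking a proof
            return f"checked {span}"

        def process_document(spans, cursor_index):
            # Give precedence to the span of immediate interest;
            # postpone the others to spare cores.
            order = sorted(range(len(spans)),
                           key=lambda i: abs(i - cursor_index))
            with ProcessPoolExecutor() as pool:
                futures = {i: pool.submit(check, spans[i]) for i in order}
                return [futures[i].result() for i in range(len(spans))]

        if __name__ == "__main__":
            doc = ["Lemma a ...", "Lemma b ...",
                   "Lemma c ...", "Lemma d ..."]
            print(process_document(doc, cursor_index=2))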

    Purposive discovery of operations

    Get PDF
    The Generate, Prune & Prove (GPP) methodology for discovering definitions of mathematical operators is introduced. GPP is a task within the IL exploration discovery system. We developed GPP for use in the discovery of mathematical operators with a wider class of representations than was possible with the previous methods of Lenat and of Shen. GPP utilizes the purpose for which an operator is created to prune the possible definitions. The relevant search spaces are immense, and there is insufficient information for a complete evaluation of the purpose constraint, so it is necessary to perform a partial evaluation of this pruning constraint. The constraint is first transformed so that it is operational with respect to the partial information, and then it is applied to examples in order to test the generated candidates for an operator's definition. In the GPP process, once a candidate definition survives this empirical prune, it is passed to a theorem prover for formal verification. We describe the application of this methodology to the (re)discovery of the definition of multiplication for Conway numbers, a discovery which is difficult for human mathematicians. We successfully model this discovery process using information that was reasonably available at the time of Conway's original discovery. As part of this discovery process, we reduce the size of the search space from a computationally intractable size to 3468 elements.
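
    A hedged Python sketch of the overall Generate, Prune & Prove loop (the generator, purpose test and prover below are stand-ins; the paper's system works over Conway numbers, not integers):

        # Sketch of GPP: generate candidate definitions, prune them
        # empirically against the purpose the operator must serve, then
        # hand survivors to a prover. All components are illustrative.
        def generate_candidates():
            # Stand-in generator: candidate binary operators on integers.
            return [lambda a, b: a + b,
                    lambda a, b: a * b,
                    lambda a, b: a - b]

        def satisfies_purpose(op, examples):
            # Partial (empirical) evaluation of the purpose constraint:
            # here, "the operator must distribute over addition",
            # tested on finitely many examples rather than proved.
            return all(op(a, b + c) == op(a, b) + op(a, c)
                       for a, b, c in examples)

        def prove(op):
            # Stand-in for formal verification by a theorem prover.
            return True

        examples = [(2, 3, 4), (5, 1, 7), (0, 6, 2)]
        survivors = [op for op in generate_candidates()
                     if satisfies_purpose(op, examples)]
        verified = [op for op in survivors if prove(op)]
        print(f"{len(verified)} candidate definition(s) survive GPP")

    On this toy purpose constraint only the multiplication candidate survives the empirical prune, loosely mirroring the paper's rediscovery of Conway multiplication.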