118 research outputs found

    Inferring, splicing, and the Stoic analysis of argument


    The foundation of a generic theorem prover

    Isabelle is an interactive theorem prover that supports a variety of logics. It represents rules as propositions (not as functions) and builds proofs by combining rules. These operations constitute a meta-logic (or 'logical framework') in which the object-logics are formalized. Isabelle is now based on higher-order logic, a precise and well-understood foundation. Examples illustrate the use of this meta-logic to formalize logics and proofs. Axioms for first-order logic are shown sound and complete. Backwards proof is formalized by meta-reasoning about object-level entailment. Higher-order logic has several practical advantages over other meta-logics. Many proof techniques are known, such as Huet's higher-order unification procedure
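
    To make the idea of "rules as propositions combined by resolution" concrete, here is a small, hypothetical Haskell sketch, not Isabelle code: it uses plain first-order terms and naive unification, whereas Isabelle represents syntax with typed lambda-terms and uses higher-order unification. The Term, Rule and solve names are inventions for this illustration; a rule is a list of premises paired with a conclusion, and backwards proof resolves a goal against a rule's conclusion and replaces it by the premises.

        import qualified Data.Map as M

        -- Object-level syntax as plain first-order terms (a simplification;
        -- Isabelle itself uses typed lambda-terms).
        data Term = Var String | App String [Term]
          deriving (Eq, Show)

        -- A rule is a meta-level proposition "premises imply conclusion".
        data Rule = Rule [Term] Term

        type Subst = M.Map String Term

        -- Apply a substitution, chasing bindings.
        apply :: Subst -> Term -> Term
        apply s (Var x)    = maybe (Var x) (apply s) (M.lookup x s)
        apply s (App f ts) = App f (map (apply s) ts)

        -- Naive first-order unification (no occurs check, for brevity).
        unify :: Term -> Term -> Subst -> Maybe Subst
        unify t u s = go (apply s t) (apply s u) s
          where
            go (Var x) v sub | Var x == v = Just sub
                             | otherwise  = Just (M.insert x v sub)
            go v (Var x) sub = go (Var x) v sub
            go (App f ts) (App g us) sub
              | f == g && length ts == length us =
                  foldr (\(a, b) acc -> acc >>= unify a b) (Just sub) (zip ts us)
              | otherwise = Nothing

        -- Backwards proof: resolve the first goal against a rule's conclusion
        -- and replace it by that rule's premises.
        solve :: [Rule] -> [Term] -> Subst -> [Subst]
        solve _     []       s = [s]
        solve rules (g : gs) s =
          [ s'' | Rule ps c <- rules
                , Just s'  <- [unify g c s]
                , s''      <- solve rules (ps ++ gs) s' ]

        -- A toy object logic: conjunction introduction plus two axioms.
        -- (No renaming of rule variables, so each rule with variables should be
        -- used at most once per branch, as it is here.)
        toyRules :: [Rule]
        toyRules = [ Rule [Var "P", Var "Q"] (App "and" [Var "P", Var "Q"])
                   , Rule [] (App "a" [])
                   , Rule [] (App "b" []) ]

        main :: IO ()
        main = print (solve toyRules [App "and" [App "a" [], App "b" []]] M.empty)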

    Natural deduction as higher-order resolution

    An interactive theorem prover, Isabelle, is under development. In LCF, each inference rule is represented by one function for forwards proof and another (a tactic) for backwards proof. In Isabelle, each inference rule is represented by a Horn clause. Resolution gives both forwards and backwards proof, supporting a large class of logics. Isabelle has been used to prove theorems in Martin-Löf's Constructive Type Theory. Quantifiers pose several difficulties: substitution, bound variables, Skolemization. Isabelle's representation of logical syntax is the typed lambda-calculus, requiring higher-order unification. It may have potential for logic programming. Depth-first subgoaling along inference rules constitutes a higher-order Prolog
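
    The quantifier difficulties listed above (substitution, bound variables) are exactly what the typed lambda-calculus representation of syntax addresses. The following hypothetical Haskell sketch, not taken from the paper, shows the same idea in the style of higher-order abstract syntax: the universal quantifier stores a meta-level function, so instantiating it is just application, and no capture-avoiding substitution has to be written by hand. All names below are invented for the illustration.

        import Data.List (intercalate)

        -- Object-level terms and formulas; the universal quantifier stores a
        -- Haskell function, so the meta-language's lambda does the binding.
        data Term = Var String | Fun String [Term]

        data Form
          = Atom String [Term]
          | Imp Form Form
          | Forall (Term -> Form)

        -- Instantiating a quantifier is just application.
        instantiate :: Form -> Term -> Form
        instantiate (Forall body) t = body t
        instantiate f             _ = f

        -- Printing generates fresh names for bound variables on the way down.
        prettyT :: Term -> String
        prettyT (Var x)    = x
        prettyT (Fun f []) = f
        prettyT (Fun f ts) = f ++ "(" ++ intercalate "," (map prettyT ts) ++ ")"

        pretty :: Int -> Form -> String
        pretty _ (Atom p ts)   = p ++ "(" ++ intercalate "," (map prettyT ts) ++ ")"
        pretty n (Imp a b)     = "(" ++ pretty n a ++ " -> " ++ pretty n b ++ ")"
        pretty n (Forall body) =
          let x = "x" ++ show n
          in "forall " ++ x ++ ". " ++ pretty (n + 1) (body (Var x))

        -- forall x. p(x) -> p(f(x)), and its instance at the constant a.
        example :: Form
        example = Forall (\x -> Imp (Atom "p" [x]) (Atom "p" [Fun "f" [x]]))

        main :: IO ()
        main = do
          putStrLn (pretty 0 example)
          putStrLn (pretty 0 (instantiate example (Fun "a" [])))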

    The Grail theorem prover: Type theory for syntax and semantics

    As the name suggests, type-logical grammars are a grammar formalism based on logic and type theory. From the perspective of grammar design, type-logical grammars develop the syntactic and semantic aspects of linguistic phenomena hand-in-hand, letting the desired semantics of an expression inform the syntactic type and vice versa. Prototypical examples of the successful application of type-logical grammars to the syntax-semantics interface include coordination, quantifier scope and extraction. This chapter describes the Grail theorem prover, a series of tools for designing and testing grammars in various modern type-logical frameworks. All tools described in this chapter are freely available
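
    As a rough illustration of how syntactic types drive combination in this setting, here is a toy Haskell encoding, with invented names Cat and combine, of Lambek-style categories with only forward and backward application; Grail itself performs full proof search for considerably richer type-logical systems.

        -- Syntactic categories: atoms plus the two Lambek slashes.
        data Cat = S | NP | N
                 | Cat :\ Cat   -- x :\ y consumes an x to its left and yields y
                 | Cat :/ Cat   -- y :/ x consumes an x to its right and yields y
          deriving (Eq, Show)

        -- Backward application (x, x\y => y) and forward application (y/x, x => y).
        combine :: Cat -> Cat -> Maybe Cat
        combine x (x' :\ y) | x == x' = Just y
        combine (y :/ x) x' | x == x' = Just y
        combine _ _ = Nothing

        main :: IO ()
        main = do
          let det  = NP :/ N    -- "the"
              noun = N          -- "cat"
              verb = NP :\ S    -- "sleeps"
          -- "the cat" combines to an NP, which the verb then consumes to yield S.
          print (combine det noun >>= \np -> combine np verb)   -- Just S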

    Analytic Tableaux for Simple Type Theory and its First-Order Fragment

    We study simple type theory with primitive equality (STT) and its first-order fragment EFO, which restricts equality and quantification to base types but retains lambda abstraction and higher-order variables. As the deductive system, we employ a cut-free tableau calculus. We consider completeness, compactness, and existence of countable models. We prove these properties for STT with respect to Henkin models and for EFO with respect to standard models. We also show that the tableau system yields a decision procedure for three EFO fragments
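
    For orientation, the following toy Haskell sketch shows the tableau mechanism in its simplest, purely propositional form, which is far weaker than the STT calculus studied here (no lambda abstraction, higher-order variables, or primitive equality): a branch is expanded until it either closes, by containing a formula and its negation, or saturates, and a formula is valid when every branch for its negation closes. All names below are invented for the illustration.

        -- Propositional formulas only (the paper's calculus is far richer).
        data Form = Atom String | Neg Form | And Form Form | Or Form Form
          deriving (Eq, Show)

        -- A branch closes if it contains some formula together with its negation.
        closed :: [Form] -> Bool
        closed fs = any (\f -> Neg f `elem` fs) fs

        -- Expand one non-literal formula of the branch; return the resulting
        -- branches (two for disjunctive rules, one otherwise).
        expand :: [Form] -> [[Form]]
        expand branch = go [] branch
          where
            go _    []                     = [branch]            -- nothing to expand
            go seen (And a b : rest)       = [a : b : seen ++ rest]
            go seen (Or a b : rest)        = [a : seen ++ rest, b : seen ++ rest]
            go seen (Neg (Neg a) : rest)   = [a : seen ++ rest]
            go seen (Neg (And a b) : rest) = [Neg a : seen ++ rest, Neg b : seen ++ rest]
            go seen (Neg (Or a b) : rest)  = [Neg a : Neg b : seen ++ rest]
            go seen (f : rest)             = go (f : seen) rest  -- literal: skip it

        -- A branch is unsatisfiable iff every descendant branch closes.
        unsat :: [Form] -> Bool
        unsat branch
          | closed branch = True
          | otherwise     = case expand branch of
              [b] | b == branch -> False            -- saturated open branch
              bs                -> all unsat bs

        -- Validity via refutation of the negation.
        valid :: Form -> Bool
        valid f = unsat [Neg f]

        main :: IO ()
        main = print (valid (Or (Atom "p") (Neg (Atom "p"))), valid (Atom "p"))
        -- prints (True,False)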

    Monadic translation of classical sequent calculus

    We study monadic translations of the call-by-name (cbn) and call-by-value (cbv) fragments of the classical sequent calculus λ̄μμ̃ due to Curien and Herbelin, and give modular and syntactic proofs of strong normalisation. The target of the translations is a new meta-language for classical logic, named monadic λμ. This language is a monadic reworking of Parigot's λμ-calculus, where the monadic binding is confined to commands, thus integrating the monad with the classical features. Also, its μ-reduction rule is replaced by a rule expressing the interaction between monadic binding and μ-abstraction. Our monadic translations produce very tight simulations of the respective fragments of λ̄μμ̃ within monadic λμ, with reduction steps of λ̄μμ̃ being translated in a 1–1 fashion, except for β steps, which require two steps. The monad of monadic λμ can be instantiated to the continuations monad so as to ensure strict simulation of monadic λμ within simply typed λ-calculus with β- and η-reduction. Through strict simulation, the strong normalisation of simply typed λ-calculus is inherited by monadic λμ, and then by cbn and cbv λ̄μμ̃, thus reproving strong normalisation in an elementary syntactical way for these fragments of λ̄μμ̃, and establishing it for our new calculus. These results extend to second-order logic, with polymorphic λ-calculus as the target, giving new strong normalisation results for classical second-order logic in sequent calculus style. CPS translations of cbn and cbv λ̄μμ̃ with the strict simulation property are obtained by composing our monadic translations with the continuations-monad instantiation. In an appendix to the paper, we investigate several refinements of the continuations-monad instantiation in order to obtain in a modular way improvements of the CPS translations enjoying extra properties like simulation by cbv β-reduction or reduction of administrative redexes at compile time
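
    One well-known face of the continuations-monad instantiation mentioned above can be shown directly in Haskell: callCC from Control.Monad.Cont inhabits the continuation-monad reading of Peirce's law, a standard way of seeing how continuations give computational content to a classical principle. The sketch below is only that general observation, not the paper's construction; peirce and safeDiv are names invented here.

        import Control.Monad      (when)
        import Control.Monad.Cont (Cont, callCC, runCont)

        -- Peirce's law ((a -> b) -> a) -> a, read inside the continuation monad,
        -- is inhabited by callCC.
        peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
        peirce = callCC

        -- Typical use of the captured continuation: escape early with a default
        -- value when the divisor is zero.
        safeDiv :: Int -> Int -> Int
        safeDiv x y = runCont body id
          where
            body = peirce escapeOnZero
            escapeOnZero escape = do
              when (y == 0) (escape 0)
              return (x `div` y)

        main :: IO ()
        main = print (safeDiv 7 2, safeDiv 7 0)   -- prints (3,0)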

    Decidability for Non-Standard Conversions in Typed Lambda-Calculi

    This thesis studies the decidability of conversions in typed lambda-calculi, along with the algorithms that establish this decidability. Our study considers conversions going beyond the traditional beta, eta, or permutative conversions (also called commutative conversions). Two classes of algorithms compete to decide these conversions: algorithms based on rewriting, where the goal is to decompose and orient the conversion so as to obtain a convergent system, so that deciding the conversion boils down to rewriting terms until they reach an irreducible form; and "reduction-free" algorithms, where the conversion is decided recursively by a detour through a meta-language. Throughout this thesis, we strive to explain the latter in terms of the former
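
    A minimal example of the "reduction-free" style is normalisation by evaluation, sketched below in Haskell for plain beta-conversion on closed terms only, which is far less than the conversions treated in the thesis: terms are evaluated into a meta-level semantic domain and read back, and convertibility is decided by comparing the results rather than by rewriting. The Tm, Val, eval and quote names are invented for this sketch.

        -- Untyped lambda-terms with de Bruijn indices.
        data Tm = Var Int | Lam Tm | App Tm Tm
          deriving (Eq, Show)

        -- Semantic domain: meta-language functions, plus neutral terms
        -- (a variable applied to arguments) indexed by de Bruijn levels.
        data Val = VLam (Val -> Val) | VNe Ne
        data Ne  = NVar Int | NApp Ne Val

        -- Evaluate a term that is closed with respect to the environment.
        eval :: [Val] -> Tm -> Val
        eval env (Var i)   = env !! i
        eval env (Lam b)   = VLam (\v -> eval (v : env) b)
        eval env (App f a) =
          case eval env f of
            VLam g -> g (eval env a)
            VNe n  -> VNe (NApp n (eval env a))

        -- Read a value back into a beta-normal term; the Int counts binders
        -- passed so far, converting levels back into indices.
        quote :: Int -> Val -> Tm
        quote d (VLam f) = Lam (quote (d + 1) (f (VNe (NVar d))))
        quote d (VNe n)  = quoteNe d n

        quoteNe :: Int -> Ne -> Tm
        quoteNe d (NVar lvl) = Var (d - lvl - 1)
        quoteNe d (NApp n v) = App (quoteNe d n) (quote d v)

        -- Two closed terms are beta-convertible iff their normal forms coincide.
        conv :: Tm -> Tm -> Bool
        conv t u = quote 0 (eval [] t) == quote 0 (eval [] u)

        main :: IO ()
        main = print (conv (App (Lam (Var 0)) (Lam (Var 0))) (Lam (Var 0)))   -- True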

    Specifying Theorem Provers in a Higher-Order Logic Programming Language

    Since logic programming systems directly implement search and unification and since these operations are essential for the implementation of most theorem provers, logic programming languages should make ideal implementation languages for theorem provers. We shall argue that this is indeed the case if the logic programming language is extended in several ways. We present an extended logic programming language where first-order terms are replaced with simply-typed λ-terms, higher-order unification replaces first-order unification, and implication and universal quantification are allowed in queries and the bodies of clauses. This language naturally specifies inference rules for various proof systems. The primitive search operations required to search for proofs generally have very simple implementations using the logical connectives of this extended logic programming language. Higher-order unification, which provides sophisticated pattern matching on formulas and proofs, can be used to determine when and at what instance an inference rule can be employed in the search for a proof. Tactics and tacticals, which provide a framework for high-level control over search, can also be directly implemented in this extended language. The theorem provers presented in this paper have been implemented in the higher-order logic programming language λProlog
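
    The tactics and tacticals mentioned in this abstract can be sketched outside λProlog as well. The hypothetical Haskell fragment below, whose Goal, Tactic and rule names are invented for the illustration, treats a tactic as a function from a goal to an optional list of remaining subgoals and builds tacticals for alternation, sequencing and repetition; an empty subgoal list means the goal is proved.

        -- A tiny formula language and goals "hypotheses |- conclusion".
        data Form = Atom String | Imp Form Form | Conj Form Form
          deriving (Eq, Show)

        data Goal = [Form] :|- Form
          deriving Show

        -- A tactic either fails or reduces a goal to remaining subgoals.
        type Tactic = Goal -> Maybe [Goal]

        -- Primitive tactics mirroring introduction rules and the assumption rule.
        impIntro, conjIntro, assumption :: Tactic
        impIntro   (hs :|- Imp a b)  = Just [(a : hs) :|- b]
        impIntro   _                 = Nothing
        conjIntro  (hs :|- Conj a b) = Just [hs :|- a, hs :|- b]
        conjIntro  _                 = Nothing
        assumption (hs :|- a)
          | a `elem` hs              = Just []
          | otherwise                = Nothing

        -- Tacticals: alternation, sequencing, and exhaustive repetition.
        orElse :: Tactic -> Tactic -> Tactic
        orElse t1 t2 g = maybe (t2 g) Just (t1 g)

        thenAll :: Tactic -> Tactic -> Tactic
        thenAll t1 t2 g = do
          gs <- t1 g
          concat <$> mapM t2 gs

        repeatT :: Tactic -> Tactic
        repeatT t g = case t g of
          Nothing -> Just [g]                       -- no rule applies: keep the goal
          Just gs -> concat <$> mapM (repeatT t) gs

        -- Prove  p -> (q -> (p /\ q));  the result Just [] means no subgoals left.
        main :: IO ()
        main = print (repeatT tac ([] :|- Imp p (Imp q (Conj p q))))
          where
            tac = assumption `orElse` impIntro `orElse` conjIntro
            p   = Atom "p"
            q   = Atom "q"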