
    Linear Logic and Strong Normalization

    Strong normalization for linear logic requires elaborate rewriting techniques. In this paper we give a new presentation of MELL proof nets, without any commutative cut-elimination rule. We show how this feature induces a compact and simple proof of strong normalization, via reducibility candidates. It is the first proof of strong normalization for MELL which does not rely on any form of confluence, and so it smoothly scales up to full linear logic. Moreover, it is an axiomatic proof: more generally, it holds for every set of rewriting rules satisfying three very natural requirements with respect to substitution, namely commutation with promotion, full composition, and Kesner's IE property. The insight indeed comes from the theory of explicit substitutions, and from looking at the exponentials as a substitution device.
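
    For orientation, the reducibility-candidates method mentioned here rests on three closure conditions going back to Girard; the following is the standard textbook formulation, not this paper's own axiomatization. A set $\mathcal{R}$ of terms is a candidate when
    \[
    \begin{aligned}
    &(\mathrm{CR1})\quad t \in \mathcal{R} \implies t \text{ is strongly normalizing};\\
    &(\mathrm{CR2})\quad t \in \mathcal{R} \text{ and } t \to t' \implies t' \in \mathcal{R};\\
    &(\mathrm{CR3})\quad t \text{ neutral and } \{\,t' \mid t \to t'\,\} \subseteq \mathcal{R} \implies t \in \mathcal{R}.
    \end{aligned}
    \]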

    Strong normalization of lambda-bar-mu-mu-tilde-calculus with explicit substitutions

    The lambda-bar-mu-mu-tilde-calculus, defined by Curien and Herbelin, is a variant of the lambda-mu-calculus that exhibits symmetries such as terms/contexts and call-by-name/call-by-value. Since it is a symmetric, and hence non-deterministic, calculus, the usual normalization proof techniques need some adjustment to work in this setting. Here we prove the strong normalization (SN) of the simply typed lambda-bar-mu-mu-tilde-calculus with explicit substitutions. For that purpose, we first prove SN of the simply typed lambda-bar-mu-mu-tilde-calculus (by a variant of the reducibility technique of Barbanera and Berardi), then we formalize a proof technique of SN via PSN (preservation of strong normalization), and we prove PSN by the perpetuality technique, as formalized by Bonelli.
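
    As a reminder of where the non-determinism comes from, here is the usual presentation of Curien and Herbelin's calculus (standard material, summarized for context):
    \[
    \begin{aligned}
    \text{terms} &\quad t ::= x \mid \lambda x.\,t \mid \mu\alpha.\,c\\
    \text{contexts} &\quad e ::= \alpha \mid t \cdot e \mid \tilde{\mu}x.\,c\\
    \text{commands} &\quad c ::= \langle t \,\|\, e \rangle
    \end{aligned}
    \]
    with $\langle \mu\alpha.\,c \,\|\, e \rangle \to c[e/\alpha]$ and $\langle t \,\|\, \tilde{\mu}x.\,c \rangle \to c[t/x]$. The two rules overlap on $\langle \mu\alpha.\,c \,\|\, \tilde{\mu}x.\,c' \rangle$: giving priority to the $\mu$-rule is usually associated with call-by-value, priority to the $\tilde{\mu}$-rule with call-by-name, and keeping both makes the calculus non-confluent, which is exactly what forces the adjustments mentioned above.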

    Strong Normalization of the Typed lambda_ws-calculus

    The lambda_ws-calculus is a lambda-calculus with explicit substitutions that satisfies the desired properties of such a calculus: step-by-step simulation of beta, confluence on terms with metavariables, and preservation of strong normalization. It was conjectured that simply typed terms of lambda_ws are strongly normalizable. This was proved by Di Cosmo et al. by using a translation of lambda_ws into the proof nets of linear logic. We give here a direct and elementary proof of this result. Strong normalization is also proved for terms typable with second-order types (the extension of Girard's system F). This is a new result.
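
    To make "step-by-step simulation of beta" concrete, here is a minimal Haskell sketch of a lambda-calculus with one explicit substitution constructor, in de Bruijn style. It illustrates the general idea only, under our own naming; it is not lambda_ws itself, which additionally keeps track of weakenings.

        -- Sub t u stands for the closure t[0 := u]; like Lam, it binds
        -- index 0 of its body.
        data Term = Var Int | Lam Term | App Term Term | Sub Term Term
          deriving Show

        -- Standard de Bruijn shift: add d to every index >= cutoff c.
        shift :: Int -> Int -> Term -> Term
        shift d c (Var k)   = Var (if k >= c then k + d else k)
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App t u) = App (shift d c t) (shift d c u)
        shift d c (Sub t u) = Sub (shift d (c + 1) t) (shift d c u)

        -- One reduction step: beta only creates a closure, and the
        -- substitution is then propagated one constructor at a time.
        step :: Term -> Maybe Term
        step (App (Lam t) u)     = Just (Sub t u)                   -- beta
        step (Sub (Var 0) u)     = Just u                           -- hit
        step (Sub (Var k) _)     = Just (Var (k - 1))               -- miss
        step (Sub (App t1 t2) u) = Just (App (Sub t1 u) (Sub t2 u)) -- split
        step (Sub (Lam t) u)     = Just (Lam (Sub t (shift 1 0 u))) -- binder
        step (Sub t u)           = (`Sub` u) <$> step t             -- nested
        step (App t u)           = case step t of
                                     Just t' -> Just (App t' u)
                                     Nothing -> App t <$> step u
        step (Lam t)             = Lam <$> step t
        step (Var _)             = Nothing

        -- Iterate to normal form, e.g.
        --   normalize (App (Lam (Var 0)) (Lam (Var 0)))  ==  Lam (Var 0)
        normalize :: Term -> Term
        normalize t = maybe t normalize (step t)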

    Strong normalization of lambda-Sym-Prop- and lambda-bar-mu-mu-tilde-star-calculi

    In this paper we give an arithmetical proof of the strong normalization of lambda-Sym-Prop of Berardi and Barbanera [1], which can be considered a formulae-as-types translation of classical propositional logic in natural deduction style. Then we give a translation between the lambda-Sym-Prop-calculus and the lambda-bar-mu-mu-tilde-star-calculus, which is the implicational part of the lambda-bar-mu-mu-tilde-calculus invented by Curien and Herbelin [3], extended with negation. We adapt the method of David and Nour [4] for proving strong normalization. The novelty in our proof is the notion of zoom-in sequences of redexes, which leads directly to the proof of the main theorem.
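
    For context, the symmetric calculus of Berardi and Barbanera is built over types with an involutive negation; in standard presentations (consult [1] for the exact rules) the duality is the de Morgan one,
    \[
    (A \wedge B)^{\perp} = A^{\perp} \vee B^{\perp}, \qquad
    (A \vee B)^{\perp} = A^{\perp} \wedge B^{\perp}, \qquad
    A^{\perp\perp} = A,
    \]
    together with a symmetric application $t \star u$, well-formed when $t : A$ and $u : A^{\perp}$, whose beta step can fire on either side. That symmetry is what makes a direct arithmetical SN proof delicate.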

    Metaconfluence of Calculi with Explicit Substitutions at a Distance

    Confluence is a key property of rewriting calculi that guarantees uniqueness of normal forms when they exist. Metaconfluence is even more general, and guarantees confluence on open/meta terms, i.e. terms with holes, called metavariables, that can be filled up with other (open/meta) terms. The difficulty in dealing with open terms comes from the fact that the structure of metaterms is only partially known, so that some reduction rules become blocked by the metavariables. In this work, we establish metaconfluence for a family of calculi with explicit substitutions (ES) that enjoy preservation of strong normalization (PSN) and that act at a distance. For that, we first extend the notion of reduction on metaterms in such a way that explicit substitutions are never structurally moved, i.e. they also act at a distance on metaterms. The resulting reduction relations are still rewriting systems, i.e. they do not include equational axioms, thus providing for the first time an interesting family of lambda-calculi with explicit substitutions that enjoy both PSN and metaconfluence without requiring sophisticated notions of reduction modulo a set of equations.
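
    The shape of a rule that "acts at a distance" may help here; in calculi of this family (e.g. the linear substitution calculus of Accattoli and Kesner), beta fires through an arbitrary list $L$ of explicit substitutions instead of requiring them to be moved aside first. Schematically:
    \[
    (\lambda x.\,t)L\;u \;\to\; t[x/u]\,L,
    \qquad L = [x_1/u_1]\cdots[x_n/u_n],
    \]
    so explicit substitutions standing between an abstraction and its argument never block the step.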

    A Case Study on Logical Relations using Contextual Types

    Proofs by logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe the completeness proof of algorithmic equality for simply typed lambda-terms by Crary, where we reason about logically equivalent terms in the proof environment Beluga. There are three key aspects we rely upon: 1) we encode lambda-terms together with their operational semantics and algorithmic equality using higher-order abstract syntax; 2) we directly encode the corresponding logical equivalence of well-typed lambda-terms using recursive types and higher-order functions; 3) we exploit Beluga's support for contexts and the equational theory of simultaneous substitutions. This leads to a direct and compact mechanization, demonstrating Beluga's strength at formalizing logical relations proofs. Comment: In Proceedings LFMTP 2015, arXiv:1507.0759
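
    The higher-order abstract syntax of point 1) can be illustrated outside Beluga as well. The following Haskell toy (our own illustration, not the paper's encoding) represents the object language's binder by a host-level function, so substitution is inherited from the host, which is the essence of HOAS:

        -- The Lam constructor stores a Haskell function: object-level
        -- binding reuses meta-level binding.
        data Tm = App Tm Tm | Lam (Tm -> Tm)

        -- Weak-head reduction needs no hand-written substitution:
        -- applying the stored function g performs it.
        whnf :: Tm -> Tm
        whnf (App f a) = case whnf f of
                           Lam g -> whnf (g a)
                           f'    -> App f' a
        whnf t = t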

    Higher-Order Termination: from Kruskal to Computability

    Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue for showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut would yield an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, measuring the arguments of a program call by a natural number, such as their size. Showing the size of the arguments to decrease for each recursive call gives a termination proof of the program, which is however rather weak since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms, such that membership of an expression in the predicate implied the strong normalization property for that expression. Since the predicate is defined by induction on types, or even as a fixpoint, this method can yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it proved very effective in proving the strong normalization property of typed lambda-calculi…
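
    Tait's predicate-by-induction-on-types idea can be stated in one line for the simply typed case (a standard formulation, added here for illustration):
    \[
    \mathrm{Red}_{\iota} = \mathrm{SN},
    \qquad
    \mathrm{Red}_{A \to B} = \{\, t \mid \forall u \in \mathrm{Red}_A,\; t\,u \in \mathrm{Red}_B \,\},
    \]
    after which one shows $\mathrm{Red}_A \subseteq \mathrm{SN}$ and that every well-typed term of type $A$ belongs to $\mathrm{Red}_A$.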