    On generic context lemmas for lambda calculi with sharing

    This paper proves several generic variants of context lemmas and thus contributes to improving the tools for developing an observational semantics based on a reduction semantics for a language. The context lemmas are provided for may-convergence as well as for two variants of must-convergence, and for a wide class of extended lambda calculi that satisfy certain abstract conditions. The calculi must have a form of node sharing; in particular, plain beta-reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi and in which beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. Their scope comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations into new variants of extended lambda calculi with sharing, where the language, the reduction rules, or the strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic ones, provided our abstract conditions are met.
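
    For orientation, here is a minimal sketch, in generic notation rather than the paper's own, of the two ingredients involved: the sharing variant of beta-reduction and the shape of a context lemma for may-convergence. The symbols used here (the convergence predicate, the rule shape, the context classes) are our assumptions, not the paper's exact definitions.

        % Sharing variant of beta: the argument is shared via a binding
        % instead of being substituted (roughly the call-by-need rule):
        (\lambda x.\, e)\; a \;\to\; \mathtt{let}\; x = a\; \mathtt{in}\; e

        % Generic shape of a context lemma for may-convergence:
        % quantification over all contexts C may be replaced by
        % quantification over the much smaller class of reduction contexts R.
        \big(\forall R.\; R[s]\downarrow \implies R[t]\downarrow\big)
        \;\implies\;
        \big(\forall C.\; C[s]\downarrow \implies C[t]\downarrow\big)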

    On equivalences and standardization in a non-deterministic call-by-need lambda calculus

    The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus with case, constructors, seq, let, and choice, with a simple set of reduction rules. Unfortunately, a direct proof appears to be impossible. The correctness proof instead proceeds by defining another calculus comprising complex variants of the copy, case, and seq reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of the transformations; it also shows that the simple calculus defines an equivalent contextual order.
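
    As background, a hedged sketch of the standard definitions at play, in our notation rather than the report's: the contextual order compares may-convergence (written with a single down-arrow) and must-convergence (double down-arrow) in all contexts.

        % Contextual order: t allows at least the observations of s.
        s \le_c t \;:\iff\; \forall C.\;
          \big(C[s]\downarrow \implies C[t]\downarrow\big) \;\wedge\;
          \big(C[s]\Downarrow \implies C[t]\Downarrow\big)

        % Contextual equivalence is the induced equivalence relation.
        s \sim_c t \;:\iff\; s \le_c t \;\wedge\; t \le_c s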

    lim+, delta+, and Non-Permutability of beta-Steps

    Using a human-oriented formal example proof of the (lim+) theorem, i.e. that the sum of limits is the limit of the sum, which is of value as a reference on its own, we exhibit a non-permutability of beta-steps and delta+-steps (according to Smullyan's classification), which is not visible with non-liberalized delta-rules and not serious with further liberalized delta-rules, such as the delta++-rule. Besides a careful presentation of the search for a proof of (lim+) with several pedagogical intentions, the main subject is to explain why the order of beta-steps plays such a practically important role in some calculi.
    Comment: ii + 36 pages
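
    For reference, the statement being proved, written in the usual epsilon-delta form (our rendering, not the paper's formal language); it is the alternation of quantifiers here that gives rise to the beta- and delta-steps whose ordering the paper studies.

        % (lim+): the sum of limits is the limit of the sum.
        \lim_{x \to p} f(x) = a \;\wedge\; \lim_{x \to p} g(x) = b
        \;\implies\; \lim_{x \to p} \big(f(x) + g(x)\big) = a + b

        % The conclusion, unfolded:
        \forall \varepsilon > 0.\; \exists \delta > 0.\; \forall x.\;
          0 < |x - p| < \delta \implies |(f(x) + g(x)) - (a + b)| < \varepsilon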

    Towards Correctness of Program Transformations Through Unification and Critical Pair Computation

    Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then the computation of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study, we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.
    Comment: In Proceedings UNIF 2010, arXiv:1012.455
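
    To give a feel for the first-order core of such an algorithm, here is a minimal sketch in Haskell of plain Robinson-style syntactic unification. It is an illustration only: the paper's algorithm additionally handles many-sorted terms, left-commutativity, context variables, and binding chains, none of which is modelled here.

        import qualified Data.Map as M

        data Term = Var String | Fun String [Term] deriving (Eq, Show)

        type Subst = M.Map String Term

        apply :: Subst -> Term -> Term
        apply s (Var x)    = M.findWithDefault (Var x) x s
        apply s (Fun f ts) = Fun f (map (apply s) ts)

        -- Solve a list of equations; Nothing signals a clash or an
        -- occurs-check failure.
        unify :: [(Term, Term)] -> Maybe Subst
        unify [] = Just M.empty
        unify ((s, t) : rest) = case (s, t) of
          (Var x, _) | s == t     -> unify rest
                     | occurs x t -> Nothing
                     | otherwise  -> extend x t
          (_, Var y) | occurs y s -> Nothing
                     | otherwise  -> extend y s
          (Fun f ss, Fun g ts)
            | f == g && length ss == length ts -> unify (zip ss ts ++ rest)
            | otherwise -> Nothing
          where
            occurs x (Var y)    = x == y
            occurs x (Fun _ ts) = any (occurs x) ts
            -- Bind x to u, propagate the binding into the remaining
            -- equations, and compose the resulting substitutions.
            extend x u = do
              let bind = M.singleton x u
              sub <- unify [ (apply bind a, apply bind b) | (a, b) <- rest ]
              Just (M.insert x (apply sub u) sub)

    For instance, unify [(Fun "f" [Var "x", Var "y"], Fun "f" [Fun "a" [], Var "x"])] yields the substitution mapping both x and y to a.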

    Sharing a Library between Proof Assistants: Reaching out to the HOL Family

    We observe today a large diversity of proof systems. This diversity has the negative consequence that a lot of theorems are proved many times. Unlike programming languages, it is difficult for these systems to cooperate because they do not implement the same logic. Logical frameworks are a class of theorem provers that overcome this issue through their capacity to implement various logics. In this work, we study the STTforall logic, an extension of Simple Type Theory that has been encoded in the logical framework Dedukti. We present a translation from this logic to OpenTheory, a proof system and interoperability tool between provers of the HOL family. We have used this translation to export an arithmetic library containing Fermat's little theorem to OpenTheory and to two other proof systems, Coq and Matita.
    Comment: In Proceedings LFMTP 2018, arXiv:1807.0135
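
    Purely as an illustration of what such a translation looks like structurally, here is a hypothetical Haskell skeleton: the term datatype and the treatment of the type quantifier are invented for this example and are not the actual Dedukti or OpenTheory representations used in the paper.

        -- Terms of a simple type theory extended with prenex type
        -- quantification (the "forall" of STTforall); hypothetical syntax.
        data Ty = TyVar String | Arrow Ty Ty deriving (Eq, Show)

        data Tm
          = Var String
          | Lam String Ty Tm
          | App Tm Tm
          | ForallTy String Tm   -- the extension beyond plain STT
          deriving (Eq, Show)

        -- Translating out of the extension is a structural fold; the only
        -- interesting case is the type quantifier, which a real translation
        -- must map onto the target's type polymorphism. This sketch merely
        -- flags that case.
        translate :: Tm -> Either String Tm
        translate t@(Var _)      = Right t
        translate (Lam x ty b)   = Lam x ty <$> translate b
        translate (App f a)      = App <$> translate f <*> translate a
        translate (ForallTy _ _) = Left "forall: map to HOL type variables"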

    Processes, Systems & Tests: Defining Contextual Equivalences

    In this position paper, we would like to offer and defend a new template to study equivalences between programs, in the particular framework of process algebras for concurrent computation. We believe that our layered model of development will clarify the distinction, too often left implicit, between the tasks and duties of the programmer and of the tester. It will also shed light on pre-existing issues that run across process algebras as diverse as the calculus of communicating systems, the π-calculus (also in its distributed version) and mobile ambients. Our distinction starts by subdividing the notion of process itself into three conceptually separate entities, which we call Processes, Systems and Tests. While the role of what can be observed and the subtleties in the definitions of congruences have been intensively studied, the facts that not every process can be tested, and that the tester should have access to a different set of tools than the programmer, are curiously left out, or at least not often formally discussed. We argue that this blind spot comes from the under-specification of contexts, the environments in which comparisons take place, which play multiple distinct roles but supposedly always "stay the same". We illustrate our statement with a simple Java example and the "usual" concurrent languages, but also back it up with the λ-calculus and with existing implementations of concurrent languages.
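
    Schematically, and in our notation rather than the authors', the shift being argued for can be rendered as follows; sys(-) stands for a hypothetical operation packaging a bare process into a testable system.

        % The classical, undifferentiated definition: two processes are
        % equivalent iff no context can tell them apart.
        P \simeq Q \;:\iff\; \forall C.\;
          \big(C[P]\downarrow \iff C[Q]\downarrow\big)

        % The position defended: tests T draw on a different toolset than
        % programs and apply to systems, not to raw processes.
        P \simeq Q \;:\iff\; \forall T.\;
          \big(T[\mathrm{sys}(P)]\downarrow \iff T[\mathrm{sys}(Q)]\downarrow\big)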

    On correctness of buffer implementations in a concurrent lambda calculus with futures

    Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness w.r.t. program semantics, since the encodings are adequate translations w.r.t. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures, and data constructors into a small core language without those constructs.
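
    As a hedged illustration of the two implementation styles compared (in Haskell rather than the paper's lambda calculus with futures): a blocking one-place buffer, and a low-level variant that busy-waits on a mutable reference.

        import Control.Concurrent (yield)
        import Control.Concurrent.MVar
        import Data.IORef

        -- Blocking style: takeMVar/putMVar suspend the caller until the
        -- buffer is non-empty/empty, analogous to synchronisation via
        -- handled futures.
        newBuffer :: IO (MVar a)
        newBuffer = newEmptyMVar

        putB :: MVar a -> a -> IO ()
        putB = putMVar

        getB :: MVar a -> IO a
        getB = takeMVar

        -- Busy-waiting style: poll a mutable cell until a value appears,
        -- mirroring a handle-free, lower-level encoding. (Overwriting a
        -- full cell is not handled in this sketch.)
        putBusy :: IORef (Maybe a) -> a -> IO ()
        putBusy r x = atomicWriteIORef r (Just x)

        getBusy :: IORef (Maybe a) -> IO a
        getBusy r = do
          mv <- atomicModifyIORef' r (\v -> (Nothing, v))
          case mv of
            Just x  -> return x
            Nothing -> yield >> getBusy r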

    POPLMark reloaded: Mechanizing proofs by logical relations

    We propose a new collection of benchmark problems for mechanizing the metatheory of programming languages, in order to compare and push the state of the art of proof assistants. In particular, we focus on proofs using logical relations (LRs) and propose establishing strong normalization of a simply typed calculus with a proof by Kripke-style LRs as a benchmark. We give a modern view of this well-understood problem by formulating our LR on well-typed terms. Using this case study, we share some of the lessons learned tackling this problem in different dependently typed proof environments. In particular, we consider the mechanization in Beluga, a proof environment that supports higher-order abstract syntax encodings, and contrast it with the development and strategies used in general-purpose proof assistants such as Coq and Agda. The goal of this paper is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses, and to engage said community in the crafting of future benchmarks.
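
    For readers new to the benchmark, here is its core gadget in textbook (Tait-style) form, stated loosely and eliding the closing substitutions that the paper's Kripke-style, well-typed formulation makes precise: a logical relation defined by recursion on types, from which strong normalization follows via the fundamental lemma.

        % Reducibility at base type and at function type:
        \mathcal{R}_{\iota}   = \{\, t \mid t \text{ is strongly normalizing} \,\}
        \mathcal{R}_{A \to B} = \{\, t \mid \forall s \in \mathcal{R}_A.\; t\; s \in \mathcal{R}_B \,\}

        % Fundamental lemma (loosely): well-typed terms are reducible,
        % hence strongly normalizing.
        \Gamma \vdash t : A \;\implies\; t \in \mathcal{R}_A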