
    On the decidability and complexity of the structural congruence for beta-binders

    Beta-binders is a recent process calculus developed for modelling and simulating biological systems. As usual for process calculi, its semantic definition relies heavily on a structural congruence, and the treatment of the structural congruence is essential for implementation. We present a subset of the calculus for which the structural congruence is decidable, and a subset for which it is also efficiently solvable. The obtained results are a first step towards implementations.
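
    The decidability question is easiest to appreciate on a toy fragment. The Haskell sketch below is only an illustration of the general idea, not Beta-binders itself and not the paper's algorithm: it decides a structural congruence generated by associativity and commutativity of parallel composition and neutrality of the nil process, by normalising each process to a canonical form. The names Proc, normalise, and congruent are assumptions of this example.

        import Data.List (sort)

        data Proc = Nil | Act String Proc | Par Proc Proc
          deriving (Eq, Ord, Show)

        -- Flatten nested parallel compositions and drop Nil components,
        -- normalising underneath action prefixes as we go.
        components :: Proc -> [Proc]
        components Nil       = []
        components (Par p q) = components p ++ components q
        components (Act a p) = [Act a (normalise p)]

        -- Canonical representative: the sorted multiset of parallel components.
        normalise :: Proc -> Proc
        normalise p = case sort (components p) of
          []     -> Nil
          (c:cs) -> foldl Par c cs

        -- In this fragment, structural congruence (associativity and
        -- commutativity of |, and P | 0 = P) reduces to syntactic equality
        -- of canonical forms.
        congruent :: Proc -> Proc -> Bool
        congruent p q = normalise p == normalise q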

    The exp-log normal form of types

    Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but they conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we know of no explicit and implemented algorithm for deciding beta-eta-equality of terms, despite the first decidability results having been proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, despite the meta-theoretic results on decidability of the isomorphism. In this paper, we present the exp-log normal form of types, derived from the representation of exponential polynomials via the unary exponential and logarithmic functions, to which any type built from arrows, products, and sums can be isomorphically mapped. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities. We then show that the type normal form allows us to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving the completeness of equality on terms. We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into and from our new term calculus. The difference from the only other previously implemented heuristic for deciding interesting instances of eta-equality, due to Balat, Di Cosmo, and Fiore, is that we exploit the type information of terms substantially, which often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.
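
    As a rough illustration of normalising types with the high-school identities, the following Haskell sketch (a simplification, not the paper's exp-log normal form) rewrites arrow types by currying product domains and splitting sum domains, and then compares normal forms syntactically as an isomorphism heuristic. The type Ty and the function names are assumptions made for this example.

        data Ty = Base String | Ty :*: Ty | Ty :+: Ty | Ty :->: Ty
          deriving (Eq, Show)

        -- Two of the high-school identities, read as type isomorphisms:
        --   (A x B) -> C  ~  A -> (B -> C)        (currying)
        --   (A + B) -> C  ~  (A -> C) x (B -> C)  (case splitting)
        norm :: Ty -> Ty
        norm (a :->: b) = case (norm a, norm b) of
          (x :*: y, c) -> norm (x :->: (y :->: c))
          (x :+: y, c) -> norm (x :->: c) :*: norm (y :->: c)
          (x, c)       -> x :->: c
        norm (a :*: b)  = norm a :*: norm b
        norm (a :+: b)  = norm a :+: norm b
        norm t          = t

        -- Heuristic: types with equal normal forms are isomorphic (each rewrite
        -- step is an isomorphism); the converse needs the full normal form and
        -- commutativity of products, which this sketch omits.
        isoHeuristic :: Ty -> Ty -> Bool
        isoHeuristic s t = norm s == norm t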

    Some Lambda Calculus and Type Theory Formalized

    "This paper is about our hobby." That is the first sentence of [MP93], the first report on our formal development of lambda calculus and type theory, written in autumn 1992. We have continued to pursue this hobby on and off ever since, and have developed a substantial body of formal knowledge, including Church-Rosser and standardizationtheorems for beta reduction, and the basic theory ofPure Type Systems (PTS) leading to the strengthening theorem and type checking algorithms for PTS. Some of this work is reported in [MP93, vBJMP94, Pol94b, Pol95]. In the present paper we survey this work, including some new proofs, and point out what we feel has been learned about the general issues of formalizing mathematics. On the technical side, we describe an abstract, and simplified, proof of standardization for beta reduction, not previously published, that doesnot mention redex positions or residuals. On the general issues, we emphasize the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts. The LEGO Proof Development System [LP92] was used to check the work in an implementation of the Extended Calculus of Constructions(ECC) with inductive types [Luo94]. LEGO is a refinement styleproof checker, publicly available by ftp and WWW, with a User's Manual [LP92] and a large collection of examples. Section 1.3 contains information on accessing the formal development described in this paper. Other interesting examples formalized in LEGO include program specification and data refinement [Luo91], strong normalization of System F [Alt93], synthetic domain theory [Reu95, Reu96], and operational semantics for imperative programs [Sch97]

    Algorithms for Extended Alpha-Equivalence and Complexity

    Equality of expressions in lambda-calculi, higher-order programming languages, higher-order programming calculi, and process calculi is defined as alpha-equivalence. Permutability of bindings in let-constructs and structural congruence axioms extend alpha-equivalence. We analyse these extended alpha-equivalences and show that there are calculi with polynomial-time algorithms, that a multiple-binding “let” may make alpha-equivalence as hard as the graph isomorphism problem, and that the replication operator in the pi-calculus may lead to an EXPSPACE-hard alpha-equivalence problem.
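
    For the tractable base case, the standard linear-time check for plain alpha-equivalence (with single binders, no permutable lets and no replication) can be phrased as one simultaneous traversal that carries, for each term, a map from bound names to the depth at which they were bound. The Haskell sketch below shows only this easy case and is not one of the paper's algorithms; the names Term and alphaEq are assumptions of the example.

        import qualified Data.Map as M

        type Name = String

        data Term = Var Name | App Term Term | Lam Name Term
          deriving (Eq, Show)

        -- Two terms are alpha-equivalent iff corresponding bound variables
        -- point to binders at the same depth and free variables coincide.
        alphaEq :: Term -> Term -> Bool
        alphaEq = go 0 M.empty M.empty
          where
            go :: Int -> M.Map Name Int -> M.Map Name Int -> Term -> Term -> Bool
            go _ envL envR (Var x) (Var y) =
              case (M.lookup x envL, M.lookup y envR) of
                (Just i, Just j)   -> i == j      -- both bound: same binder depth
                (Nothing, Nothing) -> x == y      -- both free: same name
                _                  -> False
            go d envL envR (App t1 t2) (App u1 u2) =
              go d envL envR t1 u1 && go d envL envR t2 u2
            go d envL envR (Lam x t) (Lam y u) =
              go (d + 1) (M.insert x d envL) (M.insert y d envR) t u
            go _ _ _ _ _ = False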

    A Case Study on Logical Relations using Contextual Types

    Proofs by logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe the completeness proof of algorithmic equality for simply typed lambda-terms by Crary, where we reason about logically equivalent terms in the proof environment Beluga. There are three key aspects we rely upon: 1) we encode lambda-terms together with their operational semantics and algorithmic equality using higher-order abstract syntax; 2) we directly encode the corresponding logical equivalence of well-typed lambda-terms using recursive types and higher-order functions; 3) we exploit Beluga's support for contexts and the equational theory of simultaneous substitutions. This leads to a direct and compact mechanization, demonstrating Beluga's strength at formalizing logical relations proofs. Comment: In Proceedings LFMTP 2015, arXiv:1507.0759

    A dependent nominal type theory

    Nominal abstract syntax is an approach to representing names and binding pioneered by Gabbay and Pitts. So far, nominal techniques have mostly been studied using classical logic or model theory, not type theory. Nominal extensions to simple, dependent, and ML-like polymorphic languages have been studied, but decidability and normalization results have only been established for simple nominal type theories. We present an LF-style dependent type theory extended with name-abstraction types, prove soundness and decidability of beta-eta-equivalence checking, discuss adequacy and canonical forms via an example, and discuss extensions such as dependently-typed recursion and induction principles.
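
    To illustrate the nominal treatment of binding that such type theories build on, here is a small Haskell sketch of ordinary first-order syntax (not the paper's dependent type theory): alpha-equivalence of abstractions is characterised via name swapping, which, unlike substitution, is a bijection on names and therefore can never capture. All names in the sketch are illustrative.

        type Name = String

        data Term = Var Name | App Term Term | Lam Name Term
          deriving (Eq, Show)

        -- The swapping (a b) acts on every name in the term, binders included.
        swap :: Name -> Name -> Term -> Term
        swap a b = go
          where
            sw x | x == a    = b
                 | x == b    = a
                 | otherwise = x
            go (Var x)   = Var (sw x)
            go (App t u) = App (go t) (go u)
            go (Lam x t) = Lam (sw x) (go t)

        names :: Term -> [Name]
        names (Var x)   = [x]
        names (App t u) = names t ++ names u
        names (Lam x t) = x : names t

        -- A name not occurring in any of the given terms.
        fresh :: [Term] -> Name
        fresh ts = head [ n | i <- [0 :: Int ..]
                            , let n = "x" ++ show i
                            , n `notElem` concatMap names ts ]

        -- Alpha-equivalence via the swapping characterisation:
        -- Lam x t ~ Lam y u  iff  (x c) t ~ (y c) u  for a fresh c.
        aeq :: Term -> Term -> Bool
        aeq (Var x)   (Var y)   = x == y
        aeq (App t u) (App s v) = aeq t s && aeq u v
        aeq (Lam x t) (Lam y u) = aeq (swap x c t) (swap y c u)
          where c = fresh [Lam x t, Lam y u]
        aeq _ _ = False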

    Extensional Rewriting with Sums

    Inspired by recent work on normalisation by evaluation for sums, we propose a normalising and confluent extensional rewriting theory for the simply-typed λ-calculus extended with sum types. As a corollary of confluence we obtain decidability for the extensional equational theory of the simply-typed λ-calculus extended with sum types. Unlike previous decidability results, which rely on advanced rewriting techniques or advanced category theory, we only use standard techniques.
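
    To make the extensional rules concrete, here is a toy Haskell sketch, far cruder than the paper's rewriting theory, of two ingredients such theories need for sums: the eta rule that expands a sum-typed term into a trivial case analysis, and one commuting conversion that pushes an application into the branches of a case. The term language, the chosen bound names, and the assumption that the pushed argument avoids the bound variables are simplifications of this example.

        data Tm = Var String
                | App Tm Tm
                | Inl Tm | Inr Tm
                | Case Tm String Tm String Tm   -- case t of inl x -> u | inr y -> v
          deriving (Eq, Show)

        -- eta for sums: a term t of sum type expands to a trivial case analysis.
        etaSum :: Tm -> Tm
        etaSum t = Case t "x" (Inl (Var "x")) "y" (Inr (Var "y"))

        -- One commuting conversion:
        --   (case t of inl x -> u | inr y -> v) a
        --   rewrites to  case t of inl x -> u a | inr y -> v a,
        -- assuming here that a mentions neither x nor y (no capture check).
        commuteApp :: Tm -> Maybe Tm
        commuteApp (App (Case t x u y v) a) = Just (Case t x (App u a) y (App v a))
        commuteApp _                        = Nothing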

    Mechanising syntax with binders in Coq

    Mechanising binders in general-purpose proof assistants such as Coq is cumbersome and difficult. Yet binders, substitutions, and instantiation of terms with substitutions are a critical ingredient of many programming languages. Any practicable mechanisation of the meta-theory of the latter hence requires a lean formalisation of the former. We investigate the topic from three angles. First, we realise formal systems with binders based on both pure and scoped de Bruijn algebras, together with basic syntactic rewriting lemmas and automation. We automate this process in a compiler called Autosubst; our final tool supports many-sorted, variadic, and modular syntax. Second, we justify our choice of realisation and mechanise a proof of convergence of the sigma calculus, a calculus of explicit substitutions that is complete for equality of the de Bruijn algebra corresponding to the lambda calculus. Third, to demonstrate the practical usefulness of our approach, we provide concise, transparent, and accessible mechanised proofs for a variety of case studies refined to de Bruijn substitutions.
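
    The de Bruijn representation and the sigma-calculus operations the abstract refers to can be sketched in a few lines of Haskell. This is an illustration of the general technique, not Autosubst or its Coq output: parallel substitutions are functions from indices to terms, and instantiation is defined using identity, shift, cons, and lifting; the names below are assumptions of this example.

        data Tm = Var Int | App Tm Tm | Lam Tm
          deriving (Eq, Show)

        -- A parallel substitution assigns a term to every de Bruijn index.
        type Subst = Int -> Tm

        idS :: Subst            -- identity substitution
        idS = Var

        shift :: Subst          -- the shift, sending index n to n+1
        shift n = Var (n + 1)

        cons :: Tm -> Subst -> Subst   -- s . sigma: 0 goes to s, n+1 to sigma n
        cons s _     0 = s
        cons _ sigma n = sigma (n - 1)

        -- Lift a substitution under a binder: up sigma = Var 0 . (sigma ; shift).
        up :: Subst -> Subst
        up sigma = cons (Var 0) (subst shift . sigma)

        -- Instantiate a term with a substitution.
        subst :: Subst -> Tm -> Tm
        subst sigma (Var n)   = sigma n
        subst sigma (App a b) = App (subst sigma a) (subst sigma b)
        subst sigma (Lam b)   = Lam (subst (up sigma) b)

        -- Beta reduction at the root: (\ b) a  reduces to  b[a . id].
        betaRoot :: Tm -> Maybe Tm
        betaRoot (App (Lam b) a) = Just (subst (cons a idS) b)
        betaRoot _               = Nothing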

    From proof theory to theories theory

    In the last decades, several objects such as grammars, economic agents, laws of physics... have been defined as algorithms. In particular, after Brouwer, Heyting, and Kolmogorov, mathematical proofs have been defined as algorithms. In this paper, we show that mathematical theories can also be defined as algorithms and that this definition has some advantages over the usual definition of theories as sets of axioms.

    Design and Implementation of the Andromeda Proof Assistant

    Andromeda is an LCF-style proof assistant where the user builds derivable judgments by writing code in a meta-level programming language AML. The only trusted component of Andromeda is a minimalist nucleus (an implementation of the inference rules of an object-level type theory), which controls construction and decomposition of type-theoretic judgments. Since the nucleus does not perform complex tasks like equality checking beyond syntactic equality, this responsibility is delegated to the user, who implements one or more equality checking procedures in the meta-language. The AML interpreter requests witnesses of equality from user code using the mechanism of algebraic operations and handlers. Dynamic checks in the nucleus guarantee that no invalid object-level derivations can be constructed. To demonstrate the flexibility of this system structure, we implemented a nucleus consisting of dependent type theory with equality reflection. Equality reflection provides a very high level of expressiveness, as it allows the user to add new judgmental equalities, but it also destroys desirable meta-theoretic properties of type theory (such as decidability and strong normalization). The power of effects and handlers in AML is demonstrated by a standard library that provides default algorithms for equality checking, computation of normal forms, and implicit argument filling. Users can extend these default algorithms by providing local "hints" or replace them entirely for particular developments. We demonstrate the resulting system by showing how to axiomatize and compute with natural numbers, by axiomatizing the untyped lambda-calculus, and by implementing a simple automated system for managing a universe of types.
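
    The LCF-style discipline described here can be illustrated with a tiny Haskell module. This is purely a sketch: it is not Andromeda's nucleus, not AML, and it omits the effects-and-handlers mechanism for equality; the rule names and the toy object theory are assumptions of this example. The point is only that the judgment type is abstract, so user-level code can obtain judgments solely through the exported rules and cannot forge derivations.

        module Nucleus (Judgment, Ty (..), Tm (..), assume, intro, apply, concl) where

        data Ty = Base String | Arrow Ty Ty             deriving (Eq, Show)
        data Tm = V String | L String Ty Tm | A Tm Tm   deriving (Eq, Show)

        -- A judgment  ctx |- t : ty.  The constructor J is NOT exported, so the
        -- only way to build a Judgment outside this module is via the rules below.
        newtype Judgment = J ([(String, Ty)], Tm, Ty)   deriving Show

        concl :: Judgment -> (Tm, Ty)
        concl (J (_, t, ty)) = (t, ty)

        -- x : ty |- x : ty
        assume :: String -> Ty -> Judgment
        assume x ty = J ([(x, ty)], V x, ty)

        -- From  ctx, x : s |- t : ty  conclude  ctx |- \x:s. t : s -> ty.
        intro :: String -> Ty -> Judgment -> Maybe Judgment
        intro x s (J (ctx, t, ty)) = case lookup x ctx of
          Just s' | s' == s -> Just (J (filter ((/= x) . fst) ctx, L x s t, Arrow s ty))
          _                 -> Nothing

        -- From  ctx1 |- f : s -> ty  and  ctx2 |- a : s  conclude  ctx1,ctx2 |- f a : ty
        -- (naively merging contexts; a real nucleus would check compatibility).
        apply :: Judgment -> Judgment -> Maybe Judgment
        apply (J (ctx1, f, Arrow s ty)) (J (ctx2, a, s'))
          | s == s' = Just (J (ctx1 ++ ctx2, A f a, ty))
        apply _ _   = Nothing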