
    UTP2: Higher-Order Equational Reasoning by Pointing

    We describe a prototype theorem prover, UTP2, developed to match the style of hand-written proof work in the Unifying Theories of Programming semantic framework. It is based on alphabetised predicates in a second-order logic, with a strong emphasis on equational reasoning. We present an overview of the user interface of this prover, which was developed from the outset around a point-and-click approach. We contrast this with the command-line paradigm that continues to dominate mainstream theorem provers, raising the question: can we have the best of both worlds?
    Comment: In Proceedings UITP 2014, arXiv:1410.785

    Types as Resources for Classical Natural Deduction

    We define two resource-aware typing systems for the lambda-mu-calculus based on non-idempotent intersection and union types. The non-idempotent approach provides very simple combinatorial arguments, based on decreasing measures of type derivations, to characterize head and strongly normalizing terms. Moreover, typability provides upper bounds on the length of head-reduction sequences and maximal reduction sequences.
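
    To make the idempotence distinction concrete, here is a minimal Haskell sketch (our own illustration, not the paper's system) of non-idempotent intersection types for plain lambda terms; the paper itself works in the lambda-mu-calculus and adds union types. Representing the argument side of an arrow as a multiset means that [a] and [a, a] are different types, recording how many times an argument is used:

        import Data.List (sort)

        -- Non-idempotent intersection types: the argument side of an
        -- arrow is a multiset of types, so usage counts are recorded.
        data Ty = Base String          -- a base type
                | Arrow [Ty] Ty        -- multiset of argument types -> result
          deriving (Eq, Ord, Show)

        -- Compare two multisets up to reordering (multisets, not sets:
        -- duplicates are NOT collapsed, unlike the idempotent reading).
        sameMulti :: [Ty] -> [Ty] -> Bool
        sameMulti xs ys = sort xs == sort ys

        main :: IO ()
        main = do
          let a = Base "a"
          print (sameMulti [a] [a, a])              -- False: counts matter
          print (Arrow [a] a == Arrow [a, a] a)     -- False as types, too

    The count of copies in such multisets is what feeds the decreasing measures on type derivations mentioned in the abstract.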

    Modularity in Meta-Languages

    A meta-language for semantics has a high degree of modularity when descriptions of individual language constructs can be formulated independently using it, and do not require reformulation when new constructs are added to the described language. The quest for modularity in semantic meta-languages has been going on for more than two decades. Here, most of the main meta-languages for operational, denotational, and hybrid styles of semantics are compared regarding their modularity. A simple benchmark is used: describing the semantics of a pure functional language, then extending the described language with references, exceptions, and concurrency constructs. For each style of semantics, at least one of the considered meta-languages appears to provide a high degree of modularity.
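
    As a concrete, if simplistic, Haskell rendering of the benchmark's idea (ours, not taken from the paper): with monad transformers, the clause for an existing construct such as addition is written once against an abstract monad and survives unchanged when references (state) and exceptions are layered on. All names below are illustrative:

        {-# LANGUAGE FlexibleContexts #-}
        import Control.Monad.State
        import Control.Monad.Except

        -- A toy object language: Lit/Add form the "pure" core,
        -- Throw and Deref are later extensions.
        data Expr = Lit Int | Add Expr Expr | Throw | Deref
          deriving Show

        eval :: (MonadState Int m, MonadError String m) => Expr -> m Int
        eval (Lit n)   = pure n
        eval (Add l r) = (+) <$> eval l <*> eval r   -- unchanged by new effects
        eval Throw     = throwError "exception"
        eval Deref     = get                         -- a one-cell toy store

        run :: Expr -> (Either String Int, Int)
        run e = runState (runExceptT (eval e)) 0

        main :: IO ()
        main = print (run (Add (Lit 1) Deref))       -- (Right 1, 0)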

    An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata

    In the mid-1980s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F}\varphi_i \vee \mathbf{F}\mathbf{G}\psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present a direct and purely syntactic normalisation procedure for LTL, yielding a normal form comparable to the one by Chang, Manna, and Pnueli, that incurs only a single exponential blow-up. As an application, we derive a simple algorithm to translate LTL into deterministic Rabin automata. The algorithm normalises the formula, translates it into a special very weak alternating automaton, and applies a simple determinisation procedure valid only for these special automata.
    Comment: This is the extended version of the referenced conference paper and contains an appendix with additional material.
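
    As a small worked instance of the normal form (our example, not one from the paper): a conjunction of an FG-formula and a GF-formula can be padded into the required shape with vacuous disjuncts, using the fact that $\mathbf{G}\mathbf{F}\,\mathit{ff}$ and $\mathbf{F}\mathbf{G}\,\mathit{ff}$ are both equivalent to $\mathit{ff}$:

        \[
          \mathbf{F}\mathbf{G}\, a \,\wedge\, \mathbf{G}\mathbf{F}\, b
          \;\equiv\;
          \bigl(\mathbf{G}\mathbf{F}\,\mathit{ff} \vee \mathbf{F}\mathbf{G}\, a\bigr)
          \wedge
          \bigl(\mathbf{G}\mathbf{F}\, b \vee \mathbf{F}\mathbf{G}\,\mathit{ff}\bigr)
        \]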

    A deep quantitative type system

    We investigate intersection types and resource lambda-calculus in deep-inference proof theory. We give a unified type system that is parametric in various aspects: it encompasses resource calculi, intersection-typed lambda-calculus, and simply-typed lambda-calculus; it accommodates both idempotence and non-idempotence; it characterizes strong and weak normalization; and it does so while allowing a range of algebraic laws to determine reduction behaviour, for various quantitative effects. We give a parametric resource calculus with explicit sharing, the "collection calculus", as a Curry-Howard interpretation of the type system that embodies these computational properties.
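
    To illustrate the idempotence parameter (our notation, not the paper's): idempotent systems validate the law $\sigma \wedge \sigma = \sigma$, so repeated use of a variable needs only one copy of its type; dropping the law turns intersections into multisets, as in the classic non-idempotent typing of self-application:

        \[
          \lambda x.\; x\,x \;:\; [\,[\sigma] \to \tau,\; \sigma\,] \to \tau
        \]

    where the multiset records that $x$ is used twice: once as a function of type $[\sigma] \to \tau$ and once as its argument of type $\sigma$.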

    Refunctionalization at Work

    We present the left inverse of Reynolds's defunctionalization and show its relevance to programming and to programming languages. We propose two methods to transform a program that is almost in defunctionalized form into one that is actually in defunctionalized form, and we illustrate them with a recognizer for Dyck words and with Dijkstra's shunting-yard algorithm.
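
    The Dyck-word example makes the two directions easy to see side by side. The Haskell sketch below is our own rendering, not the paper's code: in the defunctionalized form the continuation is a first-order data type Cont consumed by a single apply function; refunctionalizing replaces each constructor by the function that apply would have dispatched to:

        data Paren = L | R

        -- Defunctionalized: continuations as a first-order data type.
        data Cont = Init | Pend Cont          -- Pend k: one unmatched '(' above k

        goD :: [Paren] -> Cont -> Bool
        goD []       k = applyD k []
        goD (L : ps) k = goD ps (Pend k)
        goD (R : ps) k = applyD k (R : ps)

        applyD :: Cont -> [Paren] -> Bool
        applyD Init     rest      = null rest
        applyD (Pend k) (R : ps') = goD ps' k
        applyD (Pend _) _         = False

        -- Refunctionalized: Pend becomes a closure over k; applyD vanishes
        -- into ordinary function application.
        goR :: [Paren] -> ([Paren] -> Bool) -> Bool
        goR []       k = k []
        goR (L : ps) k = goR ps (\rest -> case rest of
                                            R : ps' -> goR ps' k
                                            _       -> False)
        goR (R : ps) k = k (R : ps)

        main :: IO ()
        main = mapM_ (\ps -> print (goD ps Init, goR ps null))
                     [[L,R], [L,L,R,R], [L,R,R], [R]]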

    An Explicit Framework for Interaction Nets

    Interaction nets are a graphical formalism inspired by Linear Logic proof-nets, often used for studying higher-order rewriting, e.g. β-reduction. Traditional presentations of interaction nets are based on graph theory and rely on elementary graph-theoretic properties. We give here a more explicit presentation based on notions borrowed from Girard's Geometry of Interaction: interaction nets are presented as partial permutations, and a composition of nets, the gluing, is derived from the execution formula. We then define contexts and reduction as the context closure of rules. We prove strong confluence of the reduction within our framework and show how interaction nets can be viewed as the quotient of some generalized proof-nets.
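
    The following Haskell sketch, under our own drastic simplifications rather than the paper's definitions, keeps only the wiring of a net as a partial involution on port names and computes the gluing in the spirit of the execution formula: a wire of the composite is found by alternating between the two wirings, bouncing through the shared interface until an outer port is reached:

        import qualified Data.Map as M

        type Port   = String
        type Wiring = M.Map Port Port   -- w M.! p == q iff a wire links p and q

        glue :: [Port] -> Wiring -> Wiring -> Wiring
        glue iface w1 w2 =
          M.fromList ( [ (p, chase w1 w2 p) | p <- outerOf w1 ]
                    ++ [ (p, chase w2 w1 p) | p <- outerOf w2 ] )
          where
            outerOf w = [ p | p <- M.keys w, p `notElem` iface ]
            chase w w' p =
              let q = w M.! p          -- follow the wire in the current net
              in if q `elem` iface
                   then chase w' w q   -- switch nets at the interface
                   else q              -- exited at an outer port
            -- (a loop lying entirely inside the interface would make chase
            -- diverge, mirroring the usual side condition on composition)

        main :: IO ()
        main = print (glue ["i1", "i2"]
                           (M.fromList [("a","i1"),("i1","a"),("c","i2"),("i2","c")])
                           (M.fromList [("i1","i2"),("i2","i1")]))
        -- prints fromList [("a","c"),("c","a")]: the wires a-i1, i1-i2, i2-c
        -- compose into the single wire a-c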