    A principled approach to programming with nested types in Haskell

    Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating data that is constructed using the build combinator and immediately consumed using the fold combinator for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
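
    To make the terminology concrete, here is a minimal Haskell sketch: the build combinator, fold combinator, and fold/build rule for ordinary lists, which initial algebra semantics yields for every inductive type, followed by one standard nested type (the names Nest, foldNest, and K are illustrative textbook choices, not necessarily the paper's) whose standard fold needs a rank-2 type because the recursive occurrence changes the type parameter.

```haskell
{-# LANGUAGE RankNTypes #-}

-- For lists, initial algebra semantics gives a build combinator,
-- a fold combinator, and the fusion law
--   fold c n (build g) = g c n
-- which eliminates the intermediate list.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

fold :: (a -> b -> b) -> b -> [a] -> b
fold c n []       = n
fold c n (x : xs) = c x (fold c n xs)

-- A nested type: the recursive occurrence is at instance (a, a),
-- not a, so the standard fold takes polymorphic arguments and
-- produces a result parameterised by the element type.
data Nest a = NilN | ConsN a (Nest (a, a))

foldNest :: (forall b. f b)
         -> (forall b. b -> f (b, b) -> f b)
         -> Nest a
         -> f a
foldNest n c NilN         = n
foldNest n c (ConsN x xs) = c x (foldNest n c xs)

-- Example: counting the ConsN spine with a constant functor.
newtype K a = K Int

spineLength :: Nest a -> Int
spineLength t = let K m = foldNest (K 0) (\_ (K i) -> K (i + 1)) t in m
```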

    Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics

    We investigate the possibility of modelling the syntax and semantics of natural language by constraints, or rules, imposed by the multi-dimensional type theory Nabla. The only multiplicity we explicitly consider is two, namely one dimension for the syntax and one dimension for the semantics, but the general perspective is important. For example, issues of pragmatics could be handled as additional dimensions. One of the main problems addressed is the rather complicated repertoire of operations that exists besides the notion of categories in traditional Montague grammar. For the syntax we use a categorial grammar along the lines of Lambek. For the semantics we use so-called lexical and logical combinators inspired by work in natural logic. Nabla provides a concise interpretation and a sequent calculus as the basis for implementations.
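
    As a point of reference for the syntactic side, here is a toy Haskell rendering of Lambek-style categorial grammar (standard material, not the paper's Nabla formalism; all names below are illustrative): categories built from primitives with two directional slashes, plus the forward and backward application rules.

```haskell
-- Lambek-style categories: primitives plus two directional slashes.
-- (a :/ b) seeks a b on its right; (a :\ b) seeks an a on its left.
data Cat = NP | S
         | Cat :/ Cat
         | Cat :\ Cat
         deriving (Eq, Show)

-- Forward application:  A/B, B  =>  A
applyF :: Cat -> Cat -> Maybe Cat
applyF (a :/ b) b' | b == b' = Just a
applyF _ _                   = Nothing

-- Backward application:  A, A\B  =>  B
applyB :: Cat -> Cat -> Maybe Cat
applyB a' (a :\ b) | a == a' = Just b
applyB _ _                   = Nothing

-- "John sleeps": an NP combined with the intransitive-verb category.
example :: Maybe Cat
example = applyB NP (NP :\ S)   -- Just S
```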

    Semantics of a Typed Algebraic Lambda-Calculus

    Algebraic lambda-calculi have been studied in various ways, but their semantics remain mostly untouched. In this paper we propose a semantic analysis of a general simply-typed lambda-calculus endowed with the structure of a vector space. We sketch the relation with two established vectorial lambda-calculi. Then we study the problems arising from the addition of a fixed point combinator and how to modify the equational theory to solve them. We sketch an algebraic vectorial PCF and its possible denotational interpretations.
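
    A rough Haskell sketch of the kind of syntax involved (illustrative only, not the paper's calculus): lambda terms closed under formal sums and scalar multiples, with application distributing over the vector-space structure. The clash with fixed points that the paper studies is easy to state: a fixpoint combinator lets one build a term t with t = b + t, and naive vector-space reasoning then cancels t to conclude b = 0 for every b, so the equational theory must be restricted.

```haskell
-- Illustrative syntax for an algebraic lambda-calculus: ordinary
-- terms plus a vector-space layer of sums and scalar multiples.
data Term = Var String
          | Lam String Term
          | App Term Term
          | Zero                -- the zero vector
          | Scale Double Term   -- scalar multiplication
          | Add Term Term       -- formal sum
  deriving Show

-- Three oriented instances of the usual linearity equations:
-- application distributes over the algebraic structure on the
-- function side. (Which equations to keep, and how they interact
-- with a fixpoint combinator, is what the paper examines.)
step :: Term -> Maybe Term
step (App (Add t u) v)   = Just (Add (App t v) (App u v))
step (App (Scale k t) v) = Just (Scale k (App t v))
step (App Zero _)        = Just Zero
step _                   = Nothing
```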

    Proofgold: Blockchain for Formal Methods

    Proofgold is a peer-to-peer cryptocurrency making use of formal logic. Users can publish theories and then develop a theory by publishing documents with definitions, conjectures and proofs. The blockchain records the theories and their state of development (e.g., which theorems have been proven and when). Two of the main theories are a form of classical set theory (for formalizing mathematics) and an intuitionistic theory of higher-order abstract syntax (for reasoning about syntax with binders). We have also significantly modified the open source Proofgold Core client software to create a faster, more stable and more efficient client, Proofgold Lava. Two of the most important changes are to the cryptography code and the database code, and we discuss these improvements. We also discuss how the Proofgold network can be used to support large formalization efforts.

    Initial Semantics for Reduction Rules

    We give an algebraic characterization of the syntax and operational semantics of a class of simply-typed languages, such as the language PCF: we characterize simply-typed syntax with variable binding and equipped with reduction rules via a universal property, namely as the initial object of some category of models. For this purpose, we employ techniques developed in two previous works: in the first work we model syntactic translations between languages over different sets of types as initial morphisms in a category of models. In the second work we characterize untyped syntax with reduction rules as the initial object in a category of models. In the present work, we combine the techniques used earlier in order to characterize simply-typed syntax with reduction rules as the initial object in a category. The universal property yields an operator which allows one to specify translations, semantically faithful by construction, between languages over possibly different sets of types. As an example, we upgrade a translation from PCF to the untyped lambda calculus, given in previous work, to account for reduction in the source and target. Specifically, we specify a reduction semantics in the source and target language through suitable rules. By equipping the untyped lambda calculus with the structure of a model of PCF, initiality yields a translation from PCF to the lambda calculus that is faithful with respect to the reduction semantics specified by the rules. This paper is an extended version of an article published in the proceedings of WoLLIC 2012 (arXiv:1206.4547), and proves a variant of a result of the PhD thesis arXiv:1206.455.
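
    The shape of the translation-by-initiality argument can be seen in miniature in Haskell (an illustrative fragment; the data types, Church numerals, and fixpoint encoding below are standard choices, not taken from the paper): because inductive syntax is an initial algebra, giving the untyped lambda calculus the structure of a model of a small PCF-like signature determines the translation uniquely, as a fold.

```haskell
-- A tiny PCF-like fragment and the untyped lambda calculus.
data PCF = PVar String | PLam String PCF | PApp PCF PCF
         | PNum Int | PSuc PCF | PFix PCF

data ULC = UVar String | ULam String ULC | UApp ULC ULC

-- Church numerals and a fixpoint combinator equip ULC with the
-- structure of a model of the PCF signature ...
church :: Int -> ULC
church n = ULam "s" (ULam "z" (iterate (UApp (UVar "s")) (UVar "z") !! n))

yComb :: ULC
yComb = ULam "f" (UApp w w)
  where w = ULam "x" (UApp (UVar "f") (UApp (UVar "x") (UVar "x")))

-- ... and initiality then forces the translation: it is the unique
-- structure-preserving fold from the syntax into that model.
translate :: PCF -> ULC
translate (PVar x)   = UVar x
translate (PLam x t) = ULam x (translate t)
translate (PApp t u) = UApp (translate t) (translate u)
translate (PNum n)   = church n
translate (PSuc t)   = UApp sucT (translate t)
  where sucT = ULam "n" (ULam "s" (ULam "z"
                 (UApp (UVar "s")
                       (UApp (UApp (UVar "n") (UVar "s")) (UVar "z")))))
translate (PFix t)   = UApp yComb (translate t)
```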

    Fixed point combinators as fixed points of higher-order fixed point generators

    Corrado Böhm once observed that if $Y$ is any fixed point combinator (fpc), then $Y(\lambda yx.x(yx))$ is again an fpc. He thus discovered the first "fpc generating scheme": a generic way to build new fpcs from old. Continuing this idea, define an fpc generator to be any sequence of terms $G_1,\dots,G_n$ such that $Y \in \mathrm{FPC} \Rightarrow Y G_1 \cdots G_n \in \mathrm{FPC}$. In this contribution, we take first steps in studying the structure of (weak) fpc generators. We isolate several robust classes of such generators by examining their elementary properties, like injectivity and (weak) constancy. We provide sufficient conditions for the existence of fixed points of a given generator $(G_1,\dots,G_n)$: an fpc $Y$ such that $Y = Y G_1 \cdots G_n$. We conjecture that weak constancy is a necessary condition for the existence of such (higher-order) fixed points. This statement generalizes Statman's conjecture on the non-existence of "double fpcs": fixed points of the generator $(G) = (\lambda yx.x(yx))$ discovered by Böhm. Finally, we define and make a few observations about the monoid of (weak) fpc generators. This enables us to formulate a new conjecture about their structure.
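
    Haskell's types cannot capture the untyped calculus, but laziness gives a faithful value-level rendering of Böhm's scheme (a small sketch under that caveat; the names FPC, gen, and fix2 are illustrative, with gen playing the role of $G$):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The polymorphic type at which value-level fpcs live.
type FPC = forall a. (a -> a) -> a

-- Haskell's built-in recursion gives one fpc.
fix :: FPC
fix f = f (fix f)

-- Böhm's generator G = \y x -> x (y x): it sends any fpc to an fpc,
-- since (gen y) f = f (y f), and y f is already a fixed point of f.
gen :: FPC -> FPC
gen y = \x -> x (y x)

-- Iterating the generator builds new fpcs from old.
fix2 :: FPC
fix2 = gen (gen fix)

-- Any of them ties recursive knots as usual: factorial 5 == 120.
factorial :: Integer -> Integer
factorial = fix2 (\rec n -> if n <= 0 then 1 else n * rec (n - 1))
```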

    Relational Graph Models at Work

    We study the relational graph models that constitute a natural subclass of relational models of lambda-calculus. We prove that among the lambda-theories induced by such models there exists a minimal one, and that the corresponding relational graph model is very natural and easy to construct. We then study relational graph models that are fully abstract, in the sense that they capture some observational equivalence between lambda-terms. We focus on the two main observational equivalences in the lambda-calculus: the theory H+, generated by taking the beta-normal forms as observables, and H*, generated by taking the head normal forms as observables. On the one hand we introduce a notion of lambda-König model and prove that a relational graph model is fully abstract for H+ if and only if it is extensional and lambda-König. On the other hand we show that the dual notion of hyperimmune model, together with extensionality, captures full abstraction for H*.

    Modular, Fully-abstract Compilation by Approximate Back-translation

    A compiler is fully abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from STLC to ULC, the lack of recursive types in the former prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full abstraction and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. The proof extends easily to common compiler patterns such as modular compilation, and, to the best of our knowledge, it is the first compiler full-abstraction proof to have been fully mechanised in Coq. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full abstraction.
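
    The compilation direction of the abstract's running example is easy to picture in Haskell (an illustrative sketch; the data types and the backTranslate stub are hypothetical stand-ins, not the paper's definitions): compiling STLC to ULC is essentially type erasure, and the approximate back-translation is a family indexed by a step budget n.

```haskell
-- Simple types and simply-typed terms (STLC) ...
data Ty = Base | Ty :-> Ty

data STLC = SVar String | SLam String Ty STLC | SApp STLC STLC

-- ... and untyped terms (ULC).
data ULC = UVar String | ULam String ULC | UApp ULC ULC

-- The compiler itself is type erasure.
compile :: STLC -> ULC
compile (SVar x)     = UVar x
compile (SLam x _ t) = ULam x (compile t)
compile (SApp t u)   = UApp (compile t) (compile u)

-- Hypothetical signature of an n-approximate back-translation of
-- contexts: accurate for executions of at most n steps, and only
-- allowed to err towards divergence beyond that. Constructing it
-- without recursive types in STLC is the paper's contribution.
backTranslate :: Int -> ULC -> STLC
backTranslate _n _ctx = error "construction given in the paper"
```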