
    Addressing Machines as models of lambda-calculus

    Turing machines and register machines have been used for decades in theoretical computer science as abstract models of computation. The λ-calculus has also played a central role in this domain, as it allows one to focus on the notion of functional computation, based on the substitution mechanism, while abstracting away from implementation details. The present article starts from the observation that the equivalence between these formalisms rests on the Church-Turing Thesis rather than on an actual encoding of λ-terms into Turing (or register) machines. The reason is that such machines are not well suited for modelling λ-calculus programs. We study a class of abstract machines that we call 'addressing machines', since they are only able to manipulate memory addresses of other machines. The operations performed by these machines are very elementary: load an address into a register, apply one machine to another via their addresses, and call the address of another machine. We endow addressing machines with an operational semantics based on leftmost reduction and study their behaviour. The set of addresses of these machines can easily be turned into a combinatory algebra. In order to obtain a model of the full untyped λ-calculus, we need to introduce a rule that bears similarities with the ω-rule and the rule ζ_β from combinatory logic.
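
    To make these operations concrete, the following minimal Haskell sketch models machines whose registers hold only addresses of other machines, with the three elementary instructions named above. The instruction shapes and all identifiers (Instr, Load, App, Call, alloc) are illustrative assumptions, not the paper's formal syntax or operational semantics.

        module AddressingMachine where

        import qualified Data.Map.Strict as Map

        -- Machines never inspect each other's code: they manipulate only
        -- the *addresses* of other machines.
        type Addr = Int
        type Reg  = Int

        -- The three elementary operations named in the abstract; the exact
        -- operand shapes here are illustrative guesses.
        data Instr
          = Load Reg Addr    -- load a known address into a register
          | App  Reg Reg Reg -- apply machine at r1 to machine at r2; result into r3
          | Call Reg         -- transfer control to the machine whose address is in r

        -- A machine: a register file holding addresses, plus remaining code.
        data Machine = Machine
          { regs :: Map.Map Reg Addr
          , code :: [Instr]
          }

        -- The address space: a finite map from addresses to machines.
        -- Application allocates a fresh address for the resulting machine,
        -- which is how the set of addresses can become a combinatory algebra.
        type Mem = Map.Map Addr Machine

        alloc :: Machine -> Mem -> (Addr, Mem)
        alloc m mem =
          let a = if Map.null mem then 0 else 1 + fst (Map.findMax mem)
          in  (a, Map.insert a m mem)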

    Discriminating Lambda-Terms Using Clocked Boehm Trees

    As observed by Intrigila, there are hardly any techniques available in the lambda-calculus to prove that two lambda-terms are not beta-convertible. Techniques employing the usual Boehm Trees are inadequate when we deal with terms having the same Boehm Tree (BT). This is the case in particular for fixed point combinators, as they all have the same BT. Another interesting equation, whose consideration was suggested by Scott, is BY = BYS, an equation valid in the classical model P-omega of the lambda-calculus, and hence valid with respect to BT-equality, even though the two terms are beta-inconvertible. To prove such beta-inconvertibilities, we employ 'clocked' BTs, with annotations that convey information about the tempo at which the data in the BT are produced. Boehm Trees are thus enriched with an intrinsic clock behaviour, leading to a refined discrimination method for lambda-terms. The corresponding equality is strictly intermediate between beta-convertibility and Boehm Tree equality (the equality in the model P-omega). An analogous approach applies to Levy-Longo and Berarducci Trees. Our refined Boehm Trees find an application in particular in beta-discriminating fixed point combinators (fpc's). It turns out that Scott's equation BY = BYS is the key to unlocking a plethora of fpc's, generated by a variety of production schemes, of which the simplest was found by Boehm: new fpc's are obtained by postfixing the term SI, also known as Smullyan's Owl. We prove that all these newly generated fpc's are indeed new, by considering their clocked BTs. Even so, not all pairs of new fpc's can be discriminated this way. For that purpose we increase the discrimination power with a refinement of the clock notion that we call the 'atomic clock'.
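
    The clock idea can be previewed in miniature. The Haskell sketch below (our own toy reconstruction, not the authors' machinery) implements head reduction on de Bruijn terms and counts the head steps needed to reach the next head normal form; comparing such counts, level by level in the BT, is the essence of clocked discrimination.

        module ClockedBT where

        -- Untyped lambda terms in de Bruijn notation.
        data Term = Var Int | Lam Term | App Term Term
          deriving (Eq, Show)

        shift :: Int -> Int -> Term -> Term
        shift d c (Var k) | k >= c    = Var (k + d)
                          | otherwise = Var k
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App t u) = App (shift d c t) (shift d c u)

        subst :: Int -> Term -> Term -> Term
        subst j s (Var k) | k == j    = s
                          | otherwise = Var k
        subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
        subst j s (App t u) = App (subst j s t) (subst j s u)

        -- One step of head (leftmost) reduction; Nothing at a head normal form.
        headStep :: Term -> Maybe Term
        headStep (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
        headStep (App t u)       = (`App` u) <$> headStep t
        headStep (Lam t)         = Lam <$> headStep t
        headStep _               = Nothing

        -- The root "tick" of a clocked BT: the number of head steps to the
        -- next head normal form (terminates only if an hnf exists).
        clock :: Term -> Int
        clock t = maybe 0 ((1 +) . clock) (headStep t)

        -- Curry's fpc Y = \f.(\x. f (x x)) (\x. f (x x)).
        yCurry :: Term
        yCurry = Lam (App w w)
          where w = Lam (App (Var 1) (App (Var 0) (Var 0)))

        -- Smullyan's Owl O = SI = \a b. b (a b); postfixing it to an fpc
        -- yields a new fpc (Boehm's production scheme).
        owl :: Term
        owl = Lam (Lam (App (Var 0) (App (Var 1) (Var 0))))

        -- Discrimination in miniature: applied to a free variable, Y reaches
        -- its first hnf in 2 head steps, while Y O needs 4:
        --   clock (App yCurry (Var 0))           == 2
        --   clock (App (App yCurry owl) (Var 0)) == 4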

    ASMs and Operational Algorithmic Completeness of Lambda Calculus

    We show that the lambda calculus is a computation model which can simulate, step by step, any sequential deterministic algorithm for any computable function over integers, words, or any other datatype. More formally, given an algorithm relative to a family of computable functions (taken as primitive tools, i.e., as a kind of oracle for the algorithm), for every sufficiently large constant K, each computation step of the algorithm can be simulated by exactly K successive reductions in a natural extension of the lambda calculus with constants for the functions of the considered family. The proof is based on a fixed point technique in the lambda calculus and on Gurevich's sequential Thesis, which allows one to identify sequential deterministic algorithms with Abstract State Machines. This extends to algorithms for partial computable functions, in such a way that finite computations ending with exceptions are associated with finite reductions leading to terms with a particularly simple feature.
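
    The objects being simulated can be pictured as follows; a minimal Haskell sketch assuming a simplified ASM interface (the names ASM, start, step, halts are hypothetical). The theorem then says that each application of the step function can be mirrored by exactly K beta-reductions, for any sufficiently large K.

        module ASMSim where

        -- A sequential deterministic algorithm, viewed through Gurevich's
        -- sequential Thesis as an Abstract State Machine: a state space,
        -- a one-step transition, and a halting predicate (hypothetical
        -- interface, not the paper's formal definition).
        data ASM s = ASM
          { start :: s
          , step  :: s -> s      -- one ASM step = exactly K beta-reductions
          , halts :: s -> Bool
          }

        -- Running the machine is just iterating the step function; the
        -- paper's result is that this iteration can be reproduced inside
        -- an extension of the lambda calculus via a fixed point, at a
        -- constant cost of K reductions per step.
        run :: ASM s -> s
        run (ASM s0 f done) = until done f s0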

    An Invariant Cost Model for the Lambda Calculus

    We define a new cost model for the call-by-value lambda-calculus satisfying the invariance thesis. That is, under the proposed cost model, Turing machines and the call-by-value lambda-calculus can simulate each other within a polynomial time overhead. The model relies only on combinatorial properties of the usual beta-reduction, without any reference to a specific machine or evaluator. In particular, the cost of a single beta reduction is proportional to the difference between the size of the redex and the size of the reduct. In this way, the total cost of normalizing a lambda term takes into account the size of all intermediate results (as well as the number of steps to normal form).
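
    As a sketch of how such a cost could be computed, assume that the size of a term is its number of syntax-tree nodes and that every step additionally pays a constant (here 1), so that size-preserving steps are not free; both assumptions are ours, not the paper's exact definitions.

        module CostModel where

        -- Untyped lambda terms (de Bruijn indices; the representation is ours).
        data Term = Var Int | Lam Term | App Term Term

        -- Number of nodes in the syntax tree.
        size :: Term -> Int
        size (Var _)   = 1
        size (Lam t)   = 1 + size t
        size (App t u) = 1 + size t + size u

        -- Cost of a single beta step from a redex to its reduct:
        -- proportional to the size difference, plus a constant so that
        -- every step costs something (the constant is our assumption).
        betaCost :: Term -> Term -> Int
        betaCost redex reduct = 1 + abs (size reduct - size redex)

        -- The total cost of a reduction sequence t0 -> t1 -> ... -> tn
        -- thus accounts for the sizes of all intermediate results.
        totalCost :: [Term] -> Int
        totalCost ts = sum (zipWith betaCost ts (drop 1 ts))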

    Initial Semantics for Reduction Rules

    We give an algebraic characterization of the syntax and operational semantics of a class of simply-typed languages, such as the language PCF: we characterize simply-typed syntax with variable binding, equipped with reduction rules, via a universal property, namely as the initial object of some category of models. For this purpose, we employ techniques developed in two previous works: in the first, we model syntactic translations between languages over different sets of types as initial morphisms in a category of models; in the second, we characterize untyped syntax with reduction rules as the initial object in a category of models. In the present work, we combine these techniques in order to characterize simply-typed syntax with reduction rules as the initial object in a category. The universal property yields an operator which allows one to specify translations, semantically faithful by construction, between languages over possibly different sets of types. As an example, we upgrade a translation from PCF to the untyped lambda calculus, given in previous work, to account for reduction in the source and target. Specifically, we specify a reduction semantics in the source and target language through suitable rules. By equipping the untyped lambda calculus with the structure of a model of PCF, initiality yields a translation from PCF to the lambda calculus that is faithful with respect to the reduction semantics specified by the rules. This paper is an extended version of an article published in the proceedings of WoLLIC 2012.
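
    The slogan 'initial object in a category of models' can be previewed in miniature, without types, binding, or reduction rules, using plain initial algebras in Haskell: the syntax is the initial algebra of a signature functor, and initiality hands every model a unique structure-preserving translation. This is a folklore sketch, not the paper's construction.

        {-# LANGUAGE DeriveFunctor #-}
        module InitialSemantics where

        -- A signature, presented as an endofunctor; Fix f is the syntax
        -- it generates, i.e. the initial f-algebra.
        newtype Fix f = In (f (Fix f))

        -- A model is an algebra: an interpretation of each construction.
        type Algebra f a = f a -> a

        -- Initiality: the unique morphism from the syntax into any model.
        -- Translations obtained this way are faithful to the structure
        -- "by construction": the paper's theme, minus the reduction rules.
        cata :: Functor f => Algebra f a -> Fix f -> a
        cata alg (In t) = alg (fmap (cata alg) t)

        -- Example signature: natural-number syntax (zero and successor).
        data NatF x = Zero | Succ x deriving Functor

        -- A model of the signature in Int, and the induced translation.
        toInt :: Fix NatF -> Int
        toInt = cata (\t -> case t of Zero -> 0; Succ n -> n + 1)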

    High-level signatures and initial semantics

    We present a device for specifying and reasoning about syntax for datatypes, programming languages, and logic calculi. More precisely, we study a notion of signature for specifying syntactic constructions. In the spirit of Initial Semantics, we define the syntax generated by a signature to be the initial object, if it exists, in a suitable category of models. In our framework, the existence of a syntax associated with a signature is not automatically guaranteed. We identify, via the notion of presentation of a signature, a large class of signatures that do generate a syntax. Our (presentable) signatures subsume classical algebraic signatures (i.e., signatures for languages with variable binding, such as the pure lambda calculus) and extend them to include several other significant examples of syntactic constructions. One key feature of our notions of signature, syntax, and presentation is that they are highly compositional, in the sense that complex examples can be obtained by assembling simpler ones. Moreover, through the Initial Semantics approach, our framework provides, beyond the desired algebra of terms, a well-behaved substitution and the induction and recursion principles associated with the syntax. This paper builds upon ideas from a previous attempt by Hirschowitz and Maggesi, which, in turn, was directly inspired by earlier work of Ghani, Uustalu, and Hamana and of Matthes and Uustalu. The main results presented in the paper are computer-checked within the UniMath system.
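
    One payoff mentioned above, the well-behaved substitution that comes with the generated syntax, can be previewed for the pure lambda calculus via the classic nested-datatype encoding; the Haskell sketch below is the standard folklore construction, not the paper's UniMath development.

        {-# LANGUAGE DeriveFunctor #-}
        module BindingSignature where

        -- Syntax generated by the binding signature of the pure lambda
        -- calculus: Abs binds one extra variable (Nothing) in its body.
        data LC a = Var a | App (LC a) (LC a) | Abs (LC (Maybe a))
          deriving Functor

        -- The "well-behaved substitution" is precisely a monad structure
        -- on the syntax: (>>=) substitutes terms for variables, capture-free.
        instance Applicative LC where
          pure = Var
          tf <*> ta = tf >>= \f -> fmap f ta

        instance Monad LC where
          Var a   >>= f = f a
          App t u >>= f = App (t >>= f) (u >>= f)
          Abs t   >>= f = Abs (t >>= maybe (Var Nothing) (fmap Just . f))

        -- Beta reduction at the root, defined via the substitution above.
        beta :: LC a -> Maybe (LC a)
        beta (App (Abs b) u) = Just (b >>= maybe u Var)
        beta _               = Nothing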

    Glueability of Resource Proof-Structures: Inverting the Taylor Expansion

    A Multiplicative-Exponential Linear Logic (MELL) proof-structure can be expanded into a set of resource proof-structures: its Taylor expansion. We introduce a new criterion characterizing those sets of resource proof-structures that are part of the Taylor expansion of some MELL proof-structure, through a rewriting system acting on both resource and MELL proof-structures.