
    A Denotational Account of Untyped Normalization by Evaluation


    From Self-Interpreters to Normalization by Evaluation

    We characterize normalization by evaluation as the composition of a self-interpreter with a self-reducer using a special representation scheme, in the sense of Mogensen (1992). We do so by deriving, in a systematic way, an untyped normalization by evaluation algorithm from a standard interpreter for the λ-calculus. The derived algorithm is not novel; indeed, other published algorithms may be obtained in the same manner through appropriate adaptations of the representation scheme.
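    The derivation is described only informally above, so here is a minimal OCaml sketch of an untyped normalization-by-evaluation algorithm of the shape the abstract describes: a standard interpreter composed with a read-back (reification) phase. The term and value types and the de Bruijn conventions are our own illustrative choices, not necessarily the paper's representation scheme.

        (* Terms use de Bruijn indices; values use de Bruijn levels for
           fresh variables, a common convention for NbE. *)
        type term =
          | Var of int                   (* de Bruijn index *)
          | Lam of term
          | App of term * term

        type value =
          | VLam of (value -> value)     (* functions become host closures *)
          | VNeutral of neutral          (* stuck terms headed by a variable *)
        and neutral =
          | NVar of int                  (* de Bruijn level *)
          | NApp of neutral * value

        (* The "standard interpreter": evaluate a term in an environment. *)
        let rec eval (env : value list) : term -> value = function
          | Var i -> List.nth env i
          | Lam body -> VLam (fun v -> eval (v :: env) body)
          | App (f, a) ->
              (match eval env f with
               | VLam g -> g (eval env a)
               | VNeutral n -> VNeutral (NApp (n, eval env a)))

        (* Read-back: turn a semantic value back into a beta-normal term.
           [depth] counts enclosing binders, converting levels to indices. *)
        let rec reify depth = function
          | VLam f -> Lam (reify (depth + 1) (f (VNeutral (NVar depth))))
          | VNeutral n -> reify_neutral depth n
        and reify_neutral depth = function
          | NVar lvl -> Var (depth - lvl - 1)
          | NApp (n, v) -> App (reify_neutral depth n, reify depth v)

        (* Normalization by evaluation: interpret, then read back. Partial:
           it diverges on terms without beta-normal forms. *)
        let normalize (t : term) : term = reify 0 (eval [] t)

    For example, normalize (App (Lam (Var 0), Lam (Var 0))) returns Lam (Var 0), the β-normal form of (λx.x) (λx.x).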

    Denotational Aspects of Untyped Normalization by Evaluation

    We show that the standard normalization-by-evaluation construction for the simply-typed λβη-calculus has a natural counterpart for the untyped λβ-calculus, with the central type-indexed logical relation replaced by a "recursively defined" invariant relation, in the style of Pitts. In fact, the construction can be seen as generalizing a computational-adequacy argument for an untyped, call-by-name language to normalization instead of evaluation. In the untyped setting, not all terms have normal forms, so the normalization function is necessarily partial. We establish its correctness in the senses of soundness (the output term, if any, is in normal form and β-equivalent to the input term); identification (β-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness, as well as the proofs, extend naturally to this generalization.
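    To make the three correctness properties concrete, here is a small test in the same ML-like call-by-value style, reusing the term type and the normalize function from the sketch above (the example terms are ours, not the paper's):

        (* Church numeral 1 and a beta-equivalent redex. *)
        let one  = Lam (Lam (App (Var 1, Var 0)))
        let one' = App (Lam (Var 0), one)

        let () =
          (* identification: beta-equivalent terms map to the same result *)
          assert (normalize one = normalize one');
          (* soundness: the output is in normal form and beta-equivalent
             to the input *)
          assert (normalize one' = one)

        (* Omega has no normal form, so [normalize omega] diverges: the
           normalization function is partial, exactly as the abstract says. *)
        let omega = App (Lam (App (Var 0, Var 0)), Lam (App (Var 0, Var 0)))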

    Efficient normalization by evaluation

    Dependently typed theorem provers allow arbitrary terms in types. It is convenient to identify large classes of terms during type checking, hence many such systems provide some form of conversion rule. A standard algorithm for testing the convertibility of two types consists in normalizing them and then testing the normal forms for syntactic equality. Normalization by evaluation is a standard technique enabling the use of existing compilers and runtimes for functional languages to implement normalizers, without peeking under the hood, yielding a system that is fast yet cheap in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that with the aid of a standard optimization for higher-order programs (namely uncurrying) and the reuse of the evaluator's pattern-matching facilities for datatypes, we may obtain a normalizer that evaluates non-functional values about as fast as the underlying evaluator, but as an added benefit can also fully normalize functional values; put another way, it partially evaluates functions efficiently.
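    The key implementation idea, reusing the host evaluator's datatypes and pattern matching for first-order values, can be sketched as follows. This is our own toy OCaml illustration, not the paper's implementation, and it omits the uncurrying optimization: naturals are represented by native constructors, so case analysis runs at the speed of the underlying evaluator, and only functional values would need a read-back step.

        type term =
          | Var of int
          | Lam of term
          | App of term * term
          | Zero
          | Succ of term
          | Case of term * term * term   (* scrutinee, zero branch, succ
                                            branch (binds the predecessor) *)

        type value =
          | VLam of (value -> value)
          | VZero
          | VSucc of value               (* data stays a native constructor *)
          | VStuck                       (* placeholder: a full normalizer
                                            would keep a neutral term here *)

        let rec eval env = function
          | Var i -> List.nth env i
          | Lam b -> VLam (fun v -> eval (v :: env) b)
          | App (f, a) ->
              (match eval env f with
               | VLam g -> g (eval env a)
               | _ -> VStuck)
          | Zero -> VZero
          | Succ n -> VSucc (eval env n)
          | Case (n, z, s) ->
              (* the host evaluator's own pattern matching does the work *)
              (match eval env n with
               | VZero -> eval env z
               | VSucc p -> eval (p :: env) s
               | _ -> VStuck)

        (* Example: predecessor of 2 runs entirely in the host evaluator. *)
        let pred = Lam (Case (Var 0, Zero, Var 0))
        let two = Succ (Succ Zero)
        let () = assert (eval [] (App (pred, two)) = VSucc VZero)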

    On the Expressive Power of User-Defined Effects: Effect Handlers, Monadic Reflection, Delimited Control

    We compare the expressive power of three programming abstractions for user-defined computational effects: Bauer and Pretnar's effect handlers, Filinski's monadic reflection, and delimited control without answer-type modification. This comparison allows a precise discussion of the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features. We present three calculi, one per abstraction, extending Levy's call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic meta-theoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen's notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. We supplement our development with a mechanised Abella formalisation.
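    To give a concrete taste of the first of these three abstractions, here is a small, self-contained program using the deep effect handlers built into OCaml 5, which follow the Bauer-Pretnar style the paper studies (OCaml's handlers are untyped, so no effect system is involved). The Ask effect and its handler are our own toy example, unrelated to the paper's calculi or its Abella formalisation.

        open Effect
        open Effect.Deep

        (* Declare a user-defined operation: a reader-style [Ask]. *)
        type _ Effect.t += Ask : int Effect.t

        (* A computation that performs the operation without fixing its
           meaning. *)
        let comp () = perform Ask + perform Ask

        (* A deep handler gives [Ask] a meaning by deciding how to resume
           the captured continuation [k]; here every request gets 21. *)
        let run () =
          match_with comp ()
            { retc = (fun x -> x);
              exnc = raise;
              effc = (fun (type a) (eff : a Effect.t) ->
                match eff with
                | Ask -> Some (fun (k : (a, _) continuation) -> continue k 21)
                | _ -> None) }

        let () = assert (run () = 42)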

    Lecture notes on the lambda calculus

    This is a set of lecture notes that developed out of courses on the lambda calculus that I taught at the University of Ottawa in 2001 and at Dalhousie University in 2007 and 2013. Topics covered in these notes include the untyped lambda calculus, the Church-Rosser theorem, combinatory algebras, the simply-typed lambda calculus, the Curry-Howard isomorphism, weak and strong normalization, polymorphism, type inference, denotational semantics, complete partial orders, and the language PCF.