    On sets of terms with a given intersection type

    We are interested in how much of the structure of a strongly normalizable lambda term is captured by its intersection types, and in how much all the terms of a given type have in common. In this note we consider the theory BCD (Barendregt, Coppo, and Dezani) of intersection types without the top element. We show that for each strongly normalizable lambda term M, with beta-eta normal form N, there exists an intersection type A such that, in BCD, N is the unique beta-eta normal term of type A. A similar result holds for finite sets of strongly normalizable terms. Moreover, for each intersection type A, if the set of all closed terms M such that, in BCD, M has type A, is infinite, then, when closed under beta-eta conversion, this set forms an adequate numeral system for the untyped lambda calculus. A number of related results are also proved.
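
    As a point of reference, the syntax of BCD intersection types without the top element is easy to write down; a minimal sketch in Haskell, with constructor names of our own choosing:

        -- Intersection types of BCD, without the top element omega.
        data Type
          = TVar String        -- type variable
          | Arrow Type Type    -- A -> B
          | Inter Type Type    -- A /\ B
          deriving (Eq, Show)

        -- Characteristic BCD subtyping axioms (stated, not implemented):
        --   A /\ B <= A,   A /\ B <= B
        --   (A -> B) /\ (A -> C) <= A -> (B /\ C)

    Omitting the top element omega means no term is typeable "for free", in keeping with the result above that a single type can pin down a unique beta-eta normal form.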

    On generic context lemmas for lambda calculi with sharing

    This paper proves several generic variants of context lemmas and thus contributes to improving the tools for developing observational semantics based on a reduction semantics for a language. The context lemmas are provided for may-convergence as well as two variants of must-convergence, and for a wide class of extended lambda calculi that satisfy certain abstract conditions. The calculi must have a form of node sharing; e.g., plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. Their scope comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations into other, new variants of extended lambda calculi with sharing, where the language, the reduction rules, and/or the strategy vary, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
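
    To make the two sharing disciplines concrete, here is a small sketch in Haskell of the reduction rules each permits (our own notation, not the paper's; a let-binding plays the role of the sharing construct):

        -- Terms of a lambda calculus with a sharing construct (let).
        data Tm = Var String | Lam String Tm | App Tm Tm | Let String Tm Tm

        -- Plain beta, which copies its argument, is NOT permitted:
        --   App (Lam x b) a        -->  b[a/x]
        --
        -- Weakly sharing calculi allow beta only for variable arguments:
        --   App (Lam x b) (Var y)  -->  b[y/x]
        --
        -- Strongly sharing (call-by-need style) calculi replace beta entirely:
        --   App (Lam x b) a        -->  Let x a b

    In both disciplines a non-variable argument is bound in the environment rather than duplicated, which is the form of node sharing the abstract conditions require.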

    The dagger lambda calculus

    We present a novel lambda calculus that casts the categorical approach to the study of quantum protocols into the rich and well-established tradition of type theory. Our construction extends the linear typed lambda calculus with a linear negation of "trivialised" De Morgan duality. Reduction is realised through explicit substitution, based on a symmetric notion of binding of global scope, with rules acting on the entire typing judgement instead of on a specific subterm. Proofs of subject reduction, confluence, strong normalisation and consistency are provided, and the language is shown to be an internal language for dagger compact categories. Comment: In Proceedings QPL 2014, arXiv:1412.810
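
    For orientation, the dagger of the target structure is an identity-on-objects contravariant involution; a minimal sketch of just that part as a Haskell type class (our own formulation, far short of full dagger compactness):

        import Prelude hiding (id, (.))
        import Control.Category

        -- Dagger category: every f : a -> b has an adjoint dagger f : b -> a.
        -- Laws (stated, not enforced):
        --   dagger (dagger f) == f
        --   dagger id         == id
        --   dagger (g . f)    == dagger f . dagger g
        class Category k => Dagger k where
          dagger :: k a b -> k b a

    Compactness additionally equips every object with a dual together with unit and counit maps; the calculus above is an internal language for categories carrying both structures.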

    A Lambda Term Representation Inspired by Linear Ordered Logic

    We introduce a new nameless representation of lambda terms inspired by ordered logic. At a lambda abstraction, the number and relative positions of all occurrences of the bound variable are stored, and each application carries the additional information of where to cut the variable context into function and argument parts. This way, complete information about free variable occurrences is available at each subterm without requiring a traversal, and environments can be kept exact, so that they only assign values to variables that actually occur in the associated term. Our approach avoids space leaks in interpreters that build function closures. In this article, we prove correctness of the new representation and present an experimental evaluation of its performance in a proof checker for the Edinburgh Logical Framework. Keywords: representation of binders, explicit substitutions, ordered contexts, space leaks, Logical Framework. Comment: In Proceedings LFMTP 2011, arXiv:1110.668
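
    A rough sketch of the shape of such a representation (Haskell; constructor and field names are our invention, and the paper's actual definitions differ in detail):

        -- Every term lives in an *exact* context: each context variable
        -- actually occurs, so environments never carry unused bindings.
        data Tm
          = Var                 -- the unique variable of a singleton context
          | App Split Tm Tm     -- where to cut the context into fun/arg parts
          | Lam Occs Tm         -- occurrence data for the freshly bound variable

        type Split = [Bool]     -- one flag per context variable: True = function side

        data Occs
          = Unused              -- the bound variable does not occur in the body
          | UsedAt [Int]        -- number and relative positions of its occurrences

    Because occurrence information is stored at each binder and each application, no traversal is needed to find free variables, which is what lets closures drop unused environment entries and avoid the space leaks mentioned above.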

    Normalization by Evaluation in the Delay Monad: A Case Study for Coinduction via Copatterns and Sized Types

    In this paper, we present an Agda formalization of a normalizer for simply-typed lambda terms. The normalizer consists of two coinductively defined functions in the delay monad: one is a standard evaluator of lambda terms to closures, the other a type-directed reifier from values to eta-long beta-normal forms. Their composition, normalization-by-evaluation, is shown to be a total function a posteriori, using a standard logical-relations argument. The successful formalization serves as a proof of concept for coinductive programming and reasoning using sized types and copatterns, a new and presently experimental feature of Agda. Comment: In Proceedings MSFP 2014, arXiv:1406.153
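
    The delay monad itself transcribes easily; a sketch in Haskell, where laziness stands in for Agda's coinduction (the sized-type productivity guarantees of the formalization are of course lost):

        -- A possibly non-terminating computation: a value now, or one more
        -- observable step of delay.
        data Delay a = Now a | Later (Delay a)

        instance Functor Delay where
          fmap f (Now a)   = Now (f a)
          fmap f (Later d) = Later (fmap f d)

        instance Applicative Delay where
          pure          = Now
          Now f   <*> d = fmap f d
          Later f <*> d = Later (f <*> d)

        instance Monad Delay where
          Now a   >>= k = k a
          Later d >>= k = Later (d >>= k)

        -- Untyped skeleton of an evaluator to closures in this style
        -- (names are ours, not the formalization's):
        data Tm  = Var Int | Lam Tm | App Tm Tm
        data Val = Clo [Val] Tm

        eval :: [Val] -> Tm -> Delay Val
        eval env (Var i)   = Now (env !! i)
        eval env (Lam t)   = Now (Clo env t)
        eval env (App t u) = do
          Clo env' b <- eval env t
          v          <- eval env u
          Later (eval (v : env') b)   -- guard the recursive call with a step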

    A Case Study on Logical Relations using Contextual Types

    Proofs by logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe the completeness proof of algorithmic equality for simply typed lambda-terms by Crary, where we reason about logically equivalent terms in the proof environment Beluga. There are three key aspects we rely upon: 1) we encode lambda-terms together with their operational semantics and algorithmic equality using higher-order abstract syntax; 2) we directly encode the corresponding logical equivalence of well-typed lambda-terms using recursive types and higher-order functions; 3) we exploit Beluga's support for contexts and the equational theory of simultaneous substitutions. This leads to a direct and compact mechanization, demonstrating Beluga's strength at formalizing logical relations proofs. Comment: In Proceedings LFMTP 2015, arXiv:1507.0759
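
    Higher-order abstract syntax, the first ingredient, reuses the meta-language's binders for the object language's; a minimal sketch in Haskell (Beluga's contextual types and first-class contexts go well beyond this):

        -- Object-level lambda terms, with binding delegated to the host language:
        data Tm
          = Lam (Tm -> Tm)   -- object-level abstraction as a meta-level function
          | App Tm Tm

        -- Object-level beta reduction is meta-level application:
        whnf :: Tm -> Tm
        whnf (App f a) = case whnf f of
          Lam b -> whnf (b a)
          f'    -> App f' a
        whnf t = t

        -- Example: (\x. x) (\y. y) reduces to \y. y
        example :: Tm
        example = whnf (App (Lam (\x -> x)) (Lam (\y -> y)))

    With binding handled this way, substitution comes for free from the host language, which is part of what makes such mechanizations compact.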

    Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times

    We derive sample size formulae for comparing two negative binomial rates, based on both the relative and absolute rate difference metrics, in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison under the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment group. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in follow-up time, by setting the follow-up time for all individuals to the mean follow-up time, may greatly underestimate the required sample size, resulting in underpowered studies. Methods are also provided for back-calculating the dispersion parameter from published summary results.
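
    While the exact formulae depend on the chosen effect metric and variance estimate, the generic shape of a per-group sample size for a noninferiority test on the log rate ratio is worth recalling (a hedged sketch in our own symbols, not the paper's derivation):

        n \;\ge\; \frac{(z_{1-\alpha} + z_{1-\beta})^2
                  \left[ \left( \tfrac{1}{\lambda_1 \bar{t}_1} + \kappa_1 \right)
                       + \left( \tfrac{1}{\lambda_0 \bar{t}_0} + \kappa_0 \right) \right]}
                 {\left( \log(\lambda_1 / \lambda_0) - \log \delta_0 \right)^2}

    Here the \lambda_j are the event rates, the \bar{t}_j the mean follow-up times, the \kappa_j the group-specific dispersion parameters (so a count with mean \mu has variance \mu + \kappa \mu^2), and \delta_0 the noninferiority margin on the rate-ratio scale. Since \mathrm{E}[1/t] \ge 1/\bar{t} by Jensen's inequality, plugging the mean follow-up time into the variance term understates it, which is exactly the source of the underpowering the abstract warns about.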