
    The call-by-value Lambda-Calculus with generalized applications

    The lambda-calculus with generalized applications is the Curry-Howard counterpart to the system of natural deduction with generalized elimination rules for intuitionistic implicational logic. In this paper we identify a call-by-value variant of the system and prove confluence, strong normalization, and standardization. Finally, we show that the call-by-name (cbn) and call-by-value (cbv) variants of the system simulate each other via mappings based on extensions of the "protecting-by-a-lambda" compilation technique. Funded by FCT - Fundação para a Ciência e a Tecnologia (UID/MAT/00013/2013).
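
    To make the shape of generalized applications concrete, here is a minimal Haskell sketch of the term syntax, together with one common reading of the "protecting-by-a-lambda" (thunking) idea. The constructor names and the dummy "unit" argument are illustrative assumptions, not the paper's definitions.

    ```haskell
    -- Terms with generalized applications: t(u, x.r) applies t to u
    -- and binds the result to x in the continuation r.
    data Term
      = Var String
      | Lam String Term
      | GApp Term Term (String, Term)   -- t(u, x.r)
      deriving Show

    -- Ordinary application t u is recovered as t(u, z.z).
    app :: Term -> Term -> Term
    app t u = GApp t u ("z", Var "z")

    -- One standard reading of "protecting-by-a-lambda": freeze a term
    -- under a dummy abstraction so a call-by-value evaluator treats it
    -- as a value, and force it later by applying it.
    protect, force :: Term -> Term
    protect t = Lam "_" t
    force t   = app t (Var "unit")   -- "unit" is a hypothetical dummy constant
    ```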

    An Abstract Factorization Theorem for Explicit Substitutions

    We study a simple form of standardization, here called factorization, for explicit substitution calculi, i.e. lambda-calculi where beta-reduction is decomposed into various rules. These calculi, despite being non-terminating and non-orthogonal, have a key feature: each rule terminates when considered separately. It is well known that the study of rewriting properties simplifies in the presence of termination (e.g. confluence reduces to local confluence). We exploit this observation to develop an abstract theorem deducing factorization from a few axioms on local diagrams. The axioms are simple and easy to check; in particular, they do not mention residuals. The abstract theorem is then applied to some explicit substitution calculi related to Proof-Nets. We show how to recover standardization by levels, we model both call-by-name and call-by-value calculi, and we characterize linear head reduction via a factorization theorem for a linear calculus of substitutions.
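
    The following Haskell sketch illustrates the kind of decomposition the abstract refers to, using a textbook explicit-substitution calculus rather than the paper's Proof-Net-related calculi; the rule and constructor names are assumptions made for illustration.

    ```haskell
    -- Beta-reduction split into a rule that *creates* a substitution
    -- and rules that *propagate* it.
    data Term
      = Var String
      | Lam String Term
      | App Term Term
      | Sub Term String Term   -- t[x := u], an explicit substitution
      deriving Show

    -- Rule B: fire a beta-redex by recording the substitution.
    ruleB :: Term -> Maybe Term
    ruleB (App (Lam x t) u) = Just (Sub t x u)
    ruleB _                 = Nothing

    -- Propagation: push t[x := u] one step into t. Each rule below
    -- terminates when considered separately, which is exactly the key
    -- feature the abstract factorization theorem exploits.
    propagate :: Term -> Maybe Term
    propagate (Sub (Var y)   x u) = Just (if x == y then u else Var y)
    propagate (Sub (App t s) x u) = Just (App (Sub t x u) (Sub s x u))
    propagate (Sub (Lam y t) x u)
      | y /= x    = Just (Lam y (Sub t x u))   -- assumes y not free in u
      | otherwise = Just (Lam y t)             -- x is shadowed: drop it
    propagate _                   = Nothing
    ```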

    Some properties of the lambda-mu-and-or-calculus

    In this paper, we present the lambda-mu-and-or-calculus, which at the typed level corresponds to the full classical propositional natural deduction system. The Church-Rosser property of this system is proved using standardization and the finiteness of developments theorem. We also define the leftmost reduction and prove that it is a winning strategy.
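
    As a rough illustration, the term syntax of such a calculus can be rendered as a Haskell datatype; this reconstruction follows the standard lambda-mu literature and is an assumption, not the paper's exact grammar.

    ```haskell
    -- Terms for a lambda-mu calculus with conjunction and disjunction,
    -- mirroring full classical propositional natural deduction.
    data Term
      = Var String                  -- axiom
      | Lam String Term             -- implication introduction
      | App Term Term               -- implication elimination
      | Pair Term Term              -- conjunction introduction
      | Proj1 Term | Proj2 Term     -- conjunction elimination
      | Inl Term | Inr Term         -- disjunction introduction
      | Case Term (String, Term) (String, Term)  -- disjunction elimination
      | Mu String Term              -- classical control (mu-abstraction)
      | Named String Term           -- [a]t: naming a term with a mu-variable
      deriving Show
    ```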

    Normalizing the Taylor expansion of non-deterministic λ-terms, via parallel reduction of resource vectors

    It has been known since Ehrhard and Regnier's seminal work on the Taylor expansion of λ-terms that this operation commutes with normalization: the expansion of a λ-term is always normalizable and its normal form is the expansion of the Böhm tree of the term. We generalize this result to the non-uniform setting of the algebraic λ-calculus, i.e. the λ-calculus extended with linear combinations of terms. This requires us to tackle two difficulties: foremost is the fact that Ehrhard and Regnier's techniques rely heavily on the uniform, deterministic nature of the ordinary λ-calculus, and thus cannot be adapted; second is the absence of any satisfactory generic extension of the notion of Böhm tree in the presence of quantitative non-determinism, which is reflected by the fact that the Taylor expansion of an algebraic λ-term is not always normalizable. Our solution is to provide a fine-grained study of the dynamics of β-reduction under Taylor expansion, by introducing a notion of reduction on resource vectors, i.e. infinite linear combinations of resource λ-terms. The latter form the multilinear fragment of the differential λ-calculus, and resource vectors are the target of the Taylor expansion of λ-terms. We show that the reduction of resource vectors contains the image of any β-reduction step, from which we deduce that Taylor expansion and normalization commute on the nose. We moreover identify a class of algebraic λ-terms, encompassing both normalizable algebraic λ-terms and arbitrary ordinary λ-terms: the expansion of these is always normalizable, which guides the definition of a generalization of Böhm trees to this setting.
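
    The resource terms mentioned above can be sketched as follows; the datatype and the toy approximant function are illustrative assumptions showing only the shape of the multilinear fragment, not the paper's machinery of resource vectors.

    ```haskell
    -- Resource lambda-terms: application takes a finite bag of
    -- arguments, each of which must be used exactly once.
    data ResTerm
      = RVar String
      | RLam String ResTerm
      | RApp ResTerm [ResTerm]   -- the list stands for a multiset (bag)
      deriving Show

    -- Taylor expansion sends an application t u to the infinite sum
    --   sum over n of (1/n!) . T(t)[T(u), ..., T(u)]   (n copies),
    -- so a resource vector is an infinite linear combination of
    -- ResTerms. This toy function returns only the degree-n summand
    -- of a single application node.
    approximant :: Int -> ResTerm -> ResTerm -> ResTerm
    approximant n t u = RApp t (replicate n u)
    ```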

    The Measurement Calculus

    Measurement-based quantum computation has emerged from the physics community as a new approach to quantum computation in which the notion of measurement is the main driving force of computation. This is in contrast with the more traditional circuit model, which is based on unitary operations. Among measurement-based quantum computation methods, the recently introduced one-way quantum computer stands out as fundamental. We develop a rigorous mathematical model underlying the one-way quantum computer and present a concrete syntax and operational semantics for programs, which we call patterns, and an algebra of these patterns derived from a denotational semantics. More importantly, we present a calculus for reasoning locally and compositionally about these patterns. We present a rewrite theory and prove a general standardization theorem which allows all patterns to be put in a semantically equivalent standard form. Standardization has far-reaching consequences: a new physical architecture based on performing all the entanglement at the beginning, parallelization by exposing the dependency structure of measurements, and expressiveness theorems. Furthermore, we formalize several other measurement-based models: the Teleportation, Phase, and Pauli models, and present compositional embeddings of them into and from the one-way model. This allows us to transfer all the theory we develop for the one-way model to these models. This shows that the framework we have developed has a general impact on measurement-based computation and is not just particular to the one-way quantum computer.
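
    As a rough sketch of what patterns and the standard form look like, here is an illustrative Haskell rendering of the usual E/M/X/Z command syntax of the one-way model; measurement dependencies (signals) are elided, and the names are assumptions rather than the paper's formal definitions.

    ```haskell
    type Qubit = Int
    type Angle = Double

    data Command
      = E Qubit Qubit   -- entangle two qubits (controlled-Z)
      | M Qubit Angle   -- one-qubit measurement at a given angle
      | X Qubit         -- Pauli-X correction (signal dependencies elided)
      | Z Qubit         -- Pauli-Z correction (signal dependencies elided)
      deriving Show

    -- Standardization puts every pattern into "EMC" form: entanglement
    -- first, then measurements, then corrections. This predicate only
    -- checks that shape; actually rewriting a pattern into it requires
    -- the calculus's commutation rules.
    isStandard :: [Command] -> Bool
    isStandard cs = and (zipWith (<=) phases (drop 1 phases))
      where
        phases = map phase cs
        phase :: Command -> Int
        phase (E _ _) = 0
        phase (M _ _) = 1
        phase (X _)   = 2
        phase (Z _)   = 2
    ```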

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
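
    The size explosion problem is easy to reproduce concretely. The Haskell sketch below builds a standard duplicating family (not necessarily the paper's own example): innermost evaluation of the n-th term performs n beta steps yet returns a normal form of size Theta(2^n), so the output is exponentially bigger than the run that produced it.

    ```haskell
    data Term = Var String | Lam String Term | App Term Term deriving Show

    size :: Term -> Int
    size (Var _)   = 1
    size (Lam _ t) = 1 + size t
    size (App t u) = 1 + size t + size u

    -- dup = \x. \f. f x x  duplicates an already-evaluated argument.
    dup :: Term
    dup = Lam "x" (Lam "f" (App (App (Var "f") (Var "x")) (Var "x")))

    -- e n = dup (dup (... (dup z)))  with n occurrences of dup.
    e :: Int -> Term
    e 0 = Var "z"
    e n = App dup (e (n - 1))

    -- Naive substitution; safe for this family, since the only free
    -- variable "z" never clashes with the binders "x" and "f".
    subst :: String -> Term -> Term -> Term
    subst x v (Var y)   = if x == y then v else Var y
    subst x v (Lam y t) = if x == y then Lam y t else Lam y (subst x v t)
    subst x v (App t u) = App (subst x v t) (subst x v u)

    -- Innermost (call-by-value) evaluation: n beta steps on (e n),
    -- but size (eval (e n)) grows as Theta(2^n).
    eval :: Term -> Term
    eval (App t u) = case eval t of
      Lam x b -> eval (subst x (eval u) b)
      t'      -> App t' (eval u)
    eval t = t
    ```

    For instance, size (eval (e 10)) is 5116 (the sizes obey s(n) = 4 + 2*s(n-1), i.e. s(n) = 5*2^n - 4) after only ten beta steps.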

    A standardisation proof for algebraic pattern calculi

    This work gives some insights and results on standardisation for call-by-name pattern calculi. More precisely, we define standard reductions for a pattern calculus with constructor-based data terms and patterns. This notion is based on the reduction steps that are needed to match an argument against a given pattern. We prove the Standardisation Theorem by using the technique developed by Takahashi and Crary for the lambda-calculus. The proof is based on the fact that any development can be specified as a sequence of head steps followed by internal reductions, i.e. reductions in which no head steps are involved.
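
    The matching-driven notion of reduction the abstract describes can be sketched in Haskell as follows; this is an illustrative constructor-pattern calculus, and its syntax and match function are assumptions rather than the paper's exact definitions.

    ```haskell
    data Pat  = PVar String | PCon String [Pat] deriving Show
    data Term = Var String | Con String [Term]
              | Lam Pat Term | App Term Term deriving Show

    -- Match a pattern against a term, producing bindings on success.
    -- The "head steps" of the standardisation proof are precisely the
    -- steps needed to expose enough constructors in the argument for
    -- this matching to succeed, after which the redex (\p. t) u fires.
    match :: Pat -> Term -> Maybe [(String, Term)]
    match (PVar x) t = Just [(x, t)]
    match (PCon c ps) (Con d ts)
      | c == d && length ps == length ts
      = concat <$> sequence (zipWith match ps ts)
    match _ _ = Nothing
    ```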

    The Logical Essence of Compiling with Continuations
