
    An extensible equality checking algorithm for dependent type theories

    We present a general and user-extensible equality checking algorithm that is applicable to a large class of type theories. The algorithm has a type-directed phase for applying extensionality rules and a normalization phase based on computation rules, where both kinds of rules are defined using the type-theoretic concept of object-invertible rules. We also give sufficient syntactic criteria for recognizing such rules, as well as a simple pattern-matching algorithm for applying them. A third component of the algorithm is a suitable notion of principal arguments, which determines a notion of normal form. By varying these, we obtain known notions, such as weak head-normal and strong normal forms. We prove that our algorithm is sound. We implemented it in the Andromeda 2 proof assistant, which supports user-definable type theories. The user need only provide the equality rules they wish to use, which the algorithm automatically classifies as computation or extensionality rules, and select appropriate principal arguments.
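
    A minimal Haskell sketch of the two-phase shape of such a checker, specialised to the simply typed λ-calculus with β as the only computation rule, η as the only extensionality rule, and weak head-normal forms (all names here are illustrative, not Andromeda 2 code; the paper's algorithm is generic over user-supplied rules):

        -- Simply typed terms with de Bruijn indices.
        data Ty = Base | Arr Ty Ty deriving Eq
        data Tm = Var Int | Lam Tm | App Tm Tm deriving Eq

        -- Shift free variables up by one (used when going under a binder).
        shift :: Tm -> Tm
        shift = go 0 where
          go c (Var i)   = Var (if i >= c then i + 1 else i)
          go c (Lam b)   = Lam (go (c + 1) b)
          go c (App f a) = App (go c f) (go c a)

        -- Capture-avoiding substitution of s for index k.
        subst :: Int -> Tm -> Tm -> Tm
        subst k s (Var i) | i == k    = s
                          | i > k     = Var (i - 1)
                          | otherwise = Var i
        subst k s (Lam b)   = Lam (subst (k + 1) (shift s) b)
        subst k s (App f a) = App (subst k s f) (subst k s a)

        -- Normalization phase: weak head-normal form via the computation (β) rule.
        whnf :: Tm -> Tm
        whnf (App f a) = case whnf f of
          Lam b -> whnf (subst 0 a b)
          f'    -> App f' a
        whnf t = t

        -- Type-directed phase: at arrow type, the extensionality (η) rule
        -- compares both sides applied to a fresh variable.
        equal :: Ty -> Tm -> Tm -> Bool
        equal (Arr _ b) t u =
          equal b (App (shift t) (Var 0)) (App (shift u) (Var 0))
        equal Base t u = equalWhnf (whnf t) (whnf u)

        -- Structural comparison of weak head-normal terms.
        equalWhnf :: Tm -> Tm -> Bool
        equalWhnf (Var i)   (Var j)   = i == j
        equalWhnf (App f a) (App g b) =
          -- simplification: a real checker compares arguments at their types
          equalWhnf f g && equalWhnf (whnf a) (whnf b)
        equalWhnf _ _ = False

    For instance, equal (Arr Base Base) (Lam (Var 0)) (Lam (App (Lam (Var 0)) (Var 0))) evaluates to True: the inner β-redex is removed by the normalization phase before the structural comparison.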

    On the Logical Strength of Confluence and Normalisation for Cyclic Proofs


    13th International Workshop on Expressiveness in Concurrency


    The SL synchronous language, revisited

    We revisit the SL synchronous programming model introduced by Boussinot and De Simone (IEEE Trans. on Soft. Eng., 1996). We discuss an alternative design of the model including thread spawning and recursive definitions, and we explore some basic properties of the revised model: determinism, reactivity, CPS translation to a tail recursive form, computational expressivity, and a compositional notion of program equivalence.
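
    A toy Haskell rendition (names and simplifications mine, not the paper's) of the SL discipline underlying these properties: within an instant, a thread may react immediately to the presence of a signal, but its reaction to a signal's absence is postponed to the next instant, which is what makes the model deterministic:

        import Data.List (nub)

        data Thread
          = Stop
          | Emit String Thread            -- emit a signal, continue
          | Present String Thread Thread  -- present: run first branch now;
                                          -- absent: run second branch at the
                                          -- NEXT instant
          deriving Show

        -- Run one thread with the signals known so far; return its emissions
        -- and either Nothing (finished) or its blocked residual.
        step :: [String] -> Thread -> ([String], Maybe Thread)
        step _    Stop       = ([], Nothing)
        step sigs (Emit s k) = let (es, r) = step (s : sigs) k in (s : es, r)
        step sigs p@(Present s t _)
          | s `elem` sigs = step sigs t
          | otherwise     = ([], Just p)  -- absence not yet established

        -- One instant: iterate to a fixpoint of the emitted-signal set; only
        -- then is absence certain, so else-branches carry over to next instant.
        instant :: [String] -> [Thread] -> ([String], [Thread])
        instant sigs ts =
          let results = map (step sigs) ts
              sigs'   = nub (sigs ++ concatMap fst results)
              blocked = [ r | (_, Just r) <- results ]
          in if sigs' == sigs
               then (sigs, [ e | Present _ _ e <- blocked ])
               else instant sigs' blocked

    For example, instant [] [Present "go" (Emit "ok" Stop) Stop, Emit "go" Stop] yields (["go","ok"], []): the emitter unblocks the watcher within the same instant, and no thread is carried over.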

    An Abstract Factorization Theorem for Explicit Substitutions

    We study a simple form of standardization, here called factorization, for explicit substitution calculi, i.e. lambda-calculi where beta-reduction is decomposed into various rules. These calculi, despite being non-terminating and non-orthogonal, have a key feature: each rule terminates when considered separately. It is well known that the study of rewriting properties simplifies in the presence of termination (e.g. confluence reduces to local confluence). This remark is exploited to develop an abstract theorem deducing factorization from some axioms on local diagrams. The axioms are simple and easy to check; in particular, they do not mention residuals. The abstract theorem is then applied to some explicit substitution calculi related to Proof-Nets. We show how to recover standardization by levels, we model both call-by-name and call-by-value calculi, and we characterize linear head reduction via a factorization theorem for a linear calculus of substitutions.
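
    A minimal Haskell sketch of such a decomposition (in the spirit of explicit substitution calculi generally, not the paper's proof-net calculi): the beta rule only creates a closure, and separate propagation rules, each of which terminates on its own, carry it to the variables:

        -- Terms with explicit substitutions: Sub t x u stands for t[x := u].
        data Tm = V String | Lam String Tm | App Tm Tm | Sub Tm String Tm
          deriving Show

        -- One rule application at the root, if any.
        step :: Tm -> Maybe Tm
        step (App (Lam x t) u)   = Just (Sub t x u)                    -- B
        step (Sub (V y) x u)
          | y == x               = Just u                              -- Var
          | otherwise            = Just (V y)                          -- Gc
        step (Sub (App t v) x u) = Just (App (Sub t x u) (Sub v x u))  -- App
        step (Sub (Lam y t) x u) = Just (Lam y (Sub t x u))            -- Lam
          -- (assumes Barendregt's convention: y /= x and y not free in u)
        step _                   = Nothing

    Rule B terminates on its own, and so does the substitution group (each propagation step pushes Sub towards the leaves), yet their union implements full, possibly non-terminating beta-reduction; factorization concerns reordering interleavings of such rules.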

    On the Semantics of Intensionality and Intensional Recursion

    Intensionality is a phenomenon that occurs in logic and computation. In the most general sense, a function is intensional if it operates at a level finer than (extensional) equality. This is a familiar setting for computer scientists, who often study different programs or processes that are interchangeable, i.e. extensionally equal, even though they are not implemented in the same way, and are thus intensionally distinct. Concomitant with intensionality is the phenomenon of intensional recursion, which refers to the ability of a program to have access to its own code. In computability theory, intensional recursion is enabled by Kleene's Second Recursion Theorem. This thesis is concerned with the crafting of a logical toolkit through which these phenomena can be studied. Our main contribution is a framework in which mathematical and computational constructions can be considered either extensionally, i.e. as abstract values, or intensionally, i.e. as fine-grained descriptions of their construction. Once this is achieved, it may be used to analyse intensional recursion. (DPhil thesis, Department of Computer Science & St John's College, University of Oxford.)
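
    A small Haskell illustration (mine, not the thesis's) of the distinction: two programs that are extensionally equal, i.e. compute the same function, yet are intensionally distinct, since any device that inspects their code, such as a complexity analyser, can tell them apart:

        -- Both compute n(n+1)/2, so they are extensionally equal; their
        -- definitions (and running times) differ, so they are
        -- intensionally distinct.
        sumTo, sumTo' :: Int -> Int
        sumTo  n = sum [1 .. n]           -- linear-time definition
        sumTo' n = n * (n + 1) `div` 2    -- constant-time closed form

    Intensional recursion is then the ability of a program to refer to its own definition, and not merely to the function it computes, as in Kleene's Second Recursion Theorem.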

    Normalizing the Taylor expansion of non-deterministic λ-terms, via parallel reduction of resource vectors

    It has been known since Ehrhard and Regnier's seminal work on the Taylor expansion of λ-terms that this operation commutes with normalization: the expansion of a λ-term is always normalizable and its normal form is the expansion of the Böhm tree of the term. We generalize this result to the non-uniform setting of the algebraic λ-calculus, i.e. the λ-calculus extended with linear combinations of terms. This requires us to tackle two difficulties: foremost is the fact that Ehrhard and Regnier's techniques rely heavily on the uniform, deterministic nature of the ordinary λ-calculus, and thus cannot be adapted; second is the absence of any satisfactory generic extension of the notion of Böhm tree in the presence of quantitative non-determinism, which is reflected by the fact that the Taylor expansion of an algebraic λ-term is not always normalizable. Our solution is to provide a fine-grained study of the dynamics of β-reduction under Taylor expansion, by introducing a notion of reduction on resource vectors, i.e. infinite linear combinations of resource λ-terms. The latter form the multilinear fragment of the differential λ-calculus, and resource vectors are the target of the Taylor expansion of λ-terms. We show that the reduction of resource vectors contains the image of any β-reduction step, from which we deduce that Taylor expansion and normalization commute on the nose. We moreover identify a class of algebraic λ-terms, encompassing both normalizable algebraic λ-terms and arbitrary ordinary λ-terms: the expansion of these is always normalizable, which guides the definition of a generalization of Böhm trees to this setting.
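
    A minimal Haskell sketch of resource terms and of the multilinear β-rule driving such reductions (illustrative only: the paper works with infinite linear combinations, while this fragment handles a single root redex on finite sums):

        import Data.List (permutations)

        -- Resource terms: an application carries a finite bag of arguments,
        -- each of which must be used exactly once.
        data RT = RV String | RLam String RT | RApp RT [RT]
          deriving Show

        -- Number of free occurrences of x.
        occs :: String -> RT -> Int
        occs x (RV y)      = if x == y then 1 else 0
        occs x (RLam y t)  = if x == y then 0 else occs x t
        occs x (RApp t us) = occs x t + sum (map (occs x) us)

        -- Replace successive free occurrences of x by successive bag elements.
        fill :: String -> [RT] -> RT -> RT
        fill x us t = fst (go t us)
          where
            go (RV y) vs
              | y == x    = (head vs, tail vs)
              | otherwise = (RV y, vs)
            go (RLam y b) vs
              | y == x    = (RLam y b, vs)
              | otherwise = let (b', vs') = go b vs in (RLam y b', vs')
            go (RApp f as) vs =
              let (f', vs')   = go f vs
                  (as', vs'') = goList as vs'
              in (RApp f' as', vs'')
            goList []       vs = ([], vs)
            goList (a : as) vs =
              let (a', vs')   = go a vs
                  (as', vs'') = goList as vs'
              in (a' : as', vs'')

        -- Resource β at the root: a formal sum over all ways of assigning the
        -- bag's elements to the occurrences of x; a mismatch yields 0, i.e.
        -- the empty sum.
        rbeta :: RT -> [RT]
        rbeta (RApp (RLam x t) us)
          | occs x t /= length us = []
          | otherwise             = [ fill x p t | p <- permutations us ]
        rbeta t = [t]

    For instance, rbeta (RApp (RLam "x" (RApp (RV "x") [RV "x"])) [RV "y", RV "z"]) is the two-element sum [RApp (RV "y") [RV "z"], RApp (RV "z") [RV "y"]].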

    Extensional Taylor Expansion

    We introduce a calculus of extensional resource terms. These are resource terms à la Ehrhard-Regnier, but in infinite η-long form, while retaining a finite syntax and dynamics: in particular, we prove strong confluence and normalization. We then define an extensional version of Taylor expansion, mapping ordinary λ-terms to sets (or infinite linear combinations) of extensional resource terms: just as for ordinary Taylor expansion, the dynamics of our resource calculus allows us to simulate the β-reduction of λ-terms; the extensional nature of the expansion shows in that we are also able to simulate η-reduction. In a sense, extensional resource terms form a language of (not necessarily normal) finite approximants of Nakajima trees, much as ordinary resource terms are approximants of Böhm trees. Indeed, we show that the equivalence induced on λ-terms by the normalization of extensional Taylor expansion is nothing but H*, the greatest consistent sensible λ-theory. Taylor expansion has profoundly renewed the approximation theory of the λ-calculus by providing a quantitative alternative to order-based approximation techniques, such as Scott continuity and Böhm trees. Extensional Taylor expansion enjoys similar advantages: e.g., to exhibit models of H*, it is now sufficient to provide a model of the extensional resource calculus. We apply this strategy to give a new, elementary proof of a result by Manzonetto: H* is the λ-theory induced by a well-chosen reflexive object in the relational model of the λ-calculus.
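
    A small Haskell sketch of the η-expansion at work behind "infinite η-long form" (mine, and finite: the simply-typed analogue, applied to a neutral term, whereas Nakajima trees perform this expansion infinitely in the untyped setting):

        data Ty = O | Ty :-> Ty
        data Tm = Var String | Lam String Tm | App Tm Tm deriving Show

        -- η-long form of a neutral term t at a given type: wrap t in λs until
        -- the result has base type, η-expanding each fresh argument at its
        -- own type in turn.
        etaLong :: Int -> Ty -> Tm -> Tm
        etaLong _ O         t = t
        etaLong n (a :-> b) t =
          let x = "x" ++ show n   -- fresh name (may shadow, never captures)
          in Lam x (etaLong (n + 1) b (App t (etaLong (n + 1) a (Var x))))

    For example, etaLong 0 ((O :-> O) :-> O) (Var "f") yields λx0. f (λx1. x0 x1), the η-long form of f at that type.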

    Addressing Machines as models of lambda-calculus

    Turing machines and register machines have been used for decades in theoretical computer science as abstract models of computation. The λ-calculus, too, has played a central role in this domain, as it allows one to focus on the notion of functional computation, based on the substitution mechanism, while abstracting away from implementation details. The present article starts from the observation that the equivalence between these formalisms is based on the Church-Turing Thesis rather than on an actual encoding of λ-terms into Turing (or register) machines. The reason is that these machines are not well suited for modelling λ-calculus programs. We study a class of abstract machines that we call addressing machines, since they are only able to manipulate memory addresses of other machines. The operations performed by these machines are very elementary: load an address in a register, apply a machine to another one via their addresses, and call the address of another machine. We endow addressing machines with an operational semantics based on leftmost reduction and study their behaviour. The set of addresses of these machines can easily be turned into a combinatory algebra. In order to obtain a model of the full untyped λ-calculus, we need to introduce a rule that bears similarities with the ω-rule and the rule ζ_β from combinatory logic.
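
    A toy Haskell rendition of the ingredients named in the abstract (the structure and names are my guess at a simplified form, not the paper's definition): machines live in a heap, registers hold only addresses, and the three elementary instructions load, apply and call manipulate them:

        import qualified Data.Map as M

        type Addr = Int

        data Instr
          = Load Int          -- pop the next pending argument into register i
          | AppM Int Int Int  -- register k := address of (machine at register i
                              -- applied to the address held in register j)
          | Call Int          -- pass control, and the remaining pending
                              -- arguments, to the machine at register i

        data Machine = Machine
          { regs :: M.Map Int Addr  -- registers hold addresses, nothing else
          , prog :: [Instr]         -- remaining instructions
          , pend :: [Addr]          -- pending arguments, themselves addresses
          }

        type Heap = M.Map Addr Machine

        -- Allocate a machine at a fresh address.
        alloc :: Heap -> Machine -> (Heap, Addr)
        alloc h m = let a = maybe 0 ((+ 1) . fst) (M.lookupMax h)
                    in (M.insert a m h, a)

        -- One elementary step of the machine at address a; Nothing means it
        -- is finished, or blocked waiting for a further argument.
        -- (Register lookups with M.! are assumed to be defined.)
        step :: Heap -> Addr -> Maybe (Heap, Addr)
        step h a = do
          Machine rs (ins : rest) as <- M.lookup a h
          case ins of
            Load i -> case as of
              x : xs -> Just (M.insert a (Machine (M.insert i x rs) rest xs) h, a)
              []     -> Nothing
            AppM i j k -> do
              callee <- M.lookup (rs M.! i) h
              let (h', m) = alloc h (callee { pend = pend callee ++ [rs M.! j] })
              Just (M.insert a (Machine (M.insert k m rs) rest as) h', a)
            Call i -> do
              target <- M.lookup (rs M.! i) h
              let (h', m) = alloc h (target { pend = pend target ++ as })
              Just (h', m)

    In this sketch, applying a machine to an address allocates a new machine rather than mutating the callee, so application of addresses is itself an operation producing an address, which is the shape one would expect of an applicative structure on the set of addresses.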