    Relational type-checking for MELL proof-structures. Part 1: Multiplicatives

    Relational semantics for linear logic is a form of non-idempotent intersection type system, from which several pieces of information on the execution of a proof-structure can be recovered. An element of the relational interpretation of a proof-structure R with conclusion Γ thus acts as a type (refining Γ) having R as an inhabitant. We are interested in the following type-checking question: given a proof-structure R, a list of formulae Γ, and a point x in the relational interpretation of Γ, is x in the interpretation of R? This question is decidable. We present here an algorithm that decides it in time linear in the size of R, if R is a proof-structure in the multiplicative fragment of linear logic. This algorithm can be extended to larger fragments of multiplicative-exponential linear logic containing the λ-calculus.
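
    As a concrete illustration of the "points as types" idea, the sketch below (in Haskell, with purely illustrative names such as Formula, Point, and refines) checks only the first ingredient of the question: whether a candidate point has the shape required by a multiplicative formula. It is not the paper's linear-time membership algorithm for proof-structures.

        -- A minimal sketch, not the paper's algorithm: points of the relational
        -- model of MLL formulas, and a check that a candidate point refines a formula.
        data Formula = Atom String | Tensor Formula Formula | Par Formula Formula
          deriving Show

        -- In the relational model both tensor and par are interpreted by pairs,
        -- so a point refining a formula is a binary tree whose leaves refine atoms.
        data Point = Base String        -- an element chosen from the web of an atom
                   | Pair Point Point   -- a point of A ⊗ B or A ⅋ B
          deriving (Eq, Show)

        -- Does the point have the shape required by the formula?
        refines :: Point -> Formula -> Bool
        refines (Base _)   (Atom _)     = True
        refines (Pair p q) (Tensor a b) = refines p a && refines q b
        refines (Pair p q) (Par a b)    = refines p a && refines q b
        refines _          _            = False

        main :: IO ()
        main = do
          let gamma = Par (Atom "X") (Tensor (Atom "X") (Atom "Y"))
              x     = Pair (Base "a") (Pair (Base "a") (Base "b"))
          print (refines x gamma)   -- True: x is a candidate type for this conclusion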

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We resolve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties. (arXiv admin note: substantial text overlap with arXiv:1405.331)
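
    The size explosion problem mentioned above is easy to reproduce. The hedged Haskell sketch below is illustrative only and not the paper's construction: it evaluates a duplicator family innermost-first so that naive substitution is safe, whereas the paper's families target leftmost-outermost reduction. It shows n reduction steps producing a normal form of size roughly 2^n.

        -- A hedged illustration of size explosion, not the paper's construction:
        -- iterating the duplicator c = \x. x x doubles the term at each step, so
        -- n reduction steps produce a normal form of size about 2^n.
        data Term = Var String | Lam String Term | App Term Term

        -- Naive substitution: adequate here because the only binder is x and the
        -- substituted terms never contain x.
        subst :: String -> Term -> Term -> Term
        subst v s (Var w)   | v == w    = s
                            | otherwise = Var w
        subst v s (Lam w b) | v == w    = Lam w b
                            | otherwise = Lam w (subst v s b)
        subst v s (App t u)             = App (subst v s t) (subst v s u)

        size :: Term -> Int
        size (Var _)   = 1
        size (Lam _ b) = 1 + size b
        size (App t u) = 1 + size t + size u

        dup :: Term                      -- the duplicator  c = \x. x x
        dup = Lam "x" (App (Var "x") (Var "x"))

        e :: Int -> Term                 -- e n = c (c (... (c y)))  with n copies of c
        e 0 = Var "y"
        e n = App dup (e (n - 1))

        -- Repeatedly fire the innermost redex, counting the steps taken.
        normalize :: Term -> (Int, Term)
        normalize t = case step t of
                        Nothing -> (0, t)
                        Just t' -> let (k, nf) = normalize t' in (k + 1, nf)
          where
            step (App (Lam v b) a) = case step a of
                                       Just a' -> Just (App (Lam v b) a')
                                       Nothing -> Just (subst v a b)
            step (App t1 t2)       = case step t1 of
                                       Just t1' -> Just (App t1' t2)
                                       Nothing  -> fmap (App t1) (step t2)
            step (Lam v b)         = fmap (Lam v) (step b)
            step (Var _)           = Nothing

        main :: IO ()
        main = mapM_ report [1 .. 10]
          where
            report n = let (steps, nf) = normalize (e n)
                       in  putStrLn (show n ++ " duplicators: " ++ show steps
                                     ++ " steps, normal form of size " ++ show (size nf))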

    Certifying Higher-Order Polynomial Interpretations

    Higher-order rewriting is a framework in which one can write higher-order programs and study their properties. One such property is termination: the property that, for all inputs, the program eventually halts its execution and produces an output. Several tools have been developed to check whether higher-order rewriting systems are terminating. However, developing such tools is difficult and can be error-prone. In this paper, we present a way of certifying termination proofs of higher-order term rewriting systems. We formalize a specific method used to prove termination, namely the polynomial interpretation method. In addition, we give a program that turns proof traces containing a high-level description of a termination proof into a formal Coq proof script that can be checked by Coq. We demonstrate the usability of this approach by certifying higher-order polynomial interpretation proofs produced by Wanda, a termination analysis tool for higher-order rewriting.
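
    To make the polynomial interpretation method concrete, here is a hedged first-order sketch in Haskell; it is not the higher-order formalism of the paper, nor its Coq certification, and the chosen interpretations are an assumption made for the example. Each rewrite rule must strictly decrease under the interpretation; a certifier proves the inequalities symbolically, while the code below merely spot-checks them on sample values.

        -- A minimal first-order illustration: a polynomial interpretation for
        --   add(0, y)    -> y
        --   add(s(x), y) -> s(add(x, y))
        -- with the (assumed) interpretations
        --   [0] = 0,  [s](x) = x + 1,  [add](x, y) = 2*x + y + 1
        zeroI :: Integer
        zeroI = 0

        sI :: Integer -> Integer
        sI x = x + 1

        addI :: Integer -> Integer -> Integer
        addI x y = 2 * x + y + 1

        -- Rule 1: [add(0, y)] > [y]
        rule1 :: Integer -> Bool
        rule1 y = addI zeroI y > y

        -- Rule 2: [add(s(x), y)] > [s(add(x, y))]
        rule2 :: Integer -> Integer -> Bool
        rule2 x y = addI (sI x) y > sI (addI x y)

        main :: IO ()
        main = do
          print (and [rule1 y   | y <- [0 .. 50]])                  -- True
          print (and [rule2 x y | x <- [0 .. 50], y <- [0 .. 50]])  -- True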

    A deep quantitative type system

    We investigate intersection types and resource lambda-calculus in deep-inference proof theory. We give a unified type system that is parametric in various aspects: it encompasses resource calculi, intersection-typed lambda-calculus, and simply-typed lambda-calculus; it accommodates both idempotence and non-idempotence; it characterizes strong and weak normalization; and it does so while allowing a range of algebraic laws to determine reduction behaviour, for various quantitative effects. We give a parametric resource calculus with explicit sharing, the “collection calculus”, as a Curry–Howard interpretation of the type system that embodies these computational properties.
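
    A hedged sketch of the idempotent versus non-idempotent distinction, in Haskell with illustrative names (it is not the paper's deep-inference system): non-idempotent intersections are multisets, so the type of a term records how many times each resource is used, while idempotent intersections collapse that count.

        -- A sketch of idempotent versus non-idempotent intersection.
        import qualified Data.Set as Set
        import Data.List (sort)

        data SimpleTy = A | B | Arrow SimpleTy SimpleTy
          deriving (Eq, Ord, Show)

        -- Non-idempotent intersection: a multiset of types, kept as a sorted list,
        -- so duplicates survive and record how many times a resource is used.
        type Multi = [SimpleTy]

        -- Idempotent intersection: a set of types, so duplicates collapse.
        type Idem = Set.Set SimpleTy

        interNonIdem :: Multi -> Multi -> Multi
        interNonIdem xs ys = sort (xs ++ ys)

        interIdem :: Idem -> Idem -> Idem
        interIdem = Set.union

        -- The argument of \x. x x is used twice: once as a function, once as data.
        usesOfX :: Multi
        usesOfX = interNonIdem [Arrow A B] [A]

        main :: IO ()
        main = do
          print usesOfX                                             -- [A,Arrow A B]: both uses kept
          print (interIdem (Set.fromList [A]) (Set.fromList [A]))   -- fromList [A]: the count is lost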

    Revisiting mu-puzzle. A case study in finite countermodels verification

    In this paper we consider the well-known MU puzzle from D. Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid as an infinite-state safety verification problem for string rewriting systems. We demonstrate a fully automated solution using the finite countermodels method (FCM). We highlight the advantages of the FCM method and compare it with alternative methods based on regular invariants.
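
    The sketch below (Haskell, illustrative names) treats the MU puzzle exactly as such a reachability problem, but uses the classic regular-invariant argument (the number of I's is never divisible by 3) rather than the paper's finite countermodel construction: the invariant holds on everything reachable from "MI" within a small bound and fails on "MU".

        -- The MIU system as string rewriting, checked against the classic invariant.
        import Data.List (isSuffixOf, nub)

        -- One-step successors of a MIU string under Hofstadter's four rules.
        successors :: String -> [String]
        successors s = rule1 ++ rule2 ++ rule3 ++ rule4
          where
            rule1 = [s ++ "U" | "I" `isSuffixOf` s]          -- xI  -> xIU
            rule2 = case s of                                 -- Mx  -> Mxx
                      ('M' : rest) -> ["M" ++ rest ++ rest]
                      _            -> []
            rule3 = rewriteAll "III" "U" s                    -- III -> U
            rule4 = rewriteAll "UU"  ""  s                    -- UU  ->

        -- All results of replacing one occurrence of `from` by `to` in `s`.
        rewriteAll :: String -> String -> String -> [String]
        rewriteAll from to s =
          [ take i s ++ to ++ drop (i + length from) s
          | i <- [0 .. length s - length from]
          , take (length from) (drop i s) == from ]

        -- The safety invariant: the number of I's is congruent to 1 or 2 mod 3.
        invariant :: String -> Bool
        invariant s = length (filter (== 'I') s) `mod` 3 /= 0

        -- Bounded exploration from "MI", keeping only short strings.
        reachableUpTo :: Int -> [String]
        reachableUpTo depth = go depth ["MI"]
          where
            go 0 acc = acc
            go d acc = go (d - 1)
                          (nub (acc ++ [t | s <- acc, t <- successors s, length t <= 12]))

        main :: IO ()
        main = do
          let rs = reachableUpTo 6
          print (all invariant rs)   -- True: the invariant holds on everything found
          print ("MU" `elem` rs)     -- False
          print (invariant "MU")     -- False: "MU" breaks the invariant, so it is unreachable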

    Formalizing Functions as Processes

    We present the first formalization of Milner’s classic translation of the λ-calculus into the π-calculus. It is a challenging result with respect to variables, names, and binders, as it requires one to relate the variables and binders of the λ-calculus with the names and binders of the π-calculus. We formalize it in Abella, merging the set of variables and the set of names, thus circumventing the challenge and obtaining a neat formalization. For the translation, we follow Accattoli’s factoring of Milner’s result via the linear substitution calculus, a λ-calculus with explicit substitutions and contextual rewriting rules that mediates between the λ-calculus and the π-calculus. Another aim of the formalization is to investigate to what extent the use of contexts in Accattoli’s refinement can be formalized.
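
    For orientation, the Haskell sketch below writes out one common presentation of Milner's call-by-name encoding into the polyadic π-calculus; presentations vary in their details, and nothing here reflects the Abella formalization or the detour through the linear substitution calculus. All constructor and helper names are illustrative.

        -- One common presentation of Milner's call-by-name encoding into the
        -- polyadic pi-calculus (details vary across presentations).
        type Name = String

        data Lam = V Name | Abs Name Lam | App Lam Lam
          deriving Show

        data Pi = Nil
                | Par Pi Pi
                | New Name Pi             -- name restriction (nu x)P
                | Out Name [Name] Pi      -- output  x<y1,...,yn>.P
                | In  Name [Name] Pi      -- input   x(y1,...,yn).P
                | Bang Pi                 -- replication !P
          deriving Show

        -- Fresh channel names are drawn from a counter (an implementation
        -- convenience, not part of the encoding itself).
        fresh :: Int -> Name
        fresh k = "c" ++ show k

        -- [[x]]p    = x<p>
        -- [[\x.M]]p = p(x,q).[[M]]q
        -- [[M N]]p  = (nu q)( [[M]]q | (nu x)( q<x,p> | !x(r).[[N]]r ) )
        encode :: Lam -> Name -> Int -> (Pi, Int)
        encode (V x)     p n = (Out x [p] Nil, n)
        encode (Abs x m) p n = let q        = fresh n
                                   (pm, n') = encode m q (n + 1)
                               in  (In p [x, q] pm, n')
        encode (App m a) p n = let q        = fresh n
                                   x        = fresh (n + 1)
                                   r        = fresh (n + 2)
                                   (pm, n1) = encode m q (n + 3)
                                   (pa, n2) = encode a r n1
                               in  ( New q (Par pm
                                       (New x (Par (Out q [x, p] Nil)
                                                   (Bang (In x [r] pa)))))
                                   , n2 )

        main :: IO ()
        main = print (fst (encode (Abs "y" (V "y")) "p" 0))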

    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC 84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS 18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two strategies for simulating reductions of the weak call-by-value λ-calculus on Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one, in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
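
    The two simulation strategies can be contrasted in miniature. The hedged Haskell sketch below (illustrative names, ordinary Haskell rather than Turing machines) shows a naive substitution-based call-by-value evaluator, which copies arguments, next to a closure/environment-based evaluator, which shares them; the paper's contribution is a space-aware interleaving of the two ideas, which is not attempted here.

        -- A contrast of the two simulation ideas: substitution-based evaluation
        -- copies arguments, closure-based evaluation shares them via an environment.
        import qualified Data.Map as Map

        data Term = Var String | Lam String Term | App Term Term
          deriving Show

        -- Strategy 1: execute substitutions immediately.  For weak call-by-value
        -- evaluation of closed programs the substituted values are closed, so
        -- naive substitution cannot capture variables.
        subst :: String -> Term -> Term -> Term
        subst v s (Var w)   | v == w    = s
                            | otherwise = Var w
        subst v s (Lam w b) | v == w    = Lam w b
                            | otherwise = Lam w (subst v s b)
        subst v s (App t u)             = App (subst v s t) (subst v s u)

        evalSubst :: Term -> Term
        evalSubst (App t u) = case evalSubst t of
                                Lam v b -> evalSubst (subst v (evalSubst u) b)
                                t'      -> App t' (evalSubst u)
        evalSubst t         = t        -- variables and abstractions are values

        -- Strategy 2: never substitute; close terms over an environment instead,
        -- so a duplicated argument is a shared binding rather than a copy.
        data Value = Closure String Term (Map.Map String Value)
          deriving Show

        evalEnv :: Map.Map String Value -> Term -> Value
        evalEnv env (Var x)   = env Map.! x
        evalEnv env (Lam x b) = Closure x b env
        evalEnv env (App t u) = let Closure x b cenv = evalEnv env t
                                    arg              = evalEnv env u
                                in  evalEnv (Map.insert x arg cenv) b

        main :: IO ()
        main = do
          let idT  = Lam "x" (Var "x")
              prog = App (Lam "y" (App (Var "y") (Var "y"))) idT
          print (evalSubst prog)            -- Lam "x" (Var "x")
          print (evalEnv Map.empty prog)    -- Closure "x" (Var "x") (fromList [])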