14 research outputs found

    Inhabitation for Non-idempotent Intersection Types

    The inhabitation problem for intersection types in the lambda-calculus is known to be undecidable. We study the problem in the case of non-idempotent intersection, considering several type assignment systems, which characterize the solvable or the strongly normalizing lambda-terms. We prove the decidability of the inhabitation problem for all the systems considered, by providing sound and complete inhabitation algorithms for them.
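
    The "non-idempotent" ingredient shared by these systems is that intersections are multisets rather than sets, so using an argument twice requires two copies of its type. The following is a minimal, self-contained sketch of that representation only (the names Ty and msEq are illustrative; this is not the paper's inhabitation algorithm):

```haskell
module NonIdem where

import Data.List (sort)

-- Non-idempotent intersection types: arrow arguments form a multiset,
-- so [a, a] and [a] are genuinely different types.
data Ty
  = Base String        -- base type
  | Arrow [Ty] Ty      -- multiset of argument types -> result type
  deriving (Eq, Ord, Show)

-- Multiset equality: order is irrelevant, multiplicity is not.
msEq :: [Ty] -> [Ty] -> Bool
msEq xs ys = sort xs == sort ys

-- Non-idempotence in action: [a, a] is not the same type as [a].
nonIdem :: Bool
nonIdem = msEq [Base "a", Base "a"] [Base "a"]   -- evaluates to False
```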

    Strong Normalization through Intersection Types and Memory

    We characterize β-strongly normalizing λ-terms by means of a non-idempotent intersection type system. More precisely, we first define a memory calculus K together with a non-idempotent intersection type system K, and we show that a K-term t is typable in K if and only if t is K-strongly normalizing. We then show that β-strong normalization is equivalent to K-strong normalization. We conclude, since λ-terms are strictly included in K-terms.
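
    The argument is a chain of two equivalences plus an inclusion; stated compactly in the abstract's own notation (my paraphrase, not a formula taken from the paper):

```latex
t \ \text{typable in } K \;\Longleftrightarrow\; t \ \text{is } K\text{-strongly normalizing},
\qquad
t \ \text{is } \beta\text{-SN} \;\Longleftrightarrow\; t \ \text{is } K\text{-SN}
\quad (\text{for every } \lambda\text{-term } t).
```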

    The Vectorial λ-Calculus

    We describe a type system for the linear-algebraic λ-calculus. The type system accounts for the linear-algebraic aspects of this extension of λ-calculus: it is able to statically describe the linear combinations of terms that will be obtained when reducing the programs. This gives rise to an original type theory where types, in the same way as terms, can be superposed into linear combinations. We prove that the resulting typed λ-calculus is strongly normalising and features weak subject reduction. Finally, we show how to naturally encode matrices and vectors in this typed calculus. Comment: long and corrected version of arXiv:1012.4032 (EPTCS 88:1-15), to appear in Information and Computation.
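
    To give a concrete reading of "types superposed into linear combinations", here is a small sketch of formal sums of base types with scalar coefficients. The names LinTy, scale and add are mine, and the sketch ignores arrow types and the subject-reduction subtleties of the actual system:

```haskell
module Vectorial where

import qualified Data.Map.Strict as Map

-- A formal linear combination of base types with scalar coefficients.
newtype LinTy = LinTy (Map.Map String Double)
  deriving (Eq, Show)

-- Scalar multiplication distributes over the combination.
scale :: Double -> LinTy -> LinTy
scale a (LinTy m) = LinTy (Map.map (a *) m)

-- The sum of two combinations adds coefficients pointwise.
add :: LinTy -> LinTy -> LinTy
add (LinTy m) (LinTy n) = LinTy (Map.unionWith (+) m n)

-- A superposed term 2.u + 3.v would receive the superposed type 2.U + 3.V.
example :: LinTy
example = add (scale 2 (LinTy (Map.singleton "U" 1)))
              (scale 3 (LinTy (Map.singleton "V" 1)))
```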

    A Quantitative Version of Simple Types

    This work introduces a quantitative version of the simple type assignment system, starting from a suitable restriction of non-idempotent intersection types. The resulting system is decidable and has the same typability power as the simple type system; thus, assigning types to terms supplies the very same qualitative information given by simple types, but at the same time can provide some interesting quantitative information. It is well known that typability for simple types is equivalent to unification; we prove a similar result for the newly introduced system. More precisely, we show that typability is equivalent to a unification problem which is a non-trivial extension of the classical one: in addition to unification rules, our typing algorithm makes use of an expansion operation that increases the cardinality of multisets whenever needed.
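
    The distinctive ingredient is the expansion operation on multiset types. The fragment below sketches only that step, on a deliberately simplified type syntax (Ty and expand are illustrative names; this is not the paper's unification algorithm):

```haskell
module Expansion where

-- Simplified quantitative types: arrow arguments form a multiset.
data Ty
  = TVar String
  | Arrow [Ty] Ty      -- multiset of argument types -> result type
  deriving (Eq, Show)

-- Expansion: grow the argument multiset of an arrow by one more copy,
-- used when unification needs a larger multiset than the current one.
expand :: Ty -> Ty -> Ty
expand extra (Arrow args res) = Arrow (extra : args) res
expand _     t                = t

-- [a] -> b   expanded with a fresh copy of a   becomes   [a, a] -> b.
example :: Ty
example = expand (TVar "a") (Arrow [TVar "a"] (TVar "b"))
```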

    A Quantitative Understanding of Pattern Matching

    This paper shows that the recent approach to quantitative typing systems for programming languages can be extended to pattern matching features. Indeed, we define two resource-aware type systems, named U and E, for a λ-calculus equipped with pairs for both patterns and terms. Our typing systems borrow some basic ideas from [Antonio Bucciarelli et al., 2015], which characterises (head) normalisation in a qualitative way, in the sense that typability and normalisation coincide. But, in contrast to [Antonio Bucciarelli et al., 2015], our systems also provide quantitative information about the dynamics of the calculus. Indeed, system U provides upper bounds for the length of (head) normalisation sequences plus the size of their corresponding normal forms, while system E, which can be seen as a refinement of system U, produces exact bounds for each of them. This is achieved by means of a non-idempotent intersection type system equipped with different technical tools. First of all, we use product types to type pairs instead of the disjoint unions in [Antonio Bucciarelli et al., 2015], which turn out to be an essential quantitative tool because they remove the confusion between "being a pair" and "being duplicable". Secondly, typing sequents in system E are decorated with tuples of integers, which provide quantitative information about normalisation sequences, notably time (cf. length) and space (cf. size). Moreover, the time resource information is remarkably refined, because it discriminates between different kinds of reduction steps performed during evaluation, so that beta, substitution and matching steps are counted separately. Another key tool of system E is that the type system distinguishes between consuming (contributing to time) and persistent (contributing to space) constructors.
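
    The counter tuples mentioned above can be pictured as a small record of separate resource counts that composes pointwise across sub-derivations. This is only an illustration of the idea, with invented names (Cost, betaSteps, ...), not the actual decorations of system E:

```haskell
module Counters where

-- Separate resource counters of the kind system E attaches to judgements:
-- time is split by the kind of reduction step, space is the normal-form size.
data Cost = Cost
  { betaSteps  :: Int   -- beta-reduction steps
  , substSteps :: Int   -- substitution steps
  , matchSteps :: Int   -- pattern-matching steps
  , nfSize     :: Int   -- size of the normal form
  } deriving (Eq, Show)

-- Counters of the premises of a rule combine pointwise.
instance Semigroup Cost where
  Cost b s m n <> Cost b' s' m' n' = Cost (b + b') (s + s') (m + m') (n + n')

instance Monoid Cost where
  mempty = Cost 0 0 0 0
```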

    The Bang Calculus Revisited

    Call-by-Push-Value (CBPV) is a programming paradigm subsuming both Call-by-Name (CBN) and Call-by-Value (CBV) semantics. The paradigm was recently modelled by means of the Bang Calculus, a term language connecting CBPV and Linear Logic. This paper presents a revisited version of the Bang Calculus, called λ!, enjoying some important properties missing in the original system. Indeed, the new calculus integrates commutative conversions to unblock value redexes while being confluent at the same time. A second contribution is related to non-idempotent types. We provide a quantitative type system for our λ!-calculus, and we show that the length of the (weak) reduction of a typed term to its normal form plus the size of this normal form is bounded by the size of its type derivation. We also explore the properties of this type system with respect to CBN/CBV translations. We keep the original CBN translation from λ-calculus to the Bang Calculus, which preserves normal forms and is sound and complete with respect to the (quantitative) type system for CBN. However, in the case of CBV, we reformulate both the translation and the type system to restore two main properties: preservation of normal forms and completeness. Last but not least, the quantitative system is refined to a tight one, which transforms the previous upper bound on the length of reduction to normal form plus its size into two independent exact measures for them.
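
    For orientation, here is a bare-bones syntax for a bang-style calculus, together with the single rule that forces a box. This is a simplified illustration under invented names (Term, stepBang), not the λ!-calculus of the paper with its commutative conversions:

```haskell
module Bang where

-- Minimal bang-style syntax: ! boxes (suspends) a term, der forces a box.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Bang Term          -- !t
  | Der Term           -- der t
  deriving (Eq, Show)

-- Root-level dereliction step: der (!t) reduces to t.
stepBang :: Term -> Maybe Term
stepBang (Der (Bang t)) = Just t
stepBang _              = Nothing
```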

    Node Replication: Theory And Practice

    We define and study a term calculus implementing higher-order node replication. It is used to specify two different (weak) evaluation strategies, call-by-name and fully lazy call-by-need, which are shown to be observationally equivalent by using type-theoretical technical tools. Comment: 64 pages, submitted to LMCS.
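
    Independently of node replication itself, the by-name/by-need contrast at stake can be observed in a couple of lines of Haskell, whose own evaluation is lazy; this is a sketch only, unrelated to the paper's calculus:

```haskell
module Sharing where

import Debug.Trace (trace)

-- Call-by-name: the argument expression is rebuilt and re-evaluated at each
-- use ("evaluated" prints twice when run in GHCi).
byName :: Int
byName = let arg n = trace "evaluated" (n + 2)
         in arg 1 + arg 1

-- Call-by-need: a single shared thunk is evaluated at most once
-- ("evaluated" prints once).
byNeed :: Int
byNeed = let arg = trace "evaluated" (1 + 2)
         in arg + arg
```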