    Coercive subtyping: Theory and implementation

    Coercive subtyping is a useful and powerful framework of subtyping for type theories. The key idea of coercive subtyping is subtyping as abbreviation. In this paper, we give a new and adequate formulation of T[C], the system that extends a type theory T with coercive subtyping based on a set C of basic subtyping judgements, and show that coercive subtyping is a conservative extension and, in a more general sense, a definitional extension. We introduce an intermediate system, the star-calculus T[C]*, in which the positions that require coercion insertions are marked, and show that T[C]* is a conservative extension of T and that T[C]* is equivalent to T[C]. This makes clear what we mean by coercive subtyping being a conservative extension, on the one hand, and fixes a technical problem that had led to a gap in the earlier conservativity proof, on the other. We also compare coercive subtyping with the 'ordinary' notion of subtyping, subsumptive subtyping, and show that the former is adequate for type theories with canonical objects while the latter is not. An improved implementation of coercive subtyping is provided in the proof assistant Plastic.
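
    A minimal Haskell sketch of the "subtyping as abbreviation" idea, under the assumption that a two-parameter class can stand in for the set C of basic coercions; the names Subtype and up are illustrative, not from the paper:

        {-# LANGUAGE MultiParamTypeClasses #-}

        newtype Nat = Nat Int

        -- A declared instance plays the role of a basic subtyping judgement in C.
        class Subtype a b where
          up :: a -> b              -- the coercion witnessing a <: b

        instance Subtype Nat Int where
          up (Nat n) = n

        double :: Int -> Int
        double x = x + x

        three :: Nat
        three = Nat 3

        -- In T[C] one would write the equivalent of `double three` and let the
        -- system insert the coercion; the star-calculus T[C]* marks the insertion
        -- point explicitly, much as Haskell forces us to write it here.
        example :: Int
        example = double (up three)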

    Type-theoretical natural language semantics: on the system F for meaning assembly

    This paper presents and extends our type theoretical framework for a compositional treatment of natural language semantics with lexical features such as coercions (e.g. of a town into a football club) and copredication (e.g. on a town as a set of people and as a location). The second order typed lambda calculus was shown to be a good framework, and here we discuss how to introduce predefined types and coercive subtyping, which are much more natural than similar constructs coded internally. Linguistic applications of these new features are also exemplified.
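
    To make the two phenomena concrete, here is a hedged Haskell rendering, with types standing in for sorts and plain functions for coercions; all names and data are illustrative placeholders, not the authors' system F encoding:

        newtype Town         = Town String
        newtype FootballClub = FootballClub String
        newtype People       = People [String]
        newtype Location     = Location (Double, Double)

        -- Coercion: "Liverpool won" reads the town as its football club.
        clubOf :: Town -> FootballClub
        clubOf (Town n) = FootballClub n

        -- Copredication: "Liverpool is spread out and voted Labour" predicates
        -- over the town both as a location and as a set of people.
        asLocation :: Town -> Location
        asLocation _ = Location (53.4, -2.98)    -- placeholder coordinates

        asPeople :: Town -> People
        asPeople _ = People []                   -- placeholder inhabitants

        isSpreadOut :: Location -> Bool
        isSpreadOut _ = True

        votedLabour :: People -> Bool
        votedLabour _ = True

        -- Compositional meaning: each predicate is applied through its coercion.
        sentence :: Town -> Bool
        sentence t = isSpreadOut (asLocation t) && votedLabour (asPeople t)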

    Coherence and transitivity in coercive subtyping

    The aim of this thesis is to study coherence and transitivity in coercive subtyping. Among other things, coherence and transitivity are key to a coercive subtyping system being consistent and to its being implemented correctly. The thesis consists of three major parts. First, I prove that, for the subtyping rules of some parameterised inductive data types, coherence holds and the normal transitivity rule is admissible. Second, the notion of weak transitivity is introduced. The subtyping rules of a large class of parameterised inductive data types are suitable for weak transitivity, but not compatible with the normal transitivity rule. Third, I present a new formulation of coercive subtyping in order to combine incoherent coercions for the type of dependent pairs. There are two subtyping relations in that system, and hence a further understanding of coherence and transitivity is needed. This thesis contains the first case study of combining incoherent coercions in a single system. The thesis provides a clearer understanding of the subtyping rules for parameterised inductive data types and explains why the normal transitivity rule is not admissible for some natural subtyping rules. It also demonstrates that coherence and transitivity at the type level can sometimes be very difficult issues in coercive subtyping. Besides providing theoretical understanding, the thesis also gives algorithms for implementing the subtyping rules for parameterised inductive data types.
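
    A small Haskell sketch of the kind of rule the thesis studies, under the assumption that plain functions model coercions: a coercion lifts structurally through a parameterised type, transitivity is composition, and coherence demands that any two derivable coercions between the same pair of types agree.

        -- Structural rule for lists: from a <: b, derive List a <: List b.
        liftList :: (a -> b) -> [a] -> [b]
        liftList = map

        -- Transitivity as composition of basic coercions (illustrative choices).
        natToInt :: Word -> Int
        natToInt = fromIntegral

        intToDouble :: Int -> Double
        intToDouble = fromIntegral

        natToDouble :: Word -> Double
        natToDouble = intToDouble . natToInt

        -- Coherence would fail if the rules also derived a *different* coercion
        -- Word -> Double, making the meaning of a coerced term ambiguous.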

    A theory of qualified types

    This paper describes a general theory of overloading based on a system of qualified types. The central idea is the use of predicates in the type of a term, restricting the scope of universal quantification. A corresponding semantic notion of evidence is introduced and provides a uniform framework for implementing applications of this system, including Haskell-style type classes, extensible records and subtyping. Working with qualified types in a simple, implicitly typed, functional language, we extend the Damas-Milner approach to type inference. As a result, we show that the set of all possible typings for a given term can be characterized by a principal type scheme, calculated by a type inference algorithm.
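
    Since Haskell-style type classes are one of the applications named above, a brief sketch may help: the predicate in a qualified type corresponds to a class constraint, and "evidence" to the dictionary the compiler passes at run time (EqDict and memberBy are illustrative names, not from the paper).

        -- Qualified type: the predicate `Eq a` restricts the quantified variable.
        member :: Eq a => a -> [a] -> Bool
        member x = any (== x)

        -- Evidence made explicit: the dictionary translation turns the predicate
        -- into an ordinary value argument carrying the equality test.
        newtype EqDict a = EqDict { eq :: a -> a -> Bool }

        memberBy :: EqDict a -> a -> [a] -> Bool
        memberBy d x = any (eq d x)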

    Type Inference with Bounded Quantification

    In this thesis we study some of the problems which occur when type inference is used in a type system with subtyping. An underlying poset of atomic types is used as a basis for our subtyping systems. We argue that the class of Helly posets is of significant interest, as it includes lattices and trees, and is closed under type formation not only with structural constructors such as function space and list, but also records, tagged variants, Abadi-Cardelli object constructors, top and bottom. We develop a general theory relating consistency, solvability, and solution of sets of constraints between regular types built over Helly posets with these constructors, and introduce semantic notions of simplification and entailment for sets of constraints over Helly posets of base types. We extend Helly posets with inequalities of the form a ≤ τ, where τ is not necessarily atomic, and show how this enables us to deal with bounded quantification. Using bounded quantification we define a subtyping system which combines structural subtype polymorphism and predicative parametric polymorphism, and use this to extend with subtyping the type system of Laufer and Odersky for ML with type annotations. We define a complete algorithm which infers minimal types for our extension, using factorisations, solutions of subtyping problems analogous to principal unifiers for unification problems. We give some examples of typings computed by a prototype implementation.
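
    A toy Haskell fragment of subtype checking over a base poset; this sketch uses a two-point poset and function types only, and does not capture Helly posets, regular types, or the richer constructors the thesis handles:

        data Atom = IntA | RealA deriving (Eq, Show)

        -- The underlying poset of atomic types: Int <= Real.
        atomLeq :: Atom -> Atom -> Bool
        atomLeq IntA RealA = True
        atomLeq a    b     = a == b

        data Ty = Base Atom | Fun Ty Ty deriving (Eq, Show)

        -- Structural subtyping: contravariant in the argument,
        -- covariant in the result.
        subtype :: Ty -> Ty -> Bool
        subtype (Base a)  (Base b)    = atomLeq a b
        subtype (Fun s t) (Fun s' t') = subtype s' s && subtype t t'
        subtype _         _           = False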

    Refinement Types for Logical Frameworks and Their Interpretation as Proof Irrelevance

    Refinement types sharpen systems of simple and dependent types by offering expressive means to classify well-typed terms more precisely. We present a system of refinement types for LF in the style of recent formulations where only canonical forms are well-typed. Both the usual LF rules and the rules for type refinements are bidirectional, leading to a straightforward proof of decidability of typechecking even in the presence of intersection types. Because we insist on canonical forms, structural rules for subtyping can now be derived rather than being assumed as primitive. We illustrate the expressive power of our system with examples and validate its design by demonstrating a precise correspondence with traditional presentations of subtyping. Proof irrelevance provides a mechanism for selectively hiding the identities of terms in type theories. We show that LF refinement types can be interpreted as predicates using proof irrelevance, establishing a uniform relationship between two previously studied concepts in type theory. The interpretation and its correctness proof are surprisingly complex, lending support to the claim that refinement types are a fundamental construct rather than just a convenient surface syntax for certain uses of proof irrelevance.
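
    The bidirectional discipline mentioned above splits typing into synthesis and checking modes; a compact Haskell sketch for a simply typed fragment (not LF, and without refinements) shows the shape of the technique:

        data Ty = Unit | Arr Ty Ty deriving Eq
        data Tm = Var String | Lam String Tm | App Tm Tm | TT

        type Ctx = [(String, Ty)]

        -- Synthesis: variables and applications produce their type.
        infer :: Ctx -> Tm -> Maybe Ty
        infer g (Var x)   = lookup x g
        infer _ TT        = Just Unit
        infer g (App f a) = do
          Arr s t <- infer g f                     -- function type synthesised
          if check g a s then Just t else Nothing  -- argument merely checked
        infer _ (Lam _ _) = Nothing                -- lambdas never synthesise

        -- Checking: lambdas are checked against a given arrow type.
        check :: Ctx -> Tm -> Ty -> Bool
        check g (Lam x b) (Arr s t) = check ((x, s) : g) b t
        check g e t                 = infer g e == Just t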

    Subtypes and bounded quantification from a fibred perspective

    A general categorical description of subtyping σ <: σ′ and of bounded quantification ∀α <: σ.τ and ∃α <: σ.τ is presented in terms of fibrations. In fact, we shall generalize these bounded quantifiers to "constrained quantifiers" ∀α[σ <: σ′].τ and ∃α[σ <: σ′].τ. In these cases one quantifies over those type variables α for which σ(α) <: σ′(α) holds. Semantically we distinguish three levels: types τ, which are fibred over (depend on) subtypings σ <: σ′, which in turn are fibred over (depend on) kinds K. In this setting we can describe constrained quantification ∀α[σ <: σ′].(−) and ∃α[σ <: σ′].(−) as right and left adjoints to the weakening functor which adds the (dummy) hypothesis σ <: σ′ to an appropriate context. This shows that, like ordinary quantifiers, these constrained (and hence especially bounded) quantifiers are adjoints.
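
    In the usual Hom-set form, writing w* for the weakening functor that adds the hypothesis σ <: σ′, the claimed adjunctions read as follows (a restatement for orientation, not notation taken from the paper):

        Hom(w*A, B) ≅ Hom(A, ∀α[σ <: σ′].B)      (∀ is right adjoint to w*)
        Hom(∃α[σ <: σ′].B, A) ≅ Hom(B, w*A)      (∃ is left adjoint to w*)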

    The Montagovian Generative Lexicon ΛTyn: a Type Theoretical Framework for Natural Language Semantics

    We present a framework, named the Montagovian generative lexicon, for computing the semantics of natural language sentences, expressed in many-sorted higher order logic. Word meaning is described by several lambda terms of second order lambda calculus (Girard's system F): the principal lambda term encodes the argument structure, while the other lambda terms implement meaning transfers. The base types include a type for propositions and many types for the sorts of a many-sorted logic expressing restriction of selection. This framework is able to integrate a proper treatment of lexical phenomena into a Montagovian compositional semantics, such as the (im)possible arguments of a predicate and the adaptation of a word meaning to some contexts. Among these adaptations of a word meaning to contexts, ontological inclusions are handled by coercive subtyping, an extension of system F introduced in the present paper. The benefits of this framework for lexical semantics and pragmatics are illustrated on meaning transfers and coercions, on possible and impossible copredication over different senses, on deverbal ambiguities, and on "fictive motion". Next we show that the compositional treatment of determiners, quantifiers, plurals, and other semantic phenomena is richer in our framework. We then conclude with the linguistic, logical and computational perspectives opened by the Montagovian generative lexicon.
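
    One way to picture a lexicon entry, with Haskell types standing in for the sorts and plain functions for the lambda terms (Entry, principal, and transfers are hypothetical names; the paper works in second order lambda calculus, which this sketch does not reproduce):

        newtype Book    = Book String
        newtype PhysObj = PhysObj String
        newtype InfoObj = InfoObj String

        -- An entry pairs the principal term, fixing the argument structure,
        -- with optional terms implementing meaning transfers.
        data Entry = Entry
          { principal :: Book -> InfoObj      -- a book as informational content
          , transfers :: [Book -> PhysObj]    -- e.g. "the book weighs a kilo"
          }

        bookEntry :: Entry
        bookEntry = Entry
          { principal = \(Book t) -> InfoObj t
          , transfers = [\(Book t) -> PhysObj t]
          }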

    Safe zero-cost coercions for Haskell

    Generative type abstractions, present in Haskell, OCaml, and other languages, are useful concepts that help prevent programmer errors. They serve to create new types that are distinct at compile time but share a run-time representation with some base type. We present a new mechanism that allows for zero-cost conversions between generative type abstractions and their representations, even when such types are deeply nested. We prove type safety in the presence of these conversions and have implemented our work in GHC.
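
    This is the mechanism GHC exposes through Data.Coerce; a brief usage sketch, with Age as an illustrative newtype:

        import Data.Coerce (coerce)

        newtype Age = Age Int deriving Show

        -- `coerce` converts between representationally equal types at zero
        -- run-time cost, even under nesting such as lists: no map, no copy.
        ages :: [Age]
        ages = coerce [18, 21, 65 :: Int]

        main :: IO ()
        main = print ages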
