
    Type and Behaviour Reconstruction for Higher-Order Concurrent Programs

    In this paper we develop a sound and complete type and behaviour inference algorithm for a fragment of CML (Standard ML with primitives for concurrency). Behaviours resemble terms of a process algebra and yield a concise representation of the communications taking place during execution; types are mostly as usual, except that function types and "delayed communication types" are labelled by behaviours expressing the communications that will take place if the function is applied or the delayed action is activated. The development of the present paper improves on a previously published algorithm by achieving completeness as well as soundness; this is due to an alternative strategy for generalising over types and behaviours.
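
    To make the idea concrete, the following Haskell sketch shows one way behaviours could be represented as process-algebra-like terms attached to function types. All constructor and type names here are illustrative assumptions for exposition, not the paper's actual formalism.

        -- Illustrative sketch only: behaviours as process-algebra terms.
        type Region = String

        data Behaviour
          = Eps                        -- no communication
          | Send Region                -- send on a channel in a region
          | Recv Region                -- receive on a channel in a region
          | Seq Behaviour Behaviour    -- first one behaviour, then the other
          | Choice Behaviour Behaviour -- internal choice
          deriving Show

        -- Function types carry the behaviour unleashed when applied.
        data Type
          = TInt
          | TChan Region Type          -- channel carrying values of a type
          | TArrow Type Behaviour Type -- t1 -b-> t2
          deriving Show

        -- A function that receives an Int on a channel in region "r" and
        -- returns it could be typed TArrow (TChan "r" TInt) (Recv "r") TInt.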

    Complete and easy type inference for first-class polymorphism

    The Hindley-Milner (HM) typing discipline is remarkable in that it allows statically typing programs without requiring the programmer to annotate them with types. This is due to the HM system offering complete type inference, meaning that if a program is well typed, the inference algorithm is able to determine all the necessary typing information. Let bindings implicitly perform generalisation, allowing a let-bound variable to receive the most general possible type, which in turn may be instantiated appropriately at each of the variable's use sites. As a result, the HM type system has become the foundation for type inference in programming languages such as Haskell as well as the ML family of languages, and it has been extended in a multitude of ways. The original HM system only supports prenex polymorphism, where type variables are universally quantified only at the outermost level. This precludes many useful programs, such as passing a data structure to a function in the form of a fold, which must be polymorphic in the type of the accumulator and therefore requires a nested quantifier in the type of the overall function. As a result, one direction of extending the HM system is to add support for first-class polymorphism, allowing arbitrarily nested quantifiers and instantiating type variables with polymorphic types. In such systems, restrictions are necessary to retain decidability of type inference. This work presents FreezeML, a novel approach for integrating first-class polymorphism into the HM system, focused on simplicity. It eschews sophisticated yet hard-to-grasp heuristics in the type system and does not extend the language of types, while still requiring only modest amounts of annotation. In particular, FreezeML leverages the mechanisms for generalisation and instantiation that are already at the heart of ML. Generalisation and instantiation are performed by let bindings and variables, respectively, but extended to types beyond prenex polymorphism. The defining feature of FreezeML is the ability to freeze variables, which prevents the usual instantiation of their types, allowing them instead to keep their original, fully polymorphic types. We demonstrate that FreezeML is as expressive as System F by providing a translation from the latter to the former; the reverse direction is also shown. Further, we prove that FreezeML is indeed a conservative extension of ML: when considering only ML programs, FreezeML accepts exactly the same programs as ML itself.

    We show that type inference for FreezeML can easily be integrated into HM-like type systems by presenting a sound and complete inference algorithm for FreezeML that extends Algorithm W, the original inference algorithm for the HM system. Since the inception of Algorithm W in the 1970s, type inference for the HM system and its descendants has been modernised by approaches based on constraint solving, which have proved more modular and extensible. In such systems, a term is translated to a logical constraint whose solutions correspond to the types of the original term; a solver for such constraints may then be defined independently. To this end, we demonstrate such a constraint-based inference approach for FreezeML. We also discuss the effects of integrating the value restriction into FreezeML and provide detailed comparisons with other approaches to first-class polymorphism in ML, alongside a collection of examples found in the literature.
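
    The fold example from the abstract can be made concrete. The sketch below uses Haskell with RankNTypes as a stand-in, since FreezeML's own syntax differs: a list is passed to sumAndLength as its fold, which must stay polymorphic in the accumulator type, forcing a quantifier nested inside the argument type, exactly what prenex polymorphism forbids.

        {-# LANGUAGE RankNTypes #-}

        -- A list represented by its fold, polymorphic in the accumulator b.
        type FoldList a = forall b. (a -> b -> b) -> b -> b

        -- The quantifier sits inside the argument type, so this signature
        -- is not expressible with prenex (rank-1) polymorphism alone.
        sumAndLength :: FoldList Int -> (Int, Int)
        sumAndLength xs = (xs (+) 0, xs (\_ n -> n + 1) 0)

        fromList :: [a] -> FoldList a
        fromList ys = \cons nil -> foldr cons nil ys

        main :: IO ()
        main = print (sumAndLength (fromList [1, 2, 3]))  -- prints (6,3)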

    Polymorphic set-theoretic types for functional languages

    We study set-theoretic types: types that include union, intersection, and negation connectives. Set-theoretic types, coupled with a suitable subtyping relation, make it possible to type several programming language constructs (including conditional branching, pattern matching, and function overloading) very precisely. We define subtyping following the semantic subtyping approach, which interprets types as sets and defines subtyping as set inclusion. Our set-theoretic types are polymorphic, that is, they contain type variables to allow parametric polymorphism. We extend previous work on set-theoretic types and semantic subtyping by showing how to adapt them to new settings and apply them to type various features of functional languages. More precisely, we integrate semantic subtyping with three important language features. In Part I we study implicitly typed languages with let-polymorphism and type inference (previous work on semantic subtyping focused on explicitly typed languages). We describe an implicitly typed lambda-calculus and a declarative type system for which we prove soundness. We study type inference and prove results of soundness and completeness. Then, we show how to make type inference more precise when programs are partially annotated with types. In Part II we study gradual typing. We describe a new approach to add gradual typing to a static type system; the novelty is that we give a declarative presentation of the type system, while previous work considered algorithmic presentations. We first illustrate the approach on a Hindley-Milner type system without subtyping. We describe declarative typing, compilation to a cast language, and sound and complete type inference. Then, we add set-theoretic types, defining a subtyping relation on set-theoretic gradual types, and we describe sound type inference for the extended system. In Part III we consider non-strict semantics. The existing semantic subtyping systems are designed for call-by-value languages and are unsound for non-strict semantics. We adapt them to obtain soundness for call-by-need. To do so, we introduce an explicit representation of divergence in the types, allowing the type system to distinguish the expressions that are already evaluated from the computations that might diverge.
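
    As a toy illustration of the semantic subtyping idea, where types denote sets of values and subtyping is set inclusion, here is a Haskell sketch over a deliberately tiny finite universe. This is an assumption-laden miniature: real semantic subtyping handles infinite denotations, products, and function types with far more machinery.

        import Data.List (nub)

        -- A tiny universe of values, so type denotations are finite sets.
        data Value = VTrue | VFalse | VInt Int
          deriving (Eq, Show)

        universe :: [Value]
        universe = VTrue : VFalse : map VInt [-2 .. 2]

        -- Set-theoretic types: union, intersection, and negation connectives.
        data Ty = TBool | TIntTy | TUnion Ty Ty | TInter Ty Ty | TNeg Ty

        -- Interpret a type as the set of values it denotes.
        interp :: Ty -> [Value]
        interp TBool        = [VTrue, VFalse]
        interp TIntTy       = [v | v@(VInt _) <- universe]
        interp (TUnion s t) = nub (interp s ++ interp t)
        interp (TInter s t) = [v | v <- interp s, v `elem` interp t]
        interp (TNeg t)     = [v | v <- universe, v `notElem` interp t]

        -- Semantic subtyping: s <: t iff s's denotation is included in t's.
        subtype :: Ty -> Ty -> Bool
        subtype s t = all (`elem` interp t) (interp s)

        -- e.g. subtype (TInter TBool (TNeg TBool)) TIntTy == True,
        -- since the intersection denotes the empty set.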

    Gradual Liquid Type Inference

    Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference, a novel combination of liquid inference and gradual refinement types, addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e. the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell.

    Comment: To appear at OOPSLA 201
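
    For readers unfamiliar with liquid types, the fragment below shows the flavour of the annotations involved. The {-@ ... @-} comments use Liquid Haskell's refinement syntax; the imprecise "?" refinement in the second comment is a sketch of the gradual extension as described in the abstract, not verified against the actual tool.

        -- Plain Haskell; the {-@ ... @-} comments are Liquid Haskell
        -- refinement annotations.

        -- A static refinement: the divisor must be non-zero.
        {-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
        safeDiv :: Int -> Int -> Int
        safeDiv x y = x `div` y

        -- Under gradual liquid type inference, an imprecise predicate such as
        --   {-@ f :: {v:Int | ?} -> Int @-}
        -- lets inference proceed locally; its safe concretizations (here, the
        -- static refinements implying v /= 0) explain what callers must supply.
        f :: Int -> Int
        f y = safeDiv 42 y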

    Hoogle⋆: Constants and λ-abstractions in Petri-net-based Synthesis using Symbolic Execution

    Type-directed component-based program synthesis is the task of automatically building a function with applications of available components and whose type matches a given goal type. Existing approaches to component-based synthesis, based on classical proof search, cannot deal with large sets of components. Recently, Hoogle+, a component-based synthesizer for Haskell, overcomes this issue by reducing the search problem to a Petri-net reachability problem. However, Hoogle+ can synthesize neither constants nor λ-abstractions, which limits the problems that it can solve. We present Hoogle⋆, an extension to Hoogle+ that brings constants and λ-abstractions into the search space, in two independent steps. First, we introduce the notion of wildcard component, a component that matches all types. This enables the algorithm to produce incomplete functions, i.e., functions containing occurrences of the wildcard component. Second, we complete those functions by replacing each occurrence with constants or custom-defined λ-abstractions. We have chosen to find constants by means of an inference algorithm: we present a new unification algorithm based on symbolic execution that uses the input-output examples supplied by the user to compute substitutions for the occurrences of the wildcard. When compared to Hoogle+, Hoogle⋆ can solve more kinds of problems, especially problems that require the generation of constants and λ-abstractions, without performance degradation.
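
    A rough sketch of the wildcard-completion step, in Haskell with invented names: given a candidate program with one wildcard hole, keep exactly the constants that are consistent with the user's input-output examples. The brute-force enumeration below is a placeholder; the paper computes such substitutions with symbolic execution instead.

        -- Illustrative sketch only; names and the enumeration strategy are
        -- assumptions, not the tool's actual implementation.

        -- A synthesized candidate whose wildcard hole has type Int.
        candidate :: Int -> Int -> Int
        candidate hole x = x + hole

        -- User-supplied input-output examples.
        examples :: [(Int, Int)]
        examples = [(1, 4), (10, 13)]

        -- Fill the wildcard with any constant satisfying every example.
        fillWildcard :: [Int]
        fillWildcard =
          [ c | c <- [-100 .. 100]
              , all (\(i, o) -> candidate c i == o) examples ]
        -- fillWildcard == [3]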

    Entropic Inference: some pitfalls and paradoxes we can avoid

    The method of maximum entropy has been very successful, but there are cases where it has either failed or led to paradoxes that have cast doubt on its general legitimacy. My more optimistic assessment is that such failures and paradoxes provide us with valuable learning opportunities to sharpen our skills in the proper way to deploy entropic methods. The central theme of this paper revolves around the different ways in which constraints are used to capture the information that is relevant to a problem. This leads us to focus on four epistemically different types of constraints. I propose that the failure to recognize the distinctions between them is a prime source of errors. I explicitly discuss two examples. One concerns the dangers involved in replacing expected values with sample averages. The other revolves around misunderstanding ignorance. I discuss the Friedman-Shimony paradox as it is manifested in the three-sided die problem and also in its original thermodynamic formulation.

    Comment: 14 pages, 1 figure. Invited paper presented at MaxEnt 2012, The 32nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 15-20, 2012, Garching, Germany)
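
    For orientation, the expected-value constraints the paper scrutinizes enter maximum-entropy inference in the standard form below (a textbook formulation, not taken from the paper itself): one maximizes the relative entropy of a posterior p against a prior q subject to a constraint on the expectation of some function f.

        \[
          \max_{p}\; S[p,q] \;=\; -\sum_i p_i \log \frac{p_i}{q_i}
          \quad \text{subject to} \quad
          \sum_i p_i = 1, \qquad \sum_i p_i\, f(x_i) = F,
        \]
        \[
          \text{with solution} \quad
          p_i \;=\; \frac{q_i\, e^{\lambda f(x_i)}}{Z(\lambda)},
          \qquad
          Z(\lambda) \;=\; \sum_i q_i\, e^{\lambda f(x_i)}.
        \]

    The multiplier λ is fixed by requiring the constraint to hold exactly; substituting a sample average for the true expectation F quietly changes which constraint is being imposed, which is one of the pitfalls the paper examines.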

    Koka: Programming with Row Polymorphic Effect Types

    We propose a programming model where effects are treated in a disciplined way, and where the potential side effects of a function are apparent in its type signature. The type and effect of expressions can also be inferred automatically, and we describe a polymorphic type inference system based on Hindley-Milner style inference. A novel feature is that we support polymorphic effects through row polymorphism using duplicate labels. Moreover, we show that our effects are not just syntactic labels but have a deep semantic connection to the program. For example, if an expression can be typed without an exn effect, then it will never throw an unhandled exception. Similar to Haskell's `runST`, we show how we can safely encapsulate stateful operations. Through the state effect, we can also safely combine state with let-polymorphism without needing either imperative type variables or a syntactic value restriction. Finally, our system is implemented fully in a new language called Koka and has been used successfully on various small to medium-sized sample programs ranging from a Markdown processor to a tier-split chat application. You can try out Koka live at www.rise4fun.com/koka/tutorial.

    Comment: In Proceedings MSFP 2014, arXiv:1406.153
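
    The `runST` encapsulation the abstract appeals to can be shown directly in Haskell (this is standard Haskell, not Koka code): a locally stateful computation receives a pure type, mirroring how Koka discharges the state effect from an expression's effect row.

        import Control.Monad.ST
        import Data.STRef

        -- Internally stateful, externally pure: runST guarantees the mutable
        -- reference cannot escape, so the state effect is fully encapsulated.
        sumST :: [Int] -> Int
        sumST xs = runST $ do
          ref <- newSTRef 0
          mapM_ (\x -> modifySTRef' ref (+ x)) xs
          readSTRef ref
        -- sumST [1, 2, 3] == 6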