454 research outputs found

    Koka: Programming with Row Polymorphic Effect Types

    We propose a programming model where effects are treated in a disciplined way, and where the potential side-effects of a function are apparent in its type signature. The type and effect of expressions can also be inferred automatically, and we describe a polymorphic type inference system based on Hindley-Milner style inference. A novel feature is that we support polymorphic effects through row-polymorphism using duplicate labels. Moreover, we show that our effects are not just syntactic labels but have a deep semantic connection to the program. For example, if an expression can be typed without an exn effect, then it will never throw an unhandled exception. Similar to Haskell's `runST`, we show how we can safely encapsulate stateful operations. Through the state effect, we can also safely combine state with let-polymorphism without needing either imperative type variables or a syntactic value restriction. Finally, our system is implemented fully in a new language called Koka and has been used successfully on various small to medium-sized sample programs ranging from a Markdown processor to a tier-splitted chat application. You can try out Koka live at www.rise4fun.com/koka/tutorial.
    Comment: In Proceedings MSFP 2014, arXiv:1406.153
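    The runST analogy is concrete in plain Haskell. A minimal sketch (standard GHC libraries, not Koka syntax): the rank-2 type of runST guarantees the mutable reference cannot escape its region, so sumST is observably pure, much as Koka's state effect can be discharged from a function's type.

        import Control.Monad.ST (runST)
        import Data.STRef (modifySTRef', newSTRef, readSTRef)

        -- Internally stateful, externally pure: the STRef is created,
        -- mutated, and read entirely inside one ST region.
        sumST :: [Int] -> Int
        sumST xs = runST $ do
          acc <- newSTRef 0
          mapM_ (\x -> modifySTRef' acc (+ x)) xs
          readSTRef acc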

    Type Classes for Lightweight Substructural Types

    Linear and substructural types are powerful tools, but adding them to standard functional programming languages often means introducing extra annotations and typing machinery. We propose a lightweight substructural type system design that recasts the structural rules of weakening and contraction as type classes; we demonstrate this design in a prototype language, Clamp. Clamp supports polymorphic substructural types as well as an expressive system of mutable references. At the same time, it adds little additional overhead to a standard Damas-Hindley-Milner type system enriched with type classes. We have established type safety for the core model and implemented a type checker with type inference in Haskell.
    Comment: In Proceedings LINEARITY 2014, arXiv:1502.0441
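    As a rough illustration of recasting structural rules as type classes, here is a minimal Haskell sketch (hypothetical class and type names, not Clamp's actual design): Dup stands in for contraction and Drop for weakening, and a type is treated as linear exactly when it inhabits neither class. The sketch only classifies types; enforcing use-exactly-once discipline also requires the surrounding API to consume values explicitly, which this fragment elides.

        -- Contraction as a class: values of instance types may be duplicated.
        class Dup a where
          dup :: a -> (a, a)

        -- Weakening as a class: values of instance types may be discarded.
        class Drop a where
          discard :: a -> ()

        instance Dup Int where dup x = (x, x)
        instance Drop Int where discard _ = ()

        -- A linear resource: no Dup or Drop instance, so generic code that
        -- requires (Dup a) or (Drop a) cannot copy or drop a Handle.
        newtype Handle = Handle Int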

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
    Comment: Accepted for the machine learning journal
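    The generate-test-constrain loop is easy to state abstractly. A minimal sketch (ordinary Haskell over a toy hypothesis space, not Popper's ASP/Prolog implementation), where moreGeneral is an assumed generality ordering on hypotheses:

        data Outcome = Solution | TooGeneral | TooSpecific
          deriving (Eq, Show)

        -- Enumerate candidates; after each failure, prune the part of the
        -- remaining space that the failure rules out.
        learn :: (h -> Outcome)     -- test stage: run against the examples
              -> (h -> h -> Bool)   -- moreGeneral h1 h2: h1 generalises h2
              -> [h]                -- generate stage: candidate hypotheses
              -> Maybe h
        learn test moreGeneral = go
          where
            go []       = Nothing
            go (h : hs) = case test h of
              Solution    -> Just h
              -- Entails a negative example: discard all generalisations of h.
              TooGeneral  -> go [h' | h' <- hs, not (moreGeneral h' h)]
              -- Misses a positive example: discard all specialisations of h.
              TooSpecific -> go [h' | h' <- hs, not (moreGeneral h h')]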

    Reducing the Cost of Precise Types

    Programs involving precise types enforce more properties via type checking, but precise types also prevent the reuse of functions throughout a program since no single precise type is used throughout a large program. My work is a step toward eliminating the underlying dilemma regarding type precision versus function reuse. It culminates in a novel traversal operator that recovers the reuse by automating most of each conversion between "similar" precise types, for a notion of similarity that I characterize in both the intuitive and technical senses. The benefits of my techniques are clear in side-by-side comparisons; in particular, I apply my techniques to two definitions of lambda-lifting. I present and implement my techniques in the Haskell programming language, but the fundamental ideas are applicable to any statically and strongly typed functional programming language with algebraic data types.
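    The boilerplate such a traversal operator removes is easy to see in Haskell (a hand-rolled sketch of the problem, not the paper's operator): two "similar" precise expression types, and a conversion in which every case but one is mechanical.

        data Expr
          = Var String
          | App Expr Expr
          | Lam String Expr
          | Let String Expr Expr

        -- A similar precise type: the same shape, but Let is compiled away,
        -- so the type itself guarantees the absence of Let.
        data ExprNoLet
          = NVar String
          | NApp ExprNoLet ExprNoLet
          | NLam String ExprNoLet

        -- Only the Let case does real work; the rest is the structural
        -- boilerplate a generic traversal could supply automatically.
        removeLet :: Expr -> ExprNoLet
        removeLet (Var x)     = NVar x
        removeLet (App f a)   = NApp (removeLet f) (removeLet a)
        removeLet (Lam x b)   = NLam x (removeLet b)
        removeLet (Let x e b) = NApp (NLam x (removeLet b)) (removeLet e)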

    Fix Your Types

    When using existing ACL2 datatype frameworks, many theorems require type hypotheses. These hypotheses slow down the theorem prover, are tedious to write, and are easy to forget. We describe a principled approach to types that provides strong type safety and execution efficiency while avoiding type hypotheses, and we present a library that automates this approach. Using this approach, types help you catch programming errors and then get out of the way of theorem proving.
    Comment: In Proceedings ACL2 2015, arXiv:1509.0552
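    The trade-off reads naturally outside ACL2 as well. A loose Haskell analogy (not ACL2 code; the names are illustrative): in the predicate style, every fact about the function needs a guard playing the role of a type hypothesis, while in the typed style the invariant lives in the type and the hypothesis disappears.

        -- Predicate style: the "type hypothesis" x >= 0 must accompany
        -- every use of, and every fact about, frob.
        frob :: Integer -> Integer
        frob x
          | x >= 0    = x + 1
          | otherwise = error "frob: negative input"

        -- Typed style: a smart constructor maintains the invariant once,
        -- and downstream code needs no hypothesis at all.
        newtype Nat = Nat Integer

        mkNat :: Integer -> Maybe Nat
        mkNat x = if x >= 0 then Just (Nat x) else Nothing

        frob' :: Nat -> Nat
        frob' (Nat x) = Nat (x + 1)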

    A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization (Artifact)

    This artifact contains the accompanying code for the ECOOP 2015 paper: "A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization". It contains source files for a full mechanization of the three languages presented in the paper: CAMP (Calculus for Aggregating Matching Patterns), NRA (Nested Relational Algebra) and NNRC (Named Nested Relational Calculus). Translations between all three languages and their attendant proofs of correctness are included. Additionally, a mechanization of a type system for the main languages is provided, along with bidirectional proofs of type preservation and proofs of the time complexity of the various compilers.

    Essential Incompleteness of Arithmetic Verified by Coq

    A constructive proof of the Gödel-Rosser incompleteness theorem has been completed using the Coq proof assistant. Some theory of classical first-order logic over an arbitrary language is formalized. A development of primitive recursive functions is given, and all primitive recursive functions are proved to be representable in a weak axiom system. Formulas and proofs are encoded as natural numbers, and functions operating on these codes are proved to be primitive recursive. The weak axiom system is proved to be essentially incomplete. In particular, Peano arithmetic is proved to be consistent in Coq's type theory and therefore is incomplete.
    Comment: This paper is part of the proceedings of the 18th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2005). For the associated Coq source files see the TeX sources, or see <http://r6.ca/Goedel20050512.tar.gz>.
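    The primitive recursive machinery the abstract mentions is small enough to sketch. A toy interpreter in Haskell (an illustration of the standard schema, not the paper's Coq development):

        -- Codes for primitive recursive functions.
        data PR
          = Zero            -- constant zero
          | Succ            -- successor
          | Proj Int        -- projection onto the i-th argument
          | Comp PR [PR]    -- composition f (g1 xs, ..., gk xs)
          | PrimRec PR PR   -- primitive recursion from base f and step g

        eval :: PR -> [Integer] -> Integer
        eval Zero _                 = 0
        eval Succ (x : _)           = x + 1
        eval (Proj i) xs            = xs !! i
        eval (Comp f gs) xs         = eval f [eval g xs | g <- gs]
        eval (PrimRec f _) (0 : xs) = eval f xs
        eval (PrimRec f g) (n : xs) =
          eval g ((n - 1) : eval (PrimRec f g) ((n - 1) : xs) : xs)
        eval _ _                    = error "eval: ill-formed application"

        -- Example: addition; eval add [2, 3] evaluates to 5.
        add :: PR
        add = PrimRec (Proj 0) (Comp Succ [Proj 1])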