
    Scoped and typed staging by evaluation

    Using a dependently typed host language, we give a well-scoped-and-typed-by-construction presentation of a minimal two-level simply typed calculus with a static and a dynamic stage. The staging function, which partially evaluates the static parts of a term, is obtained by a model construction inspired by normalisation by evaluation. We then demonstrate how this minimal language can be extended to provide additional metaprogramming capabilities, and how to define a higher-order functional language evaluating to digital circuit descriptions.
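
    The staging idea is easiest to see on a tiny example. Below is a minimal, untyped Haskell sketch of a two-level expression language with a staging function that folds static operations away and residualises dynamic ones. The constructor names and the error-based handling of stage mismatches are our own illustration; the paper's presentation is intrinsically scoped and typed in a dependently typed host, which rules such mismatches out by construction.

    -- Dynamic (residual) terms: what survives staging.
    data Dyn
      = DVar String
      | DLit Int
      | DAdd Dyn Dyn
      deriving Show

    -- Two-level terms: S-prefixed constructs are static, evaluated at
    -- staging time; the rest describe residual dynamic code.
    data Two
      = SLit Int           -- static number, known at staging time
      | SAdd Two Two       -- static addition, computed during staging
      | Splice Two         -- inject a static number into dynamic code
      | Code Dyn           -- a quoted dynamic fragment
      | DynAdd Two Two     -- dynamic addition, residualised

    -- A staging-time value: either a static number or dynamic code.
    data Val = VInt Int | VCode Dyn
      deriving Show

    -- The staging function: compute the static parts, emit code for the rest.
    stage :: Two -> Val
    stage (SLit n) = VInt n
    stage (SAdd a b) = case (stage a, stage b) of
      (VInt x, VInt y) -> VInt (x + y)            -- computed now
      _                -> error "SAdd expects static operands"
    stage (Splice t) = case stage t of
      VInt n -> VCode (DLit n)                    -- static value becomes a literal
      _      -> error "Splice expects a static value"
    stage (Code d) = VCode d
    stage (DynAdd a b) = case (stage a, stage b) of
      (VCode x, VCode y) -> VCode (DAdd x y)      -- residual addition
      _                  -> error "DynAdd expects code operands"

    -- Example: the static (2 + 3) is folded away; the addition with the
    -- dynamic variable x remains in the residual program.
    example :: Val
    example = stage (DynAdd (Code (DVar "x")) (Splice (SAdd (SLit 2) (SLit 3))))
    -- => VCode (DAdd (DVar "x") (DLit 5))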

    Complete and easy type inference for first-class polymorphism

    The Hindley-Milner (HM) typing discipline is remarkable in that it allows statically typing programs without requiring the programmer to annotate them with types. This is due to the HM system offering complete type inference: if a program is well typed, the inference algorithm can determine all the necessary typing information. Let bindings implicitly perform generalisation, allowing a let-bound variable to receive the most general possible type, which in turn may be instantiated appropriately at each of the variable's use sites. As a result, the HM type system has become the foundation for type inference in programming languages such as Haskell as well as the ML family of languages, and has been extended in a multitude of ways. The original HM system supports only prenex polymorphism, where type variables are universally quantified only at the outermost level. This precludes many useful programs, such as passing a data structure to a function in the form of its fold function: the fold must be polymorphic in the type of the accumulator, which requires a nested quantifier in the type of the overall function. One direction of extending the HM system is therefore to add support for first-class polymorphism, allowing arbitrarily nested quantifiers and instantiating type variables with polymorphic types. In such systems, restrictions are necessary to retain decidability of type inference. This work presents FreezeML, a novel approach for integrating first-class polymorphism into the HM system, with a focus on simplicity. It eschews sophisticated yet hard-to-grasp heuristics in the type system and does not extend the language of types, while still requiring only modest amounts of annotation. In particular, FreezeML leverages the mechanisms for generalisation and instantiation that are already at the heart of ML. Generalisation and instantiation are performed by let bindings and variables, respectively, but extended to types beyond prenex polymorphism. The defining feature of FreezeML is the ability to freeze variables, which prevents the usual instantiation of their types, allowing them instead to keep their original, fully polymorphic types. We demonstrate that FreezeML is as expressive as System F by providing a translation from the latter to the former; the reverse direction is also shown. Further, we prove that FreezeML is a conservative extension of ML: when considering only ML programs, FreezeML accepts exactly the same programs as ML itself.

    We show that type inference for FreezeML can easily be integrated into HM-like type systems by presenting a sound and complete inference algorithm for FreezeML that extends Algorithm W, the original inference algorithm for the HM system. Since the inception of Algorithm W in the 1970s, type inference for the HM system and its descendants has been modernised by approaches involving constraint solving, which have proved more modular and extensible. In such systems, a term is translated to a logical constraint whose solutions correspond to the types of the original term; a solver for such constraints may then be defined independently. To this end, we demonstrate such a constraint-based inference approach for FreezeML. We also discuss the effects of integrating the value restriction into FreezeML, and provide detailed comparisons with other approaches to first-class polymorphism in ML, alongside a collection of examples found in the literature.
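
    The fold example from the abstract can be made concrete in standard Haskell with RankNTypes, which plays roughly the role FreezeML fills for ML. The names below are our own illustration, and Haskell's explicit type annotation stands in for FreezeML's freezing mechanism; this is not FreezeML syntax.

    {-# LANGUAGE RankNTypes #-}

    -- A list represented by its fold, polymorphic in the accumulator type b.
    -- The forall sits inside an argument type, which prenex (HM)
    -- polymorphism cannot express.
    type ListAsFold a = forall b. (a -> b -> b) -> b -> b

    fromList :: [a] -> ListAsFold a
    fromList xs = \cons nil -> foldr cons nil xs

    -- The consumer instantiates the fold at two different accumulator
    -- types, which is only possible thanks to the nested quantifier.
    consume :: ListAsFold Int -> (Int, [Int])
    consume fold = (fold (+) 0, fold (:) [])

    main :: IO ()
    main = print (consume (fromList [1, 2, 3]))   -- prints (6,[1,2,3])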

    VeriFx: Correct Replicated Data Types for the Masses

    Distributed systems adopt weak consistency to ensure high availability and low latency, but state convergence is hard to guarantee due to conflicts. Experts carefully design replicated data types (RDTs) that resemble sequential data types and embed conflict resolution mechanisms that ensure convergence. Designing RDTs is challenging, as their correctness depends on subtleties such as the ordering of concurrent operations. Currently, researchers verify RDTs manually, either by paper proofs or using proof assistants. Unfortunately, paper proofs are subject to reasoning flaws, while mechanized proofs verify a formalization instead of a real-world implementation. Furthermore, writing mechanized proofs is reserved for verification experts and is extremely time-consuming. To simplify the design, implementation, and verification of RDTs, we propose VeriFx, a specialized programming language for RDTs with automated proof capabilities. VeriFx lets programmers implement RDTs atop functional collections and express correctness properties that are verified automatically. Verified RDTs can be transpiled to mainstream languages (currently Scala and JavaScript). VeriFx provides libraries for implementing and verifying Conflict-free Replicated Data Types (CRDTs) and Operational Transformation (OT) functions. These libraries implement the general execution model of those approaches and define their correctness properties. We use the libraries to implement and verify an extensive portfolio of 51 CRDTs, 16 of which are used in industrial databases, and to reproduce a study on the correctness of OT functions.
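
    For a flavour of the kind of RDT involved, here is a plain Haskell sketch of a grow-only counter CRDT. This is not VeriFx code (VeriFx is its own language, transpiled to Scala and JavaScript); the convergence argument stated in the comments is exactly the sort of property VeriFx discharges automatically rather than by hand.

    import qualified Data.Map.Strict as Map

    type ReplicaId = String

    -- G-Counter: one slot per replica; each replica only bumps its own slot.
    newtype GCounter = GCounter (Map.Map ReplicaId Int)
      deriving (Eq, Show)

    increment :: ReplicaId -> GCounter -> GCounter
    increment r (GCounter m) = GCounter (Map.insertWith (+) r 1 m)

    value :: GCounter -> Int
    value (GCounter m) = sum (Map.elems m)

    -- Merge is commutative, associative, and idempotent (pointwise max),
    -- so replicas converge regardless of message ordering or duplication.
    merge :: GCounter -> GCounter -> GCounter
    merge (GCounter a) (GCounter b) = GCounter (Map.unionWith max a b)

    -- Example: two replicas increment concurrently, then merge:
    --   value (merge (increment "a" (GCounter Map.empty))
    --                (increment "b" (GCounter Map.empty)))  ==  2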

    Revisiting Language Support for Generic Programming: When Genericity Is a Core Design Goal

    Context: Generic programming, as defined by Stepanov, is a methodology for writing efficient and reusable algorithms by considering only the required properties of their underlying data types and operations. Generic programming has proven to be an effective means of constructing libraries of reusable software components in languages that support it. Generics-related language design choices play a major role in how conducive a language is to generic programming in practice.

    Inquiry: Several mainstream programming languages (e.g. Java and C++) were first created without generics; features to support generic programming were added later, gradually. Much of the existing literature on supporting generic programming thus focuses on retrofitting generic programming into existing languages and identifying the related implementation challenges. Is the programming experience significantly better, or different, when programming in a language designed for generic programming, free of limitations from prior language design choices?

    Approach: We examine Magnolia, a language designed to embody generic programming. Magnolia is representative of an approach to language design rooted in algebraic specifications. We repeat a well-known experiment, in which we put Magnolia's generic programming facilities under scrutiny by implementing a subset of the Boost Graph Library, and reflect on our development experience.

    Knowledge: We discover that the idioms identified as key features for supporting Stepanov-style generic programming in previous studies and work on the topic do not tell the full story. We clarify which of them are more a means to an end than fundamental features for supporting generic programming. Based on the development experience with Magnolia, we identify variadics as an additional key feature for generic programming, and point out limitations and challenges of genericity by property.

    Grounding: Our work uses a well-known framework from the literature for evaluating the generic programming facilities of a language to evaluate the algebraic approach through Magnolia, and we draw comparisons with well-known programming languages.

    Importance: This work gives a fresh perspective on generic programming, and clarifies which language properties are fundamental, and what their trade-offs are, when considering support for Stepanov-style generic programming. This understanding of how to set the ground for generic programming will inform future language design.
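
    Genericity by property can be illustrated with a small Haskell sketch using a type class in place of a Magnolia concept. The class and function names are our own; Magnolia's algebraic specifications additionally state the laws as checkable axioms, whereas here they live in comments.

    -- The "concept": a carrier with an associative operation and an
    -- identity element, stated as required operations and laws.
    class MyMonoid a where
      identity :: a
      combine  :: a -> a -> a   -- required law: combine is associative

    -- A generic algorithm, valid for any model of the concept because it
    -- relies only on the required properties: power by repeated squaring.
    power :: MyMonoid a => a -> Int -> a
    power _ 0 = identity
    power x n
      | even n    = let h = power x (n `div` 2) in combine h h
      | otherwise = combine x (power x (n - 1))

    -- Two models of the concept.
    newtype Sum = Sum Int deriving Show
    instance MyMonoid Sum where
      identity = Sum 0
      combine (Sum a) (Sum b) = Sum (a + b)

    newtype Concat = Concat String deriving Show
    instance MyMonoid Concat where
      identity = Concat ""
      combine (Concat a) (Concat b) = Concat (a ++ b)

    -- power (Sum 3) 4       == Sum 12
    -- power (Concat "ab") 3 == Concat "ababab"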

    Definitional Functoriality for Dependent (Sub)Types

    Dependently-typed proof assistants rely crucially on definitional equality, which relates types and terms that are automatically identified in the underlying type theory. This paper extends type theory with definitional functor laws: equations satisfied propositionally by a large class of container-like type constructors F : Type → Type equipped with a map_F : (A → B) → F A → F B, such as lists or trees. Promoting these equations to definitional ones strengthens the theory, enabling slicker proofs and more automation for functorial type constructors. This extension is used to modularly justify a structural form of coercive subtyping, propagating subtyping through type formers in a map-like fashion. We show that the resulting notion of coercive subtyping, thanks to the extra definitional equations, is equivalent to a natural and implicit form of subsumptive subtyping. The key result, decidability of type-checking in a dependent type system with functor laws for lists, has been entirely mechanized in Coq.
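
    For readers more at home in functional programming than type theory, the functor laws and the map-like propagation of coercions can be written down in Haskell, where map_F is fmap. Note the difference in status: Haskell merely expects the laws to hold of any Functor instance, while the paper makes them definitional.

    -- The two functor laws (here only comments; the paper makes them
    -- definitional equalities in the type theory):
    --   fmap id      == id                  -- identity law
    --   fmap (g . f) == fmap g . fmap f     -- composition law

    data Tree a = Leaf | Node (Tree a) a (Tree a) deriving Show

    instance Functor Tree where
      fmap _ Leaf         = Leaf
      fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

    -- Structural coercive subtyping propagated "in a map-like fashion":
    -- a coercion from Int to Integer lifts through the type former Tree.
    widen :: Tree Int -> Tree Integer
    widen = fmap fromIntegral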

    Improvements to Many-Sorted Finite Model Finding using SMT Solvers

    Formal modeling is a powerful tool in requirements engineering. By modeling a system before implementation, one can discover bugs before they appear in testing or production. Model finding (or instance finding) for a model written in first-order logic is the problem of finding a mapping of variables to values that satisfies the model's specifications. Unfortunately, this problem is undecidable in general. Finite model finding, the problem of finding a mapping of variables to finite sets of values, is decidable. Thus, finite model finding enables the automated verification of models at certain scopes (the number of elements in the domain of discourse of the problem). While finitizing a problem makes it solvable, as the scope of the problem increases, the problem can quickly become prohibitively expensive to solve. It is therefore important to choose an efficient encoding of the problem for satisfiability (SAT) or satisfiability modulo theories (SMT) solvers when finitizing a problem. We propose improvements to encodings of many-sorted finite model finding problems for SMT solvers, including new encodings for finite integers and transitive closure. The key contributions of this thesis are that we:

    - formulate Milicevic and Jackson's method for preventing overflows in integer problems using bitvectors as a transformation from/to a many-sorted first-order logic (MSFOL) formula, and extend it to support additional abstractions present in MSFOL;
    - propose an integer finitization method, called overflow-preventing finite integers (OPFI), that produces results closer to unbounded integers than Milicevic and Jackson's method, improving the correctness of the finitization with respect to the same problem over unbounded integers;
    - demonstrate that OPFI solves problems faster than our encoding of Milicevic and Jackson's method in an SMT solver, and does not solve problems significantly slower than unchecked (pure) bitvectors;
    - propose and prove the validity of negative transitive closure, an encoding of the transitive closure operator over a finite scope in first-order logic, for the special case where pairs are only checked to not be in the transitive closure of a relation;
    - generalize existing encodings of transitive closure to relations of arity greater than two;
    - demonstrate that our new encoding of transitive closure performs faster than or as fast as existing encodings on models generated from Alloy models.
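
    As a reference point for what the transitive closure encodings above must capture, the following Haskell sketch computes the transitive closure of a finite binary relation by iterating relational composition to a fixed point. The thesis instead encodes this semantics declaratively for an SMT solver; the function names here are our own illustration.

    import qualified Data.Set as Set

    type Rel a = Set.Set (a, a)

    -- Relational composition: (x,z) whenever (x,y) in r and (y,z) in s.
    compose :: Ord a => Rel a -> Rel a -> Rel a
    compose r s = Set.fromList
      [ (x, z) | (x, y) <- Set.toList r, (y', z) <- Set.toList s, y == y' ]

    -- Add one more composition step until nothing changes; termination is
    -- guaranteed because the scope (and hence the relation) is finite.
    transitiveClosure :: Ord a => Rel a -> Rel a
    transitiveClosure r = go r
      where
        go acc =
          let acc' = Set.union acc (compose acc r)
          in if acc' == acc then acc else go acc'

    -- transitiveClosure (Set.fromList [(1,2),(2,3)])
    --   == Set.fromList [(1,2),(1,3),(2,3)]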

    Planning problems as types, plans as programs: a dependent types infrastructure for verification and reasoning about automated plans in Agda

    Historically, the Artificial Intelligence and programming language fields have had a mutually beneficial relationship: theoretical results in the programming language field often find practical utility in Artificial Intelligence. One example of this, with roots in both declarative languages and theorem proving, is AI planning. In recent years, new programming languages founded on dependent type theory have been developed. These languages are not only more expressive than traditional programming languages but are also able to represent and prove mathematical properties within the language itself. This thesis explores how dependently typed languages can benefit the AI planning field. On one side, it shows how AI planning languages can be enriched with more expressivity and stronger verification guarantees; on the other, it shows that AI planning is an ideal field for illustrating the practical utility of largely theoretical aspects of programming language theory. The thesis accomplishes this by implementing multiple inference systems for plan validation in the dependently-typed programming language Agda. Importantly, these inference systems are automated and embody the Curry-Howard correspondence: plans are not only proof terms but also executable functions. The thesis then shows how the dependently-typed implementations of the inference systems can be further utilised to add enriched constraints over plan validation.
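
    A plan validator in a simply typed language makes the gap that dependent types close easy to see. In the Haskell sketch below (our own illustration, not the thesis's Agda development), validity is a mere Bool computed after the fact; in Agda, a validated plan is itself a proof term.

    import qualified Data.Set as Set

    type Fact  = String
    type State = Set.Set Fact

    -- A STRIPS-style action: preconditions, facts added, facts deleted.
    data Action = Action
      { name   :: String
      , pre    :: Set.Set Fact
      , addEff :: Set.Set Fact
      , delEff :: Set.Set Fact
      }

    -- An action applies only when its preconditions hold.
    apply :: State -> Action -> Maybe State
    apply s a
      | pre a `Set.isSubsetOf` s =
          Just (Set.union (addEff a) (s `Set.difference` delEff a))
      | otherwise = Nothing

    -- A plan is valid if every action fires and the goal holds at the end.
    validate :: State -> Set.Set Fact -> [Action] -> Bool
    validate s0 goal plan =
      case foldl (\ms a -> ms >>= (`apply` a)) (Just s0) plan of
        Just sFinal -> goal `Set.isSubsetOf` sFinal
        Nothing     -> False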

    Level-p-complexity of Boolean functions using Thinning, Memoization, and Polynomials

    This paper describes a purely functional library for computing the level-p-complexity of Boolean functions, and applies it to two-level iterated majority. Boolean functions are simply functions from n bits to one bit; they can describe digital circuits, voting systems, etc. An example of a Boolean function is majority, which returns the value that has a majority among the n input bits, for odd n. The complexity of a Boolean function f measures the cost of evaluating it: how many bits of the input are needed to be certain about the result of f. There are many competing complexity measures, but we focus on level-p-complexity, a function of the probability p that a bit is 1. The level-p-complexity D_p(f) is the minimum expected cost when the input bits are independent and identically distributed with a Bernoulli(p) distribution. We specify the problem as choosing the minimum expected cost over all possible decision trees, which directly translates to a clearly correct, but very inefficient, implementation. The library uses thinning and memoization for efficiency, and type classes for separation of concerns. The complexity is represented using polynomials, and the order relation used for thinning is implemented using polynomial factorisation and root-counting. Finally, we compute the complexity for two-level iterated majority and improve on an earlier result by J. Jansson. (20 pages, 10 figures)
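
    The central objects are easy to sketch in Haskell: decision trees over the input bits, and the expected query cost of a tree as a function of p. The sketch below uses illustrative names and Double instead of the library's exact polynomial representation, and only evaluates one fixed tree for 3-bit majority; the library's real work is minimising this cost over all trees using thinning and memoization.

    data DecTree
      = Res Bool                   -- answer known, stop querying
      | Query Int DecTree DecTree  -- query bit i; go left on 0, right on 1

    -- Expected number of queried bits when each bit is 1 with probability p.
    expectedCost :: DecTree -> Double -> Double
    expectedCost (Res _) _ = 0
    expectedCost (Query _ t0 t1) p =
      1 + (1 - p) * expectedCost t0 p + p * expectedCost t1 p

    -- A tree for 3-bit majority: query bits 0 and 1; only if they
    -- disagree, query bit 2.
    maj3 :: DecTree
    maj3 = Query 0
             (Query 1 (Res False) (Query 2 (Res False) (Res True)))
             (Query 1 (Query 2 (Res False) (Res True)) (Res True))

    -- expectedCost maj3 p == 2 + 2*p*(1-p), e.g. 2.5 at p = 1/2.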