    Levity Polymorphism

    Parametric polymorphism is one of the linchpins of modern typed programming, but it comes with a real performance penalty. We describe this penalty; offer a principled way to reason about it (kinds as calling conventions); and propose levity polymorphism. This new form of polymorphism allows abstractions over calling conventions; we detail and verify restrictions that are necessary in order to compile levity-polymorphic functions. Levity polymorphism has created new opportunities in Haskell, including the ability to generalize nearly half of the type classes in GHC's standard library.
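
    A hedged sketch of the idea in GHC Haskell (assuming a reasonably recent GHC; LevitySketch, myError and the use sites are illustrative names, not the paper's artifact): the kind TYPE r records a type's runtime representation, i.e. its calling convention, and a signature that is representation-polymorphic only in its result is one of the forms the paper's restrictions still permit.

        {-# LANGUAGE MagicHash, DataKinds, PolyKinds, KindSignatures, ExplicitForAll #-}
        module LevitySketch where

        import GHC.Exts (TYPE, RuntimeRep, Int#)

        -- 'a' may be lifted (e.g. Int) or unlifted (e.g. Int#): its kind carries the
        -- calling convention. The restrictions verified in the paper forbid binding a
        -- variable at a representation-polymorphic type, so only the result position
        -- stays polymorphic here (mirroring the type of error in GHC's base library).
        myError :: forall (r :: RuntimeRep) (a :: TYPE r). String -> a
        myError = errorWithoutStackTrace

        -- The same function instantiated at a lifted and at an unlifted result type.
        useLifted :: Int
        useLifted = myError "lifted result"

        useUnlifted :: Int# -> Int#
        useUnlifted _ = myError "unlifted result"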

    SoD²: Statically Optimizing Dynamic Deep Neural Network

    Though many compilation and runtime systems have been developed for DNNs in recent years, the focus has largely been on static DNNs. Dynamic DNNs, where tensor shapes and sizes and even the set of operators used are dependent upon the input and/or execution, are becoming common. This paper presents SoD², a comprehensive framework for optimizing Dynamic DNNs. The basis of our approach is a classification of common operators that form DNNs, and the use of this classification towards a Rank and Dimension Propagation (RDP) method. This framework statically determines the shapes of operators as known constants, symbolic constants, or operations on these. Next, using RDP we enable a series of optimizations, like fused code generation, execution (order) planning, and even runtime memory allocation plan generation. By evaluating the framework on 10 emerging Dynamic DNNs and comparing it against several existing systems, we demonstrate both reductions in execution latency and memory requirements, with RDP-enabled key optimizations responsible for much of the gains. Our evaluation results show that SoD² runs up to 3.9× faster than these systems while saving up to 88% peak memory consumption.
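
    A minimal sketch of the shape-propagation idea in Haskell (a hypothetical reconstruction for illustration; Dim, Shape, matmul and concat0 are invented names, and SoD² itself targets DNN operator graphs rather than anything like this toy): dimensions are carried as known constants, symbolic constants, or operations over them, so an operator's output shape can still be planned statically even when some input dimension is only known at run time.

        -- Dimensions are known constants, symbolic constants, or operations on them.
        data Dim = Known Int | Sym String | Add Dim Dim | Mul Dim Dim
          deriving (Eq, Show)

        type Shape = [Dim]  -- one entry per rank position

        -- Shape propagation through two representative operators.
        matmul :: Shape -> Shape -> Maybe Shape
        matmul [m, k] [k', n] | k == k' = Just [m, n]  -- inner dims must agree, even symbolically
        matmul _ _ = Nothing

        concat0 :: Shape -> Shape -> Maybe Shape
        concat0 (a : rest) (b : rest') | rest == rest' = Just (Add a b : rest)
        concat0 _ _ = Nothing

        -- A dynamic batch dimension stays symbolic yet still flows to downstream operators.
        main :: IO ()
        main = print (matmul [Sym "batch", Known 768] [Known 768, Known 10])
        -- prints: Just [Sym "batch",Known 10]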

    Semi-continuous Sized Types and Termination

    Some type-based approaches to termination use sized types: an ordinal bound for the size of a data structure is stored in its type. A recursive function over a sized type is accepted if it is visible in the type system that recursive calls occur only at a smaller size. This approach is only sound if the type of the recursive function is admissible, i.e., depends on the size index in a certain way. To explore the space of admissible functions in the presence of higher-kinded data types and impredicative polymorphism, a semantics is developed where sized types are interpreted as functions from ordinals into sets of strongly normalizing terms. It is shown that upper semi-continuity of such functions is a sufficient semantic criterion for admissibility. To provide a syntactic criterion, a calculus for semi-continuous functions is developed.
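
    The discipline can be loosely approximated in Haskell with a size index on the datatype (a sketch only; Nat, Vec and vmap are illustrative names, and the paper works with ordinal-indexed types, admissibility and strong normalization, none of which plain Haskell captures): the recursive call is made at a strictly smaller index, so the size-based termination argument is visible in the types.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        data Nat = Z | S Nat

        -- Lists indexed by an upper bound on their length, a crude stand-in for a size index.
        data Vec (n :: Nat) a where
          Nil  :: Vec n a
          Cons :: a -> Vec n a -> Vec ('S n) a

        -- The recursive call happens at index n, strictly below 'S n: the decrease in
        -- size is tracked by the type, which is what a sized-type system checks.
        vmap :: (a -> b) -> Vec n a -> Vec n b
        vmap _ Nil         = Nil
        vmap f (Cons x xs) = Cons (f x) (vmap f xs)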

    Safety and conservativity of definitions in HOL and Isabelle/HOL

    Definitions are traditionally considered to be a safe mechanism for introducing concepts on top of a logic known to be consistent. In contrast to arbitrary axioms, definitions should in principle be treatable as a form of abbreviation, and thus compiled away from the theory without losing provability. In particular, definitions should form a conservative extension of the pure logic. These properties are crucial for modern interactive theorem provers, since they ensure the consistency of the logic, as well as a valid environment for total/certified functional programming. We prove these properties, namely, safety and conservativity, for Higher-Order Logic (HOL), a logic implemented in several mainstream theorem provers and relied upon by thousands of users. Some unique features of HOL, such as the requirement to give non-emptiness proofs when defining new types and the impossibility of unfolding type definitions, make the proof of these properties, and also the very formulation of safety, nontrivial. Our study also factors in the essential variation of HOL definitions featured by Isabelle/HOL, a popular member of the HOL-based provers family. The current work improves on recent results which showed a weaker property, consistency of Isabelle/HOL's definitions.

    Programming errors in traversal programs over structured data

    Traversal strategies à la Stratego (also à la Strafunski and 'Scrap Your Boilerplate') provide an exceptionally versatile and uniform means of querying and transforming deeply nested and heterogeneously structured data including terms in functional programming and rewriting, objects in OO programming, and XML documents in XML programming. However, the resulting traversal programs are prone to programming errors. We are specifically concerned with errors that go beyond conservative type errors; examples we examine include divergent traversals, prematurely terminated traversals, and traversals with dead code. Based on an inventory of possible programming errors we explore options of static typing and static analysis so that some categories of errors can be avoided. This exploration generates suggestions for improvements to strategy libraries as well as their underlying programming languages. Haskell is used for illustrations and specifications with sufficient explanations to make the presentation comprehensible to the non-specialist. The overall ideas are language-agnostic and they are summarized accordingly.
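
    As a concrete flavour of such traversals, here is a hedged Scrap-Your-Boilerplate sketch (assuming the syb package; Expr, pushNeg and simplify are illustrative names, not taken from the paper): a type-specific rewrite is lifted into a uniform traversal, and the single bottom-up pass of everywhere never revisits the redexes its own rewrite creates, one of the subtle behaviours behind the error categories discussed above.

        {-# LANGUAGE DeriveDataTypeable #-}
        import Data.Generics (Data, Typeable, everywhere, mkT)

        data Expr = Lit Int | Add Expr Expr | Neg Expr
          deriving (Show, Data, Typeable)

        -- A type-specific rewrite: push negation through addition.
        pushNeg :: Expr -> Expr
        pushNeg (Neg (Add a b)) = Add (Neg a) (Neg b)
        pushNeg e               = e

        -- Lifted into a uniform, type-safe traversal over the whole term.
        simplify :: Expr -> Expr
        simplify = everywhere (mkT pushNeg)

        main :: IO ()
        main = print (simplify (Neg (Add (Lit 1) (Neg (Add (Lit 2) (Lit 3))))))
        -- prints: Add (Neg (Lit 1)) (Neg (Add (Neg (Lit 2)) (Neg (Lit 3))))
        -- The freshly created Neg (Add ...) redex is not rewritten again, because
        -- everywhere makes exactly one bottom-up pass over the term.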

    Verified programming with explicit coercions

    Type systems have proved to be a powerful means of specifying and proving important program invariants. In dependently typed programming languages, types can depend on values and hence express arbitrarily complicated propositions and their machine-checkable proofs. The type-based approach to program specification not only allows programmers to transcribe their intentions, but arranges for their direct involvement in the proving process, thus aiding the machine in its attempt to satisfy difficult obligations. In this thesis we develop a series of patterns for programming in a correct-by-construction style, making use of constraints and coercions to prove properties within a dependently typed host. This allows for the development of a verified kernel which can be built upon using the host system features. In particular, this should allow for the development of “tactics”, or semi-automated solvers, invoked when coercing types, all within a single language. The efficacy of this approach is demonstrated by the development of a system of indexed expressions exposing a case-analysis feature that serves to generate value constraints. These constraints are directly reflected into the host, allowing their involvement in the type-checking process. A motivating use case of this design shows how a term’s semantic index information admits an exact, formalized cost analysis amenable to reasoning within the host. Finally, we show how such a system is used to identify unreachable dead code, trivially admitting the design and verification of an SSA-style compiler with this optimization. We think such a design, which explicitly proves the local correctness of type transformations in the presence of accumulated constraints, can form the basis of a flexible language in concert with a variety of trusted solvers.
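
    A much-simplified Haskell rendering of the dead-code point (the thesis works in a dependently typed host with explicit constraints and coercions; Ty, Expr, Val and eval are illustrative names and this sketch only mirrors the flavour): once expressions carry a static index, case analysis refines that index, and branches whose index is impossible need no code.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        data Ty = TInt | TBool

        -- Expressions indexed by the type of value they produce.
        data Expr (t :: Ty) where
          IntLit  :: Int  -> Expr 'TInt
          BoolLit :: Bool -> Expr 'TBool
          If      :: Expr 'TBool -> Expr t -> Expr t -> Expr t

        data Val (t :: Ty) where
          VInt  :: Int  -> Val 'TInt
          VBool :: Bool -> Val 'TBool

        eval :: Expr t -> Val t
        eval (IntLit n)  = VInt n
        eval (BoolLit b) = VBool b
        eval (If c t e)  = case eval c of
          VBool True  -> eval t
          VBool False -> eval e
          -- no VInt case: the index 'TBool on the guard makes that branch
          -- unreachable, i.e. the index information has identified dead code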

    From types to logical assertions: automatic or assisted proof of properties of functional programs (Des types aux assertions logiques : preuve automatique ou assistée de propriétés sur les programmes fonctionnels)

    This work studies two approaches, both based on static analysis, to improving the safety and correctness of computer programs. The first is typing, which guarantees that the evaluation of a program cannot fail. The functional language ML has a very rich type system together with an algorithm that infers types automatically. We focus on adapting this algorithm to generalized algebraic data types (GADTs), a restricted form of the inductive types of Coq, introduced by Hongwei Xi in 2003. In this setting, efficient computation of a most general type is impossible. We propose a stratification of the language that retains the usual characteristics of the ML fragment and isolates the treatment of GADTs by making their use explicit. The resulting language, MLGX, places a heavy annotation burden on the programmer. A second stratum, MLGI, offers a mechanism to infer most of these type annotations locally, in a predictable and efficient although incomplete way. The first part concludes with an illustration of the expressiveness of GADTs for encoding the invariants of the pushdown automata used in LR parsing. The second approach augments the language with logical assertions that allow specifications of arbitrary complexity to be expressed in polymorphically typed higher-order logic. We statically check the compliance of the program semantics with respect to these specifications using Hoare logic, which generates a set of proof obligations from an annotated program; these obligations are then discharged by semi-automatic computer-based proofs. Once the proof obligations are discharged, if a program is used correctly and returns a value, that value is guaranteed to be correct. This technique is usually applied to imperative languages; with a pure functional language, the problems tied to the state of memory vanish, while higher-order functions and polymorphism raise new ones. Our design choices seek to maximize the opportunities for using automatic provers by carefully translating higher-order objects into first-order ones. A prototype implementation illustrates the approach through the almost fully automatic proof of a CAML module of balanced trees denoting finite sets.
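
    The loss of principal types that motivates the MLGX/MLGI stratification can be seen in a two-line example (shown here in Haskell rather than the thesis's ML dialect; Equ and cast are illustrative names): the function below admits the incomparable types Equ a b -> a -> b and Equ a b -> a -> a, so no most general type exists and an annotation must be supplied by the programmer or propagated locally.

        {-# LANGUAGE GADTs #-}

        -- A GADT witnessing a type equality.
        data Equ a b where
          Refl :: Equ a a

        -- Without the signature there is no principal type; with it, pattern matching
        -- on Refl brings the equality a ~ b into scope and the coercion type-checks.
        cast :: Equ a b -> a -> b
        cast Refl x = x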