
    Covariance and Controvariance: a fresh look at an old issue (a primer in advanced type systems for learning functional programmers)

    Full text link
    Twenty years ago, in an article titled "Covariance and contravariance: conflict without a cause", I argued that covariant and contravariant specialization of method parameters in object-oriented programming had different purposes, and deduced that not only could they both coexist in the same language, but they actually should. In this work I reexamine the results of that article in the light of recent advances in (sub-)typing theory and programming languages, taking a fresh look at this old issue. Actually, revisiting this problem is just an excuse for writing an essay that aims at explaining sophisticated type-theoretic concepts, in simple terms and by examples, to undergraduate computer science students and/or willing functional programmers. Finally, I take advantage of this opportunity to describe some undocumented advanced techniques of type-system implementation that are known only to a few insiders who have dug into the code of some compilers: therefore, even expert language designers and implementers may find this work worth reading.
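    As a rough illustration of the two variance directions the essay is concerned with (a generic sketch, not an example taken from the article; the class names are invented), recall that function types are contravariant in their parameter types and covariant in their result types. In Python's typing vocabulary, a Callable[[Animal], Dog] can be used wherever a Callable[[Dog], Animal] is expected:

        from typing import Callable

        class Animal: ...
        class Dog(Animal): ...          # Dog is a subtype of Animal

        def feed_any_animal(a: Animal) -> Dog:
            return Dog()

        # A caller that asks for a "Dog handler returning an Animal" can safely be
        # handed feed_any_animal instead:
        #   - parameter types are contravariant: it accepts any Animal, so a Dog is fine;
        #   - return types are covariant: it returns a Dog, which is an Animal.
        def run(handler: Callable[[Dog], Animal]) -> Animal:
            return handler(Dog())

        run(feed_any_animal)            # accepted by static type checkers such as mypy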

    The CIAO Multi-Dialect Compiler and System: An Experimentation Workbench for Future (C)LP Systems

    Full text link
    CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO-Prolog standard, several constraint domains, functional and higher-order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving properties in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results already obtained with the system in the areas of program analysis and transformation.

    A Type System for Julia

    Full text link
    The Julia programming language was designed to fill the needs of scientific computing by combining the benefits of productivity and performance languages. Julia allows users to write untyped scripts easily without needing to worry about many implementation details, as do other productivity languages. If one just wants to get the work done, regardless of how efficient or general the program might be, such a paradigm is ideal. Simultaneously, Julia also allows library developers to write efficient generic code that can run as fast as implementations in performance languages such as C or Fortran. This combination of user-facing ease and library-developer-facing performance has proven quite attractive, and the language has seen increasing adoption. With adoption come combinatorial challenges to correctness. Multiple dispatch -- Julia's key mechanism for abstraction -- allows many libraries to compose "out of the box." However, it creates bugs where one library's requirements do not match what another provides. Typing could address this, but at the cost of Julia's flexibility for scripting. I developed a "best of both worlds" solution: gradual typing for Julia. My system forms the core of a gradual type system for Julia, laying the foundation for improving the correctness of Julia programs while not getting in the way of script writers. My framework allows methods to be individually typed or untyped, allowing users to write untyped code that interacts with typed library code and vice versa. Typed methods then get a soundness guarantee that is robust in the presence of both dynamically typed code and dynamically generated definitions. I additionally describe protocols, a mechanism for typing abstraction over concrete implementation that accommodates one common pattern in Julia libraries, and describe its implementation in my typed Julia framework. Comment: PhD thesis.
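    The typed/untyped boundary that gradual typing is concerned with can be pictured with a small dynamic-language sketch (Python here, and purely illustrative: the decorator below is hypothetical and is not the thesis' static system, which provides a compile-time soundness guarantee rather than run-time checks). Typed library code declares its expectations, and untyped script code that violates them is caught at the boundary instead of failing deep inside the library:

        import inspect
        from functools import wraps

        def typed(fn):
            """Hypothetical boundary check: enforce a function's annotations at run
            time so that untyped callers can interact with 'typed' code and vice versa."""
            sig = inspect.signature(fn)
            @wraps(fn)
            def wrapper(*args, **kwargs):
                bound = sig.bind(*args, **kwargs)
                for name, value in bound.arguments.items():
                    ann = sig.parameters[name].annotation
                    if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                        raise TypeError(f"{fn.__name__}: {name} expected {ann.__name__}")
                result = fn(*args, **kwargs)
                ret = sig.return_annotation
                if ret is not inspect.Signature.empty and not isinstance(result, ret):
                    raise TypeError(f"{fn.__name__}: bad return type")
                return result
            return wrapper

        @typed
        def norm(xs: list) -> float:        # "typed" library code
            return sum(x * x for x in xs) ** 0.5

        # Untyped script code calling into it:
        print(norm([3.0, 4.0]))             # 5.0
        try:
            norm("oops")                    # rejected at the boundary, not deep inside
        except TypeError as err:
            print(err)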

    Transient Typechecks are (Almost) Free

    Get PDF
    Transient gradual typing imposes run-time type tests that typically cause a linear slowdown in programs’ performance. This performance impact discourages the use of type annotations because adding types to a program makes the program slower. A virtual machine can employ standard just-in-time optimizations to reduce the overhead of transient checks to near zero. These optimizations can give gradually-typed languages performance comparable to state-of-the-art dynamic languages, so programmers can add types to their code without affecting their programs’ performance.
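    Concretely, a transient check is a shallow test of a value's outermost constructor, inserted wherever an annotated value enters or leaves typed code. The Python sketch below (not taken from the paper or its implementation) shows what that insertion looks like:

        class Rectangle:
            def __init__(self, width, height):
                self.width, self.height = width, height

        def transient_check(value, expected):
            # Shallow, top-level test: looks only at the value's class, never inside it.
            if not isinstance(value, expected):
                raise TypeError(f"expected {expected.__name__}, got {type(value).__name__}")
            return value

        # Source:  def area(rect: Rectangle) -> int: return rect.width * rect.height
        # After transient check insertion (schematic):
        def area(rect):
            rect = transient_check(rect, Rectangle)                 # check on entry
            return transient_check(rect.width * rect.height, int)   # check on the result

        print(area(Rectangle(3, 4)))   # 12; a JIT that already knows rect's concrete
                                       # type can prove both checks redundant and drop them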

    Open Programming Language Interpreters

    Get PDF
    Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them. Inquiry: We address the problem of dynamically adapting an interpreter to tailor its behavior to the task at hand and to introduce new features that fulfill unforeseen requirements. Many languages provide a MOP that supports reflection to some degree. However, MOPs are typically language-specific, their reflective functionality is often restricted, and adaptation logic and application logic are often mixed, which makes the source code harder to understand and maintain. Our system overcomes these limitations. Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated in the Neverlang framework. The system exposes the structure, behavior and runtime state of any Neverlang-based interpreter, with the ability to modify them. Knowledge: Our system provides complete control over an interpreter's structure, behavior and runtime state. The approach is applicable to every Neverlang-based interpreter. Adaptation code can potentially be reused across different language implementations. Grounding: With a prototype implementation in hand, we focused on evaluating feasibility. The paper shows that our approach effectively addresses problems commonly found in the research literature. We have a demonstrative video and examples that illustrate our approach on dynamic software adaptation, aspect-oriented programming, debugging and context-aware interpreters. Importance: To our knowledge, this paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support, for free, to any Neverlang-based interpreter. We are not aware of any prior application of open implementations to programming language interpreters in the sense defined in this paper. Rather than substituting other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.
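    To make the idea concrete (a generic sketch of a framework-level MOP over a toy interpreter, not the Neverlang API the paper describes), an open interpreter exposes its evaluation rules as ordinary data that adaptation code can inspect and replace at run time:

        # A toy expression interpreter whose evaluation rules can be inspected and
        # replaced through a minimal metaobject protocol.
        class OpenInterpreter:
            def __init__(self):
                # The MOP surface: a table mapping node tags to evaluation functions.
                self.rules = {
                    "num": lambda node, env: node[1],
                    "var": lambda node, env: env[node[1]],
                    "add": lambda node, env: self.eval(node[1], env) + self.eval(node[2], env),
                }

            def eval(self, node, env):
                return self.rules[node[0]](node, env)

        interp = OpenInterpreter()
        expr = ("add", ("num", 1), ("var", "x"))
        print(interp.eval(expr, {"x": 2}))          # 3

        # Adaptation code: wrap the existing "add" rule with tracing without touching
        # the interpreter's source -- the kind of dynamic adaptation the paper targets.
        plain_add = interp.rules["add"]
        def traced_add(node, env):
            print("evaluating", node)
            return plain_add(node, env)
        interp.rules["add"] = traced_add
        print(interp.eval(expr, {"x": 2}))          # prints the trace, then 3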

    Description and Optimization of Abstract Machines in a Dialect of Prolog

    Full text link
    In order to achieve competitive performance, abstract machines for Prolog and related languages end up being large and intricate, and incorporate sophisticated optimizations, both at the design and at the implementation levels. At the same time, efficiency considerations make it necessary to use low-level languages in their implementation. This makes them laborious to code, optimize, and, especially, maintain and extend. Writing the abstract machine (and ancillary code) in a higher-level language can help tame this inherent complexity. We show how the semantics of most basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog. These descriptions are then compiled to C and assembled to build a complete bytecode emulator. Thanks to the high level of the language used and its closeness to Prolog, the abstract machine description can be manipulated using standard Prolog compilation and optimization techniques with relative ease. We also show how, by applying program transformations selectively, we obtain abstract machine implementations whose performance can match and even exceed that of state-of-the-art, highly-tuned, hand-crafted emulators. Comment: 56 pages, 46 figures, 5 tables. To appear in Theory and Practice of Logic Programming (TPLP).
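    For readers unfamiliar with the setting, the core of a bytecode emulator is a dispatch loop that threads machine state through per-instruction semantic rules. The toy sketch below (Python, with an invented three-opcode instruction set; nothing like the Prolog abstract machine the paper actually describes and compiles to C) is only meant to fix the vocabulary:

        def run(code):
            """Toy stack-machine emulator: one semantic rule per opcode."""
            stack, pc = [], 0
            while pc < len(code):
                op, *args = code[pc]
                if op == "push":                     # push a constant operand
                    stack.append(args[0])
                elif op == "add":                    # pop two operands, push their sum
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "halt":
                    break
                else:
                    raise ValueError(f"unknown opcode {op}")
                pc += 1
            return stack

        print(run([("push", 2), ("push", 3), ("add",), ("halt",)]))   # [5]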

    Optimizing JavaScript Engines for Modern-day Workloads

    Get PDF
    In modern times, we have seen a tremendous increase in the popularity and usage of web-based applications. Applications such as presentation software and word processors, which were traditionally considered desktop applications, are being ported to the web by compiling them to JavaScript. Since JavaScript is the de facto language of the web, JavaScript engine performance significantly affects the overall web application experience. JavaScript, initially intended solely as a client-side scripting language for web browsers, is now being used to implement server-side web applications (node.js) that traditionally have been written in languages like Java. Web application developers expect "C"-like performance out of their applications. Thus, there is a need to reevaluate the optimization strategies implemented in modern-day engines. Thesis statement: I propose that, by using run-time and ahead-of-time profiling and type specialization techniques, it is possible to improve the performance of JavaScript engines to cater to the needs of modern-day workloads. In this dissertation, we present an improved synergistic type specialization strategy for optimized JavaScript code execution, implemented on top of a research JavaScript engine called MuscalietJS. Our technique combines type feedback and type inference to reinforce and augment each other in a unique way. We then present a novel deoptimization strategy that enables type-specialized code generation on top of typed, stack-based virtual machines like the CLR. We also describe a server-side offline profiling technique to collect profile information for web applications, which helps client JavaScript engines (running in the browser) avoid deoptimizations and improve the performance of the applications. Finally, we describe a technique to improve the performance of server-side JavaScript code by making use of intelligent profile caching and two new type stability heuristics.
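    The interplay of type feedback, specialization and deoptimization that the dissertation builds on can be sketched as follows (a schematic Python model, not MuscalietJS code; the class and function names are invented): the engine records the argument types it observes at a call site, installs a specialized fast path once the site looks monomorphic, and guards that fast path so that an unexpected type deoptimizes back to the generic code.

        def generic_add(a, b):                  # unspecialized, always-correct path
            return a + b

        def int_specialized_add(a, b):          # stand-in for a compiled int-only fast path
            return a + b                        # (a real engine would emit untagged machine adds)

        class CallSite:
            def __init__(self):
                self.seen = set()               # type feedback gathered so far
                self.fast = None                # currently installed specialized code, if any

            def call(self, a, b):
                if self.fast is not None:
                    if type(a) is int and type(b) is int:   # guard on the specialization
                        return self.fast(a, b)
                    self.fast = None                        # guard failed: deoptimize
                self.seen.update((type(a), type(b)))        # record type feedback
                if self.seen == {int}:                      # monomorphic so far: specialize
                    self.fast = int_specialized_add
                return generic_add(a, b)

        site = CallSite()
        print(site.call(1, 2))        # generic path; records feedback and specializes
        print(site.call(3, 4))        # fast path under the int guard
        print(site.call("a", "b"))    # guard fails -> deoptimize, back to the generic path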

    Simple optimizing JIT compilation of higher-order dynamic programming languages

    Get PDF
    Efficiently implementing dynamic programming languages requires a significant development effort. Over the years, compilers have become more and more complex. Today, they typically include an interpretation phase, several compilation phases, several intermediate representations and code analyses. These techniques allow efficiently implementing these programming languages but are difficult to implement in contexts in which development resources are limited. We propose a new approach and new techniques to build optimizing just-in-time compilers for dynamic languages with relatively good performance and low development effort. We present a simple just-in-time compilation approach to implement a language with a single compilation phase, without the need to use code transformations to intermediate representations. We explain how basic block versioning, an existing compilation technique, can be extended without significant development effort to work interprocedurally with higher-order programming languages, allowing interprocedural optimizations on these languages. We also explain how basic block versioning allows removing operations used to implement dynamic languages that degrade performance, such as type checks, and how compilers can use tagging and NaN-boxing to optimize the generated code with low development effort. We present our experience of building a JIT compiler using these techniques for the Scheme programming language to show that they indeed allow building compilers with less development effort than other implementations, and that they generate efficient code that competes with current mature implementations of the Scheme language.
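    As an aside on the value representations mentioned above, NaN-boxing packs every run-time value into 64 bits: genuine floats keep their IEEE-754 encoding, while other values hide a tag and payload in the bits of quiet NaNs, which ordinary arithmetic never produces. The sketch below (Python, with invented tag constants; not the thesis' actual encoding) shows the boxing and unboxing arithmetic:

        import struct

        QNAN     = 0x7FF8_0000_0000_0000    # exponent all ones + quiet bit
        TAG_INT  = 0x0001_0000_0000_0000    # invented tag for this sketch
        INT_MASK = 0x0000_FFFF_FFFF_FFFF    # 48-bit payload

        def box_float(x: float) -> int:
            return struct.unpack("<Q", struct.pack("<d", x))[0]

        def box_int(n: int) -> int:
            return QNAN | TAG_INT | (n & INT_MASK)

        def unbox(bits: int):
            if (bits & QNAN) != QNAN:
                # Not a quiet-NaN pattern: it is an ordinary IEEE-754 double.
                return struct.unpack("<d", struct.pack("<Q", bits))[0]
            if bits & TAG_INT:
                return bits & INT_MASK       # a small integer hidden in the NaN payload
            raise ValueError("unknown tag")  # real engines canonicalize NaN and add more tags

        print(unbox(box_float(3.25)))        # 3.25
        print(unbox(box_int(42)))            # 42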