
    Speculative Staging for Interpreter Optimization

    Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique for incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions, incrementally peeling off layers of complexity. When the interpreter is compiled, these optimized derivatives are compiled along with the original interpreter instructions. Our technique is therefore portable by construction, since it leverages the existing compiler's backend. At run-time, we substitute the interpreter's original, expensive instructions with their optimized derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters: our optimization makes the CPython interpreter more than four times faster in the best case, closing the gap with, and sometimes even outperforming, PyPy's just-in-time compiler.

    Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
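
    To make the substitution idea concrete, here is a hedged Haskell sketch (a toy stack language with invented names, not the paper's CPython implementation): a generic Add instruction rewrites itself into a cheaper AddInt derivative once it observes integer operands, and the derivative falls back to the generic instruction when the speculation fails.

        -- Toy illustration of run-time instruction substitution ("quickening").
        module Main where

        data Value = VInt Int | VStr String deriving Show

        -- The generic instruction and its speculatively staged derivative.
        data Instr = Add      -- generic: dispatches on operand tags every time
                   | AddInt   -- derivative: assumes two integer operands
          deriving Show

        -- Execute one instruction; return the (possibly rewritten) instruction
        -- along with the new stack, so hot call sites get specialized.
        step :: Instr -> [Value] -> (Instr, [Value])
        step Add    (VInt a : VInt b : s) = (AddInt, VInt (a + b) : s)  -- quicken
        step Add    (VStr a : VStr b : s) = (Add, VStr (b ++ a) : s)
        step AddInt (VInt a : VInt b : s) = (AddInt, VInt (a + b) : s)  -- fast path
        step AddInt s                     = step Add s  -- speculation failed: fall back
        step i      _                     = error ("operand mismatch at " ++ show i)

        main :: IO ()
        main = do
          let (i', s') = step Add [VInt 2, VInt 3]
          print (i', s')                    -- (AddInt,[VInt 5]): site was quickened
          print (step i' [VInt 4, VInt 1])  -- subsequent runs stay on the derivative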

    Efficient normalization by evaluation

    Dependently typed theorem provers allow arbitrary terms in types. It is convenient to identify large classes of terms during type checking, hence many such systems provide some form of conversion rule. A standard algorithm for testing the convertibility of two types consists in normalizing them, then testing the normal forms for syntactic equality. Normalization by evaluation is a standard technique that enables the use of existing compilers and runtimes for functional languages to implement normalizers, without peeking under the hood, yielding a system that is fast yet cheap in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that with the aid of a standard optimization for higher-order programs (namely uncurrying) and by reusing the evaluator's pattern-matching facilities for datatypes, we can obtain a normalizer that evaluates non-functional values about as fast as the underlying evaluator, but as an added benefit can also fully normalize functional values; to put it another way, it partially evaluates functions efficiently.
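
    For readers unfamiliar with the technique, the following is a minimal Haskell sketch of untyped normalization by evaluation in its standard textbook form (not the paper's optimized implementation): object-level functions are represented by host-language closures, and a quote function reads semantic values back into normal-form syntax.

        module NbE where

        -- Lambda terms with de Bruijn indices.
        data Term = Var Int | Lam Term | App Term Term
          deriving Show

        -- Semantic values: functions become host-language closures.
        data Value = VLam (Value -> Value) | VNeutral Neutral
        data Neutral = NVar Int | NApp Neutral Value

        eval :: [Value] -> Term -> Value
        eval env (Var i)   = env !! i
        eval env (Lam b)   = VLam (\v -> eval (v : env) b)
        eval env (App f a) = apply (eval env f) (eval env a)

        apply :: Value -> Value -> Value
        apply (VLam f)     v = f v                  -- beta-reduce via the host
        apply (VNeutral n) v = VNeutral (NApp n v)  -- stuck application

        -- Read back: the Int counts binders passed (de Bruijn levels),
        -- so fresh variables never capture.
        quote :: Int -> Value -> Term
        quote d (VLam f)     = Lam (quote (d + 1) (f (VNeutral (NVar d))))
        quote d (VNeutral n) = quoteN d n

        quoteN :: Int -> Neutral -> Term
        quoteN d (NVar lvl) = Var (d - lvl - 1)     -- level to index
        quoteN d (NApp n v) = App (quoteN d n) (quote d v)

        normalize :: Term -> Term
        normalize = quote 0 . eval []
        -- normalize (App (Lam (Var 0)) (Lam (Lam (Var 1)))) == Lam (Lam (Var 1))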

    Type-Directed Weaving of Aspects for Polymorphically Typed Functional Languages

    Incorporating the aspect-oriented paradigm into a polymorphically typed functional language enables the declaration of type-scoped advice, in which the applicability of an aspect can be restricted by attaching possibly polymorphic type constraints to it. The amalgamation of aspect orientation and functional programming enables quick behavioral adaptation of functions, clear separation of concerns, and expressive type-directed programming. However, proper static weaving of aspects in polymorphic languages with a type-erasure semantics remains a challenge. In this paper, we describe a type-directed static weaving strategy, as well as its implementation, that supports static type inference and static weaving of programs written in an aspect-oriented polymorphically typed functional language, AspectFun. We show examples of type-scoped advice, identify the challenges faced by compile-time weaving in the presence of type-scoped advice, and demonstrate how various advanced aspect features can be handled by our techniques. Lastly, we prove the correctness of the static weaving strategy with respect to the operational semantics of AspectFun.
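
    AspectFun resolves advice statically at compile time; as a loose dynamic analogue only (using run-time type tests via Typeable, with all names invented), the Haskell sketch below shows advice whose applicability is scoped to arguments of type Int.

        {-# LANGUAGE ScopedTypeVariables #-}
        module Weave where

        import Data.Typeable (Typeable, cast)

        -- An aspect whose pointcut is type-scoped to Int arguments:
        -- double any Int before the base function sees it.
        intAdvice :: Int -> Int
        intAdvice = (* 2)

        -- "Weave" the advice around a polymorphic function f. The advice
        -- fires only when the argument's instantiated type is Int.
        weave :: forall a b. Typeable a => (a -> b) -> a -> b
        weave f x = case cast x :: Maybe Int of
          Just n  -> case cast (intAdvice n) :: Maybe a of
                       Just x' -> f x'
                       Nothing -> f x
          Nothing -> f x          -- type scope does not match: run f unadvised

        -- > weave show (21 :: Int)   -- "42"        (advice applied)
        -- > weave show "hello"       -- "\"hello\"" (advice skipped)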

    Type-safe pattern combinators


    Normalization by Evaluation with Typed Abstract Syntax

    We present a simple way to implement typed abstract syntax for the lambda calculus in Haskell, using phantom types, and we specify normalization by evaluation (i.e., type-directed partial evaluation) to yield this typed abstract syntax. Proving that normalization by evaluation preserves types and yields normal forms then reduces to type-checking the specification.
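
    A minimal sketch along the abstract's lines, assuming the standard type-directed partial evaluation construction (constructor and function names here are illustrative, not the paper's exact code): an untyped Term type is wrapped in a phantom-typed Exp, and reify/reflect are directed by a runtime descriptor of the object type.

        {-# LANGUAGE GADTs #-}
        module TypedNbE where

        -- First-order terms with de Bruijn levels: the normal forms.
        data Term = TVar Int | TLam Int Term | TApp Term Term
          deriving Show

        -- Typed abstract syntax: the phantom t records the object type;
        -- the Int argument is the binder depth, used for fresh names.
        newtype Exp t = Exp (Int -> Term)

        lam :: (Exp a -> Exp b) -> Exp (a -> b)
        lam f = Exp (\d -> let Exp body = f (Exp (\_ -> TVar d))
                           in TLam d (body (d + 1)))

        app :: Exp (a -> b) -> Exp a -> Exp b
        app (Exp f) (Exp a) = Exp (\d -> TApp (f d) (a d))

        newtype Base = Base (Exp Base)  -- base-type values are residual code

        -- Runtime descriptor of object types, directing reify/reflect.
        data Rep t where
          B     :: Rep Base
          (:->) :: Rep a -> Rep b -> Rep (a -> b)

        reify :: Rep t -> t -> Exp t            -- host value to typed syntax
        reify B         (Base e) = e
        reify (a :-> b) f        = lam (\x -> reify b (f (reflect a x)))

        reflect :: Rep t -> Exp t -> t          -- typed syntax to host value
        reflect B         e = Base e
        reflect (a :-> b) f = \v -> reflect b (app f (reify a v))

        norm :: Rep t -> t -> Term
        norm r v = let Exp k = reify r v in k 0
        -- > norm ((B :-> B) :-> (B :-> B)) id
        -- TLam 0 (TLam 1 (TApp (TVar 0) (TVar 1)))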

    Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms

    We introduce Meta-F*, a tactics and metaprogramming framework for the F* program verifier. The main novelty of Meta-F* is allowing the use of tactics and metaprogramming to discharge assertions not solvable by SMT, or to simplify them into well-behaved SMT fragments. In addition, Meta-F* can be used to generate verified code automatically. Meta-F* is implemented as an F* effect, which, given the powerful effect system of F*, greatly increases code reuse and even enables the lightweight verification of metaprograms. Metaprograms can be either interpreted, or compiled to efficient native code that can be dynamically loaded into the F* type-checker and can interoperate with interpreted code. Evaluation on realistic case studies shows that Meta-F* provides substantial gains in proof development, efficiency, and robustness.

    Comment: Full version of ESOP'19 paper
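
    Meta-F* metaprograms are F* terms in a dedicated effect, which cannot be reproduced here; as a loose Haskell analogue only (every name below is invented), here is a toy tactic combinator setup in which a metaprogram simplifies goals before handing the residue to a pretend SMT backend.

        module Tactic where

        data Prop = Atom String | And Prop Prop
          deriving Show

        -- A tactic transforms the list of open goals, or fails with a message.
        newtype Tactic = Tactic { run :: [Prop] -> Either String [Prop] }

        -- Split every conjunction into its conjuncts: the kind of
        -- simplification done before calling the solver.
        splitAnds :: Tactic
        splitAnds = Tactic (Right . concatMap split)
          where split (And p q) = split p ++ split q
                split g         = [g]

        -- Pretend SMT backend: discharges the atoms it recognises.
        smt :: [String] -> Tactic
        smt known = Tactic check
          where check gs = case filter unknown gs of
                  []    -> Right []
                  g : _ -> Left ("SMT could not prove: " ++ show g)
                unknown (Atom s) = s `notElem` known
                unknown _        = True

        andThen :: Tactic -> Tactic -> Tactic   -- sequence two tactics
        andThen t1 t2 = Tactic (\gs -> run t1 gs >>= run t2)

        -- > run (splitAnds `andThen` smt ["p","q"]) [And (Atom "p") (Atom "q")]
        -- Right []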

    A Transformation-Based Foundation for Semantics-Directed Code Generation

    Interpreters and compilers are two different ways of implementing programming languages. An interpreter directly executes its program input. It is a concise definition of the semantics of a programming language and is easily implemented. A compiler translates its program input into another language. It is more difficult to construct, but the code that it generates runs faster than interpreted code. In this dissertation, we propose a transformation-based foundation for deriving compilers from semantic specifications, in the form of four rules. These rules give a priori advice for staging and allow an explicit compiler derivation that would be less succinct with partial evaluation. When applied, the rules turn an interpreter that directly executes its program input into a compiler that emits the code the interpreter would have executed. We formalize the language syntax and semantics to be used for the interpreter and the compiler, and also specify a notion of equality. It is then possible to state the transformation rules precisely and to prove both local and global correctness theorems. Although the transformation rules were developed to apply to an interpreter written in a denotational style, we also consider how to modify non-denotational interpreters so that the rules apply. Finally, we illustrate these ideas with a larger example: a Prolog implementation.
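
    As a toy illustration of the staging idea (an invented example, not the dissertation's four rules), the Haskell sketch below shows an interpreter together with the compiler one obtains by keeping the recursion over syntax at compile time and emitting the work the interpreter would have performed.

        module Stage where

        data Expr = Lit Int | Var String | Add Expr Expr

        -- Interpreter: a denotational-style definition, directly executing.
        interp :: Expr -> [(String, Int)] -> Int
        interp (Lit n)   _   = n
        interp (Var x)   env = maybe (error x) id (lookup x env)
        interp (Add a b) env = interp a env + interp b env

        -- Target code: a small stack machine.
        data Code = Push Int | Load String | AddOp
          deriving Show

        -- Compiler: the same recursion, but it emits residual code
        -- instead of computing values.
        compile :: Expr -> [Code]
        compile (Lit n)   = [Push n]
        compile (Var x)   = [Load x]
        compile (Add a b) = compile a ++ compile b ++ [AddOp]

        -- Executing the emitted code agrees with the interpreter: the
        -- kind of global correctness theorem the rules are meant to give.
        exec :: [(String, Int)] -> [Code] -> [Int] -> Int
        exec _   []             (v : _)      = v
        exec env (Push n : cs)  st           = exec env cs (n : st)
        exec env (Load x : cs)  st           = exec env cs (maybe (error x) id (lookup x env) : st)
        exec env (AddOp  : cs)  (b : a : st) = exec env cs (a + b : st)
        exec _   _              _            = error "stack underflow"

        -- > let e = Add (Var "x") (Lit 1)
        -- > interp e [("x", 41)] == exec [("x", 41)] (compile e) []
        -- True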