Generalizers: New Metaobjects for Generalized Dispatch
This paper introduces a new metaobject, the generalizer, which complements the existing specializer metaobject. With the help of examples, we show that this metaobject allows for the efficient implementation of complex non-class-based dispatch within the framework of existing metaobject protocols. We present our modifications to the generic function invocation protocol from the Art of the Metaobject Protocol; in combination with previous work, this produces a fully functional extension of the existing mechanism for method selection and combination, including support for method combination completely independent of method selection. We discuss our implementation within the SBCL implementation of Common Lisp, and in that context compare the performance of the new protocol with the standard one, demonstrating that the new protocol can be tolerably efficient.
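Although the protocol itself extends CLOS, the core idea carries over to any object system: method selection keys on a value computed by a generalizer rather than on the argument's class. Below is a minimal Python sketch of that idea; the class and function names are ours, and the dictionary-based method selection is a deliberate simplification of the paper's protocol.

```python
# Sketch of generalizer-style dispatch (hypothetical names; the paper's
# protocol extends the CLOS MOP inside SBCL). A generalizer maps each
# argument to a dispatch key, so method selection can depend on properties
# other than the argument's class.

class GenericFunction:
    def __init__(self, generalizer):
        self.generalizer = generalizer          # argument -> dispatch key
        self.methods = {}                       # key tuple -> method

    def add_method(self, keys, fn):
        self.methods[keys] = fn

    def __call__(self, *args):
        keys = tuple(self.generalizer(a) for a in args)
        try:
            return self.methods[keys](*args)    # select on keys, not classes
        except KeyError:
            raise TypeError(f"no applicable method for keys {keys}")

# Example: dispatch on the sign of a number rather than on its type.
def sign_of(x):
    return "neg" if x < 0 else ("zero" if x == 0 else "pos")

describe = GenericFunction(sign_of)
describe.add_method(("neg",), lambda x: f"{x} is negative")
describe.add_method(("zero",), lambda x: "zero")
describe.add_method(("pos",), lambda x: f"{x} is positive")

print(describe(-3))                             # -3 is negative
print(describe(7))                              # 7 is positive
```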
An Automatic Program Generator for Multi-Level Specialization
Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction in generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well-suited for applications where generation time and program size are critical.
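To make the staging idea concrete, here is a minimal two-stage sketch in Python of a generating extension for the classic power function; it is illustrative only, since the paper's generator works on a higher-order functional language and produces multi-level generating extensions.

```python
# Sketch of a generating extension for power (static exponent n, dynamic
# base x). Stage one runs now and unfolds the recursion; stage two is the
# residual program, which contains no trace of the generator.

def power_gen(n):
    """Generating extension: consumes the static input, emits residual code."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"          # unfold one recursion step per iteration
    return eval(f"lambda x: {body}")    # compile the residual program

power5 = power_gen(5)                   # stage 1: specialize on n = 5
print(power5(2))                        # stage 2: run the residual code -> 32
```

In a genuinely multi-level setting, power_gen would itself be emitted by an earlier stage, so that each stage produces the generator for the next.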
Collapsing towers of interpreters
Given a tower of interpreters, i.e., a sequence of multiple interpreters interpreting one another as input programs, we aim to collapse this tower into a compiler that removes all interpretive overhead and runs in a single pass. In the real world, a use case might be Python code executed by an x86 runtime, on a CPU emulated in a JavaScript VM, running on an ARM CPU. Collapsing such a tower can not only exponentially improve runtime performance, but also enable the use of base-language tools for interpreted programs, e.g., for analysis and verification. In this paper, we lay the foundations in an idealized but realistic setting.
We present a multi-level lambda calculus that features staging constructs and stage polymorphism: based on runtime parameters, an evaluator either executes source code (thereby acting as an interpreter) or generates code (thereby acting as a compiler). We identify stage polymorphism, a programming model from the domain of high-performance program generators, as the key mechanism to make such interpreters compose in a collapsible way.
We present Pink, a meta-circular Lisp-like evaluator on top of this calculus, and demonstrate that we can collapse arbitrarily many levels of self-interpretation, including levels with semantic modifications. We discuss several examples: compiling regular expressions through an interpreter to base code, building program transformers from modified interpreters, and others. We develop these ideas further to include reflection and reification, culminating in Purple, a reflective language inspired by Brown, Blond, and Black, which realizes a conceptually infinite tower, where every aspect of the semantics can change dynamically. Addressing an open challenge, we show how user programs can be compiled and recompiled under user-modified semantics. Parts of this research were supported by ERC grant 321217, NSF awards 1553471 and 1564207, and DOE award DE-SC0018050.
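As a rough illustration of stage polymorphism, rendered in Python rather than the paper's typed multi-level lambda calculus and with names of our own invention, the evaluator below either computes a result or emits source text, depending on whether its environment binds a variable to a value or to a code fragment.

```python
# Sketch of a stage-polymorphic evaluator (our names; Pink is built on a
# multi-level lambda calculus, not Python). The same code acts as an
# interpreter when operands are values, and as a compiler when any operand
# is a Code fragment.

class Code:
    def __init__(self, text):
        self.text = text

def lift(v):
    return v.text if isinstance(v, Code) else repr(v)

def evaluate(expr, env):
    op, *args = expr
    if op == "lit":
        return args[0]
    if op == "var":
        return env[args[0]]
    if op == "add":
        l, r = (evaluate(a, env) for a in args)
        if isinstance(l, Code) or isinstance(r, Code):
            return Code(f"({lift(l)} + {lift(r)})")    # compiler: emit code
        return l + r                                    # interpreter: execute
    raise ValueError(f"unknown op {op!r}")

prog = ("add", ("var", "x"), ("lit", 1))
print(evaluate(prog, {"x": 41}))                # interpreter mode: 42
print(evaluate(prog, {"x": Code("x")}).text)    # compiler mode: (x + 1)
```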
Finite Countermodel Based Verification for Program Transformation (A Case Study)
Both automatic program verification and program transformation are based on program analysis. In the past decade a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation) for the verification of unreachability properties of computing systems were introduced and demonstrated. On the other hand, semantics-based unfold/fold program transformation methods pose diverse kinds of reachability tasks for themselves and try to solve them, aiming at improving the semantics tree of the program being transformed. That means some general-purpose verification methods may be used to strengthen program transformation techniques. This paper considers how the finite-countermodel method for safety verification might be used in Turchin's supercompilation. We extract a number of supercompilation sub-algorithms that try to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of them.

Comment: In Proceedings VPT 2015, arXiv:1512.0221
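To give a flavour of the finite-countermodel method itself, consider the following toy (our own construction, not the paper's algorithm): a configuration of a string rewriting system is proved unreachable by searching for a finite interpretation, here a weighting of letters modulo n, that is preserved by every rewrite rule yet separates the initial configuration from the bad one.

```python
# Toy finite-countermodel search (our construction): interpret each letter
# as a weight in Z_n; a word's value is the sum of its letters mod n. If
# every rule l -> r satisfies value(l) == value(r), the value is invariant
# under rewriting in any context (sums are additive), so any word whose
# value differs from INIT's is unreachable.
from itertools import product

RULES = [("ab", "ba"), ("ba", "ab")]    # adjacent 'a' and 'b' may swap
INIT = "aab"                            # initial configuration
BAD = "abb"                             # configuration claimed unreachable

def value(word, weights, n):
    return sum(weights[c] for c in word) % n

def find_countermodel(max_n=7):
    letters = sorted(set("".join(l + r for l, r in RULES) + INIT + BAD))
    for n in range(2, max_n + 1):
        for ws in product(range(n), repeat=len(letters)):
            weights = dict(zip(letters, ws))
            invariant = all(value(l, weights, n) == value(r, weights, n)
                            for l, r in RULES)
            if invariant and value(INIT, weights, n) != value(BAD, weights, n):
                return n, weights       # finite countermodel: BAD is unreachable
    return None                         # inconclusive; try a larger max_n

print(find_countermodel())              # (2, {'a': 0, 'b': 1})
```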
Specializing Interpreters using Offline Partial Deduction
We present the latest version of the Logen partial evaluation system for logic programs. In particular, we present new binding-types, and show how they can be used to effectively specialise a wide variety of interpreters. We show how to achieve Jones-optimality in a systematic way for several interpreters. Finally, we present and specialise a non-trivial interpreter for a small functional programming language. Experimental results are also presented, highlighting that the Logen system can be a good basis for generating compilers for high-level languages.
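The flavour of interpreter specialization can be conveyed in a few lines of Python (Logen itself is an offline partial deducer for logic programs, and the function names below are ours). Unfolding the interpreter's traversal of a static program leaves a residual program with no interpretive dispatch, which is roughly what Jones-optimality demands: the specialized interpreter should be at least as good as the original program.

```python
# Sketch of the first Futamura projection (our toy, not Logen). The
# interpreter pays dispatch overhead on every node; the specializer unfolds
# that traversal for a static program, so only the dynamic input remains.

def interpret(expr, env):
    """Direct interpreter for a tiny expression language."""
    op = expr[0]
    if op == "var": return env[expr[1]]
    if op == "lit": return expr[1]
    if op == "add": return interpret(expr[1], env) + interpret(expr[2], env)
    if op == "mul": return interpret(expr[1], env) * interpret(expr[2], env)

def specialize(expr):
    """Residualize the interpreter's traversal of a static expr."""
    op = expr[0]
    if op == "var": return expr[1]
    if op == "lit": return repr(expr[1])
    if op == "add": return f"({specialize(expr[1])} + {specialize(expr[2])})"
    if op == "mul": return f"({specialize(expr[1])} * {specialize(expr[2])})"

prog = ("add", ("mul", ("var", "x"), ("lit", 3)), ("lit", 1))
print(interpret(prog, {"x": 4}))                  # 13, via the interpreter
residual = eval(f"lambda x: {specialize(prog)}")  # residual: ((x * 3) + 1)
print(residual(4))                                # 13, with no dispatch left
```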
Advancing Understanding of Early Specialization in Youth Sport
Objective: The overarching purpose of this dissertation was to better understand early specialization, through two main objectives. The first objective was to determine research gaps in the existing literature, and the second was to develop a valid tool for measuring specialization based on the identified gaps.

Methods: In Chapter Two, a systematic review of the literature was conducted. Both non-data-driven and data-driven studies were included to ensure a comprehensive understanding of the literature. Chapter Three describes a two-part study. In part one, 362 athletes were coded as specializers or non-specializers depending on three different indicators used in previous research. In part two, 237 athletes were then coded to determine whether they were elite, pre-elite, or non-elite in adulthood. Lastly, in Chapter Five, a Delphi approach involving 16 experts in the field was used to test elements of validity of the Sport Exposure Scale.

Results: Findings from Chapter Two indicated inconsistent definitions and measures used in the literature and a clear discrepancy between key components of early specialization and the approaches used to classify early specializers. Chapter Three results showed that the proportion of athletes classified as specializers varied depending on the method used and that there was no clear advantage or disadvantage to being a specializer based on the skill level achieved. Finally, in Chapter Five, the content and face validity of the Sport Exposure Scale were established when the Delphi panellists reached consensus for each item.

Conclusion: This dissertation highlighted gaps in the literature around early specialization and showed the implications of measurement imprecision. It attempted to provide a solution to these issues by creating the Sport Exposure Scale, which was designed to help advance not only our understanding of early specialization but sport participation pathways in general. This dissertation provides areas for future research and has significant implications for research, stakeholders, and society more broadly.
Speculative Staging for Interpreter Optimization
Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively converting them at run-time.

This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions, incrementally peeling off layers of complexity. When the interpreter is compiled, these optimized derivatives are compiled along with the original interpreter instructions. Our technique is therefore portable by construction, since it leverages the existing compiler's backend. At run-time we use instruction substitution, from the interpreter's original and expensive instructions to optimized instruction derivatives, to speed up execution.

Our technique unites high performance with the simplicity and portability of interpreters: we report that our optimization makes the CPython interpreter up to more than four times faster, and that our interpreter closes the gap with, and sometimes even outperforms, PyPy's just-in-time compiler.

Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.