
    Specialization of applications using shared libraries

    Ph.D. (Doctor of Philosophy)

    Generalizers: New Metaobjects for Generalized Dispatch

    This paper introduces a new metaobject, the generalizer, which complements the existing specializer metaobject. With the help of examples, we show that this metaobject allows for the efficient implementation of complex non-class-based dispatch within the framework of existing metaobject protocols. We present our modifications to the generic function invocation protocol from the Art of the Metaobject Protocol; in combination with previous work, this produces a fully functional extension of the existing mechanism for method selection and combination, including support for method combination completely independent from method selection. We discuss our implementation within the SBCL implementation of Common Lisp, and in that context compare the performance of the new protocol with the standard one, demonstrating that the new protocol can be tolerably efficient.
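The core idea, dispatch keyed on a value computed by a generalizer rather than on the argument's class, can be sketched in Python. This is an illustrative analogue only; the paper's protocol is defined for CLOS in SBCL, and all names here are hypothetical.

```python
# Toy analogue of generalizer-based dispatch: method selection is keyed
# on a dispatch key computed by a user-supplied generalizer function,
# not on the argument's class.

class GenericFunction:
    def __init__(self, generalizer):
        self.generalizer = generalizer  # maps an argument to a dispatch key
        self.methods = {}               # dispatch key -> implementation

    def add_method(self, key, fn):
        self.methods[key] = fn

    def __call__(self, arg):
        key = self.generalizer(arg)     # generalize first, then select
        fn = self.methods.get(key)
        if fn is None:
            raise TypeError(f"no applicable method for key {key!r}")
        return fn(arg)

# A non-class-based generalizer: dispatch on the sign of a number.
sign_gf = GenericFunction(lambda x: "neg" if x < 0 else "nonneg")
sign_gf.add_method("neg", lambda x: -x)
sign_gf.add_method("nonneg", lambda x: x)

print(sign_gf(-7))  # 7
print(sign_gf(3))   # 3
```

Both arguments are numbers of the same class, yet they select different methods, which is the kind of non-class-based dispatch the generalizer metaobject enables.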

    An Automatic Program Generator for Multi-Level Specialization

    Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well suited for applications where generation time and program size are critical.
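A generating extension, the object the cogen approach produces directly instead of obtaining it by self-application, can be sketched with a classic toy example. This is a hypothetical hand-written sketch in Python; the paper's generator targets a higher-order functional language.

```python
# A hand-written generating extension for power(x, n): stage one consumes
# the static exponent and emits compact specialized code; stage two runs
# that residual code on the dynamic base.

def power_gen(n):
    """Stage 1: build and compile a power function specialized to n."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"          # unfold the loop at generation time
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)                      # compile the residual program
    return env[f"power_{n}"]            # stage 2: the specialized function

power_3 = power_gen(3)
print(power_3(5))  # 125 -- no loop or exponent test remains at run time
```

Multi-level specialization generalizes this two-stage picture: the generating extension itself can be staged again, so each level produces a compact generator for the next.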

    Finite Countermodel Based Verification for Program Transformation (A Case Study)

    Both automatic program verification and program transformation are based on program analysis. In the past decade, a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation) for the verification of unreachability properties of computing systems were introduced and demonstrated. On the other hand, semantics-based unfold/fold program transformation methods pose and try to solve various kinds of reachability tasks themselves, aiming at improving the semantics tree of the program being transformed. That means some general-purpose verification methods may be used to strengthen program transformation techniques. This paper considers the question of how the finite-countermodel method for safety verification might be used in Turchin's supercompilation method. We extract a number of supercompilation sub-algorithms that try to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of them. (Comment: In Proceedings VPT 2015, arXiv:1512.0221)
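The shape of the idea, a transformer consulting an external reachability oracle to prune branches that can never be taken, can be sketched as follows. Here a plain BFS over a small finite transition system stands in for the external countermodel finder; this is only an illustrative sketch, not the paper's method, which uses a finite-model finder over first-order encodings.

```python
# Stand-in reachability oracle: during transformation, a branch whose
# configuration is unreachable from the initial one can be pruned.

from collections import deque

def reachable(start, target, step):
    """BFS reachability: can `target` be reached from `start` via `step`?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy system: states 0..9, each step adds 2 modulo 10, so parity is an
# invariant -- an odd target state admits a "countermodel" argument.
step = lambda s: [(s + 2) % 10]

print(reachable(0, 6, step))  # True: even states are reachable from 0
print(reachable(0, 3, step))  # False: this branch can be pruned
```

In the paper's setting the oracle is an external countermodel finder, which can certify unreachability even for infinite-state systems by exhibiting a finite countermodel.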

    Specializing Interpreters using Offline Partial Deduction

    We present the latest version of the Logen partial evaluation system for logic programs. In particular, we present new binding-types and show how they can be used to effectively specialise a wide variety of interpreters. We show how to achieve Jones-optimality in a systematic way for several interpreters. Finally, we present and specialise a non-trivial interpreter for a small functional programming language. Experimental results are also presented, highlighting that the Logen system can be a good basis for generating compilers for high-level languages.
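What "specialising an interpreter" means can be shown with a tiny sketch. This is a hypothetical toy in Python, not Logen (which works offline on logic programs): when the interpreted program is static and only its input is dynamic, unfolding the interpreter over the program leaves a residual program with no interpretive overhead, which is the essence of Jones-optimality.

```python
# Interpreter for a mini expression language, and a specializer that
# unfolds it over a fixed (static) program, leaving residual code in
# the dynamic environment only.

def interp(expr, env):
    op = expr[0]
    if op == "lit":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    if op == "add":
        return interp(expr[1], env) + interp(expr[2], env)
    if op == "mul":
        return interp(expr[1], env) * interp(expr[2], env)
    raise ValueError(f"unknown op {op!r}")

def specialize(expr):
    """The program is static, env is dynamic: emit residual source."""
    op = expr[0]
    if op == "lit":
        return str(expr[1])
    if op == "var":
        return f"env[{expr[1]!r}]"
    if op == "add":
        return f"({specialize(expr[1])} + {specialize(expr[2])})"
    if op == "mul":
        return f"({specialize(expr[1])} * {specialize(expr[2])})"
    raise ValueError(f"unknown op {op!r}")

prog = ("add", ("mul", ("var", "x"), ("lit", 3)), ("lit", 1))  # x*3 + 1
residual_src = specialize(prog)          # no tag dispatch left in here
residual = eval(f"lambda env: {residual_src}")

print(interp(prog, {"x": 4}), residual({"x": 4}))  # 13 13
```

Applied to an interpreter for a whole language, this is the first Futamura projection: specializing the interpreter to a source program yields a compiled program, which is why a partial evaluator such as Logen can serve as a basis for compiler generation.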

    Advancing Understanding of Early Specialization in Youth Sport

    Objective: The overarching purpose of this dissertation was to better understand early specialization, through two main objectives. The first objective was to determine research gaps in the existing literature, and the second was to develop a valid tool for measuring specialization based on the identified gaps. Methods: In Chapter Two, a systematic review of the literature was conducted. Both non-data-driven and data-driven studies were included to ensure a comprehensive understanding of the literature. Chapter Three describes a two-part study. In part one, 362 athletes were coded as specializers or non-specializers depending on three different indicators used in previous research. In part two, 237 athletes were coded to determine whether they were elite, pre-elite, or non-elite in adulthood. Lastly, in Chapter Five, a Delphi approach involving 16 experts in the field was used to test elements of validity of the Sport Exposure Scale. Results: Findings from Chapter Two indicated inconsistent definitions and measures in the literature and a clear discrepancy between key components of early specialization and the approaches used to classify early specializers. Chapter Three results showed that the proportion of athletes classified as specializers varied depending on the method used and that there was no clear advantage or disadvantage to being a specializer based on the skill level achieved. Finally, in Chapter Five, the content and face validity of the Sport Exposure Scale were established when the Delphi panellists reached consensus for each item. Conclusion: This dissertation highlighted gaps in the literature around early specialization and showed the implications of measurement imprecision. It attempted to provide a solution to these issues by creating the Sport Exposure Scale, which was designed to help advance not only our understanding of early specialization but sport participation pathways in general. This dissertation provides areas for future research and has significant implications for research, stakeholders, and society more broadly.

    Speculative Staging for Interpreter Optimization

    Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively perfecting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions: incrementally peeling off layers of complexity. When compiling the interpreter, these optimized derivatives are compiled along with the original interpreter instructions. Therefore, our technique is portable by construction, since it leverages the existing compiler's backend. At run-time we use instruction substitution from the interpreter's original, expensive instructions to the optimized instruction derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters; we report that our optimization makes the CPython interpreter up to more than four times faster, where our interpreter closes the gap to, and sometimes even outperforms, PyPy's just-in-time compiler. (Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.)
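The run-time instruction substitution the abstract describes can be sketched with a toy bytecode loop. This is an illustrative sketch only, not the paper's implementation: a generic ADD observes its operand types once, rewrites itself in the instruction stream to a cheaper guarded derivative, and falls back if the guard later fails.

```python
# Toy instruction substitution (quickening): instructions rewrite
# themselves in place to optimized derivatives, with a type guard that
# deoptimizes back to the generic instruction on mismatch.

def run(code, stack):
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == "ADD":                  # generic: full dynamic dispatch
            b, a = stack.pop(), stack.pop()
            if type(a) is int and type(b) is int:
                code[pc] = "INT_ADD"     # substitute optimized derivative
            stack.append(a + b)
        elif op == "INT_ADD":            # specialized, guarded fast path
            b, a = stack.pop(), stack.pop()
            if type(a) is int and type(b) is int:
                stack.append(a + b)
            else:                        # guard failed: deoptimize
                code[pc] = "ADD"
                stack.append(a + b)
        pc += 1
    return stack

code = ["ADD"]
print(run(code, [1, 2]))  # [3]
print(code)               # ['INT_ADD'] -- the instruction rewrote itself
```

In the paper's scheme the optimized derivatives are not hand-written like this but are staged from the original instructions and compiled ahead of time by the host compiler, which is what makes the approach portable by construction.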