
    Speculative Staging for Interpreter Optimization

    Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions---incrementally peeling off layers of complexity. When compiling the interpreter, these optimized derivatives are compiled along with the original interpreter instructions, so the technique is portable by construction: it leverages the existing compiler's backend. At run-time we use instruction substitution, replacing the interpreter's original, expensive instructions with optimized instruction derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters---we report that our optimization speeds up the CPython interpreter by more than a factor of four in the best case, closing the gap with, and sometimes even outperforming, PyPy's just-in-time compiler. Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
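
    The staging machinery is beyond the scope of an abstract, but the run-time half of the idea (rewriting an expensive generic instruction into a cheaper specialized derivative once its operand types have been observed, with a guard to fall back on the generic case) can be sketched in a few lines. The following minimal Python illustration uses invented instruction names and a toy stack machine; none of it is taken from the paper.

        # A tiny stack-machine sketch of run-time instruction substitution:
        # the generic instruction rewrites the code array to an optimized
        # derivative once it observes integer operands; the derivative keeps
        # a cheap guard and undoes the speculation if it turns out wrong.

        def add_generic(stack, code, pc):
            b, a = stack.pop(), stack.pop()
            if type(a) is int and type(b) is int:
                code[pc] = add_int        # speculate: substitute the derivative
            stack.append(a + b)           # full generic semantics this time

        def add_int(stack, code, pc):
            b, a = stack.pop(), stack.pop()
            if type(a) is not int or type(b) is not int:
                code[pc] = add_generic    # guard failed: fall back
            stack.append(a + b)

        def run(code, stack):
            for pc, instr in enumerate(code):
                instr(stack, code, pc)

        stack, code = [1, 2], [add_generic]
        run(code, stack)                  # first run rewrites code[0]
        assert stack == [3] and code[0] is add_int

    Because the derivatives exist before the interpreter is compiled, the fast paths are ordinary compiled code, which is what makes the scheme inherit the portability of the interpreter's host compiler.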

    Towards an Adaptive Skeleton Framework for Performance Portability

    The proliferation of widely available, but very different, parallel architectures makes the ability to deliver good parallel performance on a range of architectures, or performance portability, highly desirable. Irregularly parallel problems, where the number and size of tasks is unpredictable, are particularly challenging and require dynamic coordination. The paper outlines a novel approach to delivering portable parallel performance for irregularly parallel programs. The approach combines declarative parallelism with JIT technology, dynamic scheduling, and dynamic transformation. We present the design of an adaptive skeleton library, with a task graph implementation, JIT trace costing, and adaptive transformations. We outline the architecture of the prototype adaptive skeleton execution framework in Pycket, describing tasks, serialisation, and the current scheduler. We report a preliminary evaluation of the prototype framework using four micro-benchmarks and a small case study on two NUMA servers (24 and 96 cores) and a small cluster (17 hosts, 272 cores). Key results include Pycket delivering good sequential performance, e.g. almost as fast as C for some benchmarks; good absolute speedups on all architectures (up to 120 on 128 cores for sumEuler); and that the adaptive transformations do improve performance.
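
    The coordination ingredients named above, a skeleton (here a task farm) and dynamic scheduling for irregular task sizes, can be illustrated with a minimal parallel map whose workers pull tasks from a shared queue. This is an illustrative sketch, not Pycket's interface, and under CPython's GIL it demonstrates the scheduling structure rather than real speedup for CPU-bound work.

        # Minimal task-farm skeleton with dynamic scheduling: workers grab
        # the next task from a shared queue, so unpredictably sized tasks
        # still balance across workers.
        import queue, threading
        from math import gcd

        def par_map(f, xs, workers=4):
            tasks = queue.Queue()
            for item in enumerate(xs):
                tasks.put(item)
            results = [None] * len(xs)

            def worker():
                while True:
                    try:
                        i, x = tasks.get_nowait()   # dynamic: pull next task
                    except queue.Empty:
                        return
                    results[i] = f(x)

            ts = [threading.Thread(target=worker) for _ in range(workers)]
            for t in ts:
                t.start()
            for t in ts:
                t.join()
            return results

        # sumEuler-like workload: per-task cost varies with n (irregular)
        def euler(n):
            return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

        print(sum(par_map(euler, range(1, 500))))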

    Towards Systemic Evaluation

    Problems of conventional evaluation models can be understood as an impoverished 'conversation' between realities (of non-linearity, indeterminate attributes, and ever-changing context) and models of evaluating such realities. Meanwhile, ideas of systems thinking and complexity science (grouped here under the acronym STCS) struggle to gain currency in the big 'E' world of institutionalized evaluation. Four evaluation practitioners familiar with evaluation tools associated with STCS offer perspectives on issues regarding mainstream uptake of STCS in the big 'E' world. The perspectives collectively suggest three features of practicing systemic evaluation: (i) developing value in conversing between bounded values (evaluations) and unbounded reality (evaluand), with humility; (ii) developing response-ability with evaluand stakeholders based on reflexivity, with empathy; and (iii) developing adaptive rather than merely contingent use(fulness) of STCS 'tools' as part of evaluation praxis, with inevitable fallibility and an orientation towards bricolage (adaptive use). These features hint towards systemic evaluation as core to a reconfigured notion of developmental evaluation.

    SL: a "quick and dirty" but working intermediate language for SVP systems

    The CSA group at the University of Amsterdam has developed SVP, a framework to manage and program many-core and hardware-multithreaded processors. In this article, we introduce the intermediate language SL, a common vehicle to program SVP platforms. SL is designed as an extension to the standard C language (ISO C99/C11). It includes primitive constructs to bulk-create threads, bulk-synchronize on termination of threads, and communicate using word-sized dataflow channels between threads. It is intended for use as a target language for higher-level parallelizing compilers. SL is a research vehicle; as of this writing, it is the only interface language to program a main SVP platform, the new Microgrid chip architecture. This article provides an overview of the language, to complement a detailed specification available separately. Comment: 22 pages, 3 figures, 18 listings, 1 table
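
    The three primitive constructs (bulk thread creation, bulk synchronization on termination, and word-sized dataflow channels) have rough analogues in any threading library. The sketch below approximates them in Python purely for illustration; SL itself extends C, and none of these names are SL's.

        import threading

        class Channel:                      # one-shot dataflow channel
            def __init__(self):
                self._ready = threading.Event()
                self._value = None
            def write(self, v):
                self._value = v
                self._ready.set()
            def read(self):                 # blocks until a value arrives
                self._ready.wait()
                return self._value

        def create_family(body, start, limit, *args):
            """Bulk-create one thread per index in [start, limit)."""
            ts = [threading.Thread(target=body, args=(i, *args))
                  for i in range(start, limit)]
            for t in ts:
                t.start()
            return ts

        def sync(ts):
            """Bulk-synchronize on termination of the whole family."""
            for t in ts:
                t.join()

        chans = [Channel() for _ in range(8)]
        fam = create_family(lambda i, cs: cs[i].write(i * i), 0, 8, chans)
        sync(fam)
        print([c.read() for c in chans])    # [0, 1, 4, 9, 16, 25, 36, 49]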

    AMaχoS: Abstract Machine for Xcerpt

    Web query languages promise convenient and efficient access to Web data such as XML, RDF, or Topic Maps. Xcerpt is one such Web query language with strong emphasis on novel high-level constructs for effective and convenient query authoring, particularly tailored to versatile access to data in different Web formats such as XML or RDF. However, so far it lacks an efficient implementation to complement the convenient language features. AMaχoS is an abstract machine implementation for Xcerpt that aims at efficiency and ease of deployment. It strictly separates compilation and execution of queries: queries are compiled once to abstract machine code that consists of (1) a code segment with instructions for evaluating each rule and (2) a hint segment that provides the abstract machine with optimization hints derived during query compilation. This article summarizes the motivation and principles behind AMaχoS and discusses how its current architecture realizes these principles.
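
    The compile-once/execute-many split, a code segment of per-rule instructions plus a hint segment the machine can consult, is easy to picture with a toy compiler and evaluator. The opcodes and hints below are invented for the illustration and bear no relation to AMaχoS's actual instruction set.

        def compile_query(rules):
            """Compile (pattern, construct) rules into code + hints."""
            code_segment, hint_segment = [], {}
            for i, (pattern, construct) in enumerate(rules):
                code_segment.append(("MATCH", pattern))
                code_segment.append(("CONSTRUCT", construct))
                # toy hint: a selectivity estimate the machine could use
                # to order rule evaluation
                hint_segment[i] = {"selectivity": len(pattern)}
            return code_segment, hint_segment

        def execute(code_segment, hint_segment, data):
            # hint_segment is available for optimizations (unused in this toy)
            out, matches = [], []
            for op, arg in code_segment:
                if op == "MATCH":           # toy substring "pattern match"
                    matches = [d for d in data if arg in d]
                elif op == "CONSTRUCT":
                    out.extend((arg, d) for d in matches)
            return out

        code, hints = compile_query([("book", "title")])
        print(execute(code, hints, ["book1", "a book", "journal"]))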

    Tupleware: Redefining Modern Analytics

    There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world---petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to a few terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware's architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders-of-magnitude performance improvements over alternative systems.
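
    One concrete ingredient of the database/compiler marriage alluded to above is operator fusion: compiling a pipeline of per-tuple UDFs into a single pass over the data instead of materializing an intermediate collection after every operator. The pipeline below is a hypothetical illustration, not Tupleware's API.

        def fuse(*udfs):
            """Compose per-tuple UDFs into one function."""
            def fused(t):
                for f in udfs:
                    t = f(t)
                return t
            return fused

        def run_fused(data, *udfs):
            f = fuse(*udfs)
            return [f(t) for t in data]      # one tight loop, no intermediates

        def run_unfused(data, *udfs):
            for f in udfs:
                data = [f(t) for t in data]  # materializes after each operator
            return data

        scale = lambda t: t * 2.0
        shift = lambda t: t + 1.0
        assert (run_fused([1.0, 2.0], scale, shift)
                == run_unfused([1.0, 2.0], scale, shift))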

    Digital zero noise extrapolation for quantum error mitigation

    Zero-noise extrapolation (ZNE) is an increasingly popular technique for mitigating errors in noisy quantum computations without using additional quantum resources. We review the fundamentals of ZNE and propose several improvements to noise scaling and extrapolation, the two key components of the technique. We introduce unitary folding and parameterized noise scaling. These are digital noise-scaling frameworks, i.e. one can apply them using only the gate-level access common to most quantum instruction sets. We also study different extrapolation methods, including a new adaptive protocol that uses a statistical inference framework. Benchmarks of our techniques show error reductions of 18X to 24X over non-mitigated circuits and demonstrate the effectiveness of ZNE at larger qubit numbers than have been tested previously. In addition to presenting new results, this work is a self-contained introduction to the practical use of ZNE by quantum programmers. Comment: 11 pages, 7 figures
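
    The two components are easy to demonstrate end to end. Unitary folding replaces a circuit U with U(U†U)^n, which leaves the logical computation unchanged but multiplies the effective depth, and hence roughly the noise, by a scale factor of 1 + 2n; extrapolation then fits the measured expectation values against those scale factors and evaluates the fit at zero. The sketch below substitutes a toy exponential-decay noise model for a real backend, so only the shape of the procedure is meaningful, not the numbers.

        import numpy as np

        def fold_scale_factors(num_folds):
            """U (U^dagger U)^n executes 1 + 2n copies of the circuit."""
            return [1 + 2 * n for n in range(num_folds)]

        def noisy_expectation(scale, ideal=1.0, decay=0.05):
            return ideal * np.exp(-decay * scale)   # toy noise model

        scales = fold_scale_factors(4)              # [1, 3, 5, 7]
        values = [noisy_expectation(s) for s in scales]

        # Richardson-style extrapolation: fit a polynomial in the scale
        # factor and read off its value at zero noise.
        coeffs = np.polyfit(scales, values, deg=2)
        zne_value = np.polyval(coeffs, 0.0)
        print(f"raw: {values[0]:.4f}  mitigated: {zne_value:.4f}")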