Open Programming Language Interpreters
Context: This paper presents the concept of open programming language
interpreters and the implementation of a framework-level metaobject protocol
(MOP) to support them. Inquiry: We address the problem of dynamically
adapting an interpreter to tailor its behavior to the task at hand and to
introduce new features that fulfill unforeseen requirements. Many languages
provide a MOP that supports reflection to some degree. However, MOPs are
typically language-specific, their reflective functionality is often
restricted, and adaptation logic is often mixed with application logic, which
makes the source code harder to understand and maintain. Our system
overcomes these limitations. Approach: We designed and implemented a system to
support open programming language interpreters. The prototype implementation is
integrated in the Neverlang framework. The system exposes the structure,
behavior and the runtime state of any Neverlang-based interpreter with the
ability to modify it. Knowledge: Our system provides complete control over an
interpreter's structure, behavior and runtime state. The approach is
applicable to every Neverlang-based interpreter, and adaptation code can
potentially be reused across different language implementations. Grounding:
With a prototype implementation, we focused on evaluating feasibility. The
paper shows that our approach addresses problems commonly reported in the
research literature. A demonstration video and examples illustrate our
approach to dynamic software adaptation, aspect-oriented programming,
debugging and context-aware interpreters. Importance: To our knowledge, our
paper presents the first reflective approach targeting a general framework for
language development. Our system provides full reflective support for free to
any Neverlang-based interpreter. We are not aware of any prior application of
open implementations to programming language interpreters in the sense defined
in this paper. Rather than replacing other approaches, we believe our system
can be used as a complementary technique in situations where those approaches
face serious limitations.
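As a rough, hypothetical sketch of the general idea (not Neverlang's actual
API; every name below is invented), a framework-level MOP can expose the
interpreter's evaluation step as an interceptable operation, keeping
adaptation logic separate from application logic:

    # Hypothetical sketch: an interpreter whose evaluation step is open to
    # run-time adaptation; none of these names come from Neverlang.
    class OpenInterpreter:
        def __init__(self):
            self.hooks = []            # adaptation code lives here, not in eval

        def add_hook(self, hook):
            self.hooks.append(hook)    # register new behavior at run time

        def eval(self, node, env):
            for hook in self.hooks:    # MOP: every evaluation is interceptable
                hook(self, node, env)
            op, *args = node
            if op == "lit":
                return args[0]
            if op == "add":
                return self.eval(args[0], env) + self.eval(args[1], env)

    interp = OpenInterpreter()
    # Adaptation example: trace every evaluated node without touching eval().
    interp.add_hook(lambda i, node, env: print("eval:", node))
    print(interp.eval(("add", ("lit", 1), ("lit", 2)), {}))   # -> 3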
Simple and Effective Type Check Removal through Lazy Basic Block Versioning
Dynamically typed programming languages such as JavaScript and Python defer
type checking to run time. In order to maximize performance, dynamic language
VM implementations must attempt to eliminate redundant dynamic type checks.
However, type inference analyses are often costly and involve tradeoffs between
compilation time and resulting precision. This has led to the creation of
increasingly complex multi-tiered VM architectures.
This paper introduces lazy basic block versioning, a simple JIT compilation
technique which effectively removes redundant type checks from critical code
paths. This novel approach lazily generates type-specialized versions of basic
blocks on-the-fly while propagating context-dependent type information. This
does not require the use of costly program analyses, is not restricted by the
precision limitations of traditional type analyses and avoids the
implementation complexity of speculative optimization techniques.
We have implemented intraprocedural lazy basic block versioning in a
JavaScript JIT compiler. This approach is compared with a classical flow-based
type analysis. Lazy basic block versioning performs as well as, or better
than, the static analysis on all benchmarks. On average, 71% of type tests
are eliminated, yielding speedups of
up to 50%. We also show that our implementation generates more efficient
machine code than TraceMonkey, a tracing JIT compiler for JavaScript, on
several benchmarks. The combination of implementation simplicity, low
algorithmic complexity and good run time performance makes basic block
versioning attractive for baseline JIT compilers.
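As a loose illustration of the mechanism (the paper's implementation is a
JavaScript JIT; every name below is invented), lazy basic block versioning
boils down to generating a block version per entry type context, on first
reach, and caching it so that known types need no further tests:

    # Toy model of lazy basic block versioning: a block version specialized
    # to a type context is generated the first time that context reaches the
    # block, then cached and reused.
    versions = {}   # (block_name, type_context) -> specialized block

    def specialize_add(ctx):
        if ctx == ("int", "int"):
            return lambda x, y: x + y      # type tests eliminated
        def generic(x, y):                 # fallback keeps the checks
            if not isinstance(x, (int, float)) or not isinstance(y, (int, float)):
                raise TypeError("add")
            return x + y
        return generic

    def reach(block_name, ctx, compile_fn):
        key = (block_name, ctx)
        if key not in versions:            # lazy: only contexts that occur
            versions[key] = compile_fn(ctx)
        return versions[key]

    add_ii = reach("add", ("int", "int"), specialize_add)
    print(add_ii(2, 3), len(versions))     # -> 5 1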
Interprocedural Type Specialization of JavaScript Programs Without Type Analysis
Dynamically typed programming languages such as Python and JavaScript defer
type checking to run time. VM implementations can improve performance by
eliminating redundant dynamic type checks. However, type inference analyses are
often costly and involve tradeoffs between compilation time and resulting
precision. This has led to the creation of increasingly complex multi-tiered
VM architectures.
Lazy basic block versioning is a simple JIT compilation technique which
effectively removes redundant type checks from critical code paths. This novel
approach lazily generates type-specialized versions of basic blocks on-the-fly
while propagating context-dependent type information. This approach does not
require the use of costly program analyses and is not restricted by the
precision limitations of traditional type analyses.
This paper extends lazy basic block versioning to propagate type information
interprocedurally, across function call boundaries. Our implementation in a
JavaScript JIT compiler shows that across 26 benchmarks, interprocedural basic
block versioning eliminates more type tag tests on average than what is
achievable with static type analysis without resorting to code transformations.
On average, 94.3% of type tag tests are eliminated, yielding speedups of up to
56%. We also show that our implementation is able to outperform Truffle/JS on
several benchmarks, both in terms of execution time and compilation time.
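A toy sketch of the interprocedural extension (all names invented; the real
system works on generated machine code): each function gets entry points
keyed by the argument types the caller has already established, so type
information flows across the call boundary instead of being re-tested:

    # Toy interprocedural specialization: entry points are versioned by the
    # argument types known at the call site, so the callee can skip those
    # tag tests.
    entries = {}    # (function_name, arg_types) -> specialized entry point

    def typed_call(name, fn, *args):
        key = (name, tuple(type(a).__name__ for a in args))
        if key not in entries:
            # A real JIT would generate a check-free entry for these types;
            # here we only record that the specialized version exists.
            entries[key] = fn
        return entries[key](*args)

    def absdiff(a, b):
        return a - b if a > b else b - a

    print(typed_call("absdiff", absdiff, 7, 3))      # ('int', 'int') entry
    print(typed_call("absdiff", absdiff, 1.5, 4.0))  # ('float', 'float') entry
    print(sorted(entries))                           # two specialized entries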
Collapsing towers of interpreters
Given a tower of interpreters, i.e., a sequence of multiple interpreters interpreting one another as input programs, we aim to collapse this tower into a compiler that removes all interpretive overhead and runs in a single pass. In the real world, a use case might be Python code executed by an x86 runtime, on a CPU emulated in a JavaScript VM, running on an ARM CPU. Collapsing such a tower can not only exponentially improve runtime performance, but also enable the use of base-language tools for interpreted programs, e.g., for analysis and verification. In this paper, we lay the foundations in an idealized but realistic setting.
We present a multi-level lambda calculus that features staging constructs and stage polymorphism: based on runtime parameters, an evaluator either executes source code (thereby acting as an interpreter) or generates code (thereby acting as a compiler). We identify stage polymorphism, a programming model from the domain of high-performance program generators, as the key mechanism to make such interpreters compose in a collapsible way.
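A minimal stage-polymorphic evaluator in this spirit (far simpler than the
paper's calculus; representation and names are invented for illustration):
one and the same function either computes a value, acting as an interpreter,
or emits code, acting as a compiler:

    # Toy stage polymorphism: ev() interprets when staged=False and
    # generates code when staged=True, from a single definition.
    def ev(expr, env, staged):
        kind = expr[0]
        if kind == "lit":
            return str(expr[1]) if staged else expr[1]
        if kind == "var":
            return expr[1] if staged else env[expr[1]]
        if kind == "add":
            a = ev(expr[1], env, staged)
            b = ev(expr[2], env, staged)
            return "(%s + %s)" % (a, b) if staged else a + b

    prog = ("add", ("var", "x"), ("lit", 1))
    print(ev(prog, {"x": 41}, staged=False))  # interpreter: 42
    print(ev(prog, {}, staged=True))          # compiler: (x + 1)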
We present Pink, a meta-circular Lisp-like evaluator on top of this calculus, and demonstrate that we can collapse arbitrarily many levels of self-interpretation, including levels with semantic modifications. We discuss several examples: compiling regular expressions through an interpreter to base code, building program transformers from modified interpreters, and others. We develop these ideas further to include reflection and reification, culminating in Purple, a reflective language inspired by Brown, Blond, and Black, which realizes a conceptually infinite tower, where every aspect of the semantics can change dynamically. Addressing an open challenge, we show how user programs can be compiled and recompiled under user-modified semantics.
Speculative Staging for Interpreter Optimization
Interpreters have a bad reputation for having lower performance than
just-in-time compilers. We present a new way of building high performance
interpreters that is particularly effective for executing dynamically typed
programming languages. The key idea is to combine speculative staging of
optimized interpreter instructions with a novel technique of incrementally and
iteratively converting them at run-time.
This paper introduces the concepts behind deriving optimized instructions
from existing interpreter instructions---incrementally peeling off layers of
complexity. When compiling the interpreter, these optimized derivatives will be
compiled along with the original interpreter instructions. Therefore, our
technique is portable by construction since it leverages the existing
compiler's backend. At run-time, we substitute the interpreter's original,
expensive instructions with their optimized derivatives to speed up
execution.
Our technique unites high performance with the simplicity and portability of
interpreters---we report that our optimization makes the CPython interpreter
up to more than four times faster, closing the gap with, and sometimes even
outperforming, PyPy's just-in-time compiler.
A Tracing JIT Compiler for Erlang using LLVM
We have modified the Erlang runtime to add support for a tracing just-in-time (JIT) compiler, similar to Mozilla’s TraceMonkey. Tracing is a technique to augment an existing interpreter with a JIT simply by recording the instructions executed during a loop iteration and then generating optimized native code from the recording. Tracing compilers are particularly suited to optimizing tight number-crunching loops, an area where Erlang has traditionally been lacking. We make use of the LLVM compiler library to optimize and emit native code. In micro-benchmarks we show some major improvements, reducing execution time by up to 75%. However, from an engineering point of view, we conclude that the effort of an industrial-strength implementation would be substantial – essentially reimplementing large parts of Erlang’s interpreter – and discuss a potential solution based on recent research in the area. Nearly all modern programming languages use an interpreter – a flexible and practical, if slow, solution. We try a simple way to greatly increase the performance of Erlang’s interpreter.
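A toy of the tracing idea (invented names; the compiler described here emits
native code via LLVM instead): record the operations executed during one
loop iteration, then replay the recorded trace without interpreter dispatch:

    # Toy tracing: one interpreted iteration records a linear trace of the
    # loop body; later iterations run the trace directly.
    def interpret_and_trace(body, env):
        trace = []
        for name, op in body:          # interpret once, recording each op
            op(env)
            trace.append(op)
        return trace

    def run_trace(trace, env, iterations):
        for _ in range(iterations):    # replay, skipping dispatch overhead
            for op in trace:
                op(env)

    def inc(env):
        env["acc"] += 1

    env = {"acc": 0}
    trace = interpret_and_trace([("inc", inc)], env)
    run_trace(trace, env, 9)
    print(env["acc"])                  # -> 10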