Simple and Effective Type Check Removal through Lazy Basic Block Versioning
Dynamically typed programming languages such as JavaScript and Python defer
type checking to run time. In order to maximize performance, dynamic language
VM implementations must attempt to eliminate redundant dynamic type checks.
However, type inference analyses are often costly and involve tradeoffs between
compilation time and resulting precision. This has led to the creation of
increasingly complex multi-tiered VM architectures.
This paper introduces lazy basic block versioning, a simple JIT compilation
technique which effectively removes redundant type checks from critical code
paths. This novel approach lazily generates type-specialized versions of basic
blocks on-the-fly while propagating context-dependent type information. This
does not require the use of costly program analyses, is not restricted by the
precision limitations of traditional type analyses and avoids the
implementation complexity of speculative optimization techniques.
We have implemented intraprocedural lazy basic block versioning in a
JavaScript JIT compiler. This approach is compared with a classical flow-based
type analysis. Lazy basic block versioning performs as well or better on all
benchmarks. On average, 71% of type tests are eliminated, yielding speedups of
up to 50%. We also show that our implementation generates more efficient
machine code than TraceMonkey, a tracing JIT compiler for JavaScript, on
several benchmarks. The combination of implementation simplicity, low
algorithmic complexity and good run time performance makes basic block
versioning attractive for baseline JIT compilers.
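The core mechanism described in the abstract above can be illustrated with a small sketch: block versions are compiled lazily, one per incoming type context, and a tag test is emitted only when the context does not already prove the type. All identifiers below are illustrative inventions, not names from the paper's actual implementation.

```python
# Minimal sketch of lazy basic block versioning; all names are
# illustrative, not taken from the paper's implementation.

versions = {}          # (block_id, type_context) -> specialized version
emitted_tests = []     # dynamic type tests that were actually generated

def compile_block(block_id, ctx):
    """Lazily generate a version of `block_id` specialized to the
    incoming type context `ctx`, a frozenset of (variable, tag) facts
    proven on this path. One version exists per distinct context."""
    key = (block_id, ctx)
    if key in versions:
        return versions[key]["label"]            # reuse the specialization
    ver = {"label": f"{block_id}@v{len(versions)}", "code": []}
    versions[key] = ver                          # register before successors
    if block_id == "entry":
        # Emit a tag test only if the context does not already prove it.
        if ("x", "int") not in ctx:
            ver["code"].append("test_tag x, int")
            emitted_tests.append(("entry", "x"))
            ctx = frozenset(ctx | {("x", "int")})  # success edge learns the tag
        # The successor is compiled on demand with the enriched context.
        ver["code"].append("jump " + compile_block("add_one", ctx))
    elif block_id == "add_one":
        if ("x", "int") not in ctx:              # redundant here: ctx proves it
            ver["code"].append("test_tag x, int")
            emitted_tests.append(("add_one", "x"))
        ver["code"].append("add x, 1")
    return ver["label"]

compile_block("entry", frozenset())
print(emitted_tests)   # [('entry', 'x')]: the successor needed no test
```

The key point of the sketch is that the type fact learned on the success edge of the first tag test flows into the successor's context, so the specialized successor version contains no redundant test, without any whole-program analysis.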
A platform for semantic web studies
The Semantic Web can be seen as a large, heterogeneous network of ontologies and semantic documents. Characterizing these ontologies, the way they relate and the way they are organized can help in better understanding how knowledge is produced and published online. It also provides new ways to explore and exploit this large collection of ontologies. In this paper, we present the foundation of a research platform for characterizing the Semantic Web, relying on the collection of ontologies and the functionalities provided by the Watson Semantic Web search engine. We more specifically focus on formalizing and monitoring relationships between ontologies online, considering a variety of different relations (similarity, versioning, agreement, modularity) and how they can help us obtain meaningful overviews of the current state of the Semantic Web.
Interprocedural Type Specialization of JavaScript Programs Without Type Analysis
Dynamically typed programming languages such as Python and JavaScript defer
type checking to run time. VM implementations can improve performance by
eliminating redundant dynamic type checks. However, type inference analyses are
often costly and involve tradeoffs between compilation time and resulting
precision. This has led to the creation of increasingly complex multi-tiered
VM architectures.
Lazy basic block versioning is a simple JIT compilation technique which
effectively removes redundant type checks from critical code paths. This novel
approach lazily generates type-specialized versions of basic blocks on-the-fly
while propagating context-dependent type information. This approach does not
require the use of costly program analyses and is not restricted by the
precision limitations of traditional type analyses.
This paper extends lazy basic block versioning to propagate type information
interprocedurally, across function call boundaries. Our implementation in a
JavaScript JIT compiler shows that across 26 benchmarks, interprocedural basic
block versioning eliminates more type tag tests on average than what is
achievable with static type analysis without resorting to code transformations.
On average, 94.3% of type tag tests are eliminated, yielding speedups of up to
56%. We also show that our implementation is able to outperform Truffle/JS on
several benchmarks, both in terms of execution time and compilation time.
Comment: 10 pages, 10 figures, submitted to CGO 201
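The interprocedural extension summarized above can be sketched the same way: instead of discarding type facts at a call boundary, the call site forwards the tags it has proven for the arguments, and the callee gets a specialized entry-point version per argument-tag signature. Everything named here is a hypothetical illustration, not the paper's implementation.

```python
# Illustrative sketch of interprocedural type propagation through
# call-site-specialized entry points (all names are hypothetical).

entry_versions = {}    # (function, arg_tag_signature) -> entry version label
param_tests = []       # parameter tag tests actually emitted

def compile_entry(func, arg_tags):
    """Compile a version of `func`'s entry block specialized to the
    tags the caller proved for the arguments. Known tags need no test."""
    key = (func, arg_tags)
    if key in entry_versions:
        return entry_versions[key]
    for param, tag in arg_tags:
        if tag == "unknown":                 # caller proved nothing: test here
            param_tests.append((func, param))
    entry_versions[key] = f"{func}@entry{len(entry_versions)}"
    return entry_versions[key]

def compile_call(func, caller_ctx, args):
    """At a call site, forward the caller's proven tags to the callee
    instead of discarding them at the function boundary."""
    sig = tuple((p, caller_ctx.get(a, "unknown")) for p, a in args)
    return "call " + compile_entry(func, sig)

# A caller that already proved `n` is an int reaches a specialized
# entry point, so `inc` performs no tag test on its parameter.
compile_call("inc", {"n": "int"}, [("x", "n")])
compile_call("inc", {}, [("x", "n")])   # unspecialized caller: test emitted
print(param_tests)   # [('inc', 'x')]
```

As in the intraprocedural case, specialization stays lazy: a second entry version is created only when some caller actually reaches the function with a different argument-tag signature.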
Experimental customisation of the Versioning Machine
This paper deals with challenges in adapting the XML-TEI publishing framework Versioning Machine to compositional drafts of 20th-century literary works and describes the main customisations that have been implemented to suit a genetic edition of poetry by Pedro Homem de Mello. The case study emphasises that even minimal customisations require technical work that may go beyond an editor's skill.
Micro-CernVM: Slashing the Cost of Building and Deploying Virtual Machines
The traditional virtual machine building and deployment process is
centered around the virtual machine hard disk image. The packages comprising
the VM operating system are carefully selected, hard disk images are built for
a variety of different hypervisors, and images have to be distributed and
decompressed in order to instantiate a virtual machine. Within the HEP
community, the CernVM File System has been established in order to decouple the
distribution of the experiment software from the building and distribution of
the VM hard disk images.
We show how to get rid of such pre-built hard disk images altogether. Due to
the high requirements on POSIX compliance imposed by HEP application software,
CernVM-FS can also be used to host and boot a Linux operating system. This
allows the use of a tiny bootable CD image that comprises only a Linux kernel
while the rest of the operating system is provided on demand by CernVM-FS. This
approach speeds up the initial instantiation time and reduces virtual machine
image sizes by an order of magnitude. Furthermore, security updates can be
distributed instantaneously through CernVM-FS. By leveraging the fact that
CernVM-FS is a versioning file system, a historic analysis environment can be
easily re-spawned by selecting the corresponding CernVM-FS file system
snapshot.
Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP)
Conference, Amsterdam
Versioning Cultural Objects through the Text-Encoding of Folk Songs
This paper will present and discuss experiences studying different versions of folk songs as cultural objects, and will investigate how using specific Digital Humanities tools may assist the versioning of intangible oral tradition. This was primarily achieved using The Versioning Machine, a framework and an interface for displaying multiple versions of text and audio encoded according to the Text Encoding Initiative (TEI) Guidelines. Through encoding a set number of songs in The Versioning Machine and displaying the results online, new questions could be raised and conclusions drawn about versioning cultural material, with an emphasis on trying to trace the evolution of cultural ideas through subsequent iterations. Using examples from the project Documenting Transmission: The Rake Cycle, this paper will examine the effectiveness of using a specific existing versioning tool to model and map the differences between versions of folk songs and examine the intangible nature of performance and oral tradition. How do these digital versions change or reinforce our perception of a song cycle and transmission processes in general? This paper will give a broad overview of the Documenting Transmission project and some of the musicological and technical considerations that were made over the course of the project.