
    Simple and Effective Type Check Removal through Lazy Basic Block Versioning

    Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. In order to maximize performance, dynamic language VM implementations must attempt to eliminate redundant dynamic type checks. However, type inference analyses are often costly and involve tradeoffs between compilation time and resulting precision. This has led to the creation of increasingly complex multi-tiered VM architectures. This paper introduces lazy basic block versioning, a simple JIT compilation technique which effectively removes redundant type checks from critical code paths. This novel approach lazily generates type-specialized versions of basic blocks on the fly while propagating context-dependent type information. It does not require costly program analyses, is not restricted by the precision limitations of traditional type analyses, and avoids the implementation complexity of speculative optimization techniques. We have implemented intraprocedural lazy basic block versioning in a JavaScript JIT compiler. This approach is compared with a classical flow-based type analysis. Lazy basic block versioning performs as well as or better than the type analysis on all benchmarks. On average, 71% of type tests are eliminated, yielding speedups of up to 50%. We also show that our implementation generates more efficient machine code than TraceMonkey, a tracing JIT compiler for JavaScript, on several benchmarks. The combination of implementation simplicity, low algorithmic complexity and good run time performance makes basic block versioning attractive for baseline JIT compilers.
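    The core idea described above can be illustrated with a minimal sketch: compile each basic block on demand, once per incoming type context, and drop any type test the context already proves. This is a hypothetical toy model (blocks as op lists, a dict as the type context), not the paper's implementation.

    ```python
    # Toy sketch of lazy basic block versioning (assumption: blocks are
    # lists of tuple ops; a real JIT would emit machine code instead).

    def compile_block(block, ctx, cache, stats):
        """Specialize `block` for the incoming type context `ctx`."""
        key = (id(block), frozenset(ctx.items()))
        if key in cache:                  # reuse an existing version
            return cache[key]
        version = []
        ctx = dict(ctx)                   # context is per-version
        for op in block:
            if op[0] == "typecheck":
                _, var, ty = op
                if ctx.get(var) == ty:    # redundant: context proves it
                    stats["eliminated"] += 1
                    continue
                ctx[var] = ty             # test passed => type now known
                stats["kept"] += 1
            version.append(op)
        cache[key] = version
        return version

    # A hot path that tests `n` before each arithmetic op.
    block_a = [("typecheck", "n", "int"), ("add", "n", 1)]
    block_b = [("typecheck", "n", "int"), ("mul", "n", 2)]

    cache, stats = {}, {"eliminated": 0, "kept": 0}
    va = compile_block(block_a, {}, cache, stats)
    # Entering block_b from block_a, the context already knows n:int,
    # so the second type test is eliminated in this version.
    vb = compile_block(block_b, {"n": "int"}, cache, stats)
    ```

    The cache key pairs the block with its type context, so distinct incoming contexts yield distinct specialized versions, mirroring how versioning trades a bounded amount of code duplication for eliminated checks.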

    Interprocedural Type Specialization of JavaScript Programs Without Type Analysis

    Dynamically typed programming languages such as Python and JavaScript defer type checking to run time. VM implementations can improve performance by eliminating redundant dynamic type checks. However, type inference analyses are often costly and involve tradeoffs between compilation time and resulting precision. This has led to the creation of increasingly complex multi-tiered VM architectures. Lazy basic block versioning is a simple JIT compilation technique which effectively removes redundant type checks from critical code paths. This novel approach lazily generates type-specialized versions of basic blocks on the fly while propagating context-dependent type information. It does not require costly program analyses and is not restricted by the precision limitations of traditional type analyses. This paper extends lazy basic block versioning to propagate type information interprocedurally, across function call boundaries. Our implementation in a JavaScript JIT compiler shows that across 26 benchmarks, interprocedural basic block versioning eliminates more type tag tests on average than what is achievable with static type analysis without resorting to code transformations. On average, 94.3% of type tag tests are eliminated, yielding speedups of up to 56%. We also show that our implementation is able to outperform Truffle/JS on several benchmarks, both in terms of execution time and compilation time.
    Comment: 10 pages, 10 figures, submitted to CGO 201
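    The interprocedural extension can be sketched in the same toy style: the argument types known at a call site select a specialized entry point for the callee, so entry type tag tests the caller already proved are dropped. This is an illustrative assumption-laden model, not the paper's actual mechanism for entry-point selection.

    ```python
    # Toy sketch of interprocedural type propagation: one specialized
    # entry version per set of caller-proven argument types.

    def specialize_entry(known_types, entry_checks, versions, stats):
        """Build (or reuse) a callee entry specialized for `known_types`."""
        key = tuple(sorted(known_types.items()))
        if key in versions:
            return versions[key]
        body = []
        for var, ty in entry_checks:
            if known_types.get(var) == ty:
                stats["eliminated"] += 1  # caller already proved this type
            else:
                stats["kept"] += 1
                body.append(("typecheck", var, ty))
        versions[key] = body
        return body

    # Tag tests a callee would normally perform on its parameters.
    entry_checks = [("x", "int"), ("y", "int")]
    versions, stats = {}, {"eliminated": 0, "kept": 0}

    # Call site where both argument types are known to the caller:
    v1 = specialize_entry({"x": "int", "y": "int"}, entry_checks, versions, stats)
    # Call site where only x's type is known:
    v2 = specialize_entry({"x": "int"}, entry_checks, versions, stats)
    ```

    The fully-known call site gets an entry with no residual tests, while the partially-known one keeps only the test the caller could not discharge, which is the source of the higher elimination rates reported above.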

    Interprocedural Specialization of Higher-Order Dynamic Languages Without Static Analysis

    Function duplication is widely used by JIT compilers to efficiently implement dynamic languages. When the source language supports higher-order functions, the called function's identity is not generally known when compiling a call site, thus limiting the use of function duplication. This paper presents a JIT compilation technique enabling function duplication in the presence of higher-order functions. Unlike existing techniques, our approach uses dynamic dispatch at call sites instead of relying on a conservative analysis to discover function identity. We have implemented the technique in a JIT compiler for Scheme. Experiments show that it is efficient at removing type checks, eliminating almost all run time type checks for several benchmarks. This allows the compiler to generate code up to 50% faster. We show that the technique can be used to duplicate functions using other run time information, opening up new applications such as register allocation-based duplication and aggressive inlining.
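    The call-site dispatch idea can be sketched as follows: rather than statically resolving which function a higher-order call reaches, the call site dispatches on run time information (here, the argument's type) to a per-type duplicate of the callee. This is a hypothetical illustration using Python closures; a real JIT would dispatch to type-specialized machine code.

    ```python
    # Toy sketch of call-site dynamic dispatch to duplicated function
    # versions, keyed by run-time argument type.

    def make_duplicating_call(generic_fn):
        """Wrap `generic_fn` with a call site that duplicates per type."""
        versions = {}

        def call(arg):
            ty = type(arg)
            if ty not in versions:
                # "Duplicate" the callee for this type; a real compiler
                # would specialize the copy's code for `ty` here.
                versions[ty] = lambda a: generic_fn(a)
            return versions[ty](arg)

        call.versions = versions          # exposed for inspection
        return call

    double = make_duplicating_call(lambda a: a + a)
    r1 = double(21)      # creates and runs the int-specialized duplicate
    r2 = double("ab")    # creates and runs the str-specialized duplicate
    ```

    Because the duplicate is chosen at the call site rather than by a whole-program analysis, the scheme works even when the callee's identity is only known at run time, which is the situation higher-order functions create.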

    Return to Play Following Shoulder Stabilization: A Systematic Review and Meta-analysis.

    Background: Anterior shoulder instability can be a disabling condition for the young athlete; however, the best surgical treatment remains controversial. Traditionally, anterior shoulder instability was treated with open stabilization. More recently, arthroscopic repair of the Bankart injury with suture anchor fixation has become an accepted technique. Hypothesis: No systematic reviews have compared the rate of return to play following arthroscopic Bankart repair with suture anchor fixation with the Bristow-Latarjet procedure and open stabilization. We hypothesized that the rate of return to play would be similar regardless of surgical technique. Study Design: Systematic review; Level of evidence, 4. Methods: We performed a systematic review and meta-analysis focused on return to play following shoulder stabilization. Inclusion criteria included studies in English that reported on rate of return to play and clinical outcomes following primary arthroscopic Bankart repair with suture anchors, the Latarjet procedure, or open stabilization. Statistical analyses included Student t tests and analyses of variance. Results: Sixteen papers reporting on 1036 patients were included. A total of 545 patients underwent arthroscopic Bankart repair with suture anchors, 353 the Latarjet procedure, and 138 open repair. No significant difference was found in patient demographic data among the studies. Patients returned to sport at the same level of play (preinjury level) more consistently following arthroscopic Bankart repair (71%) or the Latarjet procedure (73%) than open stabilization (66%) (P < .05). Return to play at any level and postoperative Rowe scores were not significantly different among studies. The recurrent dislocation rate was significantly lower following the Latarjet procedure (3.5%) than after arthroscopic Bankart repair (6.6%) or open stabilization (6.7%) (P < .05). Conclusion: This systematic review demonstrates a greater rate of return to play at the preinjury level following arthroscopic Bankart repair and the Latarjet procedure than open stabilization. Despite this difference, >65% of all treated athletes returned to sport at their preinjury levels, with other outcome measures being similar among the treatment groups. Therefore, arthroscopic Bankart repair, the Latarjet procedure, and open stabilization remain good surgical options in the treatment of the athlete with anterior shoulder instability.

    A R4RS Compliant REPL in 7 KB

    The Ribbit system is a compact Scheme implementation running on the Ribbit Virtual Machine (RVM) that has been ported to a dozen host languages. It supports a simple Foreign Function Interface (FFI) allowing extensions to the RVM directly from the program's source code. We have extended the system to offer conformance to the R4RS standard while staying as compact as possible. This leads to an R4RS-compliant REPL that fits in a 7 KB Linux executable. This paper explains the various issues encountered and our solutions to make, arguably, the smallest R4RS-conformant Scheme implementation of all time.
    Comment: Presented at The 2023 Scheme and Functional Programming Workshop (arXiv:cs/0101200

    Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then-current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.