12 research outputs found

    Simple optimizing JIT compilation of higher-order dynamic programming languages

    Efficiently implementing dynamic programming languages requires a significant development effort. Over the years, compilers have become more complex: today, they typically include an interpretation phase, several compilation phases, several intermediate representations and code analyses. These techniques make it possible to implement dynamic languages efficiently, but they are difficult to apply in contexts in which development resources are limited. We propose a new approach and new techniques for building optimizing just-in-time compilers for dynamic languages with relatively good performance and low development effort. We present a simple just-in-time compilation approach that implements a language in a single compilation phase, without transformations to intermediate representations. We explain how basic block versioning, an existing compilation technique, can be extended, without significant development effort, to work interprocedurally with higher-order programming languages, enabling interprocedural optimizations on these languages. We also explain how basic block versioning allows removing operations that are used to implement dynamic languages and degrade performance, such as type checks, and how compilers can exploit the tagging and NaN-boxing value representations to optimize the generated code with low development effort. We present our experience of building a JIT compiler using these techniques for the Scheme programming language to show that they indeed allow building compilers with less development effort than current implementations, and that they allow generating efficient code that competes with mature implementations of the Scheme language.
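
    The NaN-boxing representation mentioned above lends itself to a short illustration. The C sketch below is not code from the thesis: the tag constant and helper names are invented, and it assumes the common 64-bit scheme in which doubles are stored directly while every other value rides in the payload of a quiet NaN, so a dynamic type check compiles to a single mask-and-compare.

        /* Minimal NaN-boxing sketch. Hardware arithmetic only produces the
           quiet NaNs 0x7FF8.../0xFFF8..., so tags from 0xFFF9... upward
           cannot collide with real results (an assumption of this sketch). */
        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        typedef uint64_t value_t;

        #define TAG_MASK 0xFFFF000000000000ull
        #define TAG_INT  0xFFF9000000000000ull   /* quiet NaN + our own tag */

        static value_t box_double(double d) {
            value_t v;
            memcpy(&v, &d, sizeof v);            /* reinterpret bits, no conversion */
            return v;
        }

        static double unbox_double(value_t v) {
            double d;
            memcpy(&d, &v, sizeof d);
            return d;
        }

        static value_t box_int(int32_t i) {
            return TAG_INT | (uint32_t)i;        /* integer rides in the payload */
        }

        static int is_int(value_t v) {
            return (v & TAG_MASK) == TAG_INT;    /* type check: one mask + compare */
        }

        static int32_t unbox_int(value_t v) {
            return (int32_t)(uint32_t)v;
        }

        int main(void) {
            value_t a = box_double(3.25);
            value_t b = box_int(-42);
            printf("%f %d\n", unbox_double(a), is_int(b) ? unbox_int(b) : 0);
            return 0;
        }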

    Design and optimisation of scientific programs in a categorical language

    This thesis presents an investigation into the use of advanced computer languages for scientific computing, an examination of the performance issues that arise from using such languages for such a task, and a step toward achieving portable performance from compilers by attacking these problems in a way that compensates for the complexity of, and the differences between, modern computer architectures. The language employed is Aldor, a functional language from computer algebra, and the scientific computing area is a subset of the family of iterative linear equation solvers applied to sparse systems. The linear equation solvers that are considered share much common structure, which is factored out and represented explicitly in the language as a framework, by means of categories and domains. The flexibility introduced by decomposing the algorithms and the objects they act on into separate modules has a strong performance impact due to its negative effect on temporal locality. This necessitates breaking the barriers between modules to perform cross-component optimisation. In this instance the task reduces to one of collective loop fusion and array contraction.
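
    The fusion-and-contraction step the abstract ends on can be shown on a small kernel. The sketch below is illustrative C rather than Aldor from the thesis, and the kernel (an axpy feeding a dot product) is invented: two modular loops that communicate through a temporary array are fused into one pass, and the temporary contracts to a scalar.

        #include <stdio.h>
        #include <stddef.h>

        /* Modular form: stage one fills the temporary t, stage two consumes it. */
        void axpy_dot_unfused(size_t n, double a, const double *x,
                              const double *y, double *t, double *out) {
            for (size_t i = 0; i < n; i++)
                t[i] = a * x[i] + y[i];          /* produces an n-element temporary */
            double s = 0.0;
            for (size_t i = 0; i < n; i++)
                s += t[i] * x[i];                /* consumes it in a second pass */
            *out = s;
        }

        /* After loop fusion, the temporary array contracts to the scalar ti,
           restoring the temporal locality the modular version gives up. */
        void axpy_dot_fused(size_t n, double a, const double *x,
                            const double *y, double *out) {
            double s = 0.0;
            for (size_t i = 0; i < n; i++) {
                double ti = a * x[i] + y[i];
                s += ti * x[i];
            }
            *out = s;
        }

        int main(void) {
            double x[] = {1, 2, 3}, y[] = {4, 5, 6}, t[3], u, f;
            axpy_dot_unfused(3, 2.0, x, y, t, &u);
            axpy_dot_fused(3, 2.0, x, y, &f);
            printf("%f %f\n", u, f);             /* both print 60.000000 */
            return 0;
        }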

    An architecture for interpreted dynamic object-oriented languages

    Sharable code refers to the representation of bytecodes as constant C arrays that are located in sharable text segments after compilation. Interoperability, application start-up and dynamic memory usage benefit from this representation. Indexed code threading addresses the performance problem of virtual instruction mapping (i.e. loading, decoding and invoking) by using a fast threaded instruction transfer. Unlike with standard code threading, the virtual machine code remains compact and is executable also with a non-threaded virtual machine emulator. A further performance boost is achieved with optimal virtual instruction ordering. This technique clusters the native code implementing virtual instructions so that native instruction cache performance increases. Finally, the efficiency problem of dynamic method lookup is alleviated with an inline caching scheme that is applicable with constant bytecode vectors. The scheme exploits type locality, similar to polymorphic inline caches.
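
    As a rough illustration of the threaded instruction transfer described above, the C sketch below (a toy VM invented for this purpose, relying on the GCC/Clang computed-goto extension) keeps the bytecode as a compact constant array and dispatches by indexing a table of native-code addresses with the next opcode.

        #include <stdio.h>

        enum { OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

        /* Constant bytecode vector: sharable, read-only, still runnable on a
           plain switch-based emulator since opcodes are small indices. */
        static const unsigned char code[] =
            { OP_PUSH1, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

        int main(void) {
            static void *labels[] = { &&push1, &&add, &&print, &&halt };
            int stack[16], *sp = stack;
            const unsigned char *pc = code;

        /* NEXT maps the opcode index to native code in one indirect jump. */
        #define NEXT goto *labels[*pc++]
            NEXT;
        push1: *sp++ = 1;              NEXT;
        add:   sp--; sp[-1] += *sp;    NEXT;
        print: printf("%d\n", sp[-1]); NEXT;   /* prints 2 for 1 + 1 */
        halt:  return 0;
        }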

    Analysing and Reducing Costs of Deep Learning Compiler Auto-tuning

    Deep Learning (DL) is significantly impacting many industries, including automotive, retail and medicine, enabling autonomous driving, recommender systems and genomics modelling, amongst other applications. At the same time, demand for complex and fast DL models is continually growing. The most capable models tend to exhibit the highest operational costs, primarily due to their large computational footprint and the inefficient utilisation of the computational resources employed by DL systems. In an attempt to tackle these problems, DL compilers and auto-tuners emerged, automating the traditionally manual task of DL model performance optimisation. While auto-tuning improves model inference speed, it is a costly process, which limits its wider adoption within DL deployment pipelines. The high operational costs associated with DL auto-tuning have multiple causes. During operation, DL auto-tuners explore large search spaces, consisting of billions of tensor programs, to propose candidates that improve DL model inference latency. Subsequently, DL auto-tuners measure candidate performance in isolation on the target device, which constitutes the majority of auto-tuning compute time. Suboptimal candidate proposals, combined with their serial measurement on an isolated target device, lead to prolonged optimisation time and reduced resource availability, ultimately reducing the cost-efficiency of the process. In this thesis, we investigate the reasons behind prolonged DL auto-tuning and quantify their impact on optimisation costs, revealing directions for improved DL auto-tuner design. Based on these insights, we propose two complementary systems: Trimmer and DOPpler. Trimmer improves tensor program search efficacy by filtering out poorly performing candidates, and controls end-to-end auto-tuning using cost objectives, monitoring optimisation cost. Simultaneously, DOPpler breaks the long-held assumption that candidate measurements must be serial by successfully parallelising them intra-device, with minimal penalty to optimisation quality. Through extensive experimental evaluation of both systems, we demonstrate that they significantly improve the cost-efficiency of auto-tuning (by up to 50.5%) across a plethora of tensor operators, DL models, auto-tuners and target devices.
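
    The abstract only names Trimmer's filtering idea, so the C sketch below is a generic reconstruction rather than Trimmer itself: candidates are ranked by a cheap cost-model estimate, and only the most promising ones are measured on the device, under a fixed measurement budget standing in for a cost objective.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            int    id;
            double predicted_cost;   /* cheap cost-model estimate */
            double measured_ms;      /* filled in by on-device measurement */
        } candidate;

        /* Stand-in for an expensive on-device run of a tensor program. */
        static double measure_on_device(int id) {
            return 1.0 + (id % 7) * 0.1;
        }

        static int by_predicted(const void *a, const void *b) {
            double d = ((const candidate *)a)->predicted_cost
                     - ((const candidate *)b)->predicted_cost;
            return (d > 0) - (d < 0);
        }

        /* Measure at most `budget` candidates; return the fastest id seen. */
        static int tune(candidate *cands, int n, int budget) {
            qsort(cands, n, sizeof *cands, by_predicted);
            int best = -1;
            double best_ms = 1e300;
            for (int i = 0; i < n && i < budget; i++) {
                cands[i].measured_ms = measure_on_device(cands[i].id);
                if (cands[i].measured_ms < best_ms) {
                    best_ms = cands[i].measured_ms;
                    best = cands[i].id;
                }
            }
            return best;   /* poorly predicted candidates were never measured */
        }

        int main(void) {
            candidate c[] = { {0, 2.0, 0}, {1, 1.0, 0}, {2, 5.0, 0},
                              {3, 0.5, 0}, {4, 3.0, 0} };
            printf("best: %d\n", tune(c, 5, 3));
            return 0;
        }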

    Flow-directed Inlining

    A flow-directed inlining strategy uses information derived from control-flow analysis to specialize and inline procedures for functional and object-oriented languages. Since it uses control-flow analysis to identify candidate call sites, flow-directed inlining can inline procedures whose relationships to their call sites are not apparent. For instance, procedures defined in other modules, passed as arguments, returned as values, or extracted from data structures can all be inlined. Flow-directed inlining specializes procedures for particular call sites, and can selectively inline a particular procedure at some call sites but not at others. Finally, flow-directed inlining encourages modular implementations: control-flow analysis, inlining, and post-inlining optimizations are all orthogonal components. Results from a prototype implementation indicate that this strategy effectively reduces procedure call overhead and leads to a significant reduction in execution time.
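
    The core claim, that flow-directed inlining handles call sites whose callees are not syntactically apparent, can be rendered in C with function pointers standing in for higher-order procedures. The example below is invented for illustration: control-flow analysis would prove that only square ever flows into the indirect call site, licensing the specialised, inlined version.

        #include <stdio.h>

        static int square(int x) { return x * x; }

        /* `f` arrives as an argument, so no syntactic link ties this indirect
           call site to `square`. */
        static int apply_twice(int (*f)(int), int x) {
            return f(f(x));
        }

        /* What a flow-directed inliner could emit for the call below, once
           control-flow analysis proves that only `square` flows into `f`:
           the indirect calls are gone and the body is specialised in place. */
        static int apply_twice_square(int x) {
            int t = x * x;
            return t * t;
        }

        int main(void) {
            printf("%d\n", apply_twice(square, 3));  /* before inlining: 81 */
            printf("%d\n", apply_twice_square(3));   /* after:           81 */
            return 0;
        }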