6 research outputs found

    Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients

    Applying differentiable programming techniques and machine learning algorithms to foreign programs requires developers to either rewrite their code in a machine learning framework, or otherwise provide derivatives of the foreign code. This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework capable of synthesizing gradients of statically analyzable programs expressed in the LLVM intermediate representation (IR). Enzyme synthesizes gradients for programs written in any language whose compiler targets LLVM IR, including C, C++, Fortran, Julia, Rust, Swift, MLIR, etc., thereby providing native AD capabilities in these languages. Unlike traditional source-to-source and operator-overloading tools, Enzyme performs AD on optimized IR. On a machine-learning focused benchmark suite including Microsoft's ADBench, AD on optimized IR achieves a geometric mean speedup of 4.5x over AD on IR before optimization, allowing Enzyme to achieve state-of-the-art performance. Packaging Enzyme for PyTorch and TensorFlow provides convenient access to gradients of foreign code with state-of-the-art performance, enabling foreign code to be directly incorporated into existing machine learning workflows.
    Comment: To be published in NeurIPS 2020
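    As a sketch of how Enzyme is typically driven from C or C++ (the `__enzyme_autodiff` entry point matches Enzyme's documented interface, but the exact declaration style and build flags vary by Enzyme/LLVM version, so treat this as an example under those assumptions rather than canonical usage):

```cpp
#include <cstdio>

// An ordinary function, compiled to LLVM IR like any other code.
double square(double x) { return x * x; }

// Declared but never defined: the Enzyme plugin intercepts calls to
// __enzyme_autodiff during optimization and replaces them with a
// synthesized gradient of the function passed as the first argument.
// Building requires the plugin, e.g. clang -fplugin=ClangEnzyme-<version>.so -O2
extern double __enzyme_autodiff(void*, double);

int main() {
    double dx = __enzyme_autodiff((void*)square, 3.0);
    std::printf("d/dx x*x at x = 3: %f\n", dx);  // prints 6.000000
    return 0;
}
```

    Because differentiation happens on already-optimized IR, the same mechanism works unchanged for code originally written in Fortran, Julia, Rust, or any other LLVM-targeting language.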

    Cheaper Adjoints by Reversing Address Computations


    The Tapenade Automatic Differentiation tool: principles, model, and specification

    Tapenade is an Automatic Differentiation tool which, given a Fortran or C code that computes a function, creates a new code that computes its tangent or adjoint derivatives. Tapenade puts particular emphasis on adjoint differentiation, which computes gradients at a remarkably low cost. This paper describes the principles of Tapenade, a subset of the general principles of AD. We motivate and illustrate on examples the AD model of Tapenade, i.e. the structure of differentiated codes and the strategies used to make them more efficient. Along with this informal description, we formally specify this model by means of Data-Flow equations and rules of Operational Semantics, making this the reference specification of the tangent and adjoint modes of Tapenade. One benefit we expect from this formal specification is the capacity to study the AD model itself formally, especially the adjoint mode and its sophisticated strategies. This paper also describes the architectural choices of the implementation of Tapenade. We describe the current performance of Tapenade on a set of codes that includes industrial-size applications. We present the extensions of the tool planned for the foreseeable future, deriving from our ongoing research on AD.
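    To make the model concrete, here is a hand-written illustration (not actual Tapenade output) of what tangent and adjoint codes look like for a tiny function, following Tapenade's convention of a `_d` suffix for tangent routines and `_b` for adjoint routines:

```cpp
#include <cmath>

// Original code: f(x0, x1) = x0*x1 + sin(x0)
double f(double x0, double x1) { return x0 * x1 + std::sin(x0); }

// Tangent mode: propagate the input tangents (x0d, x1d) forward;
// one call is needed per input direction.
double f_d(double x0, double x1, double x0d, double x1d) {
    return x0d * x1 + x0 * x1d + std::cos(x0) * x0d;
}

// Adjoint mode: propagate the output adjoint fb backward,
// accumulating into the input adjoints.
void f_b(double x0, double x1, double* x0b, double* x1b, double fb) {
    *x0b += (x1 + std::cos(x0)) * fb;
    *x1b += x0 * fb;
}
```

    A single adjoint call seeded with fb = 1 yields the whole gradient, which is why adjoint differentiation computes gradients at a cost independent of the number of inputs.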

    Knowledge-Based Automatic Generation of Linear Algebra Algorithms and Code

    This dissertation focuses on the design and implementation of domain-specific compilers for linear algebra matrix equations. The development of efficient libraries for such equations, which lie at the heart of most software for scientific computing, is a complex process that requires expertise in a variety of areas, including the application domain, algorithms, numerical analysis, and high-performance computing. Moreover, the process involves the collaboration of several people over a considerable amount of time. With our compilers, we aim to relieve developers of both designing algorithms and writing code, and to generate routines that match or even surpass the performance of those written by human experts.
    Comment: Dissertation

    Advanced Concepts for Automatic Differentiation based on Operator Overloading

    Using the technique of Automatic Differentiation (AD), derivative information can be computed efficiently, and with little effort on the user's part, for any function that is given as source code in a supported programming language. One basic implementation strategy builds on the concept of operator overloading, which many modern programming languages provide. By exploiting overloading, an internal representation of the function, the so-called tape, is generated at runtime; this tape can then be used to compute derivatives. The thesis introduces new techniques that enable more efficient tape creation as well as the parallel evaluation of tapes. The advantages of the new techniques are demonstrated by means of runtime analyses of numerical examples.
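    The taping idea behind such operator-overloading tools can be sketched in a few dozen lines of C++ (a minimal toy, not the thesis's actual implementation): each overloaded operator appends a node with its parents and local partial derivatives to a global tape, and a reverse sweep over the tape accumulates adjoints.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One tape entry: indices of up to two parent nodes and the
// local partial derivatives with respect to them.
struct Node { int p1, p2; double w1, w2; };
static std::vector<Node> tape;

struct Var { double val; int idx; };

static Var make(double v, int p1 = -1, double w1 = 0.0,
                          int p2 = -1, double w2 = 0.0) {
    tape.push_back({p1, p2, w1, w2});
    return {v, (int)tape.size() - 1};
}

// Overloads record the operation and its local partials on the tape.
Var operator+(Var a, Var b) { return make(a.val + b.val, a.idx, 1.0, b.idx, 1.0); }
Var operator*(Var a, Var b) { return make(a.val * b.val, a.idx, b.val, b.idx, a.val); }
Var sin(Var a) { return make(std::sin(a.val), a.idx, std::cos(a.val)); }

// Reverse sweep: propagate adjoints from the output back to the inputs.
std::vector<double> gradient(Var out) {
    std::vector<double> adj(tape.size(), 0.0);
    adj[out.idx] = 1.0;
    for (int i = (int)tape.size() - 1; i >= 0; --i) {
        if (tape[i].p1 >= 0) adj[tape[i].p1] += tape[i].w1 * adj[i];
        if (tape[i].p2 >= 0) adj[tape[i].p2] += tape[i].w2 * adj[i];
    }
    return adj;
}

int main() {
    Var x = make(3.0), y = make(2.0);   // independent variables
    Var f = x * y + sin(x);             // overloads record the tape
    auto adj = gradient(f);
    // df/dx = y + cos(x), df/dy = x
    std::printf("df/dx = %f, df/dy = %f\n", adj[x.idx], adj[y.idx]);
    return 0;
}
```

    Real tools layer expression templates, preallocated tape storage, and checkpointing on top of this scheme; the sketch shows only the core record-then-reverse mechanism that the thesis's techniques make more efficient and parallelizable.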

“To be recorded” analysis in reverse-mode automatic differentiation

    The automatic generation of adjoints of mathematical models that are implemented as computer programs is receiving increased attention in the scientific and engineering communities. Reverse-mode automatic differentiation is of particular interest for large-scale optimization problems. It allows the computation of gradients at a small constant multiple of the cost of evaluating the objective function itself, independent of the number of input parameters. Source-to-source transformation tools apply simple differentiation rules to generate adjoint codes based on the adjoint version of every statement. In order to guarantee correctness, certain values that are computed and overwritten in the original program must be made available in the adjoint program. To determine them, we introduce a static dataflow analysis called “to be recorded” analysis. Possible overestimation of this set must be kept minimal to obtain efficient adjoint codes; this efficiency is essential for the applicability of source-to-source transformation tools to real-world applications.
    1 Automatically Generated Adjoints. We consider a computer program P evaluating a vector function y = F(x), where F: R^n -> R^m. Usually, P implements the mathematical model of some underlying real-world application and is referred to as the original code. The goal of automatic differentiation (AD) [3,7,15] by source transformation is to build automatically a new source program P' evaluating some derivatives of F. This is arrow AD in Figure 1. We consider a simplified mathematical model, symbolized by arrow R in Figure 1: every particular run of P on a particular set of inputs is equivalent to a simple sequence of q scalar assignments v_j = φ_j((v_k)_{k ≺ j}), j = 1, ..., q. (1)
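    As a minimal worked example of what the analysis decides (hypothetical push/pop taping, not the paper's formalism): a value is “to be recorded” only if the version about to be overwritten is still needed by some adjoint statement.

```cpp
#include <cstdio>
#include <stack>

int main() {
    std::stack<double> tape;        // holds only values flagged by TBR analysis

    // Forward sweep of the original code, with taping.
    double a = 3.0;
    tape.push(a);                   // TBR: the adjoint of a = a*a needs the old a
    a = a * a;                      // overwrites a
    double y = 5.0 * a;             // linear in a, so nothing to record

    // Reverse sweep (adjoint code).
    double yb = 1.0, ab = 0.0;      // seed the output adjoint
    ab += 5.0 * yb;                 // adjoint of y = 5.0 * a
    a = tape.top(); tape.pop();     // restore the overwritten value
    ab *= 2.0 * a;                  // adjoint of a = a * a uses the OLD a
    std::printf("dy/da = %f\n", ab);  // 5 * 2 * 3 = 30
    return 0;
}
```

    Overestimating the TBR set is safe but pushes unnecessary values, inflating tape memory and runtime; keeping the set tight is exactly what the paper's dataflow analysis is for.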