67 research outputs found

    Theory meets efficiency: a new implementation for proof trees

    The implementation of a system for manipulating proof trees shows that the time spent on basic manipulations of proof trees and inference rules, as well as the memory space used, is critical. This paper describes a new data structure, D-trees, to represent proof trees and inference rules. The structure is designed to minimize both the time spent finding the inference rules applicable to a subgoal of a proof and the time spent applying such a rule to the proof tree, while consuming very little memory. The implementation is closely related to formulas-as-types theories, which makes it easier to understand and to check.
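    As an illustration of the kind of lookup such a structure can optimize, here is a minimal, hypothetical sketch (not the actual D-tree representation; all names are invented for the example) of inference rules indexed by the head symbol of their conclusion, so that the rules applicable to a subgoal are found without scanning the whole rule set:

    ```python
    # Hypothetical sketch, NOT the D-tree structure from the paper: index
    # inference rules by the head symbol of their conclusion so that the
    # rules applicable to a subgoal can be retrieved in one lookup.
    from collections import defaultdict

    class Rule:
        def __init__(self, name, premises, conclusion_head):
            self.name = name                        # rule identifier
            self.premises = premises                # subgoals produced on application
            self.conclusion_head = conclusion_head  # head symbol the rule can prove

    class RuleIndex:
        def __init__(self):
            self._by_head = defaultdict(list)       # head symbol -> candidate rules

        def add(self, rule):
            self._by_head[rule.conclusion_head].append(rule)

        def applicable(self, goal_head):
            # one dictionary lookup instead of a linear scan over all rules
            return self._by_head[goal_head]

    index = RuleIndex()
    index.add(Rule("and_intro", ["A", "B"], "and"))
    index.add(Rule("or_intro_l", ["A"], "or"))
    print([r.name for r in index.applicable("and")])  # → ['and_intro']
    ```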

    A tactic-driven proof-tree builder

    Available in the files attached to this document.

    Partial evaluation with inference rules

    Available in the files attached to this document.

    Data-flow reversal and Garbage Collection

    Data-flow reversal is at the heart of source-transformation reverse Algorithmic Differentiation (reverse ST-AD), arguably the most efficient way to obtain gradients of numerical models. However, when the model's implementation language uses Garbage Collection (GC), for instance Java or Python, the notion of address that data-flow reversal needs disappears. Moreover, GC is asynchronous and does not appear explicitly in the source. We present an extension of the reverse ST-AD model suitable for a language with GC. We validate this approach on a Java implementation of a simple Navier-Stokes solver, and compare performance with the existing AD tools ADOL-C and Tapenade on equivalent implementations in C and Fortran.
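    To make the data-flow-reversal idea concrete, here is a minimal reverse-mode AD tape sketched in Python (a toy illustration, not the paper's extension): instead of referring to memory addresses, this version records each operation as an object on a tape and lets the reverse sweep follow object references, which is the kind of representation a GC language permits:

    ```python
    # Toy reverse-mode AD tape, for illustration only: each recorded value
    # keeps references to its parents and the local partial derivatives,
    # so the reverse sweep needs object references, not memory addresses.
    class Var:
        def __init__(self, value, tape=None, parents=(), local_grads=()):
            self.value = value
            self.grad = 0.0
            self.parents = parents          # Vars this one was computed from
            self.local_grads = local_grads  # partial derivatives w.r.t. parents
            if tape is not None:
                tape.append(self)           # record the operation on the tape

    def mul(a, b, tape):
        return Var(a.value * b.value, tape, (a, b), (b.value, a.value))

    def add(a, b, tape):
        return Var(a.value + b.value, tape, (a, b), (1.0, 1.0))

    def backward(tape, out):
        out.grad = 1.0
        for v in reversed(tape):            # reverse sweep over recorded ops
            for p, g in zip(v.parents, v.local_grads):
                p.grad += g * v.grad

    tape = []
    x = Var(3.0); y = Var(2.0)
    z = add(mul(x, y, tape), y, tape)       # z = x*y + y
    backward(tape, z)
    print(x.grad, y.grad)                   # → 2.0 4.0
    ```

    The derivatives check out by hand: dz/dx = y = 2 and dz/dy = x + 1 = 4.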

    On the correct application of AD checkpointing to adjoint MPI parallel programs

    Checkpointing is a classical technique to mitigate the overhead of adjoint Algorithmic Differentiation (AD). In the context of source-transformation AD with the Store-All approach, checkpointing reduces the peak memory consumption of the adjoint, at the cost of duplicate runs of selected pieces of the code. Checkpointing is vital for codes with long run times, which is the case for most MPI parallel applications. However, the presence of MPI communications seriously restricts the application of checkpointing. In most attempts to apply checkpointing to adjoint MPI codes (the "popular" approach), a number of restrictions apply to the form of the communications that occur in the checkpointed piece of code. In many works, these restrictions are not made explicit, and an application that does not respect them may produce erroneous code. We propose techniques to apply checkpointing to adjoint MPI codes that either do not assume these restrictions, or make them explicit so that end users can verify their applicability. These techniques rely both on adapting the snapshot mechanism of checkpointing and on modifying the behavior of communication calls. One technique is based on logging the values received, so that the duplicated communications need not take place. Although this technique completely lifts the restrictions on checkpointing MPI codes, message logging makes it more costly than the popular approach. However, we can refine this technique to blend message logging and communication duplication whenever possible, so that the refined technique now encompasses the popular approach. We provide elements of proof of correctness for our refined technique, i.e. that it preserves the semantics of the adjoint code and does not introduce deadlocks.
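    The storage/recomputation trade-off behind checkpointing can be sketched on a toy chain of computations (a schematic illustration, not Tapenade's mechanism): the checkpointed variant stores only a midpoint snapshot and re-runs each half during the reverse sweep, roughly halving peak storage at the cost of a duplicated forward run:

    ```python
    # Schematic checkpointing sketch, illustration only: compare a Store-All
    # adjoint (everything taped) with a one-level checkpointed adjoint
    # (one snapshot stored, each half of the run recomputed).
    import math

    def step(x):            # one stand-in computation step
        return math.sin(x)

    def dstep(x):           # derivative of one step w.r.t. its input
        return math.cos(x)

    def grad_store_all(x0, n):
        xs = [x0]
        for _ in range(n):              # forward sweep, storing every state
            xs.append(step(xs[-1]))
        xbar = 1.0
        for i in range(n - 1, -1, -1):  # reverse sweep uses stored values
            xbar *= dstep(xs[i])
        return xbar

    def grad_checkpointed(x0, n):
        # store only a snapshot at the midpoint; re-run each half when its
        # adjoint is needed (smaller peak storage, duplicated forward work)
        mid = n // 2
        snap = x0
        for _ in range(mid):
            snap = step(snap)           # reach the midpoint, storing nothing

        def grad_piece(x, m):           # Store-All on one (smaller) piece
            xs = [x]
            for _ in range(m):
                xs.append(step(xs[-1]))
            xbar = 1.0
            for i in range(m - 1, -1, -1):
                xbar *= dstep(xs[i])
            return xbar

        # chain rule: derivative of the second half times the first half
        return grad_piece(snap, n - mid) * grad_piece(x0, mid)

    print(abs(grad_store_all(0.5, 8) - grad_checkpointed(0.5, 8)) < 1e-12)  # → True
    ```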

    The Tapenade Automatic Differentiation tool: principles, model, and specification

    Tapenade is an Automatic Differentiation tool which, given a Fortran or C code that computes a function, creates a new code that computes its tangent or adjoint derivatives. Tapenade puts particular emphasis on adjoint differentiation, which computes gradients at a remarkably low cost. This paper describes the principles of Tapenade, a subset of the general principles of AD. We motivate and illustrate on examples the AD model of Tapenade, i.e. the structure of differentiated codes and the strategies used to make them more efficient. Along with this informal description, we formally specify this model by means of Data-Flow equations and rules of Operational Semantics, making this the reference specification of the tangent and adjoint modes of Tapenade. One benefit we expect from this formal specification is the capacity to study formally the AD model itself, especially for the adjoint mode and its sophisticated strategies. This paper also describes the architectural choices of the implementation of Tapenade. We describe the current performances of Tapenade on a set of codes that include industrial-size applications. We present the extensions of the tool that are planned in a foreseeable future, deriving from our ongoing research on AD.
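    The tangent and adjoint models can be illustrated by differentiating a two-statement toy code by hand (a schematic sketch in Python, not actual Tapenade output):

    ```python
    # Hand-written illustration of the tangent and adjoint models on the
    # toy code  v = x*y ; f = v + x  (NOT generated by Tapenade).
    def tangent(x, y, xd, yd):
        # tangent mode: each statement is augmented with its derivative,
        # propagated forward alongside the original computation
        v  = x * y
        vd = xd * y + x * yd
        f  = v + x
        fd = vd + xd
        return f, fd

    def adjoint(x, y, fb):
        # adjoint mode: run the original statements forward, then propagate
        # adjoints (the "bar" variables) through the statements in reverse
        v = x * y                # forward sweep
        f = v + x
        vb = fb                  # reverse sweep of: f = v + x
        xb = fb
        xb += vb * y             # reverse sweep of: v = x * y
        yb = vb * x
        return f, xb, yb

    # both agree with df/dx = y + 1 and df/dy = x
    print(tangent(3.0, 2.0, 1.0, 0.0)[1], adjoint(3.0, 2.0, 1.0)[1:])  # → 3.0 (3.0, 3.0)
    ```

    The tangent call computes one directional derivative per run, while a single adjoint call yields all input sensitivities at once, which is why the adjoint mode is the cheap way to obtain gradients.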

    Enabling user-driven Checkpointing strategies in Reverse-mode Automatic Differentiation

    This paper presents a new functionality of the Automatic Differentiation (AD) tool Tapenade. Tapenade generates adjoint codes, which are widely used for optimization or inverse problems. Unfortunately, for large applications the adjoint code demands a great deal of memory, because it needs to store a large set of intermediate values. To cope with that problem, Tapenade implements a sub-optimal version of a technique called checkpointing, which is a trade-off between storage and recomputation. Our long-term goal is to provide an optimal checkpointing strategy for every code, something not yet achieved by any AD tool. Towards that goal, we first introduce modifications in Tapenade that give users the choice of the checkpointing strategy most suitable for their code. Second, we conduct experiments on real-size scientific codes in order to gather hints that help us deduce an optimal checkpointing strategy. Some of the experimental results show memory savings of up to 35% and execution-time savings of up to 90%.

    Natural semantics on the computer

    Available in the files attached to this document.

    An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    We apply an optimized method to the adjoint generation of a time-evolving land-ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software and leads to smaller memory requirements and, in some cases, shorter computation times for the adjoint. The optimization is done via implementation of the algorithm of Christianson [1994] for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land-ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a "hybrid" shallow-ice / shallow-shelf approximation to the Stokes equations.
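    The idea behind reverse accumulation of a fixed point can be sketched in the scalar case (a simplified illustration in the spirit of Christianson [1994], not the paper's OpenAD implementation): rather than taping every iteration of z = phi(z, x), one iterates an adjoint fixed point using derivatives of phi evaluated only at the converged state:

    ```python
    # Scalar sketch of reverse accumulation for a fixed point, for
    # illustration only. The primal fixed point z = phi(z, x) is a Newton
    # step for sqrt(x), chosen so the exact answer is known.
    def phi(z, x):
        return 0.5 * (z + x / z)     # fixed point at z* = sqrt(x)

    def dphi_dz(z, x):
        return 0.5 * (1 - x / z**2)

    def dphi_dx(z, x):
        return 0.5 / z

    def solve(x, tol=1e-14):
        z = x
        while abs(phi(z, x) - z) > tol:
            z = phi(z, x)
        return z

    def adjoint_fixed_point(x, zbar, tol=1e-14):
        z = solve(x)                 # converged primal state only
        # adjoint fixed point: w = zbar + w * dphi/dz, iterated at z*
        w, w_prev = zbar, float("inf")
        while abs(w - w_prev) > tol:
            w_prev, w = w, zbar + w * dphi_dz(z, x)
        return w * dphi_dx(z, x)     # xbar = w * dphi/dx

    print(adjoint_fixed_point(4.0, 1.0))   # d sqrt(x)/dx at x = 4 is 0.25
    ```

    No intermediate iterate needs to be stored, which is the source of the memory savings described above; for this Newton-style map the adjoint iteration even converges immediately, since dphi/dz vanishes at the fixed point.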

    Using Automatic Differentiation to study the sensitivity of a crop model

    Automatic Differentiation methods are often applied to codes that solve partial differential equations, e.g. in the geophysical sciences, such as meteorology or oceanography, or in Computational Fluid Dynamics. In agronomy, the differentiation of crop models had never been performed, since these models are not fully deterministic but largely empirical. This study shows the feasibility of constructing, with the TAPENADE tool, the adjoint model of a crop model that is a reference in the agronomic community (STICS), and of using this adjoint to perform robust sensitivity analyses. It aims at giving a return of experience from users working on environmental topics, and presents a somewhat unusual field of application of Automatic Differentiation.
