Elimination Techniques for Algorithmic Differentiation Revisited
All known elimination techniques for (first-order) algorithmic differentiation (AD) rely on Jacobians being given for a set of relevant elemental functions. Realistically, elemental tangents and adjoints are given instead; they can be obtained by applying software tools for AD to the parts of a given modular numerical simulation. The novel generalized face elimination rule proposed in this article facilitates the rigorous exploitation of the associativity of the chain rule of differentiation at arbitrary levels of granularity, ranging from elemental scalar functions (the state of the art) to multivariate vector functions with given elemental tangents and adjoints. The implied combinatorial Generalized Face Elimination problem asks for a face elimination sequence of minimal computational cost. Simple branch-and-bound and greedy heuristic methods are employed as a baseline for further research into more powerful algorithms, motivated by promising first test results. The latter can be reproduced with the help of an open-source reference implementation.
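To make the associativity point concrete: accumulating a Jacobian J = J3 * J2 * J1 by matrix chain products costs differently depending on the bracketing. The following minimal sketch (not from the paper; the dimensions and the classical dense cost model p*q*r fused multiply-adds are illustrative assumptions) compares two association orders:

```c
#include <stdio.h>

/* Cost in fused multiply-adds of multiplying a p-by-q matrix by a
 * q-by-r matrix under the classical dense matrix-product cost model. */
static long mul_cost(long p, long q, long r) { return p * q * r; }

int main(void) {
    /* Hypothetical Jacobian chain J = J3 * J2 * J1 with
     * J1: 100x2, J2: 100x100, J3: 1x100 (few outputs, few inputs). */
    long n0 = 2, n1 = 100, n2 = 100, n3 = 1;

    /* Reverse-like bracketing: (J3 * J2) * J1 */
    long cost_rev = mul_cost(n3, n2, n1) + mul_cost(n3, n1, n0);
    /* Forward-like bracketing: J3 * (J2 * J1) */
    long cost_fwd = mul_cost(n2, n1, n0) + mul_cost(n3, n2, n0);

    printf("(J3*J2)*J1 : %ld fma\n", cost_rev); /* 10000 + 200 = 10200 */
    printf("J3*(J2*J1) : %ld fma\n", cost_fwd); /* 20000 + 200 = 20200 */
    return 0;
}
```

With a single output, the reverse-like order is cheaper here; the (generalized) face elimination problem searches over a much richer space of such orderings, of which the two bracketings above are the simplest special cases.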
Source-to-Source Automatic Differentiation of OpenMP Parallel Loops
This paper presents our work toward correct and efficient automatic differentiation of OpenMP parallel worksharing loops in forward and reverse mode. Automatic differentiation is a method to obtain gradients of numerical programs, which are crucial in optimization, uncertainty quantification, and machine learning. The computational cost of computing gradients is a common bottleneck in practice. For applications that are parallelized for multicore CPUs or GPUs using OpenMP, one also wishes to compute the gradients in parallel. We propose a framework to reason about the correctness of the generated derivative code, from which we justify our OpenMP extension to the differentiation model. We implement this model in the automatic differentiation tool Tapenade and present test cases that are differentiated following our extended differentiation procedure. The performance of the generated derivative programs in forward and reverse mode is better than that of their sequential counterparts, although our reverse mode often scales worse than the input programs.
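As a rough illustration of the idea (a hand-written sketch under simple assumptions, not Tapenade output; the example function and the names f and f_d are hypothetical), when the iterations of a worksharing loop are independent, its forward-mode tangent is also a loop with independent iterations and can carry the same pragma:

```c
/* Compile with: cc -fopenmp example.c -lm */
#include <math.h>
#include <stdio.h>

/* Primal: y[i] = sin(x[i]) * x[i], a data-parallel worksharing loop. */
void f(int n, const double *x, double *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = sin(x[i]) * x[i];
}

/* Forward-mode tangent of f: each iteration touches only index i,
 * so the derivative loop inherits the primal's parallelism. */
void f_d(int n, const double *x, const double *xd,
         double *y, double *yd) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        /* Product rule: d/dx (sin(x)*x) = cos(x)*x + sin(x). */
        yd[i] = (cos(x[i]) * x[i] + sin(x[i])) * xd[i];
        y[i]  = sin(x[i]) * x[i];
    }
}

int main(void) {
    enum { N = 4 };
    double x[N] = {0.1, 0.2, 0.3, 0.4}, xd[N] = {1, 1, 1, 1};
    double y[N], yd[N];
    f_d(N, x, xd, y, yd);
    for (int i = 0; i < N; ++i)
        printf("y[%d] = %g, yd[%d] = %g\n", i, y[i], i, yd[i]);
    return 0;
}
```

Reverse mode is harder precisely because the adjoint loop reverses the data flow, which can turn independent reads into concurrent writes; reasoning about when the pragma may be kept is the kind of question the paper's correctness framework addresses.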