Efficient computation of structured gradients using automatic differentiation

Abstract

The advent of robust automatic differentiation tools is an exciting and important development in scientific computing. It is particularly noteworthy that the gradient of a scalar-valued function of many variables can be computed with essentially the same time complexity as required to evaluate the function itself. This is true, in theory, when the "reverse mode" of automatic differentiation is used (whereas the "forward mode" introduces an additional factor proportional to the problem dimension). However, in practice, performance on large problems can be significantly (and unacceptably) worse than predicted. In this paper we show that when natural structure is exploited, fast gradient computation can be recovered, even for high-dimensional problems.
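The complexity claim above — that one reverse sweep yields all partial derivatives at a cost comparable to a single function evaluation — can be illustrated with a minimal sketch. The following is not the paper's implementation, only an illustrative toy reverse-mode tape (class and function names `Var` and `backward` are invented for this example): each arithmetic operation records its local derivatives, and a single backward pass over the recorded graph accumulates the full gradient, regardless of the number of inputs.

```python
class Var:
    """Minimal reverse-mode AD node: a value plus links to its parents,
    each annotated with the local partial derivative."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # sequence of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    __rmul__ = __mul__


def backward(output):
    """One reverse sweep: accumulate d(output)/d(node) into node.grad
    for every node in the computation graph."""
    output.grad = 1.0
    # Topological order (parents before consumers) via depth-first search.
    order, seen = [], set()

    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)

    visit(output)
    # Reversed order processes each node after all of its consumers,
    # so v.grad is complete before it is distributed to v's parents.
    for v in reversed(order):
        for parent, local_grad in v.parents:
            parent.grad += v.grad * local_grad


# f(x) = x0*x1 + x1*x2 + x2*x3: four inputs, yet one reverse sweep
# (one pass over the recorded operations) yields all four partials.
xs = [Var(float(i + 1)) for i in range(4)]        # x = [1, 2, 3, 4]
y = sum(xs[i] * xs[i + 1] for i in range(3))      # f = 1*2 + 2*3 + 3*4 = 20
backward(y)
grads = [x.grad for x in xs]                      # [x1, x0+x2, x1+x3, x2]
```

The key point the abstract makes is visible here: `backward` touches each recorded operation once, so its cost is proportional to the cost of evaluating `f`, independent of the number of inputs — whereas forward mode would require one sweep per input variable.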
