The Numerical Algorithms Group (NAG) works closely with Uwe Naumann to help users take advantage of Algorithmic Differentiation methods. Algorithmic (also known as Automatic) Differentiation (AD) is a method for computing sensitivities of the outputs of a numerical program with respect to its inputs, both accurately (to machine precision) and efficiently. The two basic modes of AD, forward and reverse, and combinations thereof yield products of a vector with the Jacobian, its transpose, or the Hessian, respectively.

Numerical simulation plays a central role in computational finance as well as in computational science and engineering. Gradients, (projected) Jacobians, (projected) Hessians, or even higher-order sensitivities are required in order to make the highly desirable transition from pure simulation to optimization of the numerical model or its parameters. Such quantities can be computed to machine accuracy by AD, as demonstrated by a large number of successful applications [1, 2, 3].

AD deals with implementations of multivariate nonlinear vector functions F: ℝⁿ → ℝᵐ as computer programs. Let y = F(x), and denote the Jacobian by F′ ≡ F′(x) and the Hessian by F′′ ≡ F′′(x). The forward mode of AD transforms F into the tangent-linear model Ḟ(x, ẋ, ẏ), where ẏ = F′ · ẋ. (In the original notation, an overset downarrow marks an input and an underset downarrow marks an output; these markers are omitted here.) The columns of F′ can be computed by letting ẋ range over the Cartesian basis vectors of ℝⁿ. The computational complexity of accumulating the whole Jacobian using a tangent-linear model is therefore of the same order, O(n), as that of numerical approximation using finite-difference quotients ("bumping"). A single directional first-order sensitivity (e.g. a projection of Delta in some direction) can be obtained at roughly two to three times the cost of evaluating F. The reverse mode of AD transforms F into the adjoint model F̄(x, x̄, ȳ), where x̄ = F′ᵀ · ȳ.
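The two modes can be illustrated with a minimal hand-written sketch. The function F: ℝ² → ℝ², F(x) = (x₀·x₁, sin x₀), the function and helper names, and the hard-coded derivative rules are assumptions chosen for this example; real AD tools generate such code automatically. The tangent-linear model returns ẏ = F′ · ẋ (one Jacobian column per call when ẋ is a basis vector), while the adjoint model returns x̄ = F′ᵀ · ȳ (one Jacobian row per call when ȳ is a basis vector).

```python
import math

def F(x):
    # Illustrative primal function F: R^2 -> R^2 (an assumption for this sketch)
    return [x[0] * x[1], math.sin(x[0])]

def F_tangent(x, x_dot):
    # Forward (tangent-linear) mode: propagate (value, derivative) pairs
    # alongside the primal computation, yielding y_dot = F'(x) @ x_dot.
    v0, d0 = x[0], x_dot[0]
    v1, d1 = x[1], x_dot[1]
    y0, y0_dot = v0 * v1, d0 * v1 + v0 * d1       # product rule
    y1, y1_dot = math.sin(v0), math.cos(v0) * d0  # chain rule
    return [y0, y1], [y0_dot, y1_dot]

def F_adjoint(x, y_bar):
    # Reverse (adjoint) mode: propagate adjoints through the recorded
    # operations in reverse order, yielding x_bar = F'(x)^T @ y_bar.
    v0, v1 = x
    x_bar = [0.0, 0.0]
    # adjoint of y0 = v0 * v1
    x_bar[0] += y_bar[0] * v1
    x_bar[1] += y_bar[0] * v0
    # adjoint of y1 = sin(v0)
    x_bar[0] += y_bar[1] * math.cos(v0)
    return x_bar

x = [2.0, 3.0]
# One tangent call with x_dot = e0 gives the first column of F'(x):
y, y_dot = F_tangent(x, [1.0, 0.0])
print(y_dot)   # [3.0, cos(2.0)], i.e. [x1, cos(x0)]
# One adjoint call with y_bar = e1 gives the second row of F'(x):
x_bar = F_adjoint(x, [0.0, 1.0])
print(x_bar)   # [cos(2.0), 0.0]
```

Accumulating the full Jacobian thus needs n tangent calls or m adjoint calls, which is why the reverse mode is preferred when m ≪ n (e.g. for gradients of a scalar objective).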