24 research outputs found

    Adjoint Venture: Fast Greeks with Adjoint Algorithmic Differentiation

    This dissertation discusses the adjoint approach to solving affine recursion problems (ARPs) in the context of computing sensitivities of financial instruments. It is shown how, by moving from an intuitive 'forward' approach to solving a recursion to an 'adjoint' approach, one can dramatically increase the computational efficiency of algorithms that compute sensitivities via the pathwise derivatives approach in a Monte Carlo setting. Examples are presented within the context of the Libor Market Model. These ideas are then extended to the paradigm of Adjoint Algorithmic Differentiation, and it is shown how sophisticated techniques within this space can further improve the ease of use and efficiency of sensitivity calculations.
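    The forward-versus-adjoint distinction the abstract describes can be sketched in a few lines. The following is an illustrative Python example, not taken from the dissertation: for an affine recursion x_{k+1} = A_k x_k + b_k with scalar payoff P = c·x_N, the forward approach needs one full sweep per input direction, while a single adjoint (backward) sweep yields the entire gradient dP/dx_0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 6
A = [rng.standard_normal((n, n)) * 0.3 for _ in range(N)]  # recursion matrices A_k
b = [rng.standard_normal(n) for _ in range(N)]             # affine terms b_k
c = rng.standard_normal(n)                                 # payoff weights

# Forward ('tangent') approach: one full sweep per basis direction of x_0.
grad_fwd = np.empty(n)
for i in range(n):
    dx = np.eye(n)[i]
    for k in range(N):
        dx = A[k] @ dx          # tangents are unaffected by the constants b_k
    grad_fwd[i] = c @ dx

# Adjoint approach: a single backward sweep seeded with the payoff weights.
xbar = c.copy()
for k in reversed(range(N)):
    xbar = A[k].T @ xbar
grad_adj = xbar

# Both give dP/dx_0, but the adjoint sweep costs a factor of n less work.
assert np.allclose(grad_fwd, grad_adj)
```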

    An extended collection of matrix derivative results for forward and reverse mode automatic differentiation

    This paper collects together a number of matrix derivative results which are very useful in forward and reverse mode algorithmic differentiation (AD). It highlights in particular the remarkable contribution of a 1948 paper by Dwyer and Macphail, which derives the linear and adjoint sensitivities of a matrix product, inverse and determinant, together with a number of related results motivated by applications in multivariate analysis in statistics. This is an extended version of a paper which will appear in the proceedings of AD2008, the 5th International Conference on Automatic Differentiation.
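    As a quick illustration (not taken from the paper), two of the Dwyer-Macphail adjoint identities it cites can be checked numerically against finite differences:

```python
# Adjoint identities checked below:
#   y = det(A)   =>  Abar = ybar * det(A) * inv(A).T
#   Y = inv(A)   =>  Abar = -inv(A).T @ Ybar @ inv(A).T
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # keep A well conditioned
h = 1e-6

def fd_grad(f, A):
    """Entrywise central finite differences of a scalar function of A."""
    G = np.zeros_like(A)
    for i in range(3):
        for j in range(3):
            E = np.zeros_like(A); E[i, j] = h
            G[i, j] = (f(A + E) - f(A - E)) / (2 * h)
    return G

# Determinant, with ybar = 1.
adj_det = np.linalg.det(A) * np.linalg.inv(A).T
assert np.allclose(fd_grad(np.linalg.det, A), adj_det, atol=1e-5)

# Inverse, contracted with Ybar of all ones (i.e. the scalar sum of inv(A)).
Ybar = np.ones((3, 3))
inv = np.linalg.inv(A)
adj_inv = -inv.T @ Ybar @ inv.T
assert np.allclose(fd_grad(lambda M: np.linalg.inv(M).sum(), A), adj_inv, atol=1e-5)
```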

    Algorithmic Sensitivity Analysis in the Climate Model Climber 2

    This report summarizes the results of the project Algosense, performed at the Institut für Mathematik, Technische Universität Berlin, from July 2001 to June 2002. The aim of the project was to analyze the applicability of tools for Algorithmic (or Automatic) Differentiation (AD) to two climate models developed at the Potsdam Institute for Climate Impact Research (PIK): the so-called Box Model, a small model of the North Atlantic stream, and the more complex model Climber 2, a so-called model of intermediate complexity consisting of atmosphere, ocean, ice, and vegetation components. The applications considered range from pure sensitivity calculations through uncertainty estimation to optimization runs; first and higher order derivatives are of interest. The outline of this report is the following: the next section describes the basic tools and techniques of Algorithmic Differentiation. The following two sections deal with the two models studied in this project; each briefly introduces the corresponding model and the features of it important for Algorithmic Differentiation, then presents the AD tools used and the technical details of the AD process, and finally gives numerical results. Particular emphasis is placed on the code preparations necessary to apply the AD tools. The last section gives a summary and discusses the perspectives and opportunities of applying Algorithmic Differentiation to these and possibly other climate models.

    Multicomplex number class for Matlab, with a focus on the accurate calculation of small imaginary terms for multicomplex step sensitivity calculations

    A Matlab class for multicomplex numbers was developed with particular attention paid to the robust and accurate handling of small imaginary components. This is primarily to allow the class to be used to obtain n-order derivative information using the multicomplex step method for, amongst other applications, gradient-based optimization and optimal control problems. The algebra of multicomplex numbers is described, as is its accurate computational implementation, considering small term approximations and the identification of principal values. The implementation of the method in Matlab is studied, and a class definition is constructed. This new class definition enables Matlab to handle n-order multicomplex numbers and perform arithmetic functions. It was found that with this method, the step size could be arbitrarily decreased toward machine precision. Use of the method to obtain up to the 7th derivative of functions is presented, as is timing data demonstrating the efficiency of the class implementation.
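    The ordinary first-order complex-step method that the multicomplex approach generalises is easy to sketch (the paper itself provides a Matlab class; the Python below is only an illustration of the underlying idea): because evaluating f(x + ih) involves no subtraction of nearly equal values, the step h can be made arbitrarily small without loss of accuracy.

```python
import cmath
import math

def complex_step(f, x, h=1e-200):
    """First derivative of an analytic f via the complex-step formula."""
    return f(complex(x, h)).imag / h

def f(z):
    return cmath.exp(z) / cmath.sqrt(z)

x = 1.5
exact = math.exp(x) * (x - 0.5) / x**1.5   # analytic derivative of e^x / sqrt(x)
approx = complex_step(f, x)

# No cancellation: even h = 1e-200 gives full double-precision accuracy.
assert abs(approx - exact) < 1e-13 * abs(exact)
```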

    Applications de la différentiation automatique à la programmation non linéaire

    In this thesis, we first introduce the concepts and techniques of automatic differentiation, an accurate and efficient method for computing the derivatives of a function ℝⁿ → ℝᵐ. We present the design and implementation, in the Scilab environment, of a modular software tool based on these techniques. We then present a numerical difficulty affecting descent methods for the unconstrained optimization of differentiable functions. We formulate this difficulty in terms of cancellation errors arising in the computation of finite differences of the objective function. Finally, we propose a solution based on automatic differentiation techniques, consisting of an automatic finite-difference calculator. The performance of this solution is illustrated with numerical experiments on a bank of test problems, carried out with publicly available optimization codes. We complete the study of automatic finite differences by empirically comparing their use for computing directional derivatives with traditional derivative computation by automatic differentiation.
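    The cancellation problem the thesis addresses is easy to reproduce. The sketch below (illustrative only; the thesis's tool is built in Scilab, and this substitutes a minimal dual-number class for its AD machinery) shows a forward difference collapsing entirely at an over-small step, while a forward-mode AD derivative is exact to rounding at any "step":

```python
import math

class Dual:
    """Minimal forward-mode AD value a + b*eps (eps^2 = 0); illustration only."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def sin(self):
        return Dual(math.sin(self.a), math.cos(self.a) * self.b)

def f(x):
    return math.sin(x) * x

x = 1.0
exact = math.sin(x) + x * math.cos(x)

# Forward difference with an over-small step: 1.0 + 1e-16 rounds back to 1.0
# in double precision, so the numerator cancels completely and fd is zero.
h = 1e-16
fd = (f(x + h) - f(x)) / h

# Dual-number derivative: exact to rounding, independent of any step size.
d = Dual(x, 1.0)
ad = (d.sin() * d).b

assert fd == 0.0
assert abs(ad - exact) < 1e-12
```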

    Efficient Jacobian Determination by Structure-Revealing Automatic Differentiation

    This thesis is concerned with the efficient computation of Jacobian matrices of nonlinear vector maps using automatic differentiation (AD). Specifically, we propose the use of two directed edge separator methods, the weighted minimum separator and natural order separator methods, to exploit the structure of the computational graph of the nonlinear system. This allows for the efficient determination of the Jacobian matrix using AD software. We will illustrate the promise of this approach with computational experiments.

    The efficient application of automatic differentiation for computing gradients in financial applications

    Automatic differentiation (AD) is a practical field of computational mathematics that is of growing interest across many industries, including finance. The use of reverse mode AD is particularly interesting, since it allows gradients to be computed in time comparable to that required to evaluate the objective function itself. However, it can require excessive memory. This memory requirement can make reverse-mode AD infeasible in some cases (depending on the function complexity and available RAM) and slower than expected in others, due to the use of secondary memory and non-localized memory references. However, it turns out that many complex (expensive) functions in finance exhibit a natural substitution structure. In this paper, we illustrate this structure in computational finance as it arises in calibration and inverse problems, and in determining Greeks in a Monte Carlo setting. In these cases, the required memory is a small fraction of that required by reverse-mode AD, but the computing time complexity is the same. In fact, our results indicate a significant realized speedup compared with straight reverse-mode AD.
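    The memory cost the abstract refers to can be made concrete with a toy tape-based reverse-mode AD (illustrative only; the paper's substitution-structure technique is not reproduced here): the tape stores one record per elementary operation of the forward evaluation, and a single backward sweep over it yields the full gradient.

```python
import math

class Tape:
    def __init__(self):
        self.nodes = []  # one record per operation: [(parent index, local partial), ...]
    def record(self, parents):
        self.nodes.append(list(parents))
        return len(self.nodes) - 1

class Var:
    def __init__(self, value, tape, parents=()):
        self.value, self.tape = value, tape
        self.i = tape.record(parents)
    def __add__(self, o):
        return Var(self.value + o.value, self.tape, [(self.i, 1.0), (o.i, 1.0)])
    def __mul__(self, o):
        return Var(self.value * o.value, self.tape, [(self.i, o.value), (o.i, self.value)])
    def sin(self):
        return Var(math.sin(self.value), self.tape, [(self.i, math.cos(self.value))])

def gradient(out, tape):
    """Single backward sweep: propagate adjoints from the output to all inputs."""
    adj = [0.0] * len(tape.nodes)
    adj[out.i] = 1.0
    for i in reversed(range(len(tape.nodes))):
        for j, d in tape.nodes[i]:
            adj[j] += adj[i] * d
    return adj

t = Tape()
x = Var(0.7, t)
y = Var(1.3, t)
z = (x * y).sin() + x * x            # f(x, y) = sin(x*y) + x^2
adj = gradient(z, t)
gx, gy = adj[x.i], adj[y.i]

assert len(t.nodes) == 6             # the tape grows with every operation recorded
exact_gx = math.cos(0.7 * 1.3) * 1.3 + 2 * 0.7
exact_gy = math.cos(0.7 * 1.3) * 0.7
assert abs(gx - exact_gx) < 1e-12 and abs(gy - exact_gy) < 1e-12
```

    Note how the whole gradient comes from one backward pass, at the price of retaining every intermediate on the tape.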