
    Algorithmic Differentiation Through Automatic Graph Elimination Ordering (ADTAGEO)

    Algorithmic Differentiation Through Automatic Graph Elimination Ordering (ADTAGEO) is based on the principle of Instant Elimination: at runtime we dynamically maintain a DAG representing only the active variables that are alive at any given time. Whenever an active variable is deallocated or its value is overwritten, the corresponding vertex in the Live-DAG is eliminated immediately by the well-known vertex elimination rule [1]. Consequently, the total memory requirement equals that of the sparse forward mode. Assuming that local variables are destructed in the opposite order of their construction (as in C++), a single-assignment code is in effect differentiated in reverse mode. If compiler-generated temporaries are destroyed in reverse order too, then Instant Elimination naturally yields the statement-level reverse mode of ADIFOR [2]. The user determines the elimination order intentionally (or unintentionally) through the order in which variables are declared, which makes hybrid modes of AD possible by combining forward- and reverse-differentiated parts. By annotating the Live-DAG with local Hessians and applying second-order elimination rules, Hessian-vector products can be computed efficiently, since the annotated Live-DAG stores only one half of the symmetric Hessian graph (as suggested in [1]). Nested automatic differentiation is achieved easily by subsequent propagations, since sensitivities between live variables can be obtained at any point in time within the Live-DAG. The concept of maintaining a Live-DAG fits naturally with operator overloading for classes, making it a very natural example of object-oriented programming. A proof-of-concept implementation in C++ is available (contact the first author).
    References
    1. Griewank, A.: Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM (2000)
    2. Bischof, C.H., Carle, A., Khademi, P., Mauer, A.: ADIFOR 2.0: Automatic differentiation of Fortran 77 programs. IEEE Computational Science & Engineering 3 (1996) 18-32
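    As a rough illustration of the mechanism described above (not the authors' implementation), the following C++ sketch maintains a global Live-DAG and applies the vertex elimination rule in the destructor of an overloaded class. All names (LiveDag, Active) and the restriction to a single operator are our own simplifications.

```cpp
#include <cstdio>
#include <map>

struct LiveDag {
    std::map<int, std::map<int, double>> pred;  // pred[v][u] = local partial dv/du
    std::map<int, std::map<int, double>> succ;  // succ[u][v] = same edge, forward view
    int next = 0;
    int addVertex() { return next++; }
    void addEdge(int u, int v, double d) { pred[v][u] += d; succ[u][v] += d; }
    // Vertex elimination rule: link every predecessor of v to every successor
    // of v, accumulating the product of the local partials, then drop v.
    void eliminate(int v) {
        for (auto& [u, dvu] : pred[v])
            for (auto& [w, dwv] : succ[v])
                addEdge(u, w, dwv * dvu);
        for (auto& [u, d] : pred[v]) succ[u].erase(v);
        for (auto& [w, d] : succ[v]) pred[w].erase(v);
        pred.erase(v);
        succ.erase(v);
    }
};

LiveDag dag;  // the Live-DAG of the currently live active variables

struct Active {
    int v;
    double val;
    explicit Active(double x = 0.0) : v(dag.addVertex()), val(x) {}
    Active(const Active& a) : v(dag.addVertex()), val(a.val) { dag.addEdge(a.v, v, 1.0); }
    ~Active() { dag.eliminate(v); }  // Instant Elimination on destruction
    Active operator*(const Active& b) const {
        Active r(val * b.val);
        dag.addEdge(v, r.v, b.val);   // d(a*b)/da = b
        dag.addEdge(b.v, r.v, val);   // d(a*b)/db = a
        return r;
    }
};

int main() {
    Active x(3.0), y(2.0);
    {
        Active z = x * y;
        // while z is alive, dz/dx sits on the edge x -> z of the Live-DAG
        std::printf("dz/dx = %g\n", dag.succ[x.v][z.v]);  // prints 2
    }  // z goes out of scope here: its vertex is eliminated immediately
}
```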

    Applicability of Quasi-Monte Carlo for lattice systems

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
    Comment: on occasion of the 31st International Symposium on Lattice Field Theory - LATTICE 2013, July 29 - August 3, 2013, Mainz, Germany, 7 pages, 4 figures
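    A minimal sketch of the scaling claim on a toy problem (not the lattice systems of the paper): plain Monte Carlo versus a base-2 van der Corput quasi-Monte Carlo sequence estimating $\int_0^1 x^2\,dx = 1/3$. The helper vdc() and all parameters are illustrative assumptions.

```cpp
#include <cstdio>
#include <cmath>
#include <random>

// Radical-inverse (van der Corput) sequence in base 2: a 1D low-discrepancy set.
double vdc(unsigned n) {
    double r = 0.0, base = 0.5;
    for (; n; n >>= 1, base *= 0.5)
        if (n & 1) r += base;
    return r;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (unsigned N = 1u << 8; N <= 1u << 20; N <<= 4) {
        double mc = 0.0, qmc = 0.0;
        for (unsigned i = 0; i < N; ++i) {
            double x = u(gen), q = vdc(i);
            mc += x * x;
            qmc += q * q;
        }
        // The MC error should shrink roughly like N^{-1/2}, the QMC error like N^{-1}.
        std::printf("N=%8u  MC err=%.2e  QMC err=%.2e\n",
                    N, std::fabs(mc / N - 1.0 / 3.0), std::fabs(qmc / N - 1.0 / 3.0));
    }
}
```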

    Cheap Newton steps for optimal control problems: automatic differentiation and Pantoja's algorithm

    Original article can be found at: http://www.informaworld.com/smpp/title~content=t713645924~db=all Copyright Taylor and Francis / Informa.
    In this paper we discuss Pantoja's construction of the Newton direction for discrete time optimal control problems. We show that automatic differentiation (AD) techniques can be used to calculate the Newton direction accurately, without requiring extensive re-writing of user code, and at a surprisingly low computational cost: for an N-step problem with p control variables and q state variables at each step, the worst-case cost is 6(p + q + 1) times the computational cost of a single target function evaluation, independent of N, together with at most $p^3/3 + p^2(q + 1) + 2p(q + 1)^2 + (q + 1)^3$, i.e. less than $(p + q + 1)^3$, floating point multiply-and-add operations per time step. These costs may be considerably reduced if there is significant structural sparsity in the problem dynamics. The systematic use of checkpointing roughly doubles the operation counts, but reduces the total space cost to the order of 4pN floating point stores. A naive approach to finding the Newton step would require the solution of an Np × Np system of equations together with a number of function evaluations proportional to Np, so this approach to Pantoja's construction is extremely attractive, especially if q is very small relative to N. Straightforward modifications of the AD algorithms proposed here can be used to implement other discrete time optimal control solution techniques, such as differential dynamic programming (DDP), which use state-control feedback. The same techniques can also be used to determine with certainty, at the cost of a single Newton direction calculation, whether or not the Hessian of the target function is sufficiently positive definite at a point of interest. This allows computationally cheap post-hoc verification that a second-order minimum has been reached to a given accuracy, regardless of what method has been used to obtain it.
    Peer reviewed
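    As a quick back-of-the-envelope check of the counts quoted above, the sketch below evaluates the abstract's own per-step formula against the $(p + q + 1)^3$ bound and a naive dense Np × Np solve (taken as roughly $(Np)^3/3$ operations); the values of p, q and N are made up.

```cpp
#include <cstdio>

int main() {
    double p = 4, q = 10, N = 1000;  // controls, states, time steps (illustrative)
    // Worst-case per-step multiply-and-adds for Pantoja's construction via AD:
    double perStep = p * p * p / 3 + p * p * (q + 1)
                   + 2 * p * (q + 1) * (q + 1) + (q + 1) * (q + 1) * (q + 1);
    double bound = (p + q + 1) * (p + q + 1) * (p + q + 1);  // the looser bound
    // Naive Newton step: solve a dense Np x Np system, ~ (Np)^3 / 3 operations.
    double naive = (N * p) * (N * p) * (N * p) / 3;
    std::printf("per step: %.3g (< bound %.3g), total: %.3g vs naive: %.3g\n",
                perStep, bound, perStep * N, naive);
}
```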

    Analysis and modification of Newton's method at singularities

    For systems of nonlinear equations $f = 0$ with singular Jacobian $\nabla f(x^*)$ at some solution $x^* \in f^{-1}(0)$, the behaviour of Newton's method is analysed. Under a certain regularity condition, Q-linear convergence is shown to be almost sure from all initial points that are sufficiently close to $x^*$. The possibility of significantly better performance by other nonlinear equation solvers is ruled out. Instead, convergence acceleration is achieved by variation of the stepsize or by Richardson extrapolation. If the Jacobian $\nabla f$ of a possibly underdetermined system is known to have a nullspace of a certain dimension at a solution of interest, an overdetermined system based on the QR or LU decomposition of $\nabla f$ is used to obtain superlinear convergence.
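    A small numerical illustration of stepsize-based acceleration at a singular root (our own toy example, not the paper's general construction): at a root of multiplicity m, plain Newton converges only Q-linearly with factor (m - 1)/m, while scaling the step by m restores superlinear convergence. The test function, with a double root at x = 0, is an assumption for demonstration.

```cpp
#include <cstdio>
#include <cmath>

// f(x) = (e^x - 1)^2 has a root of multiplicity 2 at x = 0,
// so its Jacobian (here: derivative) is singular at the solution.
double f (double x) { double e = std::expm1(x); return e * e; }
double fp(double x) { double e = std::expm1(x); return 2.0 * e * std::exp(x); }

int main() {
    double x1 = 1.0, x2 = 1.0;
    for (int k = 1; k <= 6; ++k) {
        x1 -= f(x1) / fp(x1);        // plain Newton: error roughly halves per step
        x2 -= 2.0 * f(x2) / fp(x2);  // stepsize 2 (the multiplicity): superlinear
        std::printf("k=%d  plain=%+.3e  stepsize-2=%+.3e\n", k, x1, x2);
    }
}
```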

    Time-lag in Derivative Convergence for Fixed Point Iterations

    In an earlier study it was proven, and experimentally confirmed on a 2D Euler code, that fixed point iterations can be differentiated to yield first and second order derivatives of implicit functions that are defined by state equations. It was also asserted that the resulting approximations for reduced gradients and Hessians converge with the same R-factor as the underlying fixed point iteration. A closer look now reveals that these derivative values nevertheless lag behind the function values, in that the ratios of the corresponding errors grow towards infinity, proportionally to the iteration counter or its square. This rather subtle effect is caused mathematically by the occurrence of nontrivial Jordan blocks associated with degenerate eigenvalues. We elaborate the theory and report its confirmation through numerical experiments.
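    The effect can be seen on a scalar toy iteration (our own example, not the paper's 2D Euler setup): differentiating $x_{k+1} = \varphi(x_k, t)$ with respect to $t$ gives the piggyback recurrence $\dot{x}_{k+1} = \varphi_x(x_k)\,\dot{x}_k + \varphi_t(x_k)$, and the coupling of the two error recurrences produces exactly the Jordan-block growth: the derivative error exceeds the function error by a factor roughly proportional to k.

```cpp
#include <cstdio>
#include <cmath>

// A made-up contraction: x_{k+1} = t * cos(x_k), fixed point near 0.739 for t = 1.
double phi  (double x, double t) { return t * std::cos(x); }
double phi_x(double x, double t) { return -t * std::sin(x); }  // d phi / d x
double phi_t(double x)           { return std::cos(x); }       // d phi / d t

int main() {
    const double t = 1.0;
    // Reference limits x* and d* from a long run of both recurrences.
    double xs = 0.5, ds = 0.0;
    for (int k = 0; k < 200; ++k) { ds = phi_x(xs, t) * ds + phi_t(xs); xs = phi(xs, t); }

    double x = 0.5, d = 0.0;
    for (int k = 1; k <= 50; ++k) {
        d = phi_x(x, t) * d + phi_t(x);  // d_{k+1} = phi_x(x_k) d_k + phi_t(x_k)
        x = phi(x, t);
        if (k % 10 == 0)                 // the error ratio grows roughly like k
            std::printf("k=%2d  |x-x*|=%.2e  |d-d*|=%.2e  ratio=%.1f\n",
                        k, std::fabs(x - xs), std::fabs(d - ds),
                        std::fabs(d - ds) / std::fabs(x - xs));
    }
}
```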