
    Newton's Method and Differential Dynamic Programming for Unconstrained Nonlinear Dynamic Games

    Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, energy systems, and other domains. However, compared to single-agent control problems, the computational methods for dynamic games are relatively limited. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend a recursive Newton's algorithm and the popular differential dynamic programming (DDP) for single-agent optimal control to the case of full-information non-zero-sum dynamic games. In the single-agent case, the convergence of DDP is proved by comparison with Newton's method, which converges locally at a quadratic rate. We show that the iterates of Newton's method and DDP are sufficiently close for DDP to inherit the quadratic convergence rate of Newton's method. We also prove that both methods result in an open-loop Nash equilibrium and a local feedback $O(\epsilon^2)$-Nash equilibrium. Numerical examples are provided.

    Comment: 19 pages. A shortened version was accepted at CDC 2019. arXiv admin note: substantial text overlap with arXiv:1809.0830
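    The abstract's convergence claim rests on Newton's method's local quadratic rate. Below is a minimal sketch (not from the paper) of Newton's method for unconstrained minimization of a single agent's cost, illustrating that rate; the objective `f`, the helper `newton_minimize`, and the tolerances are hypothetical stand-ins and do not reproduce the paper's multi-agent game setting or its DDP recursion.

    ```python
    # Minimal sketch: Newton iteration x <- x - H(x)^{-1} g(x) for an
    # unconstrained minimization problem, showing local quadratic convergence.
    import numpy as np

    def newton_minimize(grad, hess, x0, tol=1e-12, max_iter=50):
        """Run Newton steps until the gradient norm falls below tol."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - np.linalg.solve(hess(x), g)  # solve H dx = g, then step
        return x

    # Hypothetical strictly convex cost: f(x) = sum(x_i^4) + 0.5 * ||x||^2.
    grad = lambda x: 4 * x**3 + x
    hess = lambda x: np.diag(12 * x**2 + 1)

    x_star = newton_minimize(grad, hess, x0=[0.8, -0.5])
    print(x_star)  # converges to the minimizer at the origin in a few iterations
    ```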