
    Neural network optimal control for nonlinear system based on zero-sum differential game

    In this paper, for a class of complex nonlinear system control problems, the constrained optimal control problem is solved for nonlinear systems with unknown system functions and unknown time-varying disturbances, based on two-person zero-sum game theory combined with the idea of approximate dynamic programming (ADP). To obtain the approximate optimal solution of the zero-sum game, multilayer neural networks are used to approximate the critic network, the action network, and the disturbance network of the ADP scheme, respectively. Lyapunov stability theory is used to prove uniform convergence, and the system's control output converges to a neighborhood of the target reference value. Finally, a simulation example verifies the effectiveness of the algorithm.
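    The three-network structure this abstract describes is standard in ADP for zero-sum games: a critic approximates the value function V(x), an actor the minimizing control u(x), and a third network the maximizing disturbance w(x), all trained against the Hamilton-Jacobi-Isaacs (HJI) residual. Below is a minimal sketch of that general scheme, not the paper's algorithm; the scalar dynamics, the cost x^2 + u^2 - gamma^2 w^2, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

GAMMA2 = 4.0  # assumed disturbance-attenuation level gamma^2


def mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))


critic, actor, disturber = mlp(), mlp(), mlp()  # V(x), u(x), w(x)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
dist_opt = torch.optim.Adam(disturber.parameters(), lr=1e-3)


def hamiltonian(x):
    """HJI residual dV/dx*(f+u+w) + x^2 + u^2 - gamma^2*w^2 at states x."""
    x = x.detach().clone().requires_grad_(True)
    u, w = actor(x), disturber(x)
    f = -x + 0.1 * x ** 3                 # hypothetical known drift f(x)
    V = critic(x)
    (dVdx,) = torch.autograd.grad(V.sum(), x, create_graph=True)
    return dVdx * (f + u + w) + x ** 2 + u ** 2 - GAMMA2 * w ** 2


for step in range(3000):
    x = 4.0 * torch.rand(128, 1) - 2.0    # sample training states in [-2, 2]
    # Critic: drive the HJI residual toward zero (least squares).
    critic_opt.zero_grad()
    (hamiltonian(x) ** 2).mean().backward()
    critic_opt.step()
    # Actor: the minimizing player of the zero-sum Hamiltonian.
    actor_opt.zero_grad()
    hamiltonian(x).mean().backward()
    actor_opt.step()
    # Disturbance network: the maximizing player (note the sign flip).
    dist_opt.zero_grad()
    (-hamiltonian(x).mean()).backward()
    dist_opt.step()
```

    The actor and disturbance networks take gradient steps on the same Hamiltonian with opposite signs, which is exactly the zero-sum structure the abstract describes; the critic instead minimizes the squared residual so that V approaches the value function of the game.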

    Minimax Iterative Dynamic Game: Application to Nonlinear Robot Control Tasks

    Multistage decision policies provide useful control strategies in high-dimensional state spaces, particularly in complex control tasks. However, they exhibit weak performance guarantees in the presence of disturbances, model mismatch, or model uncertainties, and this brittleness limits their use in high-risk scenarios. We present how to quantify the sensitivity of such policies in order to characterize their robustness capacity. We test the quantification hypothesis on a carefully designed deep neural network policy; we then propose a minimax iterative dynamic game (iDG) framework for improving policy robustness in the presence of adversarial disturbances. We evaluate the iDG framework on a mecanum-wheeled robot, whose goal is to find a locally robust optimal multistage policy that achieves a given goal-reaching task. The algorithm is simple and adaptable for designing meta-learning/deep policies that are robust against disturbances, model mismatch, or model uncertainties, up to a disturbance bound. Videos of the results are on the author's website, http://ecs.utdallas.edu/~opo140030/iros18/iros2018.html; the code for reproducing our experiments is on GitHub, https://github.com/lakehanne/youbot/tree/rilqg; and a self-contained environment for reproducing our results is on Docker Hub, https://hub.docker.com/r/lakehanne/youbotbuntu14/
    Comment: 2018 International Conference on Intelligent Robots and Systems (IROS)
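    Around each linearization of the robot dynamics, the backward pass of a minimax iterative dynamic game reduces to a discrete-time zero-sum linear-quadratic game solved by a Riccati-like recursion. The sketch below illustrates that core step; the dynamics x_{t+1} = Ax + Bu + Dw, the cost sum of x'Qx + u'Ru - gamma^2 w'w, and the example matrices are assumptions for illustration, not the authors' released code (linked above).

```python
import numpy as np


def minimax_lq(A, B, D, Q, R, gamma2, T):
    """Backward Riccati recursion for the zero-sum LQ game.

    u minimizes and w maximizes sum_t x'Qx + u'Ru - gamma2*w'w subject to
    x_{t+1} = A x + B u + D w. Returns per-stage gains (Ku, Kw), with
    u_t = Ku x_t and w_t = Kw x_t, in forward-time order.
    """
    m, p = B.shape[1], D.shape[1]
    P = Q.copy()                          # terminal value weight V_T = x'Px
    gains = []
    for _ in range(T):
        # A saddle point requires gamma2*I - D'PD > 0 (the disturbance
        # penalty dominates); otherwise the max over w is unbounded.
        # Joint stationarity of the stage Hamiltonian in (u, w):
        H = np.block([
            [R + B.T @ P @ B,                    B.T @ P @ D],
            [D.T @ P @ B,  D.T @ P @ D - gamma2 * np.eye(p)],
        ])
        rhs = np.vstack([B.T @ P @ A, D.T @ P @ A])
        K = -np.linalg.solve(H, rhs)      # stacked gains [Ku; Kw]
        Ku, Kw = K[:m], K[m:]
        Acl = A + B @ Ku + D @ Kw         # closed loop under both players
        P = Q + Ku.T @ R @ Ku - gamma2 * Kw.T @ Kw + Acl.T @ P @ Acl
        gains.append((Ku, Kw))
    return gains[::-1]


# Example: double integrator with a small disturbance channel (values assumed).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
D = np.array([[0.0], [0.05]])
gains = minimax_lq(A, B, D, np.eye(2), np.eye(1), gamma2=5.0, T=50)
Ku0, Kw0 = gains[0]                       # stage-0 control/adversary gains
```

    The control gains Ku hedge against the worst disturbance the adversary gains Kw can deliver at attenuation level gamma; as gamma grows, the recursion collapses to the ordinary LQR backward pass.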

    Issues on Stability of ADP Feedback Controllers for Dynamical Systems

    This paper traces the development of neural-network (NN)-based feedback controllers derived from the principles of adaptive/approximate dynamic programming (ADP) and discusses their closed-loop stability. Different NN structures in the literature, which embed mathematical mappings related to solutions of ADP-formulated problems and are called “adaptive critic” or “actor-critic” networks, are discussed. The distinction between the two classes of ADP applications is pointed out. Furthermore, papers on “model-free” development and on model-based neurocontrollers are reviewed in terms of their contributions to stability issues. Recent literature suggests that work on ADP-based feedback controllers with assured stability is growing in diverse forms.