    Robust learning with implicit residual networks

    In this work, we propose a new deep architecture built from residual blocks inspired by implicit discretization schemes. In contrast to standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice improves the stability of both forward and backward propagation, has a favorable impact on generalization, and makes it possible to control the robustness of the network with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially reduce the number of required layers thanks to the improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings.
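    To make the fixed-point view concrete, the following is a minimal sketch, not the authors' code, of a residual block whose output y satisfies y = x + f(y), computed here by plain fixed-point iteration; the layer sizes, iteration count, and tolerance are illustrative assumptions.

```python
# Minimal sketch of an implicit residual block: the output y is a fixed point
# of y = x + f(y), approximated by Picard iteration (illustrative, not the
# authors' implementation).
import torch
import torch.nn as nn

class ImplicitResidualBlock(nn.Module):
    def __init__(self, dim, n_iters=20, tol=1e-4):
        super().__init__()
        # a small nonlinear transformation f (architecture chosen for illustration)
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.n_iters = n_iters
        self.tol = tol

    def forward(self, x):
        y = x  # initial guess for the fixed point
        for _ in range(self.n_iters):
            y_next = x + self.f(y)  # one step of y <- x + f(y)
            if torch.norm(y_next - y) < self.tol:
                return y_next
            y = y_next
        return y

# usage: gradients here simply flow through the unrolled iterations;
# the paper derives a memory-efficient training alternative.
block = ImplicitResidualBlock(dim=16)
out = block(torch.randn(8, 16))
```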

    Learning-based Predictive Control for Nonlinear Systems with Unknown Dynamics Subject to Safety Constraints

    Model predictive control (MPC) has been widely employed as an effective method for model-based constrained control. For systems with unknown dynamics, reinforcement learning (RL) and adaptive dynamic programming (ADP) have received notable attention for solving adaptive optimal control problems. Recently, work on using RL within the MPC framework has emerged, which can enhance the ability of MPC to perform data-driven control. However, safety under state constraints and closed-loop robustness are difficult to verify because of the approximation errors introduced by RL with function approximation structures. To address this problem, we propose a data-driven robust MPC solution based on incremental RL, called data-driven robust learning-based predictive control (dr-LPC), for perturbed unknown nonlinear systems subject to safety constraints. A data-driven robust MPC (dr-MPC) is first formulated with a learned predictor. The incremental Dual Heuristic Programming (DHP) algorithm, using an actor-critic architecture, is then employed to solve the online optimization problem of the dr-MPC. In each prediction horizon, the actor and critic learn time-varying laws that approximate the optimal control policy and costate, respectively, which differs from classical MPCs. The state and control constraints are enforced in the learning process by building a Hamilton-Jacobi-Bellman (HJB) equation and a regularized actor-critic learning structure using logarithmic barrier functions. The closed-loop robustness and safety of the dr-LPC are proven under function approximation errors. Simulation results on two control examples show that the dr-LPC outperforms both DHP and dr-MPC in terms of state regulation, and its average computational time is much smaller than that of the dr-MPC in both examples.
    Comment: This paper has been submitted to an IEEE journal for possible publication.
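    As a rough illustration of how logarithmic barrier functions can enforce constraints inside a learned cost, here is a minimal sketch (our own, not the paper's dr-LPC implementation) of a barrier-augmented stage cost for box constraints on the state and control; the weights Q and R, the barrier coefficient mu, and the bounds are illustrative assumptions.

```python
# Minimal sketch of a log-barrier-augmented stage cost of the kind used to
# keep states and controls inside box constraints |x_i| <= x_max_i,
# |u_j| <= u_max_j during actor-critic learning (illustrative only).
import numpy as np

def barrier(z, z_max):
    """-log barrier that grows without bound as |z| approaches z_max."""
    slack = z_max**2 - z**2
    return np.inf if slack <= 0 else -np.log(slack / z_max**2)

def stage_cost(x, u, Q, R, x_max, u_max, mu=1e-2):
    # standard quadratic penalty plus barrier terms for each constrained variable
    quadratic = x @ Q @ x + u @ R @ u
    barriers = sum(barrier(xi, xm) for xi, xm in zip(x, x_max)) \
             + sum(barrier(ui, um) for ui, um in zip(u, u_max))
    return quadratic + mu * barriers

# usage: the cost stays finite inside the constraint set and blows up near its boundary
x = np.array([0.2, -0.1]); u = np.array([0.05])
Q = np.eye(2); R = np.eye(1)
print(stage_cost(x, u, Q, R, x_max=np.array([1.0, 1.0]), u_max=np.array([0.5])))
```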

    Approximate dynamic programming based solutions for fixed-final-time optimal control and optimal switching

    This dissertation investigates neural network (NN) based optimal solutions within an approximate dynamic programming (ADP) framework for new classes of engineering and non-engineering problems, along with the associated difficulties and challenges. In the enclosed eight papers, the ADP framework is utilized to solve fixed-final-time problems (also called terminal control problems) and problems of a switching nature. An ADP based algorithm is proposed in Paper 1 for solving fixed-final-time problems with a soft terminal constraint, in which a single neural network with a single set of weights is utilized. Paper 2 investigates fixed-final-time problems with hard terminal constraints. The optimality analysis of the ADP based algorithm for fixed-final-time problems is the subject of Paper 3, in which it is shown that the proposed algorithm leads to the globally optimal solution provided certain conditions hold. Afterwards, the developments in Papers 1 to 3 are used to tackle a more challenging class of problems, namely, optimal control of switching systems. This class of problems is divided into problems with a fixed mode sequence (Papers 4 and 5) and problems with a free mode sequence (Papers 6 and 7). Each of these two classes is further divided into problems with autonomous subsystems (Papers 4 and 6) and problems with controlled subsystems (Papers 5 and 7). Different ADP-based algorithms are developed, and proofs of convergence of the proposed iterative algorithms are presented. Moreover, Paper 8 extends these developments to online learning of the optimal switching solution for problems with modeling uncertainty. Each of the theoretical developments is numerically analyzed using different real-world or benchmark problems. --Abstract, page v
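    For context on fixed-final-time problems, the following is a minimal sketch, not the dissertation's algorithm, of the backward-in-time Bellman recursion that such ADP schemes approximate with a time-dependent neural network; the toy dynamics, costs, grids, and horizon are illustrative assumptions.

```python
# Minimal sketch of backward value iteration for a fixed-final-time problem:
# for x_{k+1} = f(x_k, u_k) over a horizon of N steps, the time-varying value
# functions V_k are built backwards from the terminal cost (illustrative only;
# ADP would replace the tabulated V_k with a single NN that takes time as input).
import numpy as np

N = 20                                  # horizon length (assumed)
x_grid = np.linspace(-2.0, 2.0, 81)     # state grid (assumed)
u_grid = np.linspace(-1.0, 1.0, 21)     # control grid (assumed)

f = lambda x, u: 0.9 * x + u            # toy scalar dynamics (assumed)
stage = lambda x, u: x**2 + 0.1 * u**2  # stage cost (assumed)
terminal = lambda x: 5.0 * x**2         # soft terminal cost (assumed)

V = [None] * (N + 1)
V[N] = terminal(x_grid)
for k in range(N - 1, -1, -1):          # backward sweep from the final time
    Vk = np.empty_like(x_grid)
    for i, x in enumerate(x_grid):
        # Bellman recursion: V_k(x) = min_u [ stage(x, u) + V_{k+1}(f(x, u)) ]
        x_next = f(x, u_grid)
        V_next = np.interp(x_next, x_grid, V[k + 1])
        Vk[i] = np.min(stage(x, u_grid) + V_next)
    V[k] = Vk

print(V[0][len(x_grid) // 2])           # approximate optimal cost-to-go from x0 = 0
```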