
    Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability

    Stability certification and identification of the stabilizable operating region of a system are two important concerns in ensuring its operational safety, security, and robustness. With the advent of machine-learning tools, these issues are especially important for systems with machine-learned components in the feedback loop. Here we develop a theory for the stability and stabilizability of a class of neural-network controlled nonlinear systems, where the equilibria can drift when parametric changes occur. A Lyapunov-based convex stability certificate is developed and is further used to devise an estimate of a local Lipschitz upper bound for a neural-network (NN) controller and a corresponding operating domain in the state space, containing an initialization set from which the closed-loop (CL) local asymptotic stability of each system in the class is guaranteed under the same controller, while the system trajectories remain confined to the operating domain. For computing such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed. The effectiveness of the proposed framework is demonstrated using illustrative examples.
    Comment: 15 pages, 7 figures
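
    As a rough illustration of the idea (not the paper's actual SGT algorithm), the sketch below trains an NN controller while penalizing violations of a Lipschitz upper bound; the bound L_max, the toy regulation loss, the sampling of the operating domain, and all network details are assumptions for the example. A product of layer spectral norms is a standard upper bound on the Lipschitz constant of an MLP with 1-Lipschitz activations.

        import torch
        import torch.nn as nn

        def lipschitz_upper_bound(net):
            # Product of layer spectral norms upper-bounds the Lipschitz
            # constant of an MLP with 1-Lipschitz activations (e.g., tanh).
            bound = torch.ones(())
            for layer in net:
                if isinstance(layer, nn.Linear):
                    bound = bound * torch.linalg.matrix_norm(layer.weight, 2)
            return bound

        controller = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
        opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
        L_max = 5.0  # assumed Lipschitz bound, e.g. supplied by a stability certificate

        for step in range(1000):
            x = 2.0 * torch.rand(256, 2) - 1.0  # states sampled from an assumed operating domain
            u = controller(x)
            reg_loss = (x.pow(2).sum(1) + 0.1 * u.pow(2).sum(1)).mean()  # toy regulation cost
            lip = lipschitz_upper_bound(controller)
            loss = reg_loss + 10.0 * torch.relu(lip - L_max)  # penalize exceeding the bound
            opt.zero_grad()
            loss.backward()
            opt.step()

    In a certificate-driven scheme of this kind, the penalty keeps the trained controller inside the set of Lipschitz bounds for which closed-loop stability has been certified.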

    Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability

    A trustworthy reinforcement learning algorithm should be competent at solving challenging real-world problems, including robustly handling uncertainties, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study surveys these main perspectives of trustworthy reinforcement learning in light of its intrinsic vulnerabilities in robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize the corresponding methodologies, and discuss benchmarks for each perspective. Moreover, we provide an outlook section to spur promising future directions, with a brief discussion of extrinsic vulnerabilities arising from human feedback. We hope this survey brings separate threads of study together in a unified framework and promotes the trustworthiness of reinforcement learning.
    Comment: 36 pages, 5 figures
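
    As a hedged illustration (a common formulation in this literature, not necessarily the one adopted in the survey), the robustness and safety perspectives are often combined in a robust constrained MDP, where \mathcal{P} is an uncertainty set of transition models and c, d are a constraint cost and budget:

        \max_{\pi} \; \min_{P \in \mathcal{P}} \;
        \mathbb{E}_{\tau \sim (\pi, P)}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\right]
        \quad \text{s.t.} \quad
        \mathbb{E}_{\tau \sim (\pi, P)}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t)\right] \le d

    The inner minimization captures robustness to model uncertainty, while the constraint encodes safety; generalizability is typically handled separately, e.g., by taking expectations over a distribution of environments.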

    Online Optimization of Dynamical Systems with Deep Learning Perception

    This paper considers the problem of controlling a dynamical system when the state cannot be directly measured and the control performance metrics are unknown or only partially known. In particular, we focus on the design of data-driven controllers that regulate a dynamical system to the solution of a constrained convex optimization problem where: i) the state must be estimated from nonlinear and possibly high-dimensional data; and ii) the cost of the optimization problem, which models control objectives associated with the inputs and states of the system, is not available and must be learned from data. We propose a data-driven feedback controller based on adaptations of a projected gradient-flow method; the controller includes neural networks as integral components for estimating the unknown functions. Leveraging stability theory for perturbed systems, we derive sufficient conditions that guarantee exponential input-to-state stability (ISS) of the control loop. In particular, we show that the interconnected system is ISS with respect to the approximation errors of the neural networks and unknown disturbances affecting the system. The transient bounds combine the universal approximation property of deep neural networks with the ISS characterization. Illustrative numerical results are presented in the context of robotics and epidemic control.
    Comment: This is an extended version of the paper submitted to the IEEE Open Journal of Control Systems - Special Section on Machine Learning with Control, containing proofs
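
    As a rough sketch of this control architecture (not the paper's implementation), the loop below runs an Euler-discretized projected gradient step on the input, with stand-in functions perceive and learned_cost_grad in place of the trained perception and cost-gradient networks; the linear plant, step size, noise model, and box constraint are all assumptions for the example.

        import numpy as np

        # Stand-in plant and learned components (assumptions for illustration).
        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([[0.0], [1.0]])

        def perceive(y):
            # Stand-in for the NN perception map: estimates the state from raw data y.
            return y

        def learned_cost_grad(u, x_hat):
            # Stand-in for the learned gradient of the unknown cost wrt the input.
            return 2.0 * u + x_hat[1:2]

        def project_box(u, lo=-1.0, hi=1.0):
            # Projection onto the input-constraint set (a box here).
            return np.clip(u, lo, hi)

        x = np.array([1.0, -0.5])
        u = np.zeros(1)
        eta = 0.05  # step size (Euler discretization of the gradient flow)
        rng = np.random.default_rng(0)

        for k in range(200):
            y = x + 0.01 * rng.standard_normal(2)  # noisy measurement data, simplified
            x_hat = perceive(y)                    # NN state estimate
            u = project_box(u - eta * learned_cost_grad(u, x_hat))  # projected gradient step
            x = A @ x + B @ u                      # plant update

    The ISS analysis in the paper then bounds how the NN estimation errors (here idealized away in perceive and learned_cost_grad) perturb the trajectory of this interconnection.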