
    Episodic Learning with Control Lyapunov Functions for Uncertain Robotic Systems

    Many modern nonlinear control methods aim to endow systems with guaranteed properties, such as stability or safety, and have been successfully applied to the domain of robotics. However, model uncertainty remains a persistent challenge, weakening theoretical guarantees and causing implementation failures on physical systems. This paper develops a machine learning framework centered around Control Lyapunov Functions (CLFs) to adapt to parametric uncertainty and unmodeled dynamics in general robotic systems. Our proposed method proceeds by iteratively updating estimates of Lyapunov function derivatives and improving controllers, ultimately yielding a stabilizing, model-based quadratic-program controller. We validate our approach on a planar Segway simulation, demonstrating substantial performance improvements by iteratively refining a base model-free controller.
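
    The iterative scheme described above bottoms out in a pointwise quadratic program: at each state, pick the smallest control input whose Lyapunov derivative satisfies a decrease condition. Below is a minimal sketch of such a CLF-QP controller for a generic control-affine system; the closed-form min-norm solution, the toy integrator plant, and the decay rate lam are illustrative assumptions, not the paper's learned model.

    ```python
    import numpy as np

    def clf_qp_control(x, f, g, grad_V, V, lam=1.0):
        """Min-norm u enforcing dV/dt = LfV + LgV @ u <= -lam * V(x)."""
        LfV = grad_V(x) @ f(x)        # drift part of the Lyapunov derivative
        LgV = grad_V(x) @ g(x)        # control-dependent part
        a = LfV + lam * V(x)          # amount of constraint violation if positive
        if a <= 0.0 or not np.any(LgV):
            # Constraint already satisfied (or control has no effect on V here)
            return np.zeros_like(LgV)
        # Closed-form solution of: min ||u||^2  s.t.  a + LgV @ u <= 0
        return -a * LgV / (LgV @ LgV)

    # Toy plant (assumed): scalar integrator xdot = u, with CLF V(x) = x^2 / 2
    f = lambda x: np.array([0.0])
    g = lambda x: np.array([[1.0]])
    V = lambda x: 0.5 * float(x @ x)
    grad_V = lambda x: x

    print(clf_qp_control(np.array([2.0]), f, g, grad_V, V))  # -> [-1.0]
    ```

    In the paper's framework, the uncertain terms LfV and LgV are what the episodic learner re-estimates between experiments; the QP structure itself stays fixed.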

    Issues on Stability of ADP Feedback Controllers for Dynamical Systems

    This paper traces the development of neural-network (NN)-based feedback controllers that are derived from the principle of adaptive/approximate dynamic programming (ADP) and discusses their closed-loop stability. Different versions of NN structures in the literature, which embed mathematical mappings related to solutions of the ADP-formulated problems and are called “adaptive critic” or “actor-critic” networks, are discussed. The distinction between the two classes of ADP applications is pointed out. Furthermore, papers on “model-free” development and on model-based neurocontrollers are reviewed in terms of their contributions to stability issues. Recent literature suggests that work on ADP-based feedback controllers with assured stability is growing in diverse forms.
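
    At its core, the adaptive-critic structure the survey discusses alternates a critic update (evaluate the cost-to-go under the current policy) with an actor update (re-optimize the policy against the critic). A minimal, model-based sketch on a scalar discrete-time LQR problem, where the fixed point can be checked against the Riccati solution, might look as follows; the plant and cost numbers are illustrative assumptions.

    ```python
    # Scalar plant x[k+1] = a*x[k] + b*u[k], stage cost q*x^2 + r*u^2 (assumed values)
    a, b, q, r = 0.9, 0.5, 1.0, 1.0

    p = 0.0                                # critic parameter: V(x) = p * x^2
    for _ in range(200):
        k = a * b * p / (r + b * b * p)    # actor: minimize stage cost + critic estimate
        a_cl = a - b * k                   # closed-loop dynamics under the current actor
        p = q + r * k**2 + p * a_cl**2     # critic: Bellman backup for the quadratic value
    print(p, k)                            # converges to the discrete Riccati solution
    ```

    The stability questions the survey raises concern exactly this loop when p and k live inside neural networks updated online from data rather than from a known model.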

    Reinforcement Learning Neural-Network-Based Controller for Nonlinear Discrete-Time Systems with Input Constraints

    A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as a saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the strategic utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and the novel weight updates, uniform ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate a single nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or randomly. Simulation results justify the theoretical analysis.
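
    To make the critic/action interplay concrete, here is a heavily simplified online sketch using linear-in-the-parameters approximators in place of the paper's NNs. The plant, basis functions, learning rates, and stage cost are all illustrative assumptions; the clip models the actuator saturation, and the updates are plain gradient steps on quadratic indexes, as the abstract describes.

    ```python
    import numpy as np

    def features(x):
        return np.array([x, x**2, np.tanh(x)])   # assumed basis functions

    rng = np.random.default_rng(0)
    wc = np.zeros(3)                 # critic weights (zero initialization is allowed)
    wa = rng.normal(size=3) * 0.01   # action weights (small random initialization)
    lr_c, lr_a, gamma, b = 0.02, 0.02, 0.9, 0.5

    x = 1.0
    for _ in range(2000):
        u = float(np.clip(wa @ features(x), -1.0, 1.0))  # saturation = input constraint
        x_next = 0.8 * x + 0.3 * np.tanh(x) + b * u      # assumed nonlinear plant
        utility = x**2 + 0.1 * u**2                      # quadratic stage cost

        # Critic: semi-gradient step on the squared Bellman (TD) error
        td = utility + gamma * (wc @ features(x_next)) - wc @ features(x)
        wc += lr_c * td * features(x)

        # Action: chain rule through the plant's input gain b (clip treated as identity)
        dphi = np.array([1.0, 2.0 * x_next, 1.0 - np.tanh(x_next) ** 2])
        wa -= lr_a * (0.2 * u + gamma * (wc @ dphi) * b) * features(x)
        x = x_next

    print(x)
    ```

    The paper's contribution is proving uniform ultimate boundedness for this kind of coupled online update; the sketch only shows the mechanics, not the guarantee.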

    Decentralized Optimal Control With Application In Power System

    An output-feedback decentralized optimal controller is proposed for power systems with renewable energy penetration. The renewable energy source is modeled similarly to the classical generator model and is equipped with a unified power flow controller (UPFC). The transient performance of the power system is considered, and the stability of the dynamical states is investigated. An offline decentralized optimal controller is designed that utilizes only the local states. The network comprises conventional synchronous generators as well as renewable sources with inverters equipped with UPFCs. The optimal decentralized controller is then compared to the initial stabilizing controller used to obtain it. An online decentralized optimal controller is also designed for the discrete-time system: two neural networks are utilized to estimate the value function and the optimal control strategy. Furthermore, a novel observer-based decentralized optimal controller is developed for a small-scale discrete-time power system; it is trained by least-squares rules and successive approximation. Simulation results on the IEEE 14-, 30-, and 118-bus power system benchmarks show satisfactory performance of the online decentralized controller, and they also demonstrate good performance of the observer and the optimal controller compared to the centralized optimal controller.
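
    The "least squares plus successive approximation" training loop mentioned above is essentially least-squares policy evaluation alternated with policy improvement. A minimal sketch on an assumed scalar discrete-time plant (standing in for one unit's local dynamics) might look like this; the plant, basis, and discount factor are illustrative, not the power-system model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    a, b, q, r, gamma = 0.95, 0.4, 1.0, 0.5, 0.98
    phi = lambda x: np.array([x**2, x, 1.0])   # assumed value-function basis

    k_gain = 0.1   # initial stabilizing local feedback u = -k_gain * x
    for _ in range(20):
        # Collect one-step transitions under the current policy (local states only)
        xs = rng.uniform(-2, 2, size=200)
        us = -k_gain * xs
        costs = q * xs**2 + r * us**2
        xn = a * xs + b * us

        # Policy evaluation: least-squares fit of the Bellman equation V = cost + gamma*V'
        A = np.stack([phi(x) - gamma * phi(xp) for x, xp in zip(xs, xn)])
        w, *_ = np.linalg.lstsq(A, costs, rcond=None)

        # Successive approximation: improve the gain greedily w.r.t. V(x) ~ w[0]*x^2
        k_gain = gamma * a * b * w[0] / (r + gamma * b**2 * w[0])

    print(k_gain)  # approaches the discounted LQR gain for the assumed plant
    ```

    In the decentralized setting, each generator runs such a loop on its own measured (or observed) states, which is what lets the scheme avoid a centralized model of the whole network.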

    Reinforcement Learning, Intelligent Control and their Applications in Connected and Autonomous Vehicles

    Reinforcement learning (RL) has attracted considerable attention over the past few years. Recently, we developed a data-driven algorithm to solve predictive cruise control (PCC) and game-theoretic output regulation problems. This work integrates our recent contributions to the application of RL in game theory, output regulation problems, robust control, small-gain theory, and PCC. The algorithm was developed for H∞ adaptive optimal output regulation of uncertain linear systems and uncertain partially linear systems, to reject disturbances and to force the output of the systems to asymptotically track a reference. In the PCC problem, we determined the reference velocity for each autonomous vehicle in the platoon using the traffic information broadcast from the traffic lights in order to reduce the vehicles' trip time. We then employed the algorithm to design an approximately optimal controller for the vehicles. This controller is able to regulate the headway, velocity, and acceleration of each vehicle to the desired values. Simulation results validate the effectiveness of the algorithms.
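
    The reference-velocity step is the simplest part to sketch: given the distance to the next light and its broadcast green windows, pick the fastest admissible constant speed that arrives on green, which shortens trip time by avoiding stops. The signal schedule, speed limits, and greedy window choice below are illustrative assumptions, not the paper's exact rule.

    ```python
    def reference_velocity(dist_m, green_windows_s, v_min=5.0, v_max=20.0):
        """Constant speed in [v_min, v_max] reaching the light inside a green
        window, preferring the earliest feasible (hence fastest) arrival."""
        for t_start, t_end in green_windows_s:                 # windows sorted in time
            v_hi = dist_m / t_start if t_start > 0 else v_max  # arrive as the window opens
            v_lo = dist_m / t_end                              # arrive as it closes
            lo, hi = max(v_lo, v_min), min(v_hi, v_max)
            if lo <= hi:
                return hi          # fastest feasible speed minimizes trip time
        return v_min               # no feasible window: approach slowly and wait

    # Light 300 m ahead, green during [10, 25] s and [55, 70] s (assumed schedule)
    print(reference_velocity(300.0, [(10.0, 25.0), (55.0, 70.0)]))  # -> 20.0
    ```

    The RL-based controller then tracks this reference while regulating headway and acceleration within the platoon.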