15 research outputs found

    Safety-critical Policy Iteration Algorithm for Control under Model Uncertainty

    Safety is a key aim in designing safety-critical systems, and many policy-iteration algorithms have been introduced to find safe optimal controllers for them. Because accurate information about most practical systems is rarely available, a new online training method is presented in this paper that runs an iterative reinforcement-learning-based algorithm on real data instead of an identified system model. The paper also examines the impact of model uncertainty on the dynamic constraints imposed by control Lyapunov functions (CLFs) and control barrier functions (CBFs). A sum-of-squares (SOS) program is used to iteratively find an optimal safe control solution. Simulation results on a quarter-car model show the efficiency of the proposed method in terms of optimality and robustness.
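    The CBF side of this idea can be illustrated with a toy safety filter. The scalar integrator plant, the barrier h(x) = x, and the gain alpha below are assumptions for illustration only; the paper enforces CLF/CBF constraints jointly through an SOS program, whereas here the CBF condition reduces to a closed-form clamp on the control.

```python
# Minimal sketch of a control-barrier-function (CBF) safety filter, assuming a
# scalar integrator x_dot = u with barrier h(x) = x (safe set: x >= 0).
# The CBF condition h_dot >= -alpha * h reduces here to u >= -alpha * x.

def cbf_filter(x, u_nominal, alpha=1.0):
    """Return the admissible control closest to u_nominal."""
    return max(u_nominal, -alpha * x)   # enforce u >= -alpha * x

def simulate(x0=1.0, steps=200, dt=0.01, alpha=1.0):
    """Euler simulation: an unsafe nominal law is filtered at every step."""
    x = x0
    for _ in range(steps):
        u_nom = -5.0 * x - 2.0          # nominal law that would push x below 0
        x += dt * cbf_filter(x, u_nom, alpha)
    return x
```

    With alpha * dt <= 1 the discrete update preserves x >= 0, so the state decays toward the safe-set boundary instead of crossing it.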

    Output-feedback online optimal control for a class of nonlinear systems

    In this paper, an output-feedback model-based reinforcement learning (MBRL) method is developed for a class of second-order nonlinear systems. The control technique uses exact model knowledge and integrates a dynamic state estimator within the MBRL framework to achieve output feedback. Simulation results demonstrate the efficacy of the developed method.

    Neural-network based online policy iteration for continuous-time infinite-horizon optimal control of nonlinear systems

    IEEE Catalog Number: CFP15SIP-USB
    A new policy-iteration algorithm based on neural networks (NNs) is proposed in this paper to synthesize optimal control laws online for continuous-time nonlinear systems. Recent advances in this field have enabled synchronous policy iteration but require an additional tuning loop or a logic-switch mechanism to maintain system stability. A new algorithm is thus derived in this paper to address this limitation. The optimal control law is found by solving the Hamilton-Jacobi-Bellman (HJB) equation for the associated value function via synchronous policy iteration in a critic-actor configuration. As a major contribution, a new form of NN approximation for the value function is proposed, giving the closed-loop system asymptotic stability without an additional tuning scheme or logic-switch mechanism. As a second contribution, an extended Kalman filter is introduced to estimate the critic NN parameters for fast convergence. The efficacy of the new algorithm is verified by simulations.
    Difan Tang, Lei Chen, and Zhao Feng Tia
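    Stripped of the NN approximation and EKF tuning, the underlying policy-iteration principle can be shown on a scalar linear-quadratic problem, where each policy-evaluation step has a closed form. The system x_dot = -x + u and the cost weights below are assumptions for illustration; the paper's critic NN approximates the value function online rather than solving it exactly.

```python
import math

# Kleinman-style policy iteration for the scalar LQ problem
#   x_dot = a*x + b*u,  cost = integral of (q*x**2 + r*u**2) dt.
# Policy evaluation solves the scalar Lyapunov equation for V(x) = p*x**2
# under u = -K*x; policy improvement minimizes the Hamiltonian over u.

def policy_iteration(a=-1.0, b=1.0, q=1.0, r=1.0, iters=8):
    K = 0.0                                  # initial stabilizing policy (a < 0)
    for _ in range(iters):
        # evaluate: 2*(a - b*K)*p + q + r*K**2 = 0  =>  solve for p
        p = (q + r * K**2) / (2.0 * (b * K - a))
        # improve: K = b*p/r
        K = b * p / r
    return p
```

    For these values the iterates converge quadratically to the algebraic-Riccati solution p = sqrt(2) - 1, which is the fixed point the online critic would also approach.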

    Online Adaptive Optimal Control of Vehicle Active Suspension Systems Using Single-Network Approximate Dynamic Programming

    In view of the performance requirements (e.g., ride comfort, road holding, and suspension-space limitation) of vehicle suspension systems, this paper proposes an adaptive optimal control method for a quarter-car active suspension system using the approximate dynamic programming (ADP) approach. An online optimal control law is obtained by using a single adaptive critic NN to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation. Stability of the closed-loop system is proved by Lyapunov theory. Compared with the classic linear quadratic regulator (LQR) approach, the proposed ADP-based adaptive optimal control method demonstrates improved performance in the presence of parametric uncertainties (e.g., sprung mass) and unknown road displacement. Numerical simulation results of a sedan suspension system are presented to verify the effectiveness of the proposed control strategy.
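    The ride-comfort motivation can be made concrete with a passive-versus-active comparison on a 1-DOF suspension. The active force below is simple skyhook damping, a stand-in for the paper's ADP controller, and the mass, spring, and damper values are assumed rather than taken from the paper's sedan model.

```python
# Passive vs. active 1-DOF suspension response to a 5 cm road step.
# Active case adds a skyhook damping force u = -c_sky * zdot.

def peak_displacement(c_sky=0.0, steps=3000, dt=0.001):
    m, k, c = 300.0, 16000.0, 1000.0    # sprung mass [kg], spring [N/m], damper [N s/m]
    zr = 0.05                           # road step height [m]
    z, zdot, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = -c_sky * zdot               # skyhook force (zero in the passive case)
        zddot = (-k * (z - zr) - c * zdot + u) / m
        z, zdot = z + dt * zdot, zdot + dt * zddot
        peak = max(peak, z)
    return peak
```

    The passive case is lightly damped and overshoots the step noticeably, while the added skyhook damping nearly eliminates the overshoot; an ADP or LQR controller would instead shape this trade-off through the cost weights.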