11 research outputs found

    Safe Q-learning for continuous-time linear systems

    Q-learning is a promising method for solving optimal control problems for uncertain systems without the explicit need for system identification. However, approaches for continuous-time Q-learning have limited provable safety guarantees, which restricts their applicability to real-time safety-critical systems. This paper proposes a safe Q-learning algorithm for partially unknown linear time-invariant systems to solve the linear quadratic regulator problem with user-defined state constraints. We frame the safe Q-learning problem as a constrained optimal control problem using reciprocal control barrier functions and show that such an extension provides a safety-assured control policy. To the best of our knowledge, Q-learning for continuous-time systems with state constraints has not yet been reported in the literature.
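As a rough illustration of the setting, the sketch below computes the constraint-free LQR solution that such a continuous-time Q-learning scheme is meant to recover without identifying A, together with a reciprocal control barrier function for a simple state constraint. The double-integrator plant, the constraint, and all numerical values are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant (illustrative, not the paper's example).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# The LQR solution a continuous-time Q-learning scheme aims to recover:
# P solves the algebraic Riccati equation, and u = -K x is optimal.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Reciprocal CBF for the state constraint h(x) = 1 - x[0] > 0:
# B(x) = 1 / h(x) grows unbounded as the state approaches the boundary,
# which is what enforces constraint satisfaction in the constrained problem.
def reciprocal_cbf(x):
    h = 1.0 - x[0]
    return 1.0 / h

x = np.array([0.5, 0.0])
print(K)                   # optimal feedback gain
print(reciprocal_cbf(x))   # finite while the constraint holds
```

The closed-loop matrix A - BK is guaranteed Hurwitz by LQR theory, which is the baseline the safety-constrained policy modifies near the constraint boundary.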

    Output-feedback online optimal control for a class of nonlinear systems

    In this paper, an output-feedback model-based reinforcement learning (MBRL) method for a class of second-order nonlinear systems is developed. The control technique uses exact model knowledge and integrates a dynamic state estimator within the MBRL framework to achieve output-feedback control. Simulation results demonstrate the efficacy of the developed method.
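To make the estimator component concrete, here is a minimal sketch of a dynamic state estimator (a Luenberger observer) for a hypothetical second-order linear plant whose position is measured but whose velocity is not; the matrices and the observer gain are illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical second-order plant: position measured, velocity reconstructed.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [15.0]])  # hand-picked gain making A - L C Hurwitz

dt, T = 0.001, 5.0
x = np.array([1.0, -1.0])      # true state (unavailable to the controller)
xhat = np.zeros(2)             # estimator state, deliberately wrong initially

for _ in range(int(T / dt)):
    y = C @ x                               # only the output is measured
    x = x + dt * (A @ x)                    # plant dynamics (Euler step)
    xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))  # output injection

err = np.linalg.norm(x - xhat)
print(err)  # estimation error after 5 s
```

Because both states are driven by the same Euler scheme, the estimation error evolves exactly as e ← (I + dt(A - LC)) e and decays at the rate set by the observer poles; an output-feedback MBRL scheme would feed xhat, rather than x, to the learned controller.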

    Output Feedback Adaptive Optimal Control of Affine Nonlinear Systems with a Linear Measurement Model

    Real-world control applications in complex and uncertain environments require adaptability to handle model uncertainties and robustness against disturbances. This paper presents an online, output-feedback, critic-only, model-based reinforcement learning architecture that simultaneously learns and implements an optimal controller while maintaining stability during the learning phase. Multiplier matrices provide a convenient way to search for observer gains, which are designed along with a controller that learns from simulated experience to ensure stability and convergence of the closed-loop trajectories to a neighborhood of the origin. Local uniform ultimate boundedness of the trajectories is established through a Lyapunov-based analysis under mild excitation conditions and demonstrated in simulation.
    Comment: 16 pages, 5 figures, submitted to the 2023 IEEE Conference on Control Technology and Applications
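The paper searches for observer gains via multiplier matrices; as a simpler stand-in for that design step, the sketch below picks an output-injection gain by pole placement through observer duality, for a hypothetical linearized plant with a linear measurement model y = Cx. The system matrices and pole locations are illustrative assumptions.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical linearized plant with linear measurement y = C x
# (stand-in choice; the paper searches gains via multiplier matrices).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
C = np.array([[1.0, 0.0]])

# Duality: eig(A - L C) = eig(A' - C' L'), so place poles for the pair (A', C').
desired = [-4.0, -5.0]
L = place_poles(A.T, C.T, desired).gain_matrix.T

obs_eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
print(obs_eigs)  # observer error-dynamics eigenvalues
```

Any gain that makes A - LC Hurwitz gives exponentially convergent state estimates, which is the property the multiplier-matrix search certifies for the nonlinear case.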

    Safe Exploration in Model-based Reinforcement Learning using Control Barrier Functions

    This paper develops a model-based reinforcement learning (MBRL) framework for learning online the value function of an infinite-horizon optimal control problem while obeying safety constraints expressed as control barrier functions (CBFs). Our approach is facilitated by the development of a novel class of CBFs, termed Lyapunov-like CBFs (LCBFs), that retain the beneficial properties of CBFs for developing minimally invasive safe control policies while also possessing desirable Lyapunov-like qualities such as positive semi-definiteness. We show how these LCBFs can be used to augment a learning-based control policy to guarantee safety and then leverage this approach to develop a safe exploration framework in an MBRL setting. We demonstrate via numerical examples that our approach can handle more general safety constraints than comparable methods.
    Comment: Accepted for publication in Automatica
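The LCBF class is the paper's contribution; the standard CBF mechanism it builds on is a minimally invasive safety filter that perturbs a nominal input only when the barrier condition would be violated. Below is a closed-form sketch of that filter for a hypothetical scalar control-affine system (for scalar input, the CBF quadratic program reduces to a projection); the dynamics, barrier, and gain are all assumed for illustration.

```python
# Hypothetical 1-D control-affine system xdot = f(x) + g(x) u, safe set h(x) >= 0.
f = lambda x: -0.5 * x
g = lambda x: 1.0
h = lambda x: 1.0 - x          # safe set: x <= 1
dh = lambda x: -1.0            # gradient of h
alpha = 2.0                    # class-K gain in the CBF condition

def safe_filter(x, u_nom):
    """Closed-form CBF-QP for scalar input: minimally modify u_nom so that
    dh/dx * (f(x) + g(x) u) >= -alpha * h(x)."""
    lfh = dh(x) * f(x)
    lgh = dh(x) * g(x)
    slack = lfh + lgh * u_nom + alpha * h(x)
    if slack >= 0:
        return u_nom                 # nominal input already satisfies the CBF
    return u_nom - slack / lgh       # project onto the constraint boundary

x = 0.9
u_safe = safe_filter(x, u_nom=5.0)   # aggressive nominal input near the boundary
print(u_safe)
```

When the filter intervenes, it returns the input on the constraint boundary, so the CBF condition holds with equality; far from the boundary it leaves the learning-based policy untouched.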

    Model-based Reinforcement Learning of Nonlinear Dynamical Systems

    Model-based Reinforcement Learning (MBRL) techniques accelerate the learning task by employing a transition model to make predictions. In this dissertation, we present novel techniques for online learning of unknown dynamics by iteratively computing a feedback controller based on the most recent update of the model. Assuming a structured continuous-time model of the system in terms of a set of bases, we formulate an infinite horizon optimal control problem addressing a given control objective. The structure of the system along with a value function parameterized in the quadratic form provides flexibility in analytically calculating an update rule for the parameters. Hence, a matrix differential equation of the parameters is obtained, where the solution is used to characterize the optimal feedback control in terms of the bases, at any time step. Moreover, the quadratic form of the value function suggests a compact way of updating the parameters that considerably decreases the computational complexity. In the convergence analysis, we demonstrate asymptotic stability and optimality of the obtained learning algorithm around the equilibrium by revealing its connections with the analogous Linear Quadratic Regulator (LQR). Moreover, the results are extended to the trajectory tracking problem. Assuming a structured unknown nonlinear system augmented with the dynamics of a commander system, we obtain a control rule minimizing a given quadratic tracking objective function. Furthermore, in an alternative technique for learning, a piecewise nonlinear affine framework is developed for controlling nonlinear systems with unknown dynamics. Therefore, we extend the results to obtain a general piecewise nonlinear framework where each piece is responsible for locally learning and controlling over some partition of the domain. 
We then consider the Piecewise Affine (PWA) system with bounded uncertainty as a special case, for which we suggest an optimization-based verification technique. Given a discretization of the learned PWA system, we iteratively search for a common piecewise Lyapunov function in a set of positive definite functions, where non-monotonic convergence is allowed; this Lyapunov candidate is then verified for the uncertain system. To demonstrate the applicability of the approaches presented in this dissertation, simulation results on benchmark nonlinear systems, such as a quadrotor and a ground vehicle, are included. As a further detailed application, we investigate the Maximum Power Point Tracking (MPPT) problem of solar Photovoltaic (PV) systems: we first develop an analytical nonlinear optimal control approach that assumes a known model, and then apply the obtained nonlinear optimal controller together with the piecewise MBRL technique presented previously.
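For the linear special case, the matrix differential equation for the quadratic value-function parameters described above reduces to the differential Riccati equation, whose solution approaches the LQR value, consistent with the stated LQR connection. The sketch below Euler-integrates that equation forward from a zero initial condition for a hypothetical stable system and compares the result with the algebraic Riccati solution; the system and step size are illustrative assumptions, not the dissertation's.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linear system standing in for the structured-bases case.
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Integrate dP/dt = A'P + P A + Q - P B R^{-1} B' P forward from P(0) = 0.
# P(t) is the finite-horizon cost matrix and converges to the stabilizing
# ARE solution, so the parameter ODE recovers the LQR value function.
P = np.zeros((2, 2))
dt = 0.0005
for _ in range(40000):  # 20 s of simulated time
    P = P + dt * (A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T @ P))

P_are = solve_continuous_are(A, B, Q, R)
print(np.max(np.abs(P - P_are)))  # small residual from Euler discretization
```

At any intermediate time the current P defines the feedback u = -R^{-1} B' P x, which mirrors how the learning algorithm characterizes the optimal control from the latest parameter estimate.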