29,717 research outputs found

    Adaptive set-point regulation of discrete-time nonlinear systems

    In this paper, adaptive set-point regulation controllers for discrete-time nonlinear systems are constructed. The system to be controlled is assumed to have a parametric uncertainty, and an excitation signal is used to obtain the parameter estimate. The proposed controller belongs to the category of indirect adaptive controllers, and its construction is based on the policy of calculating the control input rather than obtaining a control law. The proposed method solves the adaptive set-point regulation problem under the (possibly minimal) assumption that the target state is reachable provided that the parameter is known. An additional feature of the proposed method is that Lyapunov-like functions are not used in the construction of the controllers. Comment: This work was supported by the Japan Society for the Promotion of Science under Grant-in-Aid for Scientific Research (C) 23560535. This manuscript is a former version of the manuscript the author submitted to the International Journal of Adaptive Control and Signal Processing.

    Data-based approximate policy iteration for nonlinear continuous-time optimal control design

    This paper addresses the model-free nonlinear optimal control problem with a generalized cost functional, and a data-based reinforcement learning technique is developed. It is known that the nonlinear optimal control problem relies on the solution of the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, most practical systems are too complicated for an accurate mathematical model to be established. To overcome these difficulties, we propose a data-based approximate policy iteration (API) method that uses real system data rather than a system model. First, a model-free policy iteration algorithm is derived for the constrained optimal control problem and its convergence is proved; the algorithm can learn the solution of the HJB equation and the optimal control policy without requiring any knowledge of the system's mathematical model. The implementation of the algorithm is based on an actor-critic structure, where actor and critic neural networks (NNs) are employed to approximate the control policy and cost function, respectively. To update the weights of the actor and critic NNs, a least-squares approach is developed based on the method of weighted residuals. The whole data-based API method consists of two parts: the first part is implemented online to collect real system information, and the second part conducts offline policy iteration to learn the solution of the HJB equation and the control policy. Then, the data-based API algorithm is simplified for solving the unconstrained optimal control problem of nonlinear and linear systems. Finally, we test the efficiency of the data-based API control design method on a simple nonlinear system, and further apply it to a rotational/translational actuator system. The simulation results demonstrate the effectiveness of the proposed method. Comment: 22 pages, 21 figures, submitted for Peer Review

    Full-Form Model-Free Adaptive Control for a Family of Multivariable Systems

    This correspondence proposes a kind of model-free adaptive control (MFAC) based on the full-form equivalent-dynamic-linearization model (EDLM) for multivariable nonlinear systems. Compared with the current MFAC: i) the proposed control law does not contain a denominator stemming from the norm of an inverse matrix, a term that inevitably misses the coupling relationships among the inputs and outputs (I/O) of the system; ii) the current restrictive assumption of a diagonally dominant matrix is relaxed, extending its applicability; iii) the MFAC based on the full-form EDLM is more general than the current MFAC based on the partial-form and compact-form EDLMs. Finally, the convergence of the tracking error and the BIBO stability of the controlled system are proved, which addresses one of the open questions in MFAC.

    Secondary Voltage Control of Microgrids Using Nonlinear Multiple Models Adaptive Control

    This paper proposes a novel model-free secondary voltage control (SVC) for microgrids using nonlinear multiple-model adaptive control. The proposed method comprises two components. First, a linear robust adaptive controller is designed to guarantee voltage stability in the bounded-input bounded-output (BIBO) sense, which is more consistent with the operating requirements of microgrids. Second, a nonlinear adaptive controller is developed to improve the voltage tracking performance with the help of artificial neural networks (ANNs). A switching mechanism is proposed to coordinate the two controllers, guaranteeing closed-loop stability while achieving accurate voltage tracking. Since our method leverages data-driven real-time identification, it relies only on the input and output data of microgrids without resorting to any prior information about the primary control and grid models, thus exhibiting good robustness, ease of deployment, and disturbance rejection.

    Continuous-Time Robust Dynamic Programming

    This paper presents a new theory, known as robust dynamic programming, for a class of continuous-time dynamical systems. Different from traditional dynamic programming (DP) methods, this new theory serves as a fundamental tool to analyze the robustness of DP algorithms and, in particular, to develop novel adaptive optimal control and reinforcement learning methods. In order to demonstrate the potential of this new framework, four illustrative applications in the fields of stochastic optimal control and adaptive DP are presented. Three numerical examples arising from both the finance and engineering industries are also given, along with several possible extensions of the proposed framework.

    Robust Policy Iteration for Continuous-time Linear Quadratic Regulation

    This paper studies the robustness of policy iteration in the context of the continuous-time infinite-horizon linear quadratic regulation (LQR) problem. It is shown that Kleinman's policy iteration algorithm is inherently robust to small disturbances and enjoys local input-to-state stability in the sense of Sontag. More precisely, whenever the disturbance-induced input term in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded and enter a small neighborhood of the optimal solution of the LQR problem. Based on this result, an off-policy data-driven policy iteration algorithm for the LQR problem is shown to be robust when the system dynamics are subject to small additive unknown bounded disturbances. The theoretical results are validated by a numerical example.
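    In the disturbance-free case, Kleinman's policy iteration alternates policy evaluation (a Lyapunov equation) with policy improvement. A minimal sketch follows; the matrices A, B, Q, R are a hypothetical example, not taken from the paper:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

    def kleinman_pi(A, B, Q, R, K0, iters=30):
        """Kleinman's policy iteration for the continuous-time LQR problem."""
        K = K0  # must be a stabilizing initial gain
        for _ in range(iters):
            Ak = A - B @ K
            # Policy evaluation: solve Ak^T P + P Ak = -(Q + K^T R K)
            P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
            # Policy improvement: K <- R^{-1} B^T P
            K = np.linalg.solve(R, B.T @ P)
        return P, K

    A = np.array([[0.0, 1.0], [-1.0, -1.0]])  # Hurwitz, so K0 = 0 is stabilizing
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.eye(1)

    P, K = kleinman_pi(A, B, Q, R, K0=np.zeros((1, 2)))
    P_are = solve_continuous_are(A, B, Q, R)
    print(np.allclose(P, P_are))  # → True: iteration converges to the ARE solution
    ```

    The iteration is quadratically convergent, so a handful of Lyapunov solves already matches the algebraic Riccati equation solution to machine precision.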

    A Separation-based Approach to Data-based Control for Large-Scale Partially Observed Systems

    This paper studies the partially observed stochastic optimal control problem for systems with state dynamics governed by partial differential equations (PDEs), which leads to an extremely large-scale problem. First, an open-loop deterministic trajectory optimization problem is solved using a black-box simulation model of the dynamical system. Next, a Linear Quadratic Gaussian (LQG) controller is designed for the nominal trajectory-dependent linearized system, which is identified using input-output experimental data consisting of the impulse responses of the optimized nominal system. A computational nonlinear heat example is used to illustrate the performance of the proposed approach. Comment: arXiv admin note: text overlap with arXiv:1705.09761, arXiv:1707.0309

    Verification for Machine Learning, Autonomy, and Neural Networks Survey

    This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and their subcomponents. Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML), through approaches such as deep neural networks (DNNs), embedded in so-called learning-enabled components (LECs) that accomplish tasks from classification to control. Recently, the formal methods and formal verification community has developed methods to characterize the behaviors of these LECs, with the eventual goal of formally verifying specifications for LECs, and this article presents a survey of many of these recent approaches.

    Value and Policy Iteration in Optimal Control and Adaptive Dynamic Programming

    In this paper, we consider discrete-time infinite-horizon problems of optimal control to a terminal set of states. These are the problems that are often taken as the starting point for adaptive dynamic programming. Under very general assumptions, we establish the uniqueness of the solution of Bellman's equation, and we provide convergence results for value and policy iteration.
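    For this class of problems, value iteration repeatedly applies the Bellman operator until the cost-to-go stabilizes. A toy sketch on a hypothetical five-state chain with an absorbing terminal state (not an example from the paper) illustrates the idea:

    ```python
    import numpy as np

    # Deterministic shortest-path problem on a 5-state chain;
    # state 4 is the absorbing terminal state with zero stage cost.
    n = 5
    cost = np.array([1.0, 1.0, 1.0, 1.0, 0.0])

    def step(s, a):
        """Move left (a=-1) or right (a=+1), clipped to the state space."""
        return min(max(s + a, 0), n - 1)

    V = np.zeros(n)
    for _ in range(50):
        V_new = V.copy()
        for s in range(n - 1):  # the terminal state's value stays 0
            V_new[s] = min(cost[s] + V[step(s, a)] for a in (-1, 1))
        if np.allclose(V_new, V):  # Bellman equation satisfied: fixed point reached
            break
        V = V_new

    print(V)  # → [4. 3. 2. 1. 0.]  (steps remaining to reach the terminal state)
    ```

    The fixed point reached here is the unique solution of Bellman's equation for this problem, mirroring the uniqueness result the abstract describes.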

    A Decoupled Data Based Approach to Stochastic Optimal Control Problems

    This paper studies the stochastic optimal control problem for systems with unknown dynamics. A novel decoupled data-based control (D2C) approach is proposed, which solves the problem in a decoupled "open loop-closed loop" fashion that is shown to be near-optimal. First, an open-loop deterministic trajectory optimization problem is solved with a standard nonlinear programming (NLP) solver using a black-box simulation model of the dynamical system. Then a Linear Quadratic Regulator (LQR) controller is designed for the nominal trajectory-dependent linearized system, which is learned using input-output experimental data. Computational examples on three benchmark problems are used to illustrate the performance of the proposed approach. Comment: arXiv admin note: substantial text overlap with arXiv:1711.01167, arXiv:1705.0976
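    The closed-loop step of such a pipeline reduces to a standard discrete-time LQR design once a linearized model has been identified. A minimal sketch, using a hypothetical double-integrator model standing in for the identified dynamics (the paper's own models are not reproduced here):

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Hypothetical linearized model (A, B), standing in for a system
    # identified from input-output data around the nominal trajectory.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time double integrator
    B = np.array([[0.005], [0.1]])
    Q = np.eye(2)                             # state weighting
    R = np.array([[0.1]])                     # input weighting

    # Solve the discrete algebraic Riccati equation and form the LQR gain
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback u = -K x

    # The closed loop A - B K should be Schur stable (eigenvalues inside unit circle)
    rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
    print(rho < 1.0)  # → True
    ```

    In the D2C setting this gain would be wrapped around the optimized nominal trajectory, with (A, B) replaced by the time-varying linearization learned from experiments.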