
    A brief review of neural networks based learning and control and their applications for robots

    As an imitation of biological nervous systems, neural networks (NNs), which are characterized by a powerful learning ability, have been employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article aims to provide a brief review of state-of-the-art NN techniques for complex nonlinear systems. Recent progress of NNs in both theoretical developments and practical applications is investigated and surveyed. Specifically, NN-based robot learning and control applications are further reviewed, including NN-based robot manipulator control, NN-based human-robot interaction, and NN-based behavior recognition and generation.
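    As an illustration of the learning ability the review surveys, the sketch below (not from the article; the nonlinearity, network size, and gains are all invented for illustration) trains the output weights of a single-hidden-layer radial-basis-function network online to approximate an unknown nonlinear function, which is the core operation behind NN-based identification and control:

```python
import numpy as np

rng = np.random.default_rng(0)

# RBF network: f_hat(x) = W^T phi(x), with fixed centers and widths
centers = np.linspace(-np.pi, np.pi, 25)
width = 0.5

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

f = np.sin                           # "unknown" nonlinearity to be learned
W = np.zeros(centers.size)           # output weights (the adapted part)
lr = 0.1                             # learning rate

for x in rng.uniform(-np.pi, np.pi, 2000):
    e = W @ phi(x) - f(x)            # instantaneous approximation error
    W -= lr * e * phi(x)             # gradient-descent weight update

grid = np.linspace(-np.pi, np.pi, 200)
err = max(abs(W @ phi(x) - f(x)) for x in grid)
print(f"max approximation error: {err:.3f}")
```

    In NN control schemes, essentially the same weight-update law runs in closed loop alongside a stabilizing feedback term, with the tracking error playing the role of the approximation error here.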

    Optimal tracking control for uncertain nonlinear systems with prescribed performance via critic-only ADP

    This paper addresses the tracking control problem for a class of nonlinear systems described by Euler-Lagrange equations with uncertain system parameters. The proposed control scheme guarantees prescribed performance in two respects: 1) a special parameter estimator with prescribed performance properties is embedded in the control scheme; the estimator not only ensures exponential convergence of the estimation errors under relaxed excitation conditions but also restricts all estimates to predetermined bounds throughout the estimation process; 2) the proposed controller strictly guarantees user-defined performance specifications on the tracking errors, including convergence rate, maximum overshoot, and residual set. More importantly, it can optimize the trade-off between performance and control cost. A state transformation method is employed to convert the constrained optimal tracking control problem into an unconstrained stationary optimal control problem. A critic-only adaptive dynamic programming algorithm is then designed to approximate the solution of the Hamilton-Jacobi-Bellman equation and the corresponding optimal control policy. Uniformly ultimately bounded stability is guaranteed via Lyapunov-based stability analysis. Finally, numerical simulation results demonstrate the effectiveness of the proposed control scheme.
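    The critic-only ADP loop approximates the Hamilton-Jacobi-Bellman solution by alternating policy evaluation and policy improvement. A model-based cousin of this loop, Kleinman's policy iteration for the linear-quadratic special case (where the HJB equation reduces to a Riccati equation and policy evaluation to a Lyapunov equation), can be sketched in a few lines; the plant matrices below are invented and are not the paper's Euler-Lagrange dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 2.0]])   # an unstable toy plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state cost
R = np.array([[1.0]])                      # control cost

K = np.array([[0.0, 5.0]])                 # initial stabilizing gain
for _ in range(20):
    Ac = A - B @ K
    # policy evaluation ("critic" step): Ac^T P + P Ac = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # policy improvement ("actor" step)
    K = np.linalg.solve(R, B.T @ P)

P_star = solve_continuous_are(A, B, Q, R)  # Riccati (linear HJB) solution
print(np.allclose(P, P_star, atol=1e-6))
```

    The iterates converge to the stabilizing Riccati solution; the paper's critic NN performs the evaluation step approximately, from data, for the nonlinear transformed system.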

    Connections Between Adaptive Control and Optimization in Machine Learning

    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output-error formulations, similarities in update-law modifications are examined. Concepts in stability, performance, and learning common to both fields are then discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis are identified. In particular, a specific problem related to higher-order learning is solved through insights obtained from these intersections.
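    The output-error connection can be made concrete: the classical gradient adaptive law, theta_dot = -gamma * e * phi, is step for step a stochastic-gradient-descent update on the instantaneous squared output error (1/2)e^2. A toy parameter-identification example (signals, gains, and true parameters invented for illustration):

```python
import numpy as np

theta_true = np.array([2.0, -1.0])   # unknown plant parameters
theta = np.zeros(2)                  # adaptive estimate
gamma, dt = 2.0, 0.01                # adaptation gain, step size

for k in range(20000):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(2 * t)])  # persistently exciting regressor
    e = theta @ phi - theta_true @ phi          # output error
    theta -= gamma * e * phi * dt               # gradient law == SGD step on e^2/2

print(theta)
```

    Under persistent excitation the estimate converges exponentially to the true parameters, which is the adaptive-control counterpart of SGD convergence under suitable step sizes and data coverage.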

    Reinforcement Learning, Intelligent Control and their Applications in Connected and Autonomous Vehicles

    Reinforcement learning (RL) has attracted considerable attention over the past few years. Recently, we developed a data-driven algorithm to solve predictive cruise control (PCC) and game and output regulation problems. This work integrates our recent contributions to the application of RL in game theory, output regulation problems, robust control, small-gain theory, and PCC. The algorithm was developed for H∞ adaptive optimal output regulation of uncertain linear systems and uncertain partially linear systems, both to reject disturbances and to force the output of the systems to asymptotically track a reference. In the PCC problem, we determined the reference velocity for each autonomous vehicle in the platoon using the traffic information broadcast from the lights in order to reduce the vehicles' trip time. We then employed the algorithm to design an approximately optimal controller for the vehicles. This controller regulates the headway, velocity, and acceleration of each vehicle to the desired values. Simulation results validate the effectiveness of the algorithms.
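    As a stand-in for the data-driven algorithm (which needs no model), the sketch below solves the model-based version of the platoon-following subproblem: LQR regulation of headway and velocity errors to zero. All matrices, weights, and initial errors are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.1
# states: [headway error, velocity error]; input: acceleration command
A = np.array([[1.0, -dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])             # penalize headway error most
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain

x = np.array([5.0, -2.0])            # 5 m headway error, 2 m/s too slow
for _ in range(200):                 # 20 s of closed-loop driving
    x = (A - B @ K) @ x
print(np.linalg.norm(x))
```

    The RL algorithm in the paper recovers essentially this gain online from measured trajectories, without knowing A and B, and extends the idea to H∞ output regulation.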

    Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning

    A conventional closed-form solution to the optimal control problem via optimal control theory is available only under the assumption that the system dynamics are known and described as differential equations. Without such models, reinforcement learning (RL) has been successfully applied as a candidate technique to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, existing RL techniques in the literature assume either the use of a predetermined feedforward input for the tracking control, restrictive assumptions on the reference model dynamics, or discounted tracking costs. Furthermore, with discounted tracking costs, zero steady-state error cannot be guaranteed by the existing RL methods. This article therefore presents an optimal online RL tracking control framework for discrete-time (DT) systems, which does not impose the restrictive assumptions of the existing methods and still guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs for use in the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker under the augmented formulation with integral control is also quadratic. This enables the development of Bellman equations, which use only system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are then proposed, based on value function approximation and Q-learning respectively, along with bounds on excitation for the convergence of the parameter estimates. Simulation case studies show the effectiveness of the proposed approach.
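    The key construction, augmenting the plant with the integral of the tracking error so that the optimal regulator of the augmented system achieves zero steady-state error, can be sketched in its model-based form (the article's RL strategies obtain the same gain from measurements; the plant matrices here are invented):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# plant: x+ = A x + B u,  y = C x
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])

# augmentation: integral state z+ = z + (r - y)
Aa = np.block([[A, np.zeros((2, 1))], [-C, np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, 1))])
Qa = np.diag([1.0, 1.0, 10.0])       # heavy weight on the integral state
R = np.array([[1.0]])

P = solve_discrete_are(Aa, Ba, Qa, R)
K = np.linalg.solve(R + Ba.T @ P @ Ba, Ba.T @ P @ Aa)

r = 1.0                               # step reference
x, z = np.zeros(2), np.zeros(1)
for _ in range(300):
    u = -K @ np.concatenate([x, z])   # optimal feedback on augmented state
    z = z + (r - C @ x)               # integral action drives the error to zero
    x = A @ x + B @ u

print((C @ x).item())                 # output settles at the reference
```

    Because the DT algebraic Riccati equation is solved for the augmented system, no discount factor is needed and the integrator forces y = r in steady state, which is exactly the property the discounted formulations lose.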

    Event-triggered robust control for multi-player nonzero-sum games with input constraints and mismatched uncertainties

    In this article, an event-triggered robust control (ETRC) method is investigated for multi-player nonzero-sum games of continuous-time input-constrained nonlinear systems with mismatched uncertainties. By constructing an auxiliary system and designing an appropriate value function, the robust control problem of input-constrained nonlinear systems is transformed into an optimal regulation problem. A critic neural network (NN) is then adopted to approximate the value function of each player in order to solve the event-triggered coupled Hamilton-Jacobi equation and obtain the control laws. Based on a designed event-triggering condition, the control laws are updated only when events occur. Thus, both the computational burden and the communication bandwidth are reduced. We prove, via Lyapunov's direct method, that the weight approximation errors of the critic NNs and the states of the closed-loop uncertain multi-player system are all uniformly ultimately bounded. Finally, two examples are provided to demonstrate the effectiveness of the developed ETRC method.
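    The event-triggering idea, refreshing the control law only when the gap between the current state and the last transmitted one crosses a state-dependent threshold, can be shown on a linear toy system (the matrices, the hand-picked stabilizing gain, and the threshold are all invented; the article's setting is nonlinear with critic NNs):

```python
import numpy as np

A = np.array([[0.95, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 2.0]])           # hand-picked stabilizing feedback gain
sigma = 0.1                          # relative triggering threshold

x = np.array([1.0, -1.0])
x_held = x.copy()                    # last state transmitted to the controller
events = 0
for _ in range(400):
    if np.linalg.norm(x - x_held) > sigma * np.linalg.norm(x):
        x_held = x.copy()            # event: transmit state, refresh control
        events += 1
    u = -K @ x_held                  # control uses the held (stale) state
    x = A @ x + B @ u

print(events, np.linalg.norm(x))
```

    The state still converges, while the number of transmissions (events) is well below the number of time steps, which is precisely the saving in computation and communication bandwidth the article targets.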

    Analysis, filtering, and control for Takagi-Sugeno fuzzy models in networked systems

    Copyright © 2015 Sunjie Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

    Fuzzy logic theory has proven effective in dealing with various nonlinear systems and has had great success in industrial applications. Among the different kinds of models for fuzzy systems, the so-called Takagi-Sugeno (T-S) fuzzy model has been quite popular due to its convenient and simple dynamic structure as well as its capability of approximating any smooth nonlinear function to any specified accuracy within any compact set. For such a model, performance analysis and the design of controllers and filters play important roles in the research of fuzzy systems. In this paper, we survey some recent advances on T-S fuzzy control and filtering problems with various network-induced phenomena. The network-induced phenomena under consideration mainly include communication delays, packet dropouts, signal quantization, and randomly occurring uncertainties (ROUs). The developments on T-S fuzzy control and filtering issues under such network-induced phenomena are reviewed in detail. In addition, some of the latest results on this topic are highlighted. Finally, conclusions are drawn and some possible future research directions are pointed out.

    This work was supported in part by the National Natural Science Foundation of China under Grants 61134009, 61329301, 11301118, and 61174136; the Natural Science Foundation of Jiangsu Province of China under Grant BK20130017; the Fundamental Research Funds for the Central Universities of China under Grant CUSF-DH-D-2013061; the Royal Society of the U.K.; and the Alexander von Humboldt Foundation of Germany.
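    The approximation property cited in the survey is easy to see with the standard sector-nonlinearity construction: a scalar nonlinear system dx/dt = -sin(x) is represented exactly on [-pi, pi] by two fuzzy rules whose local linear models are blended by normalized memberships (this example is a generic illustration, not taken from the paper):

```python
import numpy as np

def z(x):
    # premise variable z = sin(x)/x, which lies in [0, 1] on [-pi, pi]
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

A1, A2 = -1.0, 0.0        # local models: rule 1 (z = 1), rule 2 (z = 0)

def ts_rhs(x):
    h1 = z(x)             # membership of rule 1
    h2 = 1.0 - h1         # membership of rule 2 (memberships sum to 1)
    return h1 * A1 * x + h2 * A2 * x

xs = np.linspace(-np.pi, np.pi, 101)
err = np.max(np.abs(ts_rhs(xs) - (-np.sin(xs))))
print(err)
```

    Rule 1 (active near the origin) contributes dx/dt = -x and rule 2 (active near ±pi) contributes dx/dt = 0; their convex blend reproduces -sin(x) exactly, which is why T-S models admit exact representations of smooth nonlinearities on compact sets and why linear-matrix-inequality tools apply rule by rule.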