Optimal Tracking Of Nonlinear Discrete-time Systems Using Zero-Sum Game Formulation And Hybrid Learning
This paper presents a novel hybrid learning-based optimal tracking method to address zero-sum game problems for partially uncertain nonlinear discrete-time systems. An augmented system and its associated discounted cost function are defined to address optimal tracking. Three multi-layer neural networks (NNs) are utilized to approximate the optimal control input, the worst-case disturbance input, and the value function. The critic weights are tuned using the hybrid technique: they are updated once at each sampling instant and iteratively a finite number of times between sampling instants. The proposed hybrid technique accelerates the convergence of the approximated value function to its actual value, so the optimal policy is attained more quickly. A two-layer NN-based actor generates the optimal control input, and its weights are adjusted based on control input errors. Moreover, the concurrent learning method is utilized to relax the requirement of persistent excitation. Further, the Lyapunov method is used to establish the stability of the closed-loop system. Finally, the proposed method is evaluated on a two-link robot arm and demonstrates promising results.
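For context, a generic discrete-time zero-sum game tracking formulation (the notation below is illustrative and not taken from the paper) pairs a discounted cost on the augmented tracking error e_k, driven by the control u and the disturbance d, with a min-max Bellman equation:

J(e_k) = \sum_{i=k}^{\infty} \gamma^{\,i-k}\big( e_i^{\top} Q e_i + u_i^{\top} R u_i - \beta^{2} d_i^{\top} d_i \big), \qquad 0 < \gamma \le 1,

V^{*}(e_k) = \min_{u_k}\max_{d_k}\big\{ e_k^{\top} Q e_k + u_k^{\top} R u_k - \beta^{2} d_k^{\top} d_k + \gamma V^{*}(e_{k+1}) \big\},

where \beta is the disturbance attenuation level; the critic NN approximates V^{*}, while the actor and disturbance NNs approximate the minimizing and maximizing policies, respectively.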
Event-triggered near optimal adaptive control of interconnected systems
Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has led researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner.
First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses introduced by the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems with state and output feedback. To relax the need for full state vector measurements, distributed observers are introduced.
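As a point of reference (written for a single generic affine nonlinear system \dot{x} = f(x) + g(x)u with cost \int_0^{\infty}\big(Q(x) + u^{\top} R u\big)\,dt, rather than the interconnected dynamics treated in the dissertation), the HJB equation that such NN-based RL schemes approximate is

0 = Q(x) + \nabla V^{*}(x)^{\top} f(x) - \tfrac{1}{4}\, \nabla V^{*}(x)^{\top} g(x) R^{-1} g(x)^{\top} \nabla V^{*}(x),

with the optimal control u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V^{*}(x); a critic NN parameterizes V^{*}, and its weights are tuned online from the HJB residual.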
Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Next, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems using a zero-sum game approach to simultaneously optimize both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems --Abstract, page iv
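A commonly used static event-trigger test in this literature (shown here only as an illustration; the dissertation designs the trigger jointly with the control policy via the zero-sum formulation above) transmits a new state sample only when the event-sampling error exceeds a state-dependent threshold,

\| e_{ET}(t) \| = \| x(t) - \hat{x}(t) \| > \sigma \| x(t) \|, \qquad 0 < \sigma < 1,

where \hat{x}(t) holds the most recently transmitted state between sampling instants.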
Game-Theoretic Safety Assurance for Human-Centered Robotic Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must have the ability to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads makes it unviable to rely on common design-time assumptions, since these may be violated once the system is deployed. Instead, the next generation of robotic technologies will need to reason about safety online, constructing high-confidence assurances informed by ongoing observations of the environment and other agents, in spite of models of them being necessarily fallible.

This dissertation aims to lay down the necessary foundations to enable autonomous systems to ensure their own safety in complex, changing, and uncertain environments by explicitly reasoning about the gap between their models and the real world. It first introduces a suite of novel robust optimal control formulations and algorithmic tools that permit tractable safety analysis in time-varying, multi-agent systems, as well as safe real-time robotic navigation in partially unknown environments; these approaches are demonstrated on large-scale unmanned air traffic simulation and physical quadrotor platforms. After this, it draws on Bayesian machine learning methods to translate model-based guarantees into high-confidence assurances, monitoring the reliability of predictive models in light of changing evidence about the physical system and surrounding agents. This principle is first applied to a general safety framework allowing the use of learning-based control (e.g. reinforcement learning) for safety-critical robotic systems such as drones, and then combined with insights from cognitive science and dynamic game theory to enable safe human-centered navigation and interaction; these techniques are showcased on physical quadrotors, flying in unmodeled wind and among human pedestrians, and on simulated highway driving. The dissertation ends with a discussion of the challenges and opportunities ahead, including the bridging of safety analysis and reinforcement learning and the need to "close the loop" around learning and adaptation in order to deploy increasingly advanced autonomous systems with confidence.
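One widely used robust formulation for such online safety analysis (stated generically here; the dissertation builds on and extends tools of this kind) is the Hamilton-Jacobi safety value

V(x,t) = \max_{u(\cdot)} \min_{d(\cdot)} \; \min_{\tau \in [t,T]} \ell\big(\xi_{x,t}^{u,d}(\tau)\big),

where \ell is positive exactly on the constraint-satisfying states, \xi_{x,t}^{u,d} is the resulting trajectory, and the disturbance is typically modeled through non-anticipative strategies; states with V(x,t) > 0 admit a control strategy that maintains safety under worst-case disturbances, and the associated optimal safe controller can act as a least-restrictive supervisory filter over a learning-based policy.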
Optimal adaptive control of time-delay dynamical systems with known and uncertain dynamics
Delays are found in many industrial pneumatic and hydraulic systems, and as a result, the performance of the overall closed-loop system deteriorates unless the delays are explicitly accounted for. It is also possible that the dynamics of such systems are uncertain. At the same time, optimal control of time-delay systems in the presence of known and uncertain dynamics by using state and output feedback is of paramount importance. Therefore, in this research, a suite of novel optimal adaptive control (OAC) techniques is developed for linear and nonlinear continuous time-delay systems in the presence of uncertain system dynamics using state and/or output feedback.
First, the optimal regulation of linear continuous-time systems with state and input delays is addressed using state and output feedback with a quadratic cost function over an infinite horizon. Next, the optimal adaptive regulation is extended to uncertain linear continuous-time systems under the mild assumption that bounds on the system matrices are known. Subsequently, the event-triggered optimal adaptive regulation of partially unknown linear continuous-time systems with state delay is addressed by using integral reinforcement learning (IRL). It is demonstrated that the optimal control policy renders the closed-loop system asymptotically stable provided the linear time-delayed system is controllable and observable. The proposed event-triggered approach relaxes the need for continuous availability of the state vector and is proven to be Zeno-free.
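For reference, the integral reinforcement learning (IRL) Bellman equation underlying such policy evaluation, written here for a generic delay-free linear system with quadratic cost (the dissertation's version additionally accounts for the delayed state and input), is

V\big(x(t)\big) = \int_{t}^{t+T}\big( x^{\top} Q x + u^{\top} R u \big)\, d\tau + V\big(x(t+T)\big),

which lets the value function be learned over reinforcement intervals of length T without knowledge of the system drift dynamics.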
Finally, OAC of uncertain nonlinear time-delay systems with input and state delays is investigated using IRL-based neural network control. An identifier is proposed for nonlinear time-delay systems to approximate the system dynamics and relax the need for the control coefficient matrix in generating the control policy. Lyapunov analysis is utilized to design the optimal adaptive controller, derive the parameter/weight tuning laws, and verify stability of the closed-loop system --Abstract, page iv
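As an illustration of the identifier structure (a generic form, not the specific identifier derived in the dissertation), a NN identifier for a system with state delay \tau may take the shape

\dot{\hat{x}}(t) = \hat{W}^{\top}\sigma\big(\hat{x}(t), \hat{x}(t-\tau), u(t)\big) + K\big(x(t) - \hat{x}(t)\big),

where \hat{W} is tuned by a Lyapunov-based update law driven by the identification error x - \hat{x}, so that the learned dynamics can stand in for the unknown control coefficient matrix when forming the control policy.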
- …