    Data-Driven Integral Reinforcement Learning for Continuous-Time Non-Zero-Sum Games

    This paper develops an integral value iteration (VI) method to efficiently find, online, the Nash equilibrium solution of two-player non-zero-sum (NZS) differential games for linear systems with partially unknown dynamics. To guarantee closed-loop stability at the Nash equilibrium, an explicit upper bound on the discount factor is given. To show the efficacy of the presented online model-free solution, the integral VI method is compared with the model-based offline policy iteration method. Moreover, a detailed theoretical analysis of the integral VI algorithm is provided in three respects: positive definiteness of the updated cost functions, stability of the closed-loop system, and conditions that guarantee monotone convergence. Finally, simulation results demonstrate the efficacy of the presented algorithms.
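
    The abstract contrasts its online, model-free integral VI scheme with a model-based offline policy iteration baseline. Below is a minimal sketch of such a baseline for a discounted two-player linear-quadratic NZS game; all matrices, the discount factor, and the iteration count are illustrative assumptions, not values from the paper.

```python
# Model-based offline policy iteration for a discounted two-player
# non-zero-sum LQ game (illustrative sketch; all data are assumed).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A  = np.array([[0.0, 1.0], [-1.0, -2.0]])   # assumed plant
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [0.5]])
Q1, Q2 = np.eye(2), 2.0 * np.eye(2)
R11 = R22 = np.eye(1)
R12 = R21 = 0.5 * np.eye(1)
gamma = 0.1                                  # discount factor

# Initial gains must make the closed loop stable; convergence of PI
# for NZS games is not guaranteed in general, so this is a heuristic.
K1 = np.zeros((1, 2))
K2 = np.zeros((1, 2))
for _ in range(50):
    # Discounting e^{-gamma*t} is equivalent to shifting A by -gamma/2*I.
    Ac = A - B1 @ K1 - B2 @ K2 - 0.5 * gamma * np.eye(2)
    # Policy evaluation: Ac' Pi + Pi Ac + Qi + Ki' Rii Ki + Kj' Rij Kj = 0.
    Qbar1 = Q1 + K1.T @ R11 @ K1 + K2.T @ R12 @ K2
    Qbar2 = Q2 + K2.T @ R22 @ K2 + K1.T @ R21 @ K1
    P1 = solve_continuous_lyapunov(Ac.T, -Qbar1)
    P2 = solve_continuous_lyapunov(Ac.T, -Qbar2)
    # Policy improvement for each player.
    K1 = np.linalg.solve(R11, B1.T @ P1)
    K2 = np.linalg.solve(R22, B2.T @ P2)

print("approximate Nash gains:", K1, K2)
```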

    Neural network optimal control for nonlinear system based on zero-sum differential game

    In this paper, for a class of complex nonlinear control problems, the constrained optimal control problem is solved for nonlinear systems with unknown system functions and unknown time-varying disturbances, based on two-player zero-sum game theory combined with the idea of approximate dynamic programming (ADP). To obtain an approximate optimal solution of the zero-sum game, multilayer neural networks are used to fit the critic (evaluation) network, the actor (execution) network, and the disturbance network of the ADP scheme, respectively. Lyapunov stability theory is used to prove uniform convergence, and the system control output converges to a neighborhood of the target reference value. Finally, a simulation example verifies the effectiveness of the algorithm.
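
    For context, the zero-sum game underlying such an ADP design is characterized by a Hamilton-Jacobi-Isaacs (HJI) equation. A generic form, assuming dynamics ẋ = f(x) + g(x)u + k(x)d and a quadratic integrand with attenuation level γ (notation assumed here, not taken from the paper), is:

```latex
0 = \nabla V^{\top}\!\big(f(x) + g(x)u^{*} + k(x)d^{*}\big)
    + x^{\top}Q\,x + {u^{*}}^{\top}R\,u^{*} - \gamma^{2}\,{d^{*}}^{\top}d^{*},
\qquad
u^{*} = -\tfrac{1}{2}R^{-1}g^{\top}(x)\nabla V,
\qquad
d^{*} = \tfrac{1}{2\gamma^{2}}\,k^{\top}(x)\nabla V .
```

    The three networks in the abstract fit V, the control u*, and the disturbance d*, respectively.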

    Cooperative Strategies for Management of Power Quality Problems in Voltage-Source Converter-based Microgrids

    The development of cooperative control strategies for microgrids has become an area of increasing research interest in recent years, often driven by advances in other areas of control theory, such as multi-agent systems, and enabled by emerging wireless communications technology, machine learning techniques, and power electronics. While some possible applications of cooperative control theory to microgrids have been described in the research literature, a comprehensive survey of this approach, its limitations, and its wide-ranging potential applications has not yet been provided. An important line of microgrid research is therefore the development of intelligent cooperative operating strategies, within and between microgrids, that implement and allocate tasks at the local level and do not rely on centralized command-and-control structures. Multi-agent techniques are one focus of this research but have not yet been applied to the full range of power quality problems in microgrids; the management of harmonics, unbalance, flicker, and black-start capability are some examples of applications yet to be fully exploited. During islanded operation, the normal buffer against disturbances and power imbalances provided by the main grid coupling is removed; this, together with the reduced inertia of the microgrid (MG), makes power quality (PQ) management a critical control function.

    This research investigates new cooperative control techniques for solving power quality problems in voltage source converter (VSC)-based AC microgrids. A set of specific power quality problems was selected as the application focus, based on a survey of relevant published literature, international standards, and electricity utility regulations. The control problems addressed are voltage regulation, unbalanced load sharing, and flicker mitigation; the management of harmonics, a more challenging issue, is excluded and left as the focus of future research. The thesis introduces novel approaches based on multi-agent consensus problems and differential games. Rather than using model-based engineering design to optimize controller parameters, the thesis describes a novel technique for controller synthesis using off-policy reinforcement learning. The thesis also addresses communication and control system co-design: the stability of secondary voltage control under communication time delays is analyzed, and a performance-oriented approach to rate allocation is described using a novel solution method based on convex optimization.
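
    One building block behind distributed secondary voltage control of this kind is a pinned consensus protocol, in which each converter adjusts its voltage set-point from neighbor measurements. A minimal sketch follows; the communication graph, gains, and initial voltages are illustrative assumptions, not the thesis design.

```python
# Distributed secondary voltage restoration by pinned consensus (sketch).
# Each VSC agent i drifts toward its neighbors and, if pinned, the reference:
#   e_i = sum_j a_ij (v_j - v_i) + g_i (v_ref - v_i)
import numpy as np

A_adj = np.array([[0, 1, 0, 1],          # assumed communication graph
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
g = np.array([1.0, 0.0, 0.0, 0.0])       # only agent 0 sees the reference
v_ref, c, dt = 1.0, 2.0, 0.01            # p.u. reference, coupling gain, step

v = np.array([0.95, 0.97, 0.93, 0.96])   # initial per-unit bus voltages
for _ in range(2000):
    e = A_adj @ v - A_adj.sum(axis=1) * v + g * (v_ref - v)
    v = v + dt * c * e                   # forward-Euler consensus update

print(v)   # every entry approaches v_ref = 1.0
```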

    Multi-H∞ controls for unknown input-interference nonlinear system with reinforcement learning

    This article studies multi-H∞ control for input-interference nonlinear systems via an adaptive dynamic programming (ADP) method, which allows each of the multiple inputs an individual, selfish component of its strategy to resist weighted interference. Along this line, the ADP scheme is used to learn the Nash-optimal solutions of the input-interference nonlinear system such that the multiple H∞ performance indices reach the defined Nash equilibrium. First, the input-interference nonlinear system is given and the Nash equilibrium is defined. An adaptive neural network (NN) observer is introduced to identify the input-interference nonlinear dynamics. Then, critic NNs are used to learn the multiple H∞ performance indices. A novel adaptive law is designed to update the critic NN weights by minimizing the residual of the Hamilton-Jacobi-Isaacs (HJI) equation; this allows the multi-H∞ controls to be calculated directly and effectively from input-output data, so that an actor structure is avoided. Moreover, the stability of the control system and the convergence of the updated parameters are proved. Finally, two numerical examples are simulated to verify the proposed ADP scheme for the input-interference nonlinear system.
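
    A common way to realize such a critic adaptation law is normalized gradient descent on the HJI (Bellman) residual. The sketch below shows a generic single-critic update of this type; the feature map, signals, and gain are assumed placeholders, not the article's exact law.

```python
# Generic critic-weight update by normalized gradient descent on the
# HJI residual (illustrative; features, signals, and gain are assumed).
import numpy as np

def critic_step(W, x_dot, r, phi_grad, alpha=1.0):
    """One update of critic weights W for V(x) ~ W' phi(x).

    x_dot    : measured state derivative (or a finite difference)
    r        : instantaneous utility r(x, u, d)
    phi_grad : Jacobian of the feature vector, shape (n_features, n_states)
    """
    sigma = phi_grad @ x_dot                 # regressor: d/dt of phi(x)
    delta = float(W @ sigma + r)             # HJI/Bellman residual
    norm = (1.0 + sigma @ sigma) ** 2        # normalization term
    return W - alpha * sigma * delta / norm  # gradient step on delta^2 / 2
```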

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research on several fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Across its 24 chapters, it covers a broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: game theory, multi-agent theory, robotics, networking technologies, vehicular navigation, medicine, and industrial logistics.

    Event-triggered robust control for multi-player nonzero-sum games with input constraints and mismatched uncertainties

    In this article, an event-triggered robust control (ETRC) method is investigated for multi-player nonzero-sum games of continuous-time, input-constrained nonlinear systems with mismatched uncertainties. By constructing an auxiliary system and designing an appropriate value function, the robust control problem for input-constrained nonlinear systems is transformed into an optimal regulation problem. Then, a critic neural network (NN) is adopted to approximate the value function of each player, solving the event-triggered coupled Hamilton-Jacobi equation and yielding the control laws. Based on a designed event-triggering condition, the control laws are updated only when events occur, reducing both the computational burden and the communication bandwidth. We prove, via Lyapunov's direct method, that the weight approximation errors of the critic NNs and the states of the closed-loop uncertain multi-player system are all uniformly ultimately bounded. Finally, two examples are provided to demonstrate the effectiveness of the developed ETRC method.
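
    The mechanism of such event-triggered schemes can be illustrated compactly: the control input is held constant between events and recomputed only when the gap between the current state and the last sampled state violates a threshold. The sketch below uses a simple linear plant and a relative-error trigger; all values are assumed for illustration, not taken from the article.

```python
# Event-triggered state feedback: recompute u only when the sampling error
# ||x - x_hat|| exceeds a state-dependent threshold (illustrative sketch).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed plant
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.0]])                 # assumed stabilizing gain
beta, dt = 0.1, 0.001                      # trigger parameter, Euler step

x = np.array([1.0, -0.5])
x_hat = x.copy()                           # state at the last event
u = -(K @ x_hat)
events = 0
for _ in range(10_000):
    if np.linalg.norm(x - x_hat) > beta * np.linalg.norm(x):  # trigger rule
        x_hat = x.copy()
        u = -(K @ x_hat)                   # control recomputed only at events
        events += 1
    x = x + dt * (A @ x + B @ u)           # plant evolves with held input

print("events:", events, "final state:", x)
```

    The control is transmitted only at the events counted above, rather than at every integration step, which is the source of the computational and bandwidth savings the abstract describes.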

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H∞ controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H∞ optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcasts it to the agent's neighbors, who weight the data they receive from it accordingly during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
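
    The trust-confidence mechanism can be sketched generically: each agent attaches a confidence value to the data it broadcasts, and receivers weight neighbor contributions by that confidence, dropping any neighbor below a trust floor. The snippet below is a schematic of this weighting with assumed signals and thresholds; it is not the paper's exact protocol.

```python
# Confidence-weighted consensus step (schematic; thresholds are assumed).
import numpy as np

def weighted_update(x_i, neighbor_states, confidences,
                    trust_floor=0.3, step=0.1):
    """Update agent i's state, discounting low-confidence neighbors.

    neighbor_states : list of neighbor state vectors
    confidences     : broadcast confidence in [0, 1] for each neighbor
    trust_floor     : neighbors below this confidence are ignored entirely
    """
    total = np.zeros_like(x_i)
    weight_sum = 0.0
    for x_j, c_j in zip(neighbor_states, confidences):
        if c_j < trust_floor:         # treated as compromised: drop its data
            continue
        total += c_j * (x_j - x_i)    # trusted data weighted by confidence
        weight_sum += c_j
    if weight_sum == 0.0:
        return x_i                    # no trusted neighbors: hold state
    return x_i + step * total / weight_sum
```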