
    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has led researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems with state and output feedback. To relax the need for full state-vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Next, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems by using a zero-sum game approach to simultaneously optimize both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems. --Abstract, page iv
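
    For context, the HJB equation referenced in this abstract takes the following standard form for a single affine nonlinear subsystem with dynamics $\dot{x} = f(x) + g(x)u$ and cost integrand $Q(x) + u^{\top} R u$; this is a generic sketch of what the NN critic approximates online, not the dissertation's exact distributed, event-sampled formulation, and the symbols $f$, $g$, $Q$, $R$, $V^*$ are the usual ones rather than quantities taken from the abstract:

        0 = Q(x) + \nabla V^*(x)^{\top} f(x) - \tfrac{1}{4} \nabla V^*(x)^{\top} g(x) R^{-1} g(x)^{\top} \nabla V^*(x),
        u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V^*(x).

    In the event-triggered setting, the control is only updated at the aperiodic sampling instants, and the resulting event-sampling error is what the zero-sum game formulation treats as the adversarial player.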

    Explainable Intelligent Fault Diagnosis for Nonlinear Dynamic Systems: From Unsupervised to Supervised Learning

    The increased complexity and intelligence of automation systems require the development of intelligent fault diagnosis (IFD) methodologies. By relying on the concept of a suspected space, this study develops explainable data-driven IFD approaches for nonlinear dynamic systems. More specifically, we parameterize nonlinear systems through a generalized kernel representation for system modeling and the associated fault diagnosis. An important result obtained is a unified form of kernel representations, applicable to both unsupervised and supervised learning. More importantly, through a rigorous theoretical analysis, we discover the existence of a bridge (i.e., a bijective mapping) between some supervised and unsupervised learning-based entities. Notably, the designed IFD approaches achieve the same performance with the use of this bridge. To better understand the results obtained, both unsupervised and supervised neural networks are chosen as the learning tools to identify the generalized kernel representations and to design the IFD schemes; an invertible neural network is then employed to build the bridge between them. This is a perspective article whose contribution lies in proposing and formalizing the fundamental concepts of explainable intelligent learning methods, contributing to system modeling and data-driven IFD design for nonlinear dynamic systems.
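
    As an illustration of the kind of bijective "bridge" described above, the sketch below implements a standard additive coupling layer (a RealNVP-style invertible neural network) in PyTorch; the input dimension, layer widths, and activation are assumptions made for the example and are not the architecture used in the article:

        # Illustrative sketch only: an additive coupling layer, invertible by construction,
        # showing how an invertible NN can realize a bijective mapping between two
        # learned representations.
        import torch
        import torch.nn as nn

        class AdditiveCoupling(nn.Module):
            def __init__(self, dim: int):
                super().__init__()
                self.half = dim // 2
                # small MLP predicting the shift applied to the second half of the input
                self.shift = nn.Sequential(
                    nn.Linear(self.half, 64), nn.Tanh(), nn.Linear(64, dim - self.half)
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                x1, x2 = x[:, :self.half], x[:, self.half:]
                y2 = x2 + self.shift(x1)              # forward map
                return torch.cat([x1, y2], dim=1)

            def inverse(self, y: torch.Tensor) -> torch.Tensor:
                y1, y2 = y[:, :self.half], y[:, self.half:]
                x2 = y2 - self.shift(y1)              # exact inverse of the forward map
                return torch.cat([y1, x2], dim=1)

        # quick check that the mapping is bijective up to numerical precision
        layer = AdditiveCoupling(6)
        x = torch.randn(8, 6)
        assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)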

    Concurrent learning-based approximate optimal regulation

    In deterministic systems, reinforcement learning-based online approximate optimal control methods typically require a restrictive persistence of excitation (PE) condition for convergence. This paper presents a concurrent learning-based solution to the online approximate optimal regulation problem that eliminates the need for PE. The development is based on the observation that, given a model of the system, the Bellman error, which quantifies the deviation of the system Hamiltonian from the optimal Hamiltonian, can be evaluated at any point in the state space. Further, a concurrent learning-based parameter identifier is developed to compensate for parametric uncertainty in the plant dynamics. Uniformly ultimately bounded (UUB) convergence of the system states to the origin and UUB convergence of the developed policy to the optimal policy are established using a Lyapunov-based analysis, and simulations are performed to demonstrate the performance of the developed controller.
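
    As a minimal sketch of the key observation above (that, given a model, the Bellman error can be evaluated at arbitrary points in the state space, which is what lets concurrent learning replace persistence of excitation), the numpy example below evaluates the HJB residual of a quadratic critic at randomly chosen off-trajectory points; the dynamics, critic basis, weights, and cost matrices are illustrative assumptions, not the paper's simulation example:

        # Hedged sketch: evaluate the Bellman (HJB) error delta(x) at arbitrary states
        # using an assumed model x_dot = f(x) + g(x) u and a critic V_hat(x) = W^T phi(x).
        import numpy as np

        f = lambda x: np.array([x[1], -x[0] - 0.5 * x[1]])    # assumed drift dynamics
        g = lambda x: np.array([0.0, 1.0])                     # assumed input dynamics
        Q, R = np.eye(2), np.array([[1.0]])                    # assumed cost weights

        phi = lambda x: np.array([x[0]**2, x[0] * x[1], x[1]**2])      # critic basis
        dphi = lambda x: np.array([[2 * x[0], 0.0],
                                   [x[1], x[0]],
                                   [0.0, 2 * x[1]]])                    # Jacobian of phi

        def bellman_error(W, x):
            """Residual of the HJB equation at state x for critic weights W."""
            grad_V = W @ dphi(x)                                        # approximate dV/dx
            u = -0.5 * np.linalg.solve(R, np.atleast_1d(g(x) @ grad_V))  # greedy policy
            xdot = f(x) + g(x) * u
            return grad_V @ xdot + x @ Q @ x + float(u @ R @ u)

        W = np.array([0.5, 0.0, 0.5])                                   # initial critic weights
        samples = [np.random.uniform(-1, 1, size=2) for _ in range(25)] # off-trajectory points
        errors = [bellman_error(W, x) for x in samples]
        print(np.mean(np.square(errors)))   # concurrent learning drives this toward zero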