
    On general systems with network-enhanced complexities

    In recent years, the study of networked control systems (NCSs) has become an active research area owing to the advantages of networked media in many respects, such as ease of maintenance and installation, greater flexibility, and low cost. The devices in such networks are mutually connected via communication channels of limited capacity, so network-induced phenomena inevitably emerge in signal processing and control engineering. These phenomena include, but are not limited to, network-induced communication delays, missing data, signal quantization, saturation, and channel fading. It is of great importance to understand how these phenomena influence closed-loop stability and performance properties.
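    As a toy illustration of two of these phenomena, the sketch below simulates a scalar plant controlled over a channel with packet dropouts and a uniform quantizer. The plant parameters, feedback gain, quantizer step, and dropout probability are all hypothetical choices for illustration, not values from any particular NCS study.

```python
import random

def quantize(u, step=0.1):
    # Uniform quantizer: one of the network-induced phenomena listed above.
    return round(u / step) * step

def simulate(a=1.2, b=1.0, k=0.9, p_drop=0.2, steps=200, seed=0):
    """Closed loop x+ = a*x + b*u over a lossy, quantizing channel.

    On a dropout the actuator holds the last received control input
    (a zero-order-hold strategy), so the loop runs on stale data until
    the next packet arrives.
    """
    rng = random.Random(seed)
    x, u_held = 1.0, 0.0
    for _ in range(steps):
        u = -k * quantize(x)        # controller sees a quantized measurement
        if rng.random() > p_drop:   # control packet survives the channel
            u_held = u
        x = a * x + b * u_held
    return x
```

    With these numbers the nominal loop is stable (a - b*k = 0.3), but quantization leaves a deadband around the origin and dropouts let the open-loop-unstable plant (a > 1) drift between updates, so the state settles into a small neighborhood of zero rather than converging exactly — a simple view of how such phenomena degrade closed-loop performance.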

    Periodic Event-Triggered Sampling and Dual-Rate Control for a Wireless Networked Control System With Applications to UAVs

    In this paper, periodic event-triggered sampling and dual-rate control techniques are integrated in a wireless networked control system (WNCS) in which time-varying network-induced delays and packet disorder are present. Compared with the conventional time-triggered sampling paradigm, the control solution considerably reduces network utilization (number of transmissions) while retaining satisfactory control performance. Stability of the proposed WNCS is ensured using linear matrix inequalities. Simulation results show the main benefits of the control approach, which are experimentally validated on an unmanned-aerial-vehicle-based test-bed platform.
    Cuenca, Á.; Antunes, D.; Castillo-Frasquet, A.; García Gil, PJ.; Asadi Khashooei, B.; Heemels, W. (2019). Periodic Event-Triggered Sampling and Dual-Rate Control for a Wireless Networked Control System With Applications to UAVs. IEEE Transactions on Industrial Electronics, 66(4):3157-3166. https://doi.org/10.1109/TIE.2018.2850018
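    A minimal sketch of the dual-rate idea (slow-rate measurements, fast-rate actuation), under the simplifying assumptions of a scalar plant, an exact model for inter-sample prediction, and no delays or packet disorder. These are illustrative choices only, not the paper's actual scheme, which adds periodic event-triggering and LMI-based stability guarantees.

```python
def simulate(a=1.05, b=1.0, k=0.7, n_fast=4, steps=200):
    """Dual-rate loop: one measurement every n_fast steps; between
    measurements the controller propagates a model-based estimate and
    keeps issuing fast-rate control moves from it."""
    x, x_est = 1.0, 1.0
    for t in range(steps):
        if t % n_fast == 0:
            x_est = x                  # slow-rate sensor sample arrives
        u = -k * x_est                 # fast-rate control from the estimate
        x = a * x + b * u              # true plant update
        x_est = a * x_est + b * u      # controller-side model prediction
    return x
```

    Because the model here is exact, the estimate never drifts and the loop behaves like a full-rate one; with model mismatch or network delays the inter-sample prediction degrades, which motivates the event-triggered corrections studied in the paper.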

    Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning

    A conventional closed-form solution to the optimal control problem is available only when the system dynamics are known and described by differential equations. Without such models, reinforcement learning (RL) has been successfully applied to iteratively solve the optimal control problem for unknown or varying systems. For the optimal tracking control problem, existing RL techniques in the literature assume either a predetermined feedforward input, restrictive assumptions on the reference model dynamics, or discounted tracking costs; with discounted tracking costs, zero steady-state error cannot be guaranteed. This article therefore presents an online optimal RL tracking control framework for discrete-time (DT) systems that imposes none of these restrictions and guarantees zero steady-state tracking error. This is achieved by augmenting the original system dynamics with the integral of the error between the reference inputs and the tracked outputs for use in the online RL framework. It is further shown that the resulting value function for the DT linear quadratic tracker under the augmented formulation with integral control remains quadratic. This enables the development of Bellman equations that use only system measurements to solve the corresponding DT algebraic Riccati equation and obtain the optimal tracking control inputs online. Two RL strategies are then proposed, based on value function approximation and Q-learning, respectively, along with bounds on excitation for the convergence of the parameter estimates. Simulation case studies show the effectiveness of the proposed approach.
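    The integral-augmentation step can be sketched in a model-based way for brevity (the article's RL schemes solve the same DT algebraic Riccati equation from measurements instead of from the model). The scalar plant, constant reference, and unit weights below are illustrative assumptions:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=1000):
    """Solve the DT algebraic Riccati equation by value iteration and
    return the optimal state-feedback gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Scalar plant x+ = a*x + b*u with tracked output y = x (illustrative numbers).
a, b, r = 0.9, 0.5, 1.0

# Augment with the integral of the tracking error: z+ = z + (r - y).
# At any steady state z+ = z forces r - y = 0, which is why the augmentation
# yields zero steady-state error without discounting the tracking cost.
A = np.array([[a, 0.0], [-1.0, 1.0]])
B = np.array([[b], [0.0]])
K = dlqr(A, B, np.eye(2), np.eye(1))

# Closed-loop simulation: the output converges to the reference r.
x, z = 0.0, 0.0
for _ in range(500):
    u = -(K @ np.array([x, z])).item()
    x, z = a * x + b * u, z + (r - x)
```

    The value function of the augmented problem stays quadratic in the augmented state, which is what makes the Bellman-equation and Q-learning parameterizations in the article tractable.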

    Optimal Sequence-Based Control of Networked Linear Systems

    In networked control systems (NCSs), the components of a control loop are connected by data networks that may introduce time-varying delays and packet losses into the system, which can severely degrade control performance. Hence, this book presents the newly developed S-LQG (Sequence-Based Linear Quadratic Gaussian) controller, which combines the sequence-based control method with the well-known LQG approach to stochastic optimal control in order to compensate for these network-induced effects.
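    The sequence-based idea — transmit not just the current input but a whole predicted input sequence, so the actuator's buffer can bridge packet losses — can be sketched for a scalar deterministic plant. All numbers here are hypothetical, and the book's S-LQG design handles stochastic disturbances and delay/loss models far more carefully:

```python
import random

def control_sequence(x, k, a, b, horizon):
    """Precompute u_k, u_{k+1|k}, ... by rolling the nominal model forward."""
    seq = []
    for _ in range(horizon):
        u = -k * x
        seq.append(u)
        x = a * x + b * u          # nominal prediction, no disturbance
    return seq

def simulate(a=1.1, b=1.0, k=0.8, p_drop=0.3, horizon=5, steps=100, seed=1):
    """The actuator applies the freshest sequence it has received; on a
    dropout it falls back to the buffered prediction for the current step."""
    rng = random.Random(seed)
    x, buffer, age = 1.0, [0.0] * horizon, 0
    for _ in range(steps):
        if rng.random() > p_drop:                # packet with a new sequence
            buffer, age = control_sequence(x, k, a, b, horizon), 0
        u = buffer[min(age, horizon - 1)]        # buffered fallback on loss
        age += 1
        x = a * x + b * u
    return x
```

    Because the nominal model here is exact, the buffered inputs coincide with what a loss-free loop would apply, so losses within the horizon cost nothing; with disturbances, the buffered inputs are merely predictions, which is where the LQG machinery of the book comes in.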

    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber-manufacturing has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of the random delays and packet losses that the communication network introduces into an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems with state and output feedback. To relax the need for full state-vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Finally, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems using a zero-sum game approach, simultaneously optimizing both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems --Abstract, page iv
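    A minimal event-sampled loop for a scalar linear plant illustrates the resource-saving mechanism the dissertation builds on: the sensor transmits (and the held control updates) only when the state has drifted a fixed relative fraction away from the last transmitted value. The plant, gain, and threshold below are illustrative assumptions, not values from the dissertation:

```python
def simulate(a=1.02, b=1.0, k=0.3, sigma=0.5, steps=300):
    """Relative event trigger: transmit when |x - x_held| > sigma * |x|.

    Returns the final state and the number of transmissions, so the
    saving relative to time-triggered sampling (which would transmit
    `steps` times) can be read off directly."""
    x, x_held, events = 1.0, 1.0, 0
    for _ in range(steps):
        if abs(x - x_held) > sigma * abs(x):   # event: resample and transmit
            x_held, events = x, events + 1
        x = a * x + b * (-k * x_held)          # actuator uses the held sample
    return x, events
```

    Keeping the sampling error below sigma*|x| preserves contraction of the nominal loop (here |x+| <= (|a - b*k| + b*k*sigma)|x| = 0.87|x|), which is the Lyapunov-style argument behind event-triggered stability guarantees; the dissertation's schemes additionally learn the control law online and optimize the triggering instants themselves via the zero-sum game formulation.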