
    Event-triggered Pulse Control with Model Learning (if Necessary)

    Full text link
    In networked control systems, communication is a shared and therefore scarce resource. Event-triggered control (ETC) can achieve high-performance control with a significantly reduced number of samples compared to classical periodic control schemes. However, ETC methods usually rely on an accurate dynamics model, which is often not readily available. In this paper, we propose a novel event-triggered pulse control strategy that learns dynamics models if necessary. In addition to adapting to changing dynamics, the method also represents a suitable replacement for the integral part typically used in periodic control. Comment: Accepted final version to appear in: Proc. of the American Control Conference, 201
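    To make the trigger-plus-learning mechanism concrete, here is a minimal sketch of an event-triggered loop that refits its dynamics model only when predictions degrade, in the spirit of "model learning if necessary". The scalar linear plant, the threshold delta, the least-squares refit, and all names are illustrative assumptions, not the paper's pulse-based formulation.

```python
import numpy as np

# Sketch (assumed setup): plant x[k+1] = a*x + b*u + w with unknown (a, b).
# An event fires when the model's one-step prediction misses by more than
# delta; only then do we refit the model and recompute the held input.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5            # unknown plant parameters
a_hat, b_hat = 0.7, 0.5              # initial, deliberately inaccurate model
delta = 0.05                          # trigger threshold on prediction error
buffer = []                           # (x, u, x_next) samples for refitting

x, u, x_pred = 1.0, 0.0, 1.0
events = 0
for k in range(300):
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()
    buffer.append((x, u, x_next))
    if abs(x_next - x_pred) > delta:  # event: model no longer trusted
        events += 1
        if len(buffer) >= 10:         # refit (a, b) by least squares
            X = np.array([(s, v) for s, v, _ in buffer[-100:]])
            y = np.array([sn for _, _, sn in buffer[-100:]])
            a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        if abs(b_hat) > 1e-3:         # one-step deadbeat update of the input
            u = -(a_hat / b_hat) * x_next
        u += 0.02 * rng.standard_normal()  # tiny excitation for identifiability
    x = x_next
    x_pred = a_hat * x + b_hat * u    # open-loop prediction until next event
print(f"events: {events}/300, |x|: {abs(x):.4f}, a_hat: {a_hat:.3f}")
```

    The trigger fires on prediction error rather than on a clock, so communication and learning effort concentrate exactly where the model is wrong.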

    Deep Reinforcement Learning for Event-Triggered Control

    Full text link
    Event-triggered control (ETC) methods can achieve high-performance control with significantly fewer samples than conventional time-triggered methods. These frameworks are often based on a mathematical model of the system and specific designs of the controller and event trigger. In this paper, we show how deep reinforcement learning (DRL) algorithms can be leveraged to simultaneously learn control and communication behavior from scratch, and present a DRL approach that is particularly suitable for ETC. To our knowledge, this is the first work to apply DRL to ETC. We validate the approach on multiple control tasks and compare it to model-based event-triggering frameworks. In particular, we demonstrate that, unlike many model-based ETC designs, it can be straightforwardly applied to nonlinear systems.
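    As a rough illustration of the problem setup (not the authors' algorithm), the sketch below casts ETC as an RL environment whose action pairs a binary communication decision with a continuous control input, and whose reward trades state cost against a communication penalty rho. The scalar plant and the hand-coded rollout policy standing in for a trained DRL agent are assumptions.

```python
import numpy as np

# Assumed formulation: action = (gamma, u_new); if gamma == 0 the actuator
# simply holds the previously transmitted input (zero-order hold).
class ETCEnv:
    def __init__(self, rho=0.1):
        self.rho = rho                 # price of one communication event
        self.reset()

    def reset(self):
        self.x, self.u_held = 1.0, 0.0
        return np.array([self.x, self.u_held])

    def step(self, action):
        gamma, u_new = action          # gamma in {0, 1}: transmit or not
        if gamma:
            self.u_held = float(u_new) # actuator receives and holds new input
        self.x = 0.95 * self.x + 0.5 * self.u_held  # simple linear plant
        reward = -(self.x ** 2) - self.rho * gamma  # state cost + comm cost
        return np.array([self.x, self.u_held]), reward

# Rollout with a naive hand-coded policy in place of a trained DRL agent:
env = ETCEnv()
obs = env.reset()
total, sends = 0.0, 0
for t in range(100):
    gamma = int(abs(obs[0]) > 0.1)     # communicate only when state is large
    action = (gamma, -1.5 * obs[0])    # proposed input if we do communicate
    obs, r = env.step(action)
    total += r
    sends += gamma
print(f"return: {total:.2f}, communications: {sends}/100")
```

    A DRL agent trained on this reward learns both when to send (gamma) and what to send (u_new), which is exactly the joint control-communication behavior the paper targets.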

    Learning Event-triggered Control from Data through Joint Optimization

    Full text link
    We present a framework for model-free learning of event-triggered control strategies. Event-triggered methods aim to achieve high control performance while closing the feedback loop only when needed. This enables resource savings, e.g., of network bandwidth when control commands are sent over communication networks, as in networked control systems. Event-triggered controllers consist of a communication policy, which determines when to communicate, and a control policy, which decides what to communicate. It is essential to optimize the two policies jointly, since optimizing them individually does not necessarily yield the overall optimal solution. To address this need for joint optimization, we propose a novel algorithm based on hierarchical reinforcement learning; a structural sketch of the two-policy decomposition follows below. The resulting algorithm is shown to achieve high-performance control together with resource savings, and it scales seamlessly to nonlinear and high-dimensional systems. The method's applicability to real-world scenarios is demonstrated through experiments on a six-degrees-of-freedom real-time-controlled manipulator. Furthermore, we propose an approach for evaluating the stability of the learned neural network policies.
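    The plain functions in this sketch stand in for the jointly trained neural network policies; the plant, the threshold eps, the proportional control law, and the cost weights are illustrative assumptions, not the paper's learned solution.

```python
# Structural sketch of the decomposition: a communication policy decides
# *when* to close the loop, a control policy decides *what* to send.

def communication_policy(x, x_last_sent, eps=0.15):
    """Higher-level policy: trigger when the state has drifted since the
    last transmission (a learned trigger would replace this rule)."""
    return abs(x - x_last_sent) > eps

def control_policy(x):
    """Lower-level policy: compute the command to transmit (a learned
    controller would replace this proportional law)."""
    return -1.2 * x

x, u, x_last_sent = 1.0, 0.0, float("inf")   # inf forces a send at t = 0
cost, sends = 0.0, 0
for t in range(100):
    if communication_policy(x, x_last_sent):
        u, x_last_sent = control_policy(x), x
        sends += 1
        cost += 0.1                    # communication penalty in the joint objective
    x = 0.95 * x + 0.5 * u             # plant holds u between transmissions
    cost += x ** 2                     # control cost in the joint objective
print(f"joint cost: {cost:.2f}, transmissions: {sends}/100")
```

    The single accumulated cost couples the two policies: a greedy trigger saves communication but inflates the state cost, which is why the paper optimizes both policies jointly rather than one at a time.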

    Event-Triggered Optimal Neuro-Controller Design with Reinforcement Learning for Unknown Nonlinear Systems

    No full text
    This paper develops an optimal control scheme for continuous-time unknown nonlinear systems using an event-triggering mechanism. In contrast to controllers designed with a time-triggering mechanism, the event-triggered controller is updated only when the system state deviates by more than a certain threshold from a prescribed value. To obtain the event-triggered optimal controller, we develop an identifier-critic architecture within the framework of reinforcement learning. The identifier network, a feedforward neural network (FNN), derives knowledge of the unknown system dynamics, and the critic network, also an FNN, derives the event-triggered optimal controller. The identifier network is tuned via a combination of a standard back-propagation algorithm and an e-modification method, and the critic network is updated using a modified gradient descent method. By introducing an additional stability term into the critic update, an initially admissible control is no longer required. Meanwhile, by using historical and instantaneous state data together, the persistence-of-excitation condition is relaxed. A stability analysis of the closed-loop system is provided based on the Lyapunov method. The effectiveness of the proposed designs is illustrated through simulations of a nonlinear example and a single-link robot arm system.
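    A toy numerical sketch of the identifier and the state-deviation trigger is given below; the one-hidden-layer network stands in for the paper's FNN identifier, the proportional law stands in for the critic-derived controller, and the plant, threshold, and learning rate are assumptions made for illustration.

```python
import numpy as np

# Assumed setup: unknown discrete-time nonlinear plant; the identifier is
# trained continuously by gradient descent (back-propagation), while the
# control input is recomputed only at events, i.e., when the state deviates
# more than `threshold` from its value at the last event.
rng = np.random.default_rng(1)
W1 = 0.1 * rng.standard_normal((8, 2))   # identifier hidden weights, input (x, u)
W2 = 0.1 * rng.standard_normal(8)        # identifier output weights

def identifier(x, u):
    h = np.tanh(W1 @ np.array([x, u]))
    return W2 @ h, h

x, u, x_event = 1.0, 0.0, 1.0
threshold, lr = 0.1, 0.05
updates = 0
for k in range(500):
    x_next = 0.9 * x - 0.2 * np.sin(x) + 0.5 * u   # unknown nonlinear plant
    # Identifier training step: gradient of 0.5 * (pred - x_next)**2.
    pred, h = identifier(x, u)
    err = pred - x_next
    gW2 = err * h
    gW1 = err * np.outer(W2 * (1 - h ** 2), np.array([x, u]))
    W2 -= lr * gW2
    W1 -= lr * gW1
    # Event trigger: recompute control only on sufficient state deviation.
    if abs(x - x_event) > threshold:
        x_event = x
        u = -1.0 * x          # stand-in for the critic-derived optimal input
        updates += 1
    x = x_next
print(f"controller updates: {updates}/500, |identifier error|: {abs(err):.4f}")
```

    Between events the input is held constant, so the controller-update count directly measures how much computation and communication the trigger saves relative to updating at every step.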
