Event-triggered Learning for Resource-efficient Networked Control
Common event-triggered state estimation (ETSE) algorithms save communication
in networked control systems by predicting agents' behavior, and transmitting
updates only when the predictions deviate significantly. The effectiveness in
reducing communication thus heavily depends on the quality of the dynamics
models used to predict the agents' states or measurements. Event-triggered
learning is proposed herein as a novel concept to further reduce communication:
whenever poor communication performance is detected, an identification
experiment is triggered and an improved prediction model learned from data.
Effective learning triggers are obtained by comparing the actual communication
rate with the one that is expected based on the current model. By analyzing
statistical properties of the inter-communication times and leveraging powerful
convergence results, the proposed trigger is proven to limit learning
experiments to the necessary instants. Numerical and physical experiments
demonstrate that event-triggered learning improves robustness toward changing
environments and yields lower communication rates than common ETSE.Comment: 7 pages, 4 figures, to appear in the 2018 American Control Conference
(ACC
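The send-on-error mechanism described above, and the rate gap a learning trigger would monitor, can be sketched as follows. The scalar dynamics, the parameter values, and the function name are illustrative assumptions, not the paper's actual setup.

```python
import random

# Sketch of event-triggered state estimation (ETSE): the remote side runs a
# model prediction, and an update is transmitted only when the prediction
# error exceeds a threshold. All dynamics and parameters are illustrative.

def simulate_etse(a_true, a_model, noise_std, delta, steps, seed=1):
    """Return the fraction of steps on which an update is transmitted."""
    rng = random.Random(seed)
    x = x_pred = 0.0
    sent = 0
    for _ in range(steps):
        x = a_true * x + rng.gauss(0.0, noise_std)  # real scalar system
        x_pred = a_model * x_pred                   # remote model prediction
        if abs(x - x_pred) > delta:                 # event trigger fires
            x_pred = x                              # transmit and resynchronize
            sent += 1
    return sent / steps

# An accurate model keeps the communication rate low; a mismatched model
# inflates it -- that gap is what a learning trigger detects.
rate_good = simulate_etse(a_true=0.95, a_model=0.95, noise_std=0.05, delta=0.5, steps=4000)
rate_bad = simulate_etse(a_true=0.95, a_model=1.2, noise_std=0.05, delta=0.5, steps=4000)
```

A model-learning experiment would be triggered only in the second case, where the observed rate exceeds the one the (mismatched) model predicts.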
Event-triggered Learning
The efficient exchange of information is an essential aspect of intelligent
collective behavior. Event-triggered control and estimation achieve some
efficiency by replacing continuous data exchange between agents with
intermittent, or event-triggered communication. Typically, model-based
predictions are used at times of no data transmission, and updates are sent
only when the prediction error grows too large. The effectiveness in reducing
communication thus strongly depends on the quality of the prediction model. In
this article, we propose event-triggered learning as a novel concept to reduce
communication even further and to also adapt to changing dynamics. By
monitoring the actual communication rate and comparing it to the one that is
induced by the model, we detect a mismatch between model and reality and
trigger model learning when needed. Specifically, for linear Gaussian dynamics,
we derive different classes of learning triggers solely based on a statistical
analysis of inter-communication times and formally prove their effectiveness
with the aid of concentration inequalities.
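A concentration-bound learning trigger of the kind described can be sketched like this. The Hoeffding-style bound, the assumed upper bound `tau_max` on (truncated) inter-communication times, and all names are illustrative assumptions, not the paper's exact derivation.

```python
import math

# Learning trigger on inter-communication times: compare the empirical mean
# with the model-induced expectation, and fire only when the gap exceeds a
# Hoeffding-style confidence bound. All parameters are illustrative.

def learning_trigger(observed_times, expected_mean, tau_max, delta_conf=0.05):
    """Return True if the model-vs-reality mismatch is statistically significant."""
    n = len(observed_times)
    empirical_mean = sum(observed_times) / n
    # Hoeffding: P(|mean - E[mean]| >= kappa) <= 2 exp(-2 n kappa^2 / tau_max^2)
    kappa = tau_max * math.sqrt(math.log(2.0 / delta_conf) / (2.0 * n))
    return abs(empirical_mean - expected_mean) > kappa

# Times consistent with the model should not trigger learning;
# systematically shorter times (more communication than predicted) should.
consistent = [9, 11, 10, 12, 8, 10, 11, 9, 10, 10] * 5
degraded = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3] * 5
```

Because the bound shrinks with the number of observed inter-communication times, the trigger becomes more sensitive as evidence accumulates, which is the sense in which learning is limited to the necessary instants.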
Deep Reinforcement Learning for Event-Triggered Control
Event-triggered control (ETC) methods can achieve high-performance control
with significantly fewer samples compared to conventional, time-triggered
methods. These frameworks are often based on a mathematical model of the system
and specific designs of controller and event trigger. In this paper, we show
how deep reinforcement learning (DRL) algorithms can be leveraged to
simultaneously learn control and communication behavior from scratch, and
present a DRL approach that is particularly suitable for ETC. To our knowledge,
this is the first work to apply DRL to ETC. We validate the approach on
multiple control tasks and compare it to model-based event-triggering
frameworks. In particular, we demonstrate that, unlike many model-based ETC
designs, it can be applied straightforwardly to nonlinear systems.
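Learning control and communication jointly can be illustrated with a much-simplified stand-in: tabular Q-learning on a scalar system whose action is either "no event" (hold the last input) or "communicate a new input level" at a penalty. This is an assumption made to keep the sketch self-contained; the paper uses deep RL, not a tabular method.

```python
import random

# Toy stand-in for jointly learned control + communication: tabular
# Q-learning (not deep RL) on a clamped scalar system. Action None means
# "no event" (hold the last input); any other action communicates a new
# input level and pays a communication penalty LAMBDA in the reward.

random.seed(0)
N_BINS = 21
ACTIONS = [None, -0.5, 0.0, 0.5]
LAMBDA = 0.05  # cost per communication/input update

def bin_of(x):
    """Discretize x in [-2, 2] into one of N_BINS state bins."""
    return max(0, min(N_BINS - 1, int((x + 2.0) / 4.0 * N_BINS)))

Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]

def episode(eps):
    """Run one 60-step episode with eps-greedy actions; return the return."""
    x, u_held, ret = random.uniform(-1.5, 1.5), 0.0, 0.0
    for _ in range(60):
        s = bin_of(x)
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        if ACTIONS[a] is not None:  # event: transmit and update held input
            u_held = ACTIONS[a]
        r = -x * x - (LAMBDA if ACTIONS[a] is not None else 0.0)
        x = max(-2.0, min(2.0, 0.9 * x + u_held + random.gauss(0.0, 0.02)))
        Q[s][a] += 0.1 * (r + 0.95 * max(Q[bin_of(x)]) - Q[s][a])
        ret += r
    return ret

# Exploration decays over 600 episodes; returns improve as the agent learns
# when communicating an input update is worth its cost.
returns = [episode(eps=max(0.05, 1.0 - k / 400)) for k in range(600)]
```

The reward design is the key point: penalizing each transmission makes the learned policy event-triggered, communicating only when regulation benefits outweigh the communication cost.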
Event-triggered Pulse Control with Model Learning (if Necessary)
In networked control systems, communication is a shared and therefore scarce
resource. Event-triggered control (ETC) can achieve high performance control
with a significantly reduced number of samples compared to classical, periodic
control schemes. However, ETC methods usually rely on the availability of an
accurate dynamics model, which is oftentimes not readily available. In this
paper, we propose a novel event-triggered pulse control strategy that learns
dynamics models if necessary. In addition to adapting to changing dynamics, the
method also represents a suitable replacement for the integral part typically
used in periodic control.
Comment: Accepted final version to appear in: Proc. of the American Control
Conference, 201
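The pulse idea can be sketched on a scalar system with a constant disturbance, where an impulsive correction fires only when the state drifts past a threshold, standing in for the integral action a periodic controller would use. The dynamics and parameters are illustrative assumptions.

```python
# Event-triggered pulse control sketch: a stable scalar system with a
# constant disturbance d would settle at d / (1 - 0.95) without control;
# instead, an impulsive correction fires only when |x| crosses a threshold.
# All parameters are illustrative.

def run_pulse_control(d, threshold, pulse_gain, steps=200):
    """Return the final state and the number of pulses applied."""
    x, pulses = 0.0, 0
    for _ in range(steps):
        x = 0.95 * x + d           # stable dynamics + constant disturbance
        if abs(x) > threshold:     # event trigger
            x -= pulse_gain * x    # impulsive correction toward the origin
            pulses += 1
    return x, pulses

# The disturbance alone would push x to 1.0; occasional pulses keep it
# bounded well below that, with no input at all between events.
x_final, n_pulses = run_pulse_control(d=0.05, threshold=0.5, pulse_gain=0.8)
```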
Resource-aware IoT Control: Saving Communication through Predictive Triggering
The Internet of Things (IoT) interconnects multiple physical devices in
large-scale networks. When the 'things' coordinate decisions and act
collectively on shared information, feedback is introduced between them.
Multiple feedback loops are thus closed over a shared, general-purpose network.
Traditional feedback control is unsuitable for IoT control design because it
relies on high-rate periodic communication and is ignorant of the shared
network resource. Therefore, recent event-based estimation methods are applied
herein for resource-aware IoT control allowing agents to decide online whether
communication with other agents is needed, or not. While this can reduce
network traffic significantly, a severe limitation of typical event-based
approaches is the need for instantaneous triggering decisions that leave no
time to reallocate freed resources (e.g., communication slots), which hence
remain unused. To address this problem, novel predictive and self-triggering
protocols are proposed herein. From a unified Bayesian decision framework, two
schemes are developed: self triggers that predict, at the current triggering
instant, the next one; and predictive triggers that check, at every time step,
whether communication will be needed at a given prediction horizon. The
suitability of these triggers for feedback control is demonstrated in hardware
experiments on a cart-pole, and scalability is discussed with a multi-vehicle
simulation.
Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of
Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
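For linear Gaussian dynamics, a self trigger of the kind described can be sketched via the growth of the open-loop prediction error variance: at a communication instant the variance restarts at zero, and the next event is scheduled for when it would exceed a bound. The scalar `a`, `q`, and the bound are illustrative assumptions, not the paper's actual formulation.

```python
# Self-trigger sketch for scalar linear Gaussian dynamics: the prediction
# error variance grows between events as P <- a^2 * P + q, and the trigger
# announces, already at the current event, how many steps ahead the next
# communication will be needed. Parameters are illustrative.

def self_trigger(a, q, bound, max_horizon=100):
    """Return the number of steps until the predicted variance exceeds bound."""
    p = 0.0
    for n in range(1, max_horizon + 1):
        p = a * a * p + q
        if p > bound:
            return n
    return max_horizon  # variance stays below the bound within the horizon

# A less predictable system (larger process noise q) must communicate sooner.
h_quiet = self_trigger(a=0.9, q=0.01, bound=0.04)
h_noisy = self_trigger(a=0.9, q=0.05, bound=0.04)
```

Announcing the horizon at the current event, rather than deciding instantaneously, is what lets freed communication slots be reallocated in advance.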