Resource-aware IoT Control: Saving Communication through Predictive Triggering
The Internet of Things (IoT) interconnects multiple physical devices in
large-scale networks. When the 'things' coordinate decisions and act
collectively on shared information, feedback is introduced between them.
Multiple feedback loops are thus closed over a shared, general-purpose network.
Traditional feedback control is unsuitable for the design of IoT control because it
relies on high-rate periodic communication and is ignorant of the shared
network resource. Therefore, recent event-based estimation methods are applied
herein for resource-aware IoT control, allowing agents to decide online whether
communication with other agents is needed. While this can reduce
network traffic significantly, a severe limitation of typical event-based
approaches is the need for instantaneous triggering decisions that leave no
time to reallocate freed resources (e.g., communication slots), which hence
remain unused. To address this problem, novel predictive and self-triggering
protocols are proposed herein. From a unified Bayesian decision framework, two
schemes are developed: self-triggers, which predict, at the current triggering
instant, when the next one will occur; and predictive triggers, which check at every
time step whether communication will be needed at a given prediction horizon. The
suitability of these triggers for feedback control is demonstrated in hardware
experiments on a cart-pole, and scalability is discussed with a multi-vehicle
simulation.
Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
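The two triggering schemes can be illustrated with a minimal sketch for a scalar linear system, where the open-loop estimation-error variance grows between communications as P_{k+1} = a^2 * P_k + q. This is an illustrative assumption, not the paper's method: the actual triggers are derived from a Bayesian decision framework, and all system and threshold values below are made up for the example.

```python
# Illustrative sketch (assumed scalar system, not the paper's derivation):
# between communications, the estimation-error variance grows open-loop as
# P_{k+1} = a^2 * P_k + q, and a trigger fires once it exceeds a threshold.

def predict_error_variance(p, a, q, horizon):
    """Propagate the error variance open-loop for `horizon` steps."""
    for _ in range(horizon):
        p = a * a * p + q
    return p

def predictive_trigger(p, a=1.1, q=0.01, horizon=3, threshold=0.1):
    """Predictive trigger: check at every step whether communication
    will be needed `horizon` steps ahead."""
    return predict_error_variance(p, a, q, horizon) > threshold

def self_trigger(p, a=1.1, q=0.01, threshold=0.1, max_horizon=50):
    """Self-trigger: predict, at the current triggering instant, the
    number of steps until the next communication is needed."""
    for m in range(1, max_horizon + 1):
        p = a * a * p + q
        if p > threshold:
            return m
    return max_horizon

# Even a small current variance can announce a future communication need
# once the horizon prediction accounts for open-loop variance growth.
will_communicate = predictive_trigger(0.05)   # True for these values
next_instant = self_trigger(0.0)              # steps until next trigger
```

Announcing the communication need ahead of time (rather than at the last instant) is what allows a network manager to reallocate the freed slots.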
Deep Reinforcement Learning for Event-Triggered Control
Event-triggered control (ETC) methods can achieve high-performance control
with significantly fewer samples than conventional time-triggered
methods. These frameworks are often based on a mathematical model of the system
and specific designs of controller and event trigger. In this paper, we show
how deep reinforcement learning (DRL) algorithms can be leveraged to
simultaneously learn control and communication behavior from scratch, and
present a DRL approach that is particularly suitable for ETC. To our knowledge,
this is the first work to apply DRL to ETC. We validate the approach on
multiple control tasks and compare it to model-based event-triggering
frameworks. In particular, we demonstrate that, unlike many model-based ETC
designs, it can be straightforwardly applied to nonlinear systems.
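One natural way to cast ETC as a reinforcement-learning problem, consistent with the abstract's idea of learning control and communication jointly, is to let the policy's action be a pair (communicate flag, control input) and to penalize each communication in the reward. The scalar plant, cost weights, and penalty below are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: event-triggered control as an RL problem. The action is a
# pair (communicate, u); the reward trades off quadratic control cost
# against a fixed communication penalty `lam`. All values are illustrative.

def etc_reward(state, action, lam=0.1):
    """Negative quadratic cost, plus a penalty whenever the agent communicates."""
    communicate, u = action
    cost = state ** 2 + 0.01 * u ** 2
    return -(cost + (lam if communicate else 0.0))

def step(state, action, a=1.0, b=0.5):
    """Scalar linear plant x' = a*x + b*u; the input is applied only when the
    agent chooses to communicate it, otherwise the plant runs open-loop."""
    communicate, u = action
    u_applied = u if communicate else 0.0
    return a * state + b * u_applied

# A DRL agent would maximize the discounted sum of etc_reward, thereby
# learning jointly *when* to communicate and *what* input to send.
s = step(1.0, (True, -2.0))    # communicate and actuate -> state driven to 0
s = step(s, (False, 0.0))      # stay silent; plant evolves open-loop
```

The communication penalty is what makes a "do not communicate" action attractive whenever the state is already well regulated.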
Event-triggered Pulse Control with Model Learning (if Necessary)
In networked control systems, communication is a shared and therefore scarce
resource. Event-triggered control (ETC) can achieve high-performance control
with a significantly reduced number of samples compared to classical, periodic
control schemes. However, ETC methods usually rely on an accurate dynamics
model, which is often not readily available. In this
paper, we propose a novel event-triggered pulse control strategy that learns
dynamics models if necessary. In addition to adapting to changing dynamics, the
method also represents a suitable replacement for the integral part typically
used in periodic control.
Comment: Accepted final version to appear in: Proc. of the American Control Conference, 201
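The "learning if necessary" idea can be sketched as follows: keep using the current model while its one-step predictions are accurate, and re-estimate it only once prediction errors exceed a threshold. The scalar model form, the least-squares re-fit, and the threshold are assumptions for illustration; the paper's learning and pulse-control machinery is more involved.

```python
# Illustrative sketch of "model learning if necessary" for an assumed scalar
# autonomous plant x' = a * x: the model is re-fit (here by simple least
# squares) only when one-step prediction errors reveal it has gone stale.

def relearn_if_necessary(a_hat, history, err_threshold=0.1):
    """history: list of (x_k, x_{k+1}) pairs observed from the plant."""
    errors = [abs(x_next - a_hat * x) for x, x_next in history]
    if max(errors) <= err_threshold:
        return a_hat, False  # model still accurate; no learning triggered
    num = sum(x * x_next for x, x_next in history)
    den = sum(x * x for x, x_next in history)
    return num / den, True   # least-squares re-fit of the scalar model

# The plant has drifted from a = 1.0 to a = 1.3; the stale model is re-fit.
data = [(1.0, 1.3), (2.0, 2.6), (1.5, 1.95)]
a_new, relearned = relearn_if_necessary(1.0, data)
```

Triggering the (potentially expensive) learning step only on demand keeps the method cheap when the dynamics are stationary, while still adapting when they change.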
Towards remote fault detection by analyzing communication priorities
The ability to detect faults is an important safety feature for event-based
multi-agent systems. In most existing algorithms, each agent tries to detect
faults by checking its own behavior. But what if one agent becomes unable to
recognize misbehavior, for example due to failure in its onboard fault
detection? To improve resilience and avoid propagation of individual errors to
the multi-agent system, agents should check each other remotely for malfunction
or misbehavior. In this paper, we build upon a recently proposed predictive
triggering architecture that involves communication priorities shared
throughout the network to manage limited bandwidth. We propose a fault
detection method that uses these priorities to detect errors in other agents.
The resulting algorithm is not only able to detect faults, but can also run in
real time on a low-power microcontroller, as we demonstrate in hardware
experiments.
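The core idea above can be sketched compactly: since agents already broadcast communication priorities to arbitrate bandwidth, a remote observer can compare an agent's announced priorities with the ones it would expect from a healthy agent, and flag a fault on persistent disagreement. The deviation measure and thresholds below are assumptions for illustration, not the paper's exact statistical test.

```python
# Hedged sketch of remote fault detection from communication priorities:
# compare announced priorities against locally predicted ones and flag a
# fault after repeated large deviations. Tolerances are illustrative.

def detect_fault(announced, expected, tol=0.2, max_violations=3):
    """Flag a fault if the announced priority deviates from the locally
    predicted one by more than `tol` in at least `max_violations` steps."""
    violations = sum(1 for a, e in zip(announced, expected) if abs(a - e) > tol)
    return violations >= max_violations

# A healthy agent's priorities track the prediction; a faulty one's do not.
healthy = detect_fault([0.1, 0.2, 0.15, 0.3], [0.1, 0.25, 0.1, 0.35])
faulty = detect_fault([0.9, 0.8, 0.95, 1.0], [0.1, 0.2, 0.15, 0.3])
```

Because only already-broadcast priorities are compared, the check adds no communication overhead, which is what makes it light enough for a low-power microcontroller.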