
    Resource-aware IoT Control: Saving Communication through Predictive Triggering

    The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether communication with other agents is needed or not. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions, which leave no time to reallocate freed resources (e.g., communication slots); these hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self triggers, which predict, at the current triggering instant, when the next one will occur; and predictive triggers, which check at every time step whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole system, and scalability is discussed with a multi-vehicle simulation.
    Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
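The two triggering schemes can be sketched for a scalar system whose open-loop estimation-error variance grows as p_{k+1} = a^2 p_k + q between transmissions. This is a minimal illustration of the concept only; the function names, the scalar model, and the threshold `delta` are assumptions, not the paper's actual design.

```python
def predicted_error_variance(p0, a, q, horizon):
    """Propagate the open-loop error variance p_{k+1} = a^2 * p_k + q
    over `horizon` steps without any measurement update."""
    p = p0
    for _ in range(horizon):
        p = a * a * p + q
    return p

def predictive_trigger(p0, a, q, horizon, delta):
    """Predictive trigger: at every step, check whether communication
    will be needed `horizon` steps ahead (variance exceeds bound delta)."""
    return predicted_error_variance(p0, a, q, horizon) > delta

def self_trigger(p0, a, q, delta, max_steps=1000):
    """Self trigger: at the current triggering instant, compute after how
    many steps the variance exceeds delta, i.e. when to communicate next."""
    p = p0
    for k in range(1, max_steps + 1):
        p = a * a * p + q
        if p > delta:
            return k
    return max_steps
```

With a = 1.0, q = 0.1, and a fresh estimate (p0 = 0), the variance reaches 0.5 after five steps, so a bound of delta = 0.45 yields a self-triggered inter-communication time of five steps; knowing the next trigger in advance is what leaves time to reallocate the freed communication slots.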

    Event-triggered Learning

    The efficient exchange of information is an essential aspect of intelligent collective behavior. Event-triggered control and estimation achieve some efficiency by replacing continuous data exchange between agents with intermittent, or event-triggered, communication. Typically, model-based predictions are used at times of no data transmission, and updates are sent only when the prediction error grows too large. The effectiveness in reducing communication thus strongly depends on the quality of the prediction model. In this article, we propose event-triggered learning as a novel concept to reduce communication even further and also to adapt to changing dynamics. By monitoring the actual communication rate and comparing it to the one induced by the model, we detect a mismatch between model and reality and trigger model learning when needed. Specifically, for linear Gaussian dynamics, we derive different classes of learning triggers based solely on a statistical analysis of inter-communication times, and formally prove their effectiveness with the aid of concentration inequalities.
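The core idea can be sketched with a simple Hoeffding-style test: trigger model learning when the empirical mean inter-communication time deviates from the model-induced mean by more than a concentration bound. This is a hypothetical simplification assuming times bounded in [0, t_max]; the article derives several richer classes of triggers.

```python
import math

def learning_trigger(inter_comm_times, model_mean, t_max, alpha=0.05):
    """Fire a learning trigger when the observed mean inter-communication
    time deviates from the model-induced mean `model_mean` by more than a
    Hoeffding confidence bound at level alpha (times bounded in [0, t_max])."""
    n = len(inter_comm_times)
    emp_mean = sum(inter_comm_times) / n
    # Hoeffding: deviation of the mean of n bounded samples exceeds this
    # bound with probability at most alpha under a correct model.
    bound = t_max * math.sqrt(math.log(2 / alpha) / (2 * n))
    return abs(emp_mean - model_mean) > bound
```

The bound shrinks as more inter-communication times are observed, so a persistent model mismatch is eventually detected while random fluctuations rarely cause spurious learning.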

    Event-based State Estimation: An Emulation-based Approach

    An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.
    Comment: 21 pages, 8 figures, this article is based on the technical report arXiv:1511.05223 and is accepted for publication in IET Control Theory & Applications
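A minimal sketch of the local triggering idea, using a simple send-on-delta rule for a scalar model: between transmissions the estimator runs the same model-based prediction as the sensor, and the sensor transmits only when the prediction error exceeds a threshold. The scalar dynamics `a` and threshold `delta` are illustrative assumptions; the paper's emulation-based design and its guarantees are more involved.

```python
def run_event_based_estimation(measurements, a=0.9, delta=0.5):
    """Simulate one sensor/estimator pair: transmit a measurement only
    when |measurement - model prediction| > delta; otherwise the
    estimator keeps its model-based prediction."""
    estimate = measurements[0]   # assume an initial synchronisation
    sent = 0
    estimates = [estimate]
    for y in measurements[1:]:
        estimate = a * estimate          # model-based prediction step
        if abs(y - estimate) > delta:    # local event trigger at the sensor
            estimate = y                 # transmit and reset the estimate
            sent += 1
        estimates.append(estimate)
    return estimates, sent
```

As long as the process follows the model, no bus slots are used; a disturbance that drives the prediction error past `delta` causes a single transmission that re-synchronises sensor and estimator.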

    Deep Reinforcement Learning for Event-Triggered Control

    Event-triggered control (ETC) methods can achieve high-performance control with significantly fewer samples than conventional time-triggered methods. These frameworks are often based on a mathematical model of the system and specific designs of the controller and event trigger. In this paper, we show how deep reinforcement learning (DRL) algorithms can be leveraged to simultaneously learn control and communication behavior from scratch, and present a DRL approach that is particularly suitable for ETC. To our knowledge, this is the first work to apply DRL to ETC. We validate the approach on multiple control tasks and compare it to model-based event-triggering frameworks. In particular, we demonstrate that, unlike many model-based ETC designs, it can be straightforwardly applied to nonlinear systems.
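One way to cast ETC as a reinforcement learning problem is to let the policy output both a control input and a binary communication decision, with a reward that trades off state cost against a communication penalty. The environment step below is a hypothetical minimal formulation for a scalar plant, not the paper's exact setup; all parameter names are assumptions.

```python
def etc_step(x, action, a=1.1, b=1.0, comm_penalty=0.1):
    """One step of a scalar (unstable, a > 1) plant under a learned ETC
    policy. `action` is a pair (u, communicate): the control input is
    only applied when the policy decides to communicate, and each
    communication is penalised in the reward."""
    u, communicate = action
    x_next = a * x + b * (u if communicate else 0.0)
    reward = -(x_next ** 2) - (comm_penalty if communicate else 0.0)
    return x_next, reward
```

A DRL agent maximising the discounted sum of this reward learns both when to communicate and what input to apply, which is what lets the approach dispense with an explicit model-based trigger design.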

    Smart container monitoring using custom-made WSN technology : from business case to prototype

    This paper reports on the development of a prototype solution for tracking and monitoring shipping containers. Deploying wireless sensor networks (WSNs) in an operational environment remains a challenging task. We strongly believe that standardized methodologies and tools could enhance future WSN deployments and enable rapid prototype development. Therefore, we chose a step-by-step approach in which each step gives us more insight into the problem at hand while shielding some of the complexity of the final solution. We observed that environment emulation is of the utmost importance, especially for the harsh wireless conditions inside a container stack. This led us to extend our test lab with wireless link emulation capabilities. It is also essential to assess the feasibility of concepts and design choices after every stage of prototype development. This enabled us to create innovative WSN solutions, including a multi-MAC framework and a robust gateway selection algorithm.
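The paper names a robust gateway selection algorithm without detailing it; one common way to make such a selection robust against the fluctuating links inside a container stack is a hysteresis rule. The sketch below is entirely hypothetical and only illustrates that idea: stay with the current gateway unless another one is better by a clear margin.

```python
def select_gateway(current, link_quality, hysteresis=5.0):
    """Pick the gateway with the best smoothed link quality (e.g. RSSI
    in dBm), but switch away from the current gateway only if another
    beats it by `hysteresis` -- avoiding flapping on noisy links."""
    best = max(link_quality, key=link_quality.get)
    if current in link_quality and link_quality[best] < link_quality[current] + hysteresis:
        return current
    return best
```

With a 5 dB margin, a gateway that is only 2 dB better than the current one is ignored, while a 10 dB improvement triggers a handover.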

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensions in state and action spaces, which limit the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system. Moreover, a novel solution framework is necessary to address the even higher dimensions in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed hierarchical framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem. Furthermore, an autoencoder and a novel weight sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
    Comment: accepted by 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017)
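The local tier's division of labour (predict the workload, then pick a power action) can be sketched with a timeout-based DPM rule. Everything here is a stand-in: a moving average replaces the paper's LSTM predictor, and a one-shot cost minimisation replaces the model-free RL power manager; parameter names are assumptions.

```python
def choose_timeout(recent_gaps, timeouts=(0.0, 1.0, 5.0),
                   wake_cost=2.0, idle_power=1.0):
    """Local-tier DPM sketch: predict the next job inter-arrival gap with
    a moving average (standing in for an LSTM predictor), then pick the
    sleep timeout minimising expected cost: idle energy while waiting,
    plus a wake-up penalty if the server sleeps before the job arrives."""
    predicted_gap = sum(recent_gaps) / len(recent_gaps)

    def cost(t):
        if predicted_gap <= t:               # job arrives before sleeping
            return idle_power * predicted_gap
        return idle_power * t + wake_cost    # sleep after t, pay wake-up cost

    return min(timeouts, key=cost)
```

Under a long predicted gap the cheapest choice is to sleep immediately (timeout 0); under a short gap it is cheaper to stay idle and avoid the wake-up penalty.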

    Autonomous Energy-aware production systems control

    Energy and resource efficiency has recently become one of the most relevant research topics in manufacturing, both because industry accounts for a major share of world energy consumption and because of the increasing attention to sustainable development at the planetary level. This work aims to pave the way for novel energy-aware control policies for production systems, in which machines decide autonomously about their states in terms of production and energy consumption. It exploits the possibilities offered by new ICT technologies, such as the Internet of Things and cloud computing, which allow seamless information sharing among machines through an appropriate, standardized ICT infrastructure. The energy-saving control approach investigated in this work follows the current research trend of reducing machine idle time in favor of stand-by states, obtaining significant energy savings, by allowing novel solutions for decentralized control. The proposed control enables production machines to autonomously share and process information from the other machines in the system, deciding their specific energy behaviour in real time and even postponing processing when possible. The approach includes the conceptual development of dynamic behaviour models of the system and the proposed policies, followed by their deployment in an application scenario drawn from actual industry cases and data, enabling a study of system performance with a detailed design of experiments.
The proposed approach represents a significant contribution to the state of the art, as the proposed energy-aware control enables decisions based on real-time information instead of the statistically-based forecasts of part arrival rates used in the previous literature. Furthermore, the approach is of practical value, especially as it paves the way to an operationalization of the vision of Cyber-Physical Systems and Industry 4.0.
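The core stand-by decision can be illustrated as a break-even rule: switch a machine to stand-by only if the energy saved over the expected idle interval, estimated from real-time information shared by the other machines, exceeds the switching overhead. This is an illustrative simplification with hypothetical parameters, not the paper's full policy.

```python
def standby_decision(expected_idle, p_idle, p_standby, e_switch):
    """Switch to stand-by only if the energy saved while idle,
    (p_idle - p_standby) * expected_idle, exceeds the energy overhead
    e_switch of switching down and back up."""
    return (p_idle - p_standby) * expected_idle > e_switch
```

For example, with an idle power of 5 kW, a stand-by power of 1 kW, and a 20 kJ switching overhead, stand-by pays off only when the expected idle interval exceeds 5 seconds; real-time knowledge of upcoming part arrivals is what makes `expected_idle` accurate, replacing statistical forecasts.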