
    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Full text link
    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensions in state and action spaces, which prohibit the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems: it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher dimensions of the combined state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem. Furthermore, an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner. Comment: accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
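The local tier described above pairs a workload predictor with a model-free RL power manager. As a minimal sketch of that idea only, the following tabular Q-learning agent chooses a server power state from a discretized predicted-workload level; the state/action names, reward weights, and one-step update are illustrative assumptions, not the paper's actual design or values.

```python
import random

# Hypothetical local-tier power manager: tabular Q-learning over a
# discretized predicted-workload level (the LSTM predictor's output
# is assumed to be bucketed into WORKLOAD_LEVELS bins).
ACTIONS = ("ACTIVE", "SLEEP")
WORKLOAD_LEVELS = 4

def reward(workload_level, action, power_weight=1.0, perf_weight=2.0):
    """Trade off power cost against performance degradation (assumed weights)."""
    power_cost = power_weight if action == "ACTIVE" else 0.1 * power_weight
    # Sleeping under load incurs a latency (performance) penalty.
    perf_penalty = perf_weight * workload_level if action == "SLEEP" else 0.0
    return -(power_cost + perf_penalty)

def train_power_manager(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(WORKLOAD_LEVELS) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(WORKLOAD_LEVELS)   # predicted workload level
        if rng.random() < epsilon:           # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # One-step (bandit-style) update toward the observed reward.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train_power_manager()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(WORKLOAD_LEVELS)}
```

Under these assumed rewards the learned policy sleeps the server only when the predicted workload is idle, which is the power/performance trade-off the DPM policy is meant to capture.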

    An Efficient Deep Learning Framework for Intelligent Energy Management in IoT Networks

    Full text link
    Green energy management is an economical solution for better energy usage, but the existing literature overlooks the potential of edge intelligence in the controllable Internet of Things (IoT). Therefore, in this article, we focus on the requirements of today's smart grids, homes, and industries to propose a deep-learning-based framework for intelligent energy management. We predict future energy consumption over short time intervals and provide an efficient means of communication between energy distributors and consumers. The key contributions include edge-device-based real-time energy management via a common cloud-based data-supervising server, optimal normalization technique selection, and a novel sequence-learning-based energy forecasting mechanism with reduced time complexity and the lowest error rates. In the proposed framework, edge devices connect to a common cloud server in an IoT network that communicates with the associated smart grids to effectively sustain the energy demand-response process. We apply several preprocessing techniques to deal with the diverse nature of electricity data, followed by an efficient decision-making algorithm for short-term forecasting, and implement it on resource-constrained devices.
We perform extensive experiments and observe reductions of 0.15 and 3.77 units in mean-square error (MSE) and root MSE (RMSE) for the residential and commercial datasets, respectively. This work was supported in part by the National Research Foundation of Korea Grant funded by the Korea Government (MSIT) under Grant 2019M3F2A1073179; in part by the "Ministerio de Economia y Competitividad" in the "Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento" within the project under Grant TIN2017-84802-C2-1-P; and in part by the European Union through the ERANETMED (Euromediterranean Cooperation through ERANET Joint Activities and Beyond) Project ERANETMED3-227 SMARTWATIR. Han, T.; Muhammad, K.; Hussain, T.; Lloret, J.; Baik, S. W. (2021). An Efficient Deep Learning Framework for Intelligent Energy Management in IoT Networks. IEEE Internet of Things Journal, 8(5):3170-3179. https://doi.org/10.1109/JIOT.2020.3013306
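The abstract describes normalization followed by sequence learning over short intervals. As an illustrative sketch of that preprocessing step only, the snippet below min-max normalizes a univariate consumption series and builds sliding windows that a sequence model could train on; the window length, the toy series, and the choice of min-max scaling are assumptions, not the paper's selected technique.

```python
import math

def min_max_normalize(series):
    """Scale the series into [0, 1]; return scaled values and the params
    needed to invert the transform at prediction time."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series], (lo, hi)

def make_windows(series, window=24):
    """Pair each `window`-step history with the next value as its target."""
    xs = [series[i:i + window] for i in range(len(series) - window)]
    ys = series[window:]
    return xs, ys

# Toy hourly-load series standing in for real electricity data.
hourly_load = [math.sin(i * 0.25) + 2.0 for i in range(200)]
scaled, (lo, hi) = min_max_normalize(hourly_load)
xs, ys = make_windows(scaled, window=24)
```

Keeping `(lo, hi)` alongside the scaled data matters in practice: forecasts made in normalized space must be mapped back to physical units before being reported to distributors or consumers.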

    Distributed Training Large-Scale Deep Architectures

    Full text link
    Scale of data and scale of computation infrastructure together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on the system approach to speeding up large-scale training. Via lessons learned from our routine benchmarking effort, we first identify bottlenecks and overheads that hinder data parallelism. We then devise guidelines that help practitioners configure an effective system and fine-tune parameters to achieve the desired speedup. Specifically, we develop a procedure for setting minibatch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components, such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines help effectively speed up large-scale deep learning training.
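The abstract mentions procedures for setting minibatch size and sizing the parameter-server pool. As a hedged sketch of the kind of capacity arithmetic such guidelines involve (the functions, constants, and the simple push-pull traffic model below are illustrative assumptions, not the paper's lemmas):

```python
def max_minibatch(gpu_mem_bytes, model_bytes, per_sample_bytes):
    """GPU memory left after model weights bounds the per-GPU minibatch
    (activation footprint is folded into per_sample_bytes here)."""
    usable = gpu_mem_bytes - model_bytes
    return max(usable // per_sample_bytes, 0)

def num_parameter_servers(num_gpus, model_bytes, step_time_s, ps_bw_bytes_per_s):
    """Each step, every GPU pushes gradients (~model size) and pulls updated
    weights; spread that traffic so no parameter server's link saturates."""
    traffic_per_step = 2 * num_gpus * model_bytes     # push + pull
    per_ps_budget = ps_bw_bytes_per_s * step_time_s   # bytes one PS can move
    return -(-traffic_per_step // per_ps_budget)      # ceiling division

# Example with assumed figures: a 16 GiB GPU, a 4 GiB model,
# and a 48 MiB per-sample memory footprint.
batch = max_minibatch(16 * 2**30, 4 * 2**30, 48 * 2**20)
# 8 GPUs, 1 GiB of gradients, 0.5 s steps, 10 GiB/s links per server.
ps_count = num_parameter_servers(8, 2**30, 0.5, 10 * 2**30)
```

The real guidelines would also account for communication/computation overlap and algorithm choice, but the structure is the same: derive each component count from a measured per-step resource budget rather than guessing.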