
    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensions in its state and action spaces, which limits the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher dimensions in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of the local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global-tier problem. Furthermore, an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
    Comment: accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017)
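
    The abstract's two-tier structure can be pictured with a toy script. The sketch below is not the paper's implementation: tabular Q-learning replaces the DRL global tier (and its autoencoder and weight-sharing structure), a moving-average predictor stands in for the LSTM workload predictor, and the state encoding, actions, and rewards are invented for illustration only.

# Minimal sketch of a two-tier allocation / power-management loop (illustrative only).
# Tabular Q-learning and a moving-average predictor stand in for the paper's
# DRL global tier and LSTM workload predictor; states, actions, and rewards are toy choices.
import random
from collections import defaultdict, deque

N_SERVERS = 4
ACTIONS_LOCAL = ["sleep", "active"]          # hypothetical DPM actions
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.9

q_global = defaultdict(float)                # (state, server) -> value
q_local = [defaultdict(float) for _ in range(N_SERVERS)]  # per-server (state, action) -> value
loads = [0] * N_SERVERS                      # current VMs per server
history = [deque(maxlen=8) for _ in range(N_SERVERS)]     # recent load samples

def global_state():
    return tuple(loads)                      # coarse state: the load vector

def allocate_vm():
    """Global tier: epsilon-greedy choice of target server for an arriving VM."""
    s = global_state()
    if random.random() < EPS:
        a = random.randrange(N_SERVERS)
    else:
        a = max(range(N_SERVERS), key=lambda k: q_global[(s, k)])
    loads[a] += 1
    reward = -max(loads)                     # toy reward: discourage hot spots
    s2 = global_state()
    best_next = max(q_global[(s2, k)] for k in range(N_SERVERS))
    q_global[(s, a)] += ALPHA * (reward + GAMMA * best_next - q_global[(s, a)])
    return a

def predicted_load(k):
    """Local-tier predictor: moving average standing in for the LSTM."""
    h = history[k]
    return sum(h) / len(h) if h else 0.0

def manage_power(k):
    """Local tier: model-free RL power manager for server k."""
    history[k].append(loads[k])
    s = round(predicted_load(k))
    if random.random() < EPS:
        a = random.choice(ACTIONS_LOCAL)
    else:
        a = max(ACTIONS_LOCAL, key=lambda x: q_local[k][(s, x)])
    # toy reward: save power when idle, avoid sleeping under load
    reward = 1 if (a == "sleep") == (loads[k] == 0) else -1
    best_next = max(q_local[k][(s, x)] for x in ACTIONS_LOCAL)
    q_local[k][(s, a)] += ALPHA * (reward + GAMMA * best_next - q_local[k][(s, a)])

for t in range(200):                         # toy episode: VMs arrive and depart
    allocate_vm()
    busy = [k for k in range(N_SERVERS) if loads[k] > 0]
    if busy and random.random() < 0.5:       # a VM finishes and leaves its server
        loads[random.choice(busy)] -= 1
    for k in range(N_SERVERS):
        manage_power(k)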

    An Efficient Requirement-Aware Attachment Policy for Future Millimeter Wave Vehicular Networks

    The automotive industry is rapidly evolving towards connected and autonomous vehicles, whose ever more stringent data traffic requirements might exceed the capacity of traditional technologies for vehicular networks. In this scenario, densely deploying millimeter wave (mmWave) base stations is a promising approach to provide very high transmission speeds to the vehicles. However, mmWave signals suffer from high path and penetration losses, which might render the communication unreliable and discontinuous. Coexistence between mmWave and Long Term Evolution (LTE) communication systems has therefore been considered to guarantee increased capacity and robustness through heterogeneous networking. Following this rationale, we face the challenge of designing fair and efficient attachment policies in heterogeneous vehicular networks. Traditional methods based on received signal quality criteria lack consideration of the vehicles' individual requirements and traffic demands, and lead to suboptimal resource allocation across the network. In this paper, we propose a Quality-of-Service (QoS) aware attachment scheme which biases the cell selection as a function of the vehicular service requirements, preventing the overload of transmission links. Our simulations demonstrate that the proposed strategy significantly improves the percentage of vehicles satisfying application requirements and delivers efficient and fair association compared to state-of-the-art schemes.
    Comment: 8 pages, 8 figures, 2 tables, accepted to the 30th IEEE Intelligent Vehicles Symposium
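
    As a rough illustration of biasing cell selection by service requirements, the sketch below admits a vehicle only to cells whose residual capacity covers its demand and falls back to a pure signal-quality rule otherwise. The cell parameters, capacities, and selection rule are assumptions for illustration, not the paper's policy; the max-RSRP baseline is included only for contrast.

# Minimal sketch of a requirement-aware attachment rule (illustrative only).
# The eligibility test and all numbers below are assumptions, not the paper's policy.
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    kind: str                 # "mmWave" or "LTE"
    capacity_mbps: float      # total link capacity
    rsrp_dbm: float           # received signal power at the vehicle
    load_mbps: float = 0.0    # traffic already admitted

    def residual(self):
        return self.capacity_mbps - self.load_mbps

def attach(vehicle_demand_mbps, cells):
    """Requirement-aware rule: among cells that can still serve the demand,
    pick the strongest signal; fall back to max-RSRP if none qualify."""
    eligible = [c for c in cells if c.residual() >= vehicle_demand_mbps]
    pool = eligible if eligible else cells
    best = max(pool, key=lambda c: c.rsrp_dbm)
    best.load_mbps += vehicle_demand_mbps
    return best

def attach_baseline(vehicle_demand_mbps, cells):
    """Baseline for contrast: pure signal-quality criterion, ignoring demand and load."""
    best = max(cells, key=lambda c: c.rsrp_dbm)
    best.load_mbps += vehicle_demand_mbps
    return best

cells = [Cell("mmWave-1", "mmWave", 1000.0, -70.0),
         Cell("mmWave-2", "mmWave", 1000.0, -85.0),
         Cell("LTE-1", "LTE", 100.0, -95.0)]
for demand in [400.0, 400.0, 400.0]:          # three vehicles with heavy demand
    chosen = attach(demand, cells)
    print(f"demand {demand} Mb/s -> {chosen.name} (residual {chosen.residual():.0f} Mb/s)")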

    Nuts and Bolts of a Realistic Stochastic Geometric Analysis of mmWave HetNets: Hardware Impairments and Channel Aging

    © 2019 IEEE. Motivated by heterogeneous network (HetNet) design for improving coverage and by millimeter-wave (mmWave) transmission offering an abundance of extra spectrum, we present a general analytical framework shedding light on the downlink of realistic mmWave HetNets consisting of K tiers of randomly located base stations. Specifically, we model, by means of stochastic geometry tools, a multi-tier multi-user (MU) multiple-input multiple-output (MIMO) mmWave network degraded by the inevitable residual additive transceiver hardware impairments (RATHIs) and channel aging. Given this setting, we derive the coverage probability and the area spectral efficiency (ASE), and we subsequently evaluate the impact of RATHIs and channel aging on these metrics. Different path-loss laws for line-of-sight and non-line-of-sight links, which are among the distinguishing features of mmWave systems, are accounted for in the analysis. Among the findings, we show that the RATHIs have a meaningful impact in the high signal-to-noise-ratio regime, and that the additive distortion at the transmitter degrades the system performance more than the distortion at the receiver. Moreover, serving fewer users proves preferable, and the more directive the mmWave beams are, the higher the ASE becomes.
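
    A coverage probability of the kind derived analytically in the paper can also be approximated numerically. The Monte-Carlo sketch below simulates a single-tier Poisson field of mmWave base stations with separate LOS/NLOS path-loss exponents; the density, blockage constant, Rayleigh fading stand-in, and SINR threshold are assumptions, and the paper's hardware impairments, channel aging, MU-MIMO, beam directivity, and multi-tier aspects are not modelled.

# Monte-Carlo sketch of downlink coverage probability for a single-tier PPP of
# mmWave base stations with separate LOS/NLOS path-loss laws (illustrative only;
# all parameter values below are assumptions, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
LAM = 1e-4            # BS density per m^2
R = 2000.0            # simulation disc radius (m)
ALPHA_LOS, ALPHA_NLOS = 2.1, 3.4
P_TX, NOISE = 1.0, 1e-10          # linear units
BETA = 1 / 141.4      # LOS probability exp(-BETA * r), 3GPP-style blockage constant
SINR_THR = 1.0        # 0 dB SINR threshold
TRIALS = 2000

def coverage_prob():
    covered = 0
    for _ in range(TRIALS):
        n = rng.poisson(LAM * np.pi * R**2)
        if n == 0:
            continue
        r = R * np.sqrt(rng.random(n))            # BS distances from the typical user
        los = rng.random(n) < np.exp(-BETA * r)   # blockage-driven LOS indicator
        alpha = np.where(los, ALPHA_LOS, ALPHA_NLOS)
        fading = rng.exponential(1.0, n)          # Rayleigh stand-in for simplicity
        rx = P_TX * fading * r**(-alpha)
        sig = rx.max()                            # attach to the strongest BS
        interf = rx.sum() - sig
        if sig / (interf + NOISE) > SINR_THR:
            covered += 1
    return covered / TRIALS

print(f"estimated coverage probability: {coverage_prob():.3f}")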