A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
Automatic decision-making approaches, such as reinforcement learning (RL),
have been applied to (partially) solve the resource allocation problem
adaptively in the cloud computing system. However, a complete cloud resource
allocation framework exhibits high-dimensional state and action spaces, which
limit the usefulness of traditional RL techniques. In addition, high power
consumption has become one of the critical concerns in design and control of
cloud computing systems, which degrades system reliability and increases
cooling cost. An effective dynamic power management (DPM) policy should
minimize power consumption while maintaining performance degradation within an
acceptable level. Thus, a joint virtual machine (VM) resource allocation and
power management framework is critical to the overall cloud computing system.
Moreover, a novel solution framework is necessary to address the even higher
dimensions in state and action spaces. In this paper, we propose a novel
hierarchical framework for solving the overall resource allocation and power
management problem in cloud computing systems. The proposed hierarchical
framework comprises a global tier for VM resource allocation to the servers and
a local tier for distributed power management of local servers. The emerging
deep reinforcement learning (DRL) technique, which can deal with complicated
control problems with large state space, is adopted to solve the global tier
problem. Furthermore, an autoencoder and a novel weight sharing structure are
adopted to handle the high-dimensional state space and accelerate the
convergence speed. On the other hand, the local tier of distributed server
power management comprises an LSTM-based workload predictor and a model-free
RL-based power manager, operating in a distributed manner.
Comment: Accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
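
As a rough illustration of the local-tier design described in this abstract, the sketch below shows a minimal LSTM workload predictor in PyTorch. This is an illustrative toy, not the authors' implementation: the layer sizes, window length, and synthetic periodic trace are all assumed for the example.

```python
# Minimal sketch of an LSTM-based workload predictor (illustrative, not
# the paper's code): forecast the next workload level from a sliding
# window of recent observations.
import torch
import torch.nn as nn

class WorkloadPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # one-step-ahead prediction

# Toy usage on a synthetic periodic "request rate" trace (an assumption).
torch.manual_seed(0)
trace = torch.sin(torch.linspace(0, 20, 500)) + 0.1 * torch.randn(500)
window = 16
X = torch.stack([trace[i:i + window]
                 for i in range(len(trace) - window)]).unsqueeze(-1)
y = trace[window:].unsqueeze(-1)

model = WorkloadPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):                    # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```

In the framework above, the output of such a predictor would plausibly form part of the state on which the model-free RL power manager acts, e.g., deciding per server when to switch between active and low-power modes.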
Recommended from our members
Reinforcement Learning for Hybrid and Plug-In Hybrid Electric Vehicle Energy Management: Recent Advances and Prospects
The Power of Online Learning in Stochastic Network Optimization
In this paper, we investigate the power of online learning in stochastic
network optimization with unknown system statistics {\it a priori}. We are
interested in understanding how information and learning can be efficiently
incorporated into system control techniques, and what are the fundamental
benefits of doing so. We propose two \emph{Online Learning-Aided Control}
techniques, $\mathtt{OLAC}$ and $\mathtt{OLAC2}$, that explicitly utilize the
past system information in current system control via a learning procedure
called \emph{dual learning}. We prove strong performance guarantees of the
proposed algorithms: $\mathtt{OLAC}$ and $\mathtt{OLAC2}$ achieve the
near-optimal $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ utility-delay tradeoff,
and $\mathtt{OLAC2}$ possesses an $O(\epsilon^{-2/3})$ convergence time.
$\mathtt{OLAC}$ and $\mathtt{OLAC2}$ are probably the first algorithms that
simultaneously possess an explicit near-optimal delay guarantee and a
sub-linear convergence time. Simulation results also confirm the superior performance of
the proposed algorithms in practice. To the best of our knowledge, our attempt
is the first to explicitly incorporate online learning into stochastic network
optimization and to demonstrate its power in both theory and practice.
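
To make the dual learning idea concrete, here is a toy sketch (our own single-queue example, not the paper's $\mathtt{OLAC}$/$\mathtt{OLAC2}$ code): the controller estimates the arrival rate from past samples and runs subgradient ascent on the resulting empirical dual function to learn the optimal Lagrange multiplier, which an OLAC-style controller can then fold into its current control decisions.

```python
# Toy sketch of dual learning (a single-queue assumption, not OLAC/OLAC2
# itself). One queue; service mu in {0, 1, 2} costs V*mu^2 per slot via
# the drift-plus-penalty weight V; arrivals have an unknown rate that we
# estimate from past samples.
import random

random.seed(0)
V = 20.0
ACTIONS = [0, 1, 2]

# Past arrival samples (true mean 0.8, unknown to the controller).
samples = random.choices([0, 1, 2], weights=[0.4, 0.4, 0.2], k=1000)
lam_hat = sum(samples) / len(samples)      # empirical arrival rate

def best_action(q):
    # Minimizer of the Lagrangian term V*mu^2 - q*mu over the action set.
    return min(ACTIONS, key=lambda mu: V * mu ** 2 - q * mu)

# Subgradient ascent on the empirical dual
#   g(q) = q*lam_hat + min_mu [V*mu^2 - q*mu];
# a subgradient of g at q is lam_hat - best_action(q).
q = 0.0
for k in range(1, 5001):
    q = max(0.0, q + (5.0 / k ** 0.5) * (lam_hat - best_action(q)))

# q settles near the kink of the empirical dual (q* = 20 here, where the
# Lagrangian minimizer switches from mu = 0 to mu = 1).
print(f"empirical rate {lam_hat:.2f}, learned multiplier q ≈ {q:.1f}")
```

The learned multiplier is exactly the kind of past system information the abstract refers to: once q is known, the controller can start near the optimal operating point rather than discovering it slowly through queue dynamics.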
Distributed Stochastic Market Clearing with High-Penetration Wind Power
Integrating renewable energy into the modern power grid requires
risk-cognizant dispatch of resources to account for the stochastic availability
of renewables. Toward this goal, day-ahead stochastic market clearing with
high-penetration wind energy is pursued in this paper based on the DC optimal
power flow (OPF). The objective is to minimize the social cost which consists
of conventional generation costs, end-user disutility, as well as a risk
measure of the system re-dispatching cost. Capitalizing on the conditional
value-at-risk (CVaR), the novel model is able to mitigate the potentially high
risk of the recourse actions taken to compensate for wind forecast errors. The
convex optimization task is tackled via a distribution-free sample average
based approximation to bypass the prohibitively complex high-dimensional
integration. Furthermore, to cope with possibly large-scale dispatchable loads,
a fast distributed solver is developed with guaranteed convergence using the
alternating direction method of multipliers (ADMM). Numerical results tested on
a modified benchmark system are reported to corroborate the merits of the novel
framework and proposed approaches.Comment: To appear in IEEE Transactions on Power Systems; 12 pages and 9
figure
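
As a concrete, simplified instance of the CVaR-based sample average approximation, the sketch below clears a toy single-bus market with cvxpy (an illustrative assumption; the paper's formulation is a network-wide DC-OPF solved distributedly via ADMM). It uses the standard Rockafellar-Uryasev reformulation, in which CVaR becomes linear in an auxiliary variable z and per-scenario excess variables.

```python
# Toy single-bus CVaR market clearing via sample average approximation
# (illustrative numbers and cvxpy modeling; not the paper's full DC-OPF).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, alpha = 500, 0.95                    # scenarios, CVaR confidence level
d = 100.0                               # fixed demand (MWh)
c_gen, c_redispatch = 30.0, 80.0        # day-ahead and re-dispatch prices
wind = rng.uniform(10.0, 50.0, size=N)  # sampled wind scenarios (MWh)

g = cp.Variable(nonneg=True)            # day-ahead conventional dispatch
z = cp.Variable()                       # CVaR auxiliary variable (~VaR)
u = cp.Variable(N, nonneg=True)         # per-scenario excess over z

# Re-dispatch cost in scenario s: buy back the shortfall max(0, d - g - w_s).
shortfall_cost = c_redispatch * cp.pos(d - g - wind)

# Rockafellar-Uryasev: CVaR_alpha(L) = min_z z + E[(L - z)^+] / (1 - alpha).
cvar = z + cp.sum(u) / ((1 - alpha) * N)
prob = cp.Problem(cp.Minimize(c_gen * g + cvar),
                  [u >= shortfall_cost - z])
prob.solve()
print(f"day-ahead dispatch g* = {g.value:.1f} MWh, "
      f"objective = {prob.value:.0f}")
```

Note the distribution-free character the abstract highlights: only wind samples are needed, not a closed-form distribution, so the high-dimensional integration is bypassed entirely.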