Data Centers as Dispatchable Loads to Harness Stranded Power
We analyze how both traditional data center integration and dispatchable load
integration affect power grid efficiency. We use detailed network models,
parallel optimization solvers, and thousands of renewable generation scenarios
to perform our analysis. Our analysis reveals that significant spillage and
stranded power will be observed in power grids as wind power levels are
increased. A counter-intuitive finding is that collocating data centers with
inflexible loads next to wind farms has limited impacts on renewable portfolio
standard (RPS) goals because it provides limited system-level flexibility and
can in fact increase stranded power and fossil-fueled generation. In contrast,
optimally placing data centers that are dispatchable (with flexible loads)
provides system-wide flexibility, reduces stranded power, and improves
efficiency. In short, optimally placed dispatchable computing loads can enable
better scaling to high RPS. We show that these dispatchable computing loads are
powered to 60-80% of their requested capacity, indicating that there are
significant economic incentives provided by stranded power.
Data Center Cost Optimization Via Workload Modulation Under Real-World Electricity Pricing
We formulate optimization problems to study how data centers might modulate
their power demands for cost-effective operation, taking into account two key
complex features exhibited by real-world electricity pricing schemes: (i)
time-varying prices (e.g., time-of-day pricing, spot pricing, or higher energy
prices during coincident peaks) and (ii) a separate charge for peak power
consumption. Our focus is on demand modulation at the granularity of an entire
data center or a large part of it. For computational tractability reasons, we
work with a fluid model for power demands which we imagine can be modulated
using two abstract knobs of demand dropping and demand delaying (each with its
associated penalties or costs). Given that many data center workloads and
electricity prices can be effectively predicted using statistical modeling
techniques, we
devise a stochastic dynamic program (SDP) that can leverage such predictive
models. Since the SDP can be computationally infeasible in many real platforms,
we devise approximations for it. We also devise fully online algorithms that
might be useful for scenarios with poor power demand or utility price
predictability. For one of our online algorithms, we prove a competitive ratio
of 2-1/n. Finally, using empirical evaluation with both real-world and
synthetic power demands and real-world prices, we demonstrate the efficacy of
our techniques. Two salient empirically gained insights are: (i) demand delaying
is more effective than demand dropping for peak shaving (e.g., 10.74%
cost saving with only delaying vs. 1.45% with only dropping for a Google
workload), and (ii) workloads tend to have different cost-saving potential under
various electricity tariffs (e.g., 16.97% cost saving under a peak-based tariff
vs. 1.55% under a time-varying pricing tariff for a Facebook workload).
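The interplay of the two knobs under a peak-based tariff can be sketched with a toy fluid model; the functions, cap value, and numbers below are illustrative stand-ins, not the paper's actual formulation:

```python
def bill(power, energy_prices, peak_rate):
    """Bill = time-varying energy charge + separate charge on peak power."""
    energy = sum(q * p for q, p in zip(power, energy_prices))
    return energy + peak_rate * max(power)

def delay_to_cap(demand, cap):
    """Fluid-model demand delaying: serve at most `cap` per slot, carry the rest."""
    served, carried = [], 0.0
    for d in demand:
        total = d + carried
        s = min(total, cap)
        served.append(s)
        carried = total - s
    assert carried == 0.0, "cap too tight: some demand was never served"
    return served

demand, prices, peak_rate = [2.0, 10.0, 2.0, 0.0], [1.0, 1.0, 1.0, 1.0], 5.0
base = bill(demand, prices, peak_rate)                        # 14 energy + 50 peak = 64
shaved = bill(delay_to_cap(demand, 4.0), prices, peak_rate)   # 14 energy + 20 peak = 34
```

Delaying keeps the total energy served unchanged but flattens the peak, which is why it outperforms dropping when a separate peak-power charge dominates the bill.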
Distributed Real-Time HVAC Control for Cost-Efficient Commercial Buildings under Smart Grid Environment
In this paper, we investigate the problem of minimizing the long-term total
cost (i.e., the sum of energy cost and thermal discomfort cost) associated with
a Heating, Ventilation, and Air Conditioning (HVAC) system of a multizone
commercial building under a smart grid environment. To be specific, we first
formulate a stochastic program to minimize the time average expected total cost
with the consideration of uncertainties in electricity price, outdoor
temperature, the most comfortable temperature level, and external thermal
disturbance. Due to the existence of temporally and spatially coupled
constraints as well as unknown information about the future system parameters,
it is very challenging to solve the formulated problem. To this end, we propose
a real-time HVAC control algorithm based on the framework of Lyapunov
optimization, without the need to predict any system parameters or to know
their stochastic information. The key idea of the proposed algorithm is to
construct and stabilize virtual queues associated with indoor temperatures of
all zones. Moreover, we provide a distributed implementation of the proposed
real-time algorithm with the aim of protecting user privacy and enhancing
algorithmic scalability. Extensive simulation results based on real-world
traces show that the proposed algorithm can reduce energy cost effectively
with a small sacrifice in thermal comfort. Comment: 11 pages, 16 figures,
accepted to appear in IEEE Internet of Things Journal.
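The virtual-queue idea can be sketched per zone: a queue accumulates comfort violations, and each slot the controller trades off V times the energy cost against the queue-weighted temperature drive. Everything below (the linear thermal model, its coefficients, and the power levels) is a hypothetical stand-in for the paper's model:

```python
def next_temp(T, u, T_out, alpha=0.9, beta=0.5):
    # hypothetical linear zone model: cooling power u lowers the next temperature
    return alpha * T + (1 - alpha) * T_out - beta * u

def drift_plus_penalty_power(Q, T, T_out, price, V, power_levels):
    # pick the power level minimizing V * (energy cost) + Q * (next temperature)
    return min(power_levels, key=lambda u: V * price * u + Q * next_temp(T, u, T_out))

def update_queue(Q, T_next, T_comfort):
    # virtual queue tracks accumulated temperature excess above the comfort level
    return max(Q + T_next - T_comfort, 0.0)
```

When Q is small the controller saves energy; as discomfort accumulates, Q grows and cooling power wins the trade-off, which is what stabilizing the virtual queues amounts to.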
Online Learning for Offloading and Autoscaling in Energy Harvesting Mobile Edge Computing
Mobile edge computing (a.k.a. fog computing) has recently emerged to enable
in-situ processing of delay-sensitive applications at the edge of mobile
networks. Providing grid power supply in support of mobile edge computing,
however, is costly and even infeasible (in certain rugged or under-developed
areas), thus mandating on-site renewable energy as a major or even sole power
supply in increasingly many scenarios. Nonetheless, the high intermittency and
unpredictability of renewable energy make it very challenging to deliver a high
quality of service to users in energy harvesting mobile edge computing systems.
In this paper, we address the challenge of incorporating renewables into mobile
edge computing and propose an efficient reinforcement learning-based resource
management algorithm, which learns on-the-fly the optimal policy of dynamic
workload offloading (to the centralized cloud) and edge server provisioning to
minimize the long-term system cost (including both service delay and
operational cost). Our online learning algorithm uses a decomposition of the
(offline) value iteration and (online) reinforcement learning, thus achieving a
significant improvement of learning rate and run-time performance when compared
to standard reinforcement learning algorithms such as Q-learning. We prove the
convergence of the proposed algorithm and analytically show that the learned
policy has a simple monotone structure amenable to practical implementation.
Our simulation results validate the efficacy of our algorithm, which
significantly improves the edge computing performance compared to fixed or
myopic optimization schemes and conventional reinforcement learning algorithms.Comment: arXiv admin note: text overlap with arXiv:1701.01090 by other author
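For reference, the tabular Q-learning baseline the abstract compares against looks like the following; the state/action names and the cost-as-negative-reward convention are illustrative, and the paper's own algorithm replaces this with a value-iteration/online-learning decomposition:

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update; Q is a dict keyed by (state, action)."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# one step: in state "high_load", offloading to the cloud incurred cost 1.0
Q = {}
q_learning_step(Q, "high_load", "offload", -1.0, "low_load", ["offload", "local"])
```

Plain Q-learning must discover both the system dynamics and the value function from scratch, which is the slow learning the abstract's decomposition is designed to avoid.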
Distributionally Robust Chance Constrained Programming with Generative Adversarial Networks (GANs)
This paper presents a novel deep learning based data-driven optimization
method: a generative adversarial network (GAN) based distributionally robust
chance constrained programming framework.
GAN is applied to fully extract distributional information from historical data
in a nonparametric and unsupervised way without a priori approximation or
assumption. Since GAN utilizes deep neural networks, complicated data
distributions and modes can be learned, and it can model uncertainty
efficiently and accurately. Distributionally robust chance constrained
programming takes into consideration ambiguous probability distributions of
uncertain parameters. To tackle the computational challenges, the sample
average approximation method is adopted, and the required data samples are
generated by the GAN in an end-to-end way through its differentiable networks.
The proposed
framework is then applied to supply chain optimization under demand
uncertainty. The applicability of the proposed approach is illustrated through
a county-level case study of a spatially explicit biofuel supply chain in
Illinois.
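The sample average approximation step can be illustrated in one dimension: treat a list of numbers as GAN-generated demand samples and enforce the chance constraint supply >= demand on at least a 1 - eps fraction of them. The quantile rule below is a generic SAA device, not the paper's full supply chain model:

```python
def saa_chance_feasible(supply, demand_samples, eps):
    # SAA surrogate for P(supply >= demand) >= 1 - eps:
    # at most eps * N of the N samples may violate the constraint
    violations = sum(1 for d in demand_samples if supply < d)
    return violations <= eps * len(demand_samples)

def min_supply_saa(demand_samples, eps):
    # smallest supply passing the SAA check: an empirical (1 - eps) quantile
    xs = sorted(demand_samples)
    k = int(eps * len(xs))  # number of samples allowed to violate
    return xs[len(xs) - 1 - k]
```

Because the samples come from the GAN's differentiable generator rather than a fitted parametric distribution, the same counting surrogate works without any a priori distributional assumption.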
Two-Scale Stochastic Control for Multipoint Communication Systems with Renewables
Increasing threats of global warming and climate change call for an
energy-efficient and sustainable design of future wireless communication
systems. To this end, a novel two-scale stochastic control framework is put
forth for smart-grid powered coordinated multi-point (CoMP) systems. Taking
into account renewable energy sources (RES), dynamic pricing, two-way energy
trading facilities and imperfect energy storage devices, the energy management
task is formulated as an infinite-horizon optimization problem minimizing the
time-average energy transaction cost, subject to the users' quality of service
(QoS) requirements. Leveraging the Lyapunov optimization approach as well as
the stochastic subgradient method, a two-scale online control (TS-OC) approach
is developed for the resultant smart-grid powered CoMP systems. Using only
historical data, the proposed TS-OC makes online control decisions at two
timescales, and features a provably feasible and asymptotically near-optimal
solution. Numerical tests further corroborate the theoretical analysis, and
demonstrate the merits of the proposed approach. Comment: 10 pages, 7 figures.
On Coordination of Smart Grid and Cooperative Cloud Providers
Cooperative cloud providers in the form of cloud federations can potentially
reduce their energy costs by exploiting electricity price fluctuations across
different locations. In this environment, on the one hand, the electricity
price has a significant influence on the federations formed, and, thus, on the
profit earned by the cloud providers, and on the other hand, the cloud
cooperation has an inevitable impact on the performance of the smart grid. In
this regard, the interaction between independent cloud providers and the smart
grid is modeled as a two-stage Stackelberg game interleaved with a coalitional
game in this paper. In the first stage of this game, the smart grid, as the
leader, chooses a proper electricity pricing mechanism to maximize its own
profit. In the second stage, cloud providers cooperatively manage their
workload to minimize their electricity costs. Given the dynamics of cloud
providers in the federation formation process, an optimization model based on a
constrained Markov decision process (CMDP) is used by the smart grid to
achieve the optimal policy. Numerical results show that the proposed solution
yields around 28% and 29% profit improvement on average for the smart grid and
the cloud providers, respectively, compared to the noncooperative scheme.
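Backward induction in such a two-stage Stackelberg game can be sketched in a few lines: the leader enumerates candidate prices and anticipates the follower's best response. The linear demand curve and all numbers are hypothetical, and the real model adds the coalitional and CMDP layers on top:

```python
def follower_load(price, a=10.0, b=1.0):
    # hypothetical best response: the federation's cost-minimizing load at this price
    return max(a - b * price, 0.0)

def leader_best_price(candidate_prices, generation_cost=2.0):
    # the grid anticipates follower_load and maximizes (price - cost) * load
    return max(candidate_prices,
               key=lambda p: (p - generation_cost) * follower_load(p))
```

The key Stackelberg feature is that the leader optimizes over the follower's reaction function rather than over the follower's actions directly.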
Communication-Constrained Expansion Planning for Resilient Distribution Systems
Distributed generation and remotely controlled switches have emerged as
important technologies to improve the resiliency of distribution grids against
extreme weather-related disturbances. It therefore becomes important to study
how best to place them on the grid in order to meet a resiliency criterion,
while minimizing costs and capturing their dependencies on the associated
communication systems that sustain their distributed operations. This paper
introduces the Optimal Resilient Design Problem for Distribution and
Communication Systems (ORDPDC) to address this need. The ORDPDC is formulated as a
two-stage stochastic mixed-integer program that captures the physical laws of
distribution systems, the communication connectivity of the smart grid
components, and a set of scenarios specifying which components are
affected by potential disasters. The paper proposes an exact branch-and-price
algorithm for the ORDPDC which features a strong lower bound and a variety of
acceleration schemes to address degeneracy. The ORDPDC model and
branch-and-price algorithm were evaluated on a variety of test cases with
varying disaster intensities and network topologies. The results demonstrate
the significant impact of the network topologies on the expansion plans and
costs, as well as the computational benefits of the proposed approach.
Control of Generalized Energy Storage Networks
The integration of intermittent and volatile renewable energy resources
requires increased flexibility in the operation of the electric grid. Storage,
broadly speaking, provides the flexibility of shifting energy over time; the
network, on the other hand, provides the flexibility of shifting energy across
geographical locations. The optimal control of general storage networks in
uncertain environments is an important open problem. The key challenge is that,
even in small networks, the corresponding constrained stochastic control
problems with continuous spaces suffer from curses of dimensionality, and are
intractable in general settings. For large networks, no efficient algorithm is
known to give optimal or near-optimal performance. This paper provides an
efficient and provably near-optimal algorithm to solve this problem in a very
general setting. We study the optimal control of generalized storage networks,
i.e., electric networks connected to distributed generalized storages. Here
generalized storage is a unifying dynamic model for many components of the grid
that provide the functionality of shifting energy over time, ranging from
standard energy storage devices to deferrable or thermostatically controlled
loads. An online algorithm is devised for the corresponding constrained
stochastic control problem based on the theory of Lyapunov optimization. We
prove that the algorithm is near-optimal, and construct a semidefinite program
to minimize the sub-optimality bound. The resulting bound is a constant that
depends only on the parameters of the storage network and cost functions, and
is independent of uncertainty realizations. Numerical examples are given to
demonstrate the effectiveness of the algorithm. Comment: This report, written
in January 2014, is a longer version of the conference paper [1] (see
references in the report). This version contains a somewhat more general
treatment of the cases with sub-differentiable objective functions and Markov
disturbance. arXiv admin note: substantial text overlap with arXiv:1405.778
Distributed and Efficient Resource Balancing Among Many Suppliers and Consumers
Achieving a balance of supply and demand in a multi-agent system with many
individual self-interested and rational agents that act as suppliers and
consumers is a natural problem in a variety of real-life domains---smart power
grids, data centers, and others. In this paper, we address the
profit-maximization problem for a group of distributed supplier and consumer
agents, with no inter-agent communication. We simulate a scenario of a market
with suppliers and consumers such that at every instant, each supplier
agent supplies a certain quantity and simultaneously, each consumer agent
consumes a certain quantity. The information about the total amounts supplied
and consumed is kept only with the center. The proposed algorithm combines the
classical additive-increase multiplicative-decrease (AIMD) algorithm with a
probabilistic rule for the agents to respond to a capacity signal. This leads
to a nonhomogeneous Markov chain, and we show
almost sure convergence of this chain to the social optimum, for our market of
distributed supplier and consumer agents. Employing this AIMD-type algorithm,
the center sends a feedback message to the agents in the supplier side if there
is a scenario of excess supply, or to the consumer agents if there is excess
consumption. Each agent has a concave utility function whose derivative tends
to 0 when an optimum quantity is supplied/consumed. Hence, when social
convergence is reached, each agent supplies or consumes the quantity that
yields its individual maximum profit, without communicating with any other
agents. Our simulations show the efficacy of this approach. Comment: 6 pages,
12 figures, IEEE International Conference on Systems, Man and Cybernetics.
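The AIMD-plus-probabilistic-response rule can be sketched as follows; the fixed back-off probability here is a simplification (in the paper it is derived from each agent's utility), and the one-bit capacity signal from the center is the only feedback any agent receives:

```python
import random

def aimd_round(quantities, capacity, alpha=0.1, beta=0.5,
               backoff_prob=0.8, rng=random):
    """One synchronous round: additive increase while the one-bit capacity
    signal is off; probabilistic multiplicative decrease when it is on."""
    if sum(quantities) > capacity:  # center broadcasts the excess signal
        return [q * beta if rng.random() < backoff_prob else q
                for q in quantities]
    return [q + alpha for q in quantities]
```

In the paper's analysis, choosing the back-off probability from the utility derivative is what drives the long-run average quantities toward the social optimum with no agent-to-agent communication.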