Reducing Electricity Demand Charge for Data Centers with Partial Execution
Data centers consume a large amount of energy and incur substantial
electricity cost. In this paper, we study the familiar problem of reducing data
center energy cost with two new perspectives. First, we find, through an
empirical study of contracts from electric utilities powering Google data
centers, that demand charge per kW for the maximum power used is a major
component of the total cost. Second, many services such as Web search tolerate
partial execution of the requests because the response quality is a concave
function of processing time. Data from Microsoft Bing search engine confirms
this observation.
We propose a simple idea of using partial execution to reduce the peak power
demand and energy cost of data centers. We systematically study the problem of
scheduling partial execution with stringent SLAs on response quality. For a
single data center, we derive an optimal algorithm to solve the workload
scheduling problem. In the case of multiple geo-distributed data centers, the
demand of each data center is controlled by the request routing algorithm,
which makes the problem much more involved. We decouple the two aspects, and
develop a distributed optimization algorithm to solve the large-scale request
routing problem. Trace-driven simulations show that partial execution reduces
cost both for a single data center and, combined with request routing, for
geo-distributed data centers.
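The concave quality-time tradeoff that makes partial execution viable can be illustrated with a small sketch. The exponential quality curve, its rate parameter `a`, and the 80% SLA target below are illustrative assumptions, not the paper's actual model:

```python
import math

def quality(t, a=2.0):
    # Hypothetical concave quality curve: diminishing returns as
    # normalized processing time t grows from 0 toward 1.
    return 1.0 - math.exp(-a * t)

def min_time_for_quality(q_target, a=2.0):
    # Shortest partial-execution time that still meets the quality SLA.
    return -math.log(1.0 - q_target) / a

# Truncating execution at the SLA threshold frees power headroom:
t_partial = min_time_for_quality(0.80)  # noticeably less than full time 1.0
headroom = 1.0 - t_partial              # time (hence power) saved per request
```

Because the curve is concave, the first fraction of processing delivers most of the quality, which is exactly why shaving the tail of execution can cut peak power while still meeting a stringent quality SLA.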
A service broker for Intercloud computing
This thesis aims at assisting users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms by use of a simulation environment. The work also investigates the optimal deployment of Multi-Cloud workflows on Intercloud environments.
Carbon Responder: Coordinating Demand Response for the Datacenter Fleet
The increasing integration of renewable energy sources results in
fluctuations in carbon intensity throughout the day. To mitigate their carbon
footprint, datacenters can implement demand response (DR) by adjusting their
load based on grid signals. However, this presents challenges for private
datacenters with diverse workloads and services. One of the key challenges is
efficiently and fairly allocating power curtailment across different workloads.
In response to these challenges, we propose the Carbon Responder framework.
The Carbon Responder framework aims to reduce the carbon footprint of
heterogeneous workloads in datacenters by modulating their power usage. Unlike
previous studies, Carbon Responder considers both online and batch workloads
with different service level objectives and develops accurate performance
models to achieve performance-aware power allocation. The framework supports
three alternative policies: Efficient DR, Fair and Centralized DR, and Fair and
Decentralized DR. We evaluate Carbon Responder policies using production
workload traces from a private hyperscale datacenter. Our experimental results
demonstrate that the efficient Carbon Responder policy achieves roughly twice
the carbon-footprint reduction of baseline approaches adapted from existing
methods. The fair Carbon Responder policies distribute the performance
penalties and carbon-reduction responsibility fairly among workloads.
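One way to picture the fair-allocation problem is to split a curtailment budget across workloads in proportion to each workload's SLO slack. The proportional rule and the kW figures below are assumptions for illustration, not Carbon Responder's actual policies:

```python
def allocate_curtailment(budget_kw, slack_kw):
    """Split a power-curtailment budget across workloads in proportion to
    how much each can shed without violating its SLO (its "slack").
    A hypothetical fairness rule, not the framework's actual policy."""
    total_slack = sum(slack_kw.values())
    if total_slack == 0.0:
        return {w: 0.0 for w in slack_kw}
    # Never curtail a workload past its slack, even if the budget is larger.
    scale = min(1.0, budget_kw / total_slack)
    return {w: s * scale for w, s in slack_kw.items()}

# A 30 kW curtailment request against 60 kW of total slack cuts each
# workload by half of its slack:
cuts = allocate_curtailment(30.0, {"search": 20.0, "batch": 40.0})
```

The proportional rule captures the fairness intuition (no workload absorbs a disproportionate share of the penalty); an efficiency-oriented policy would instead concentrate curtailment on the workloads with the cheapest performance cost per watt.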
Green-Aware Virtual Machine Migration Strategy in Sustainable Cloud Computing Environments
As cloud computing develops rapidly, the energy consumption of large-scale datacenters becomes non-negligible, and renewable energy is therefore considered as a supplementary supply for building sustainable cloud infrastructures. In this chapter, we present a green-aware virtual machine (VM) migration strategy for such datacenters powered by sustainable energy sources, considering the power consumption of both IT functional devices and cooling devices. We define an overall optimization problem from an energy-aware point of view and solve it using statistical search approaches. The purpose is to utilize green energy sufficiently while guaranteeing the performance of applications hosted by the datacenter. Evaluation experiments are conducted under realistic workload traces and solar energy generation data to validate feasibility. Results show that green energy utilization increases remarkably, and more overall revenue could be achieved.
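A single greedy placement step in this spirit might look as follows. The host model (a green-power budget and current draw per host) and the selection rule are simplifying assumptions for illustration, not the chapter's actual migration strategy:

```python
def choose_host(vm_power_kw, hosts):
    # hosts maps host name -> (green_kw_available, used_kw).
    # Prefer the feasible host with the most spare renewable capacity,
    # so the migrated VM runs on green energy where possible.
    feasible = [(green - used, name)
                for name, (green, used) in hosts.items()
                if green - used >= vm_power_kw]
    if not feasible:
        return None  # a fuller strategy would fall back to brown power
    return max(feasible)[1]

# h1 has only 1 kW of green headroom, h2 has 4 kW, so a 2 kW VM goes to h2:
target = choose_host(2.0, {"h1": (5.0, 4.0), "h2": (6.0, 2.0)})
```

A statistical-search approach like the one the chapter describes would explore many such placements jointly (including cooling power), rather than committing greedily one VM at a time.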
Energy-Aware Algorithms for Greening Internet-Scale Distributed Systems Using Renewables
Internet-scale Distributed Systems (IDSs) are large distributed systems that are comprised of hundreds of thousands of servers located in hundreds of data centers around the world. A canonical example of an IDS is a content delivery network (CDN) that delivers content to users from a large global deployment of servers around the world. IDSs consume large amounts of energy and their energy requirements are projected to increase significantly in the future. With carbon emissions from data centers increasing every year, use of renewables to power data centers is critical for the sustainability of data centers and for the environment.
In this thesis we design energy-aware algorithms that leverage renewable sources of energy and study their potential to reduce brown energy consumption in IDSs. Firstly, we study the use of renewable solar energy to power IDS data centers. A net-zero IDS produces as much energy from renewables (green energy) as it needs to entirely off-set its energy consumption. We develop effective algorithms to help minimize the number of solar panels provisioned for net-zero IDSs. We empirically evaluate our algorithms using load traces from Akamai's global CDN and solar data from PVWatts. Our results show that for net-zero year, net-zero month, and net-zero week, our optimal algorithm can reduce the number of panels by 36%, 68%, and 82% respectively, thereby making sustainability of IDSs significantly more achievable.
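A naive lower bound on the provisioning question can be written in a few lines. The per-panel yield figure is hypothetical, and the thesis's optimal algorithm additionally exploits when and where load occurs, which this bound ignores:

```python
import math

def panels_for_net_zero(annual_load_kwh, panel_yield_kwh_per_year):
    # Fewest panels whose yearly generation offsets the yearly load.
    # Ignores the timing of production vs. consumption, so it is only
    # a lower bound on what a timing-aware scheme must consider.
    return math.ceil(annual_load_kwh / panel_yield_kwh_per_year)

# Hypothetical numbers: 10 MWh/year of load, 450 kWh/year per panel.
n = panels_for_net_zero(10_000.0, 450.0)
```

The interesting part of the thesis result is the gap between this crude bound and a scheme that matches generation to load over a year, month, or week, which is where the reported 36%-82% panel reductions come from.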
IDSs consume a significant amount of energy for cooling their infrastructure. Therefore, next, we study the potential benefits of using open air cooling (OAC) to reduce the energy usage as well as the capital costs incurred by an IDS for cooling. We develop an algorithm to incorporate OAC into the IDS architecture and empirically evaluate its efficacy using extensive workload traces from Akamai's global CDN and global weather data from NOAA. Our results show that by using OAC, a global IDS can achieve a 51% reduction in cooling energy in summer and a 92% reduction in winter.
Finally, we study the greening potential of combining two contrasting sources of renewable energy, namely solar energy and open air cooling (OAC). OAC involves the use of outside air to cool data centers when the weather outside is sufficiently cold and dry. OAC is therefore likely to be abundant in colder weather and at night-time. In contrast, solar energy generation is correlated with sunny weather and day-time. Given their contrasting natures, we study whether combining these two renewable sources of energy can yield complementary benefits. Given the intermittent nature of renewable energy, we use energy storage and load shifting to facilitate the use of green energy, and study trade-offs in brown energy reduction based on key parameters such as battery size, number of solar panels, and radius of load movement. We perform a detailed cost analysis, including amortized cost savings as well as a break-even analysis for different energy prices. Our results show that we can significantly reduce brown energy consumption, by about 55% to 59%, just by combining the two technologies. We can increase the savings further to between 60% and 65% by adding load movement within a radius of 5,000 km, and to between 73% and 89% by adding energy storage.
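The "sufficiently cold and dry" condition for OAC can be sketched as a simple threshold test. The 27 °C and 15 °C thresholds below are illustrative assumptions (loosely in the style of ASHRAE allowable envelopes), not the thesis's actual criteria:

```python
def oac_usable(temp_c, dewpoint_c, temp_max=27.0, dew_max=15.0):
    # Outside air can cool the datacenter only when it is both cold
    # and dry enough; thresholds here are illustrative assumptions.
    return temp_c <= temp_max and dewpoint_c <= dew_max

# Fraction of sampled hours in which free cooling is available:
samples = [(5.0, 2.0), (30.0, 20.0), (18.0, 10.0)]  # (temp, dew point) in C
oac_fraction = sum(oac_usable(t, d) for t, d in samples) / len(samples)
```

Running such a test over a year of hourly weather data for each site is what reveals the complementarity with solar: the hours that fail the OAC test (hot, sunny daytime) are largely the hours when photovoltaic generation peaks.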
Artificial Intelligence and Machine Learning Approaches to Energy Demand-Side Response: A Systematic Review
Recent years have seen an increasing interest in Demand Response (DR) as a means to provide flexibility, and hence improve the reliability of energy systems in a cost-effective way. Yet the high complexity of the tasks associated with DR, combined with their use of large-scale data and the frequent need for near real-time decisions, means that Artificial Intelligence (AI) and Machine Learning (ML), a branch of AI, have recently emerged as key technologies for enabling demand-side response. AI methods can be used to tackle various challenges, ranging from selecting the optimal set of consumers to respond, learning their attributes and preferences, dynamic pricing, and scheduling and control of devices, to learning how to incentivise participants in DR schemes and how to reward them in a fair and economically efficient way. This work provides an overview of AI methods utilised for DR applications, based on a systematic review of over 160 papers, 40 companies and commercial initiatives, and 21 large-scale projects. The papers are classified with regard to both the AI/ML algorithm(s) used and the application area in energy DR. Next, commercial initiatives (including both start-ups and established companies) and large-scale innovation projects where AI methods have been used for energy DR are presented. The paper concludes with a discussion of the advantages and potential limitations of the reviewed AI techniques for different DR tasks, and outlines directions for future research in this fast-growing area.