
    Optimising for energy or robustness? Trade-offs for VM consolidation in virtualized datacenters under uncertainty

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11590-016-1065-x

    Reducing the energy consumption of virtualized datacenters and the Cloud is important for lowering both the CO2 footprint and the operational cost of a Cloud operator. However, there is a trade-off between energy consumption and perceived application performance. To save energy, Cloud operators want to consolidate as many Virtual Machines (VMs) as possible onto the fewest physical servers, possibly overbooking resources. This may, however, cause SLA violations when many VMs run at peak load. Such consolidation is typically done using VM migration techniques, which stress the network, so it is important to find the right balance between energy consumption and the number of migrations to perform. Unfortunately, the resources that a VM requires are not precisely known in advance, which makes it very difficult to optimise the VM migration schedule. In this paper, we therefore propose a novel approach based on the theory of robust optimisation. We model the VM consolidation problem as a robust Mixed Integer Linear Program and allow bounds to be specified for, e.g., the resource requirements of the VMs. We show that, by using our model, Cloud operators can effectively trade off uncertainty in resource requirements against total energy consumption. Our model also allows us to quantify the price of robustness in terms of energy savings versus resource requirement violations.

    Peer Reviewed. Postprint (author's final draft).
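    To make the consolidation idea concrete, the following is a minimal sketch of a robustness-aware VM consolidation MILP in PuLP. It is not the paper's exact robust formulation: a single knob `theta` scales each VM's demand from its nominal value (theta = 0) to its worst-case bound (theta = 1), and sweeping it exposes the energy-vs-robustness trade-off. All VM data, server names, and capacities below are illustrative assumptions.

```python
# Sketch only: bin-packing-style consolidation with a robustness knob `theta`.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vms = {"vm1": (2.0, 1.0), "vm2": (1.5, 0.5), "vm3": (3.0, 2.0)}  # (nominal, deviation) CPU
servers = ["s1", "s2", "s3"]
capacity = 4.0
theta = 0.5  # 0 = optimistic sizing, 1 = fully conservative sizing

prob = LpProblem("robust_vm_consolidation", LpMinimize)
x = {(v, s): LpVariable(f"x_{v}_{s}", cat=LpBinary) for v in vms for s in servers}
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in servers}  # server powered on

prob += lpSum(y[s] for s in servers)              # energy proxy: number of active servers
for v in vms:
    prob += lpSum(x[v, s] for s in servers) == 1  # every VM placed exactly once
for s in servers:
    prob += lpSum((nom + theta * dev) * x[v, s]
                  for v, (nom, dev) in vms.items()) <= capacity * y[s]

prob.solve()
print("active servers:", int(value(prob.objective)))
```

    With theta = 0 the toy instance fits on two servers; with theta = 0.5 it needs three, which is exactly the kind of energy-versus-robustness trade-off the paper quantifies.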

    GreenMail: Reducing Email Service's Carbon Emission with Minimum Cost

    Internet services now account for a large fraction of worldwide carbon emissions, as a growing number of companies provide such services and more and more developers build on them. Notably, service providers are increasingly trying to reduce their carbon emissions by using on-site or off-site renewable energy in their datacenters in order to attract more customers. Despite these efforts, some users are aggressively calling for even cleaner Internet services; for example, over 500,000 Facebook users petitioned the social networking site to use renewable energy to power its datacenters. It seems impossible, however, to satisfy such demand solely from inside those production datacenters, given the transition cost and stability concerns. A third party outside the existing Internet services, on the other hand, can easily set up a proxy service to attract renewable-energy-sensitive users by 1) using carbon-neutral or even over-offsetting cloud instances to bridge end users and traditional Internet services, and 2) estimating and offsetting the carbon emissions of the traditional Internet services. In this paper, we propose GreenMail, a general IMAP proxy caching system that connects email users and traditional email services. GreenMail runs on green web hosts and caches users' emails on green cloud instances. In addition, it offsets the carbon emitted by the traditional backend email services. With GreenMail, users can set a carbon emission constraint and keep using their traditional email service without any code modification on either the user side or the email server side.

    Comment: Master's Thesis
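    The core mechanism is a caching proxy in front of a traditional IMAP backend. Below is a minimal sketch of that idea only, not GreenMail's implementation: messages fetched from the backend are cached locally (standing in for storage on a green cloud instance), so repeated reads are served from the cache instead of generating backend traffic. The host, credentials, UID, and cache path are placeholder assumptions.

```python
# Sketch: serve a message from a local cache, falling back to the IMAP backend.
import imaplib
import os

CACHE_DIR = "mail_cache"

def fetch_message(host: str, user: str, password: str, uid: str) -> bytes:
    """Return the raw RFC822 message, serving from the local cache when possible."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    cache_path = os.path.join(CACHE_DIR, f"{uid}.eml")
    if os.path.exists(cache_path):          # cache hit: no backend traffic
        with open(cache_path, "rb") as f:
            return f.read()

    conn = imaplib.IMAP4_SSL(host)          # cache miss: fetch from the backend
    try:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)
        _typ, data = conn.uid("FETCH", uid, "(RFC822)")
        raw = data[0][1]
        with open(cache_path, "wb") as f:   # populate the cache for next time
            f.write(raw)
        return raw
    finally:
        conn.logout()
```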

    MultiGreen: Cost-Minimizing Multi-source Datacenter Power Supply with Online Control

    Session 4: Data Center Energy Management. Full text of the conference paper: http://conferences.sigcomm.org/eenergy/2013/papers/p13.pdf

    Faced with soaring power costs, a large carbon emission footprint, and unpredictable power outages, more and more modern Cloud Service Providers (CSPs) are beginning to mitigate these challenges by equipping their Datacenter Power Supply System (DPSS) with multiple sources: (1) the smart grid with time-varying electricity prices, (2) an uninterrupted power supply (UPS) of finite capacity, and (3) intermittent green or renewable energy. It remains a significant challenge to operate these multiple power supply sources in a complementary manner, delivering reliable energy to datacenter users over time while minimizing a CSP's operational cost over the long run. This paper proposes an efficient, online control algorithm for the DPSS, called MultiGreen. MultiGreen is based on an innovative two-timescale Lyapunov optimization technique. Without requiring a priori knowledge of system statistics, MultiGreen allows CSPs to make online decisions on purchasing grid energy at two time scales (in the long-term market and in the real-time market), leveraging renewable energy, and opportunistically charging and discharging the UPS, in order to fully exploit the available green energy and low electricity prices for minimum operational cost. Our detailed analysis and trace-driven simulations based on one month of real-world data demonstrate the optimality (in terms of the trade-off between minimization of DPSS operational cost and satisfaction of datacenter availability) and stability (performance guarantees under fluctuating energy demand and supply) of MultiGreen.
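    To illustrate the control setting, the following is a deliberately simplified, single-timescale sketch of an online supply-mixing rule, not MultiGreen's two-timescale Lyapunov algorithm: each slot, renewable energy is used first, the UPS is charged when grid prices are low and discharged when they are high. The trace, threshold, and rate limits are made-up assumptions.

```python
# Sketch: one-slot decision for a multi-source datacenter power supply.
def control_slot(demand, renewable, price, ups_level, ups_cap, price_threshold, rate):
    """Return (grid_purchase, ups_change) for one time slot."""
    residual = max(demand - renewable, 0.0)   # demand left after free renewable energy

    if price <= price_threshold:
        # cheap slot: buy for the residual demand and opportunistically charge the UPS
        charge = min(rate, ups_cap - ups_level)
        return residual + charge, +charge
    # expensive slot: discharge the UPS to cover as much residual demand as possible
    discharge = min(rate, ups_level, residual)
    return residual - discharge, -discharge

# toy trace: (demand, renewable, price) per slot
trace = [(10, 4, 0.05), (12, 2, 0.20), (8, 6, 0.04), (15, 1, 0.25)]
ups, cap = 5.0, 20.0
for d, r, p in trace:
    grid, delta = control_slot(d, r, p, ups, cap, price_threshold=0.10, rate=5.0)
    ups += delta
    print(f"price={p:.2f} grid={grid:.1f} ups={ups:.1f}")
```

    The actual algorithm replaces the fixed price threshold with a drift-plus-penalty criterion that comes with provable cost and availability guarantees.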

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide into the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of the hardware equipment of data centers (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced by means of a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as a driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements the aforementioned strategy, and we propose design guidelines to accomplish each step of the proposed strategy, referring to related achievements and enumerating the main challenges that must still be solved.

    Peer Reviewed. Postprint (author's final draft).

    Carbon Responder: Coordinating Demand Response for the Datacenter Fleet

    The increasing integration of renewable energy sources results in fluctuations in carbon intensity throughout the day. To mitigate their carbon footprint, datacenters can implement demand response (DR) by adjusting their load based on grid signals. However, this presents challenges for private datacenters with diverse workloads and services. One of the key challenges is efficiently and fairly allocating power curtailment across different workloads. In response to these challenges, we propose the Carbon Responder framework, which aims to reduce the carbon footprint of heterogeneous datacenter workloads by modulating their power usage. Unlike previous studies, Carbon Responder considers both online and batch workloads with different service level objectives and develops accurate performance models to achieve performance-aware power allocation. The framework supports three alternative policies: Efficient DR, Fair and Centralized DR, and Fair and Decentralized DR. We evaluate the Carbon Responder policies using production workload traces from a private hyperscale datacenter. Our experimental results demonstrate that the efficient Carbon Responder policy reduces the carbon footprint by around 2x compared to baseline approaches adapted from existing methods, while the fair Carbon Responder policies distribute the performance penalties and carbon reduction responsibility fairly among workloads.
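    The central allocation problem is how to split a fleet-wide curtailment target across workloads without violating their service objectives. The sketch below is illustrative only and is not one of the paper's three policies: each workload sheds power in proportion to its flexible headroom (current power minus a protected minimum), so no workload is pushed below that minimum. All names and numbers are assumptions.

```python
# Sketch: proportional, headroom-aware allocation of a curtailment target.
def allocate_curtailment(workloads, target):
    """workloads: {name: (current_power, min_power)}; target: total power to shed."""
    flexible = {n: max(p - m, 0.0) for n, (p, m) in workloads.items()}
    total_flex = sum(flexible.values())
    if total_flex == 0:
        return {n: 0.0 for n in workloads}
    share = min(target, total_flex)                 # never shed more than is flexible
    return {n: share * flexible[n] / total_flex for n in flexible}

fleet = {"search": (120.0, 90.0), "ads": (200.0, 150.0), "batch_ml": (300.0, 100.0)}
print(allocate_curtailment(fleet, target=150.0))
# batch_ml, with the most flexible power, absorbs the largest share of curtailment
```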

    Energy-aware scheduling in distributed computing systems

    Distributed computing systems, such as data centers, are key for supporting modern computing demands. However, the energy consumption of data centers has become a major concern over the last decade. Their worldwide energy consumption in 2012 was estimated at around 270 TWh, and grim forecasts predict it will quadruple by 2030. Maximizing energy efficiency while also maximizing computing efficiency is a major challenge for modern data centers. This work addresses this challenge by scheduling the operation of modern data centers, considering a multi-objective approach for simultaneously optimizing both efficiency objectives. Multiple data center scenarios are studied, from scheduling a single data center to scheduling a federation of several geographically distributed data centers. Mathematical models are formulated for each scenario, covering their most relevant components, such as computing resources, computing workload, cooling system, networking, and green energy generators, among others. A set of accurate heuristic and metaheuristic algorithms is designed to address the scheduling problem. These scheduling algorithms are comprehensively studied and compared with each other, using statistical tools to evaluate their efficacy on realistic workloads and scenarios. Experimental results show that the designed scheduling algorithms are able to significantly increase the energy efficiency of data centers compared to traditional scheduling methods, while providing a diverse set of trade-off solutions regarding the computing efficiency of the data center. These results confirm the effectiveness of the proposed algorithmic approaches for data center infrastructures.
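    As a concrete illustration of the multi-objective scheduling idea, the following is a minimal sketch, not the thesis's algorithms: a greedy list-scheduling heuristic assigns each task to the machine that minimizes a weighted sum of makespan and energy, and sweeping the weight `alpha` produces a set of energy/performance trade-off solutions. Task sizes, machine speeds, and power figures are invented for the example.

```python
# Sketch: greedy energy-aware list scheduling with a tunable trade-off weight.
def schedule(tasks, machines, alpha):
    """tasks: list of work units; machines: {name: (speed, power_w)}; alpha in [0, 1]."""
    finish = {m: 0.0 for m in machines}
    energy = 0.0
    for work in tasks:
        best, best_cost = None, float("inf")
        for m, (speed, power) in machines.items():
            runtime = work / speed
            new_makespan = max(max(finish.values()), finish[m] + runtime)
            cost = alpha * new_makespan + (1 - alpha) * (energy + power * runtime)
            if cost < best_cost:
                best, best_cost = m, cost
        speed, power = machines[best]
        finish[best] += work / speed
        energy += power * (work / speed)
    return max(finish.values()), energy

tasks = [40, 25, 60, 10, 35]
machines = {"fast": (2.0, 200.0), "slow": (1.0, 80.0)}
for alpha in (0.0, 0.5, 1.0):
    makespan, energy = schedule(tasks, machines, alpha)
    print(f"alpha={alpha:.1f} makespan={makespan:.1f}h energy={energy:.0f}Wh")
```

    Each value of alpha yields a different compromise between computing efficiency (makespan) and energy, mirroring the diverse trade-off solutions reported in the thesis.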