8 research outputs found
Shutdown Policies with Power Capping for Large Scale Computing Systems
Large-scale distributed systems are expected to consume huge amounts of energy. Shutdown policies are an appealing way to address this issue, dynamically adapting the set of powered-on resources to the actual workload. However, multiple constraints must be taken into account before such policies can be applied to real infrastructures, in particular the time and energy cost of shutting down and waking up nodes, and power capping to avoid disrupting the system. In this paper, we propose models that translate these constraints into different shutdown policies that can be combined. Our models are validated through simulations on real workload traces and power measurements on real testbeds.
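A minimal sketch of the classic break-even reasoning behind such shutdown policies: powering a node off only pays when the expected idle period exceeds the point where idle-power savings cover the energy cost of the off/on cycle. All power and energy values below are illustrative assumptions, not figures from the paper.

```python
def breakeven_idle_time(p_idle, p_off, e_cycle):
    """Minimum idle duration (s) for which powering a node off saves energy.

    p_idle:  node power while idle but on (W)
    p_off:   residual node power while off (W)
    e_cycle: extra energy (J) of one shutdown + wake-up sequence
    """
    return e_cycle / (p_idle - p_off)

def should_shutdown(expected_idle_s, p_idle=95.0, p_off=8.0, e_cycle=2600.0):
    # Shut down only if the predicted idle period exceeds the break-even point
    # (here roughly 30 s with the assumed values).
    return expected_idle_s > breakeven_idle_time(p_idle, p_off, e_cycle)
```

With these assumed values, a 60-second predicted idle period justifies a shutdown while a 10-second one does not; a power-capping constraint would additionally stagger wake-ups so that simultaneous boot spikes stay under the cap.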
How Much Energy Can Green HPC Cloud Users Save?
Cloud computing has become an attractive and easy-to-use solution for users who want to externalize the execution of their applications. However, the data centers hosting cloud systems consume enormous amounts of energy, and reducing this consumption becomes an urgent challenge with the rapid growth of cloud utilization. In this paper, we explore a way for energy-aware HPC cloud users to reduce their footprint on cloud infrastructures by shrinking the virtual resources they request. We study the influence of such green users on the system's energy consumption and compare it with that of users who are more aggressive in their resource utilization. We find that larger resources are more energy-demanding even though they execute applications faster; yet shrinking resources too far is not beneficial for energy consumption either. A tradeoff lies between these two options.
Survey of network metrology platforms
Internet services rely on communication networks to serve billions of end users worldwide, and metrology platforms are used to assess the Quality of Service of such networks. This work reviews and classifies existing metrology equipment and methods. In particular, we compare different packet capture techniques and clock synchronisation methods, two essential building blocks of metrology platforms. © 2012 IEEE
The SAGITTA approach for optimizing solar energy consumption in distributed clouds with stochastic modeling
Facing the urgent need to decrease data centers' energy consumption, cloud providers resort to on-site renewable energy production; solar energy can thus be used to power data centers. Yet this energy production intrinsically fluctuates over time and depends on the geographical location. In this paper, we propose a stochastic model for optimizing solar energy consumption in distributed clouds. Our approach, named SAGITTA (Stochastic Approach for Green consumption In disTributed daTA centers), is shown to produce a virtual machine schedule close to the optimal algorithm in terms of energy savings and to outperform classical round-robin approaches over varying cloud workloads and real solar energy generation traces.
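To illustrate why solar-aware placement beats round-robin (this is a toy greedy heuristic, not the SAGITTA algorithm itself): placing each VM at the site with the most remaining solar headroom minimizes the brown (non-renewable) power that must make up the difference. All numbers are made-up assumptions.

```python
def greedy_solar(vms, solar):
    """Place each VM at the site with the most remaining solar power.

    vms:   list of VM power demands (W)
    solar: per-site available solar power (W)
    Returns total brown (non-solar) power needed (W).
    """
    solar = solar[:]  # work on a copy of the per-site headroom
    brown = 0.0
    for p in vms:
        i = max(range(len(solar)), key=lambda k: solar[k])
        brown += max(0.0, p - solar[i])          # shortfall covered by the grid
        solar[i] = max(0.0, solar[i] - p)
    return brown

def round_robin(vms, solar):
    """Baseline: cycle through sites regardless of their solar production."""
    solar = solar[:]
    brown = 0.0
    for j, p in enumerate(vms):
        i = j % len(solar)
        brown += max(0.0, p - solar[i])
        solar[i] = max(0.0, solar[i] - p)
    return brown
```

For four 100 W VMs and two sites producing 300 W and 50 W of solar power, the greedy placement needs 50 W of brown power versus 150 W for round-robin.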
On the Energy Efficiency of Sleeping and Rate Adaptation for Network Devices
Best Paper Award. The ever-growing appetite of Internet applications for network resources has led to an unprecedented electricity bill for telecommunication infrastructures. Several techniques have been developed to improve the energy consumption of network devices. As their utilization varies widely over time, the two main energy-saving techniques, sleeping and rate adaptation, exploit lower-workload periods to either put some hardware elements to sleep or adapt the network rate to the actual traffic level. In this paper, we compare two emblematic approaches of these energy-efficient techniques: Low Power Idle (LPI) and Adaptive Link Rate (ALR). Our simulation-based study quantifies the energy savings reachable by these two approaches depending on the traffic characteristics. We show that Low Power Idle has a clear advantage: it achieves substantial energy savings with little impact on the Quality of Service, whereas ALR almost always consumes more than LPI and can reach unacceptable QoS levels. We also show that the two can be combined for better energy efficiency, but at the cost of significant QoS degradation.
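The contrast between the two techniques can be sketched with a toy link power model (the power values and rate steps below are illustrative assumptions, not the paper's measurements): LPI sleeps between transmissions, while ALR runs continuously at the lowest rate step that can carry the offered load.

```python
LPI_ACTIVE_W, LPI_SLEEP_W = 1.0, 0.1
ALR_RATES = {0.1: 0.4, 0.5: 0.7, 1.0: 1.0}  # rate fraction -> power (W)

def lpi_power(utilization):
    """LPI: transmit at full rate for a fraction of time, sleep otherwise."""
    return utilization * LPI_ACTIVE_W + (1 - utilization) * LPI_SLEEP_W

def alr_power(utilization):
    """ALR: run 100% of the time at the smallest rate step carrying the load."""
    rate = min(r for r in ALR_RATES if r >= utilization)
    return ALR_RATES[rate]
```

At 20% utilization, for instance, this toy model gives 0.28 W for LPI but 0.7 W for ALR (stuck at the 50% rate step), mirroring the abstract's finding that ALR almost always consumes more than LPI.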
Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows
While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters and analyze the resulting power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model in a simulator that allows direct comparisons between real-world and modeled power and energy consumption. We find that our model is highly accurate compared to real-world executions and improves accuracy by about two orders of magnitude over the traditional models used in the energy-efficient workflow scheduling literature.
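A hedged sketch of a node power model in the spirit described above: a non-linear CPU term plus explicit I/O and I/O-wait terms. The coefficients, exponent, and function shape are made-up assumptions for illustration, not the model proposed in the paper.

```python
def node_power(cpu_util, io_mbps, waiting_io=False,
               p_idle=90.0, p_cpu_max=60.0, alpha=0.6,
               w_io=0.05, p_wait=4.0):
    """Estimated power (W) of one compute node.

    cpu_util:   0..1 aggregate CPU utilization of the node
    io_mbps:    current I/O throughput (MB/s)
    waiting_io: True while tasks block on I/O (keeps power above idle)
    """
    p = p_idle + p_cpu_max * (cpu_util ** alpha)  # non-linear in CPU load
    p += w_io * io_mbps                           # I/O operations cost power
    if waiting_io:
        p += p_wait                               # waiting is not free either
    return p
```

With alpha < 1 the CPU term is concave, so power at 50% utilization exceeds the midpoint between idle and full load, capturing the non-linearity the measurements revealed; the I/O terms add the contribution that purely CPU-based models miss.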