
    Power Management Techniques for Data Centers: A Survey

    With the growing use of the Internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers. For this reason, managing the power consumption of data centers has become essential. In this paper, we highlight the need for achieving energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into the techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing the large power dissipation of data centers.
    Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation
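    Many of the surveyed techniques (DVFS in particular) exploit the standard CMOS dynamic power relation. As a minimal illustrative equation, not taken from the paper itself, where \alpha is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency:

        P_{\text{dyn}} = \alpha\, C\, V^{2} f, \qquad f \propto V \;\Rightarrow\; P_{\text{dyn}} \propto f^{3}

    Lowering V and f together therefore reduces power roughly cubically, at the cost of longer execution time; navigating that trade-off is what DVFS-based power management amounts to.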

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering quality-of-service expectations and device power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost savings under dynamic workload scenarios.
    Comment: 12 pages, 5 figures. Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
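    To make the idea of power-aware allocation concrete, the sketch below shows a simplified placement heuristic that assigns each VM to the host whose power draw increases the least, assuming a linear idle-to-peak power model. It is an illustrative sketch only: the host and VM attributes are assumptions for the example, and it is not the authors' CloudSim implementation, which is written in Java.

        # Illustrative power-aware VM placement heuristic (simplified sketch).
        # Host/VM attributes and the linear power model are assumptions for
        # this example; this is NOT the authors' CloudSim code.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Host:
            mips_capacity: float          # total CPU capacity
            p_idle: float                 # power draw (W) at 0% utilization
            p_max: float                  # power draw (W) at 100% utilization
            mips_used: float = 0.0

            def power(self, extra_mips: float = 0.0) -> float:
                """Linear power model: idle power plus a utilization-proportional part."""
                util = min(1.0, (self.mips_used + extra_mips) / self.mips_capacity)
                return self.p_idle + (self.p_max - self.p_idle) * util

        @dataclass
        class VM:
            mips_demand: float

        def place_vm(vm: VM, hosts: List[Host]) -> Optional[Host]:
            """Pick the host whose power draw increases the least after hosting the VM."""
            best, best_delta = None, float("inf")
            for h in hosts:
                if h.mips_used + vm.mips_demand > h.mips_capacity:
                    continue                      # host cannot accommodate the VM
                delta = h.power(vm.mips_demand) - h.power()
                if delta < best_delta:
                    best, best_delta = h, delta
            if best is not None:
                best.mips_used += vm.mips_demand
            return best

        # Example usage
        hosts = [Host(10000, 150, 300), Host(10000, 100, 250)]
        for vm in [VM(2500), VM(4000), VM(3000)]:
            chosen = place_vm(vm, hosts)
            print("placed" if chosen else "rejected", vm.mips_demand)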

    An energy optimization with improved QOS approach for adaptive cloud resources

    In recent times, the utilization of cloud computing VMs has greatly increased in day-to-day life due to the widespread use of digital applications, network appliances, portable gadgets, and information devices. On these cloud computing VMs, numerous different schemes can be implemented, such as multimedia signal processing methods. Thus, efficient performance of these cloud computing VMs becomes an obligatory requirement, particularly for multimedia signal processing. However, high energy consumption and reduced efficiency of cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, as well as energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption, and average power required, compared to state-of-the-art techniques.
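    The DVFS principle underlying such schemes can be illustrated with a simple frequency-selection rule: run each task at the lowest frequency that still meets its deadline, which minimizes energy under a convex power model. The frequency levels, power model, and task parameters below are assumptions for the example; this is not the ACRR algorithm from the paper.

        # Illustrative DVFS frequency selection (simplified sketch).
        # Frequencies, power model, and task parameters are assumed; NOT the ACRR algorithm.

        FREQS_GHZ = [1.0, 1.5, 2.0, 2.5, 3.0]   # assumed available frequency levels

        def power_watts(f_ghz: float) -> float:
            """Assumed power model: constant static part plus cubic dynamic part."""
            return 5.0 + 2.0 * f_ghz ** 3

        def pick_frequency(cycles_g: float, deadline_s: float):
            """Return (frequency, energy in joules) for the slowest deadline-feasible frequency."""
            for f in sorted(FREQS_GHZ):
                runtime = cycles_g / f          # seconds, with work given in giga-cycles
                if runtime <= deadline_s:
                    return f, power_watts(f) * runtime
            f = max(FREQS_GHZ)                  # deadline infeasible: run as fast as possible
            return f, power_watts(f) * (cycles_g / f)

        # Example: a 4 giga-cycle task with a 2.5 s deadline
        f, energy = pick_frequency(4.0, 2.5)
        print(f"run at {f} GHz, energy ~ {energy:.1f} J")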

    Energy-aware simulation with DVFS

    In recent years, research has been conducted in the area of models of large systems, especially distributed systems, to analyze and understand their behavior. Simulators are now commonly used in this area and are becoming more complex. Most of them provide frameworks for simulating application scheduling in various Grid infrastructures, others are specifically developed for modeling networks, but only a few of them simulate energy-efficient algorithms. This article describes which tools need to be implemented in a simulator in order to support energy-aware experimentation. The emphasis is on DVFS simulation, from its implementation in the CloudSim simulator to the whole methodology adopted to validate its functioning. In addition, a scientific application is used as a use case in both experiments and simulations, where the close relationship between DVFS efficiency and hardware architecture is highlighted. A second use case, using Cloud applications represented by DAGs (also a new functionality of CloudSim), demonstrates that DVFS efficiency also depends on the intrinsic middleware behavior.
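    For intuition, DVFS support in such simulators typically models governor policies similar to the Linux "ondemand" governor: jump to the highest frequency under high load, step down when mostly idle. The sketch below is a simplified version with assumed thresholds and frequency steps; it is not the CloudSim DVFS implementation (which is written in Java).

        # Simplified sketch of an "ondemand"-style DVFS governor of the kind
        # modeled in energy-aware simulators. Thresholds and frequency levels
        # are assumptions; this is not the CloudSim DVFS implementation.

        FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed discrete frequency levels
        UP_THRESHOLD = 0.80                      # jump to max frequency above this load
        DOWN_STEP_THRESHOLD = 0.30               # step one level down below this load

        def next_frequency(current_idx: int, cpu_utilization: float) -> int:
            """Return the index of the frequency to use for the next simulation interval."""
            if cpu_utilization > UP_THRESHOLD:
                return len(FREQS_GHZ) - 1        # ondemand: go straight to the highest level
            if cpu_utilization < DOWN_STEP_THRESHOLD and current_idx > 0:
                return current_idx - 1           # gradually step down when mostly idle
            return current_idx                   # otherwise keep the current frequency

        # Example trace of utilization samples over successive intervals
        idx = 0
        for util in [0.1, 0.9, 0.85, 0.5, 0.2, 0.1]:
            idx = next_frequency(idx, util)
            print(f"util={util:.2f} -> {FREQS_GHZ[idx]} GHz")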

    Energy-aware scheduling in distributed computing systems

    Distributed computing systems, such as data centers, are key to supporting modern computing demands. However, the energy consumption of data centers has become a major concern over the last decade. Their worldwide energy consumption in 2012 was estimated to be around 270 TWh, and grim forecasts predict it will quadruple by 2030. Maximizing energy efficiency while also maximizing computing efficiency is a major challenge for modern data centers. This work addresses this challenge by scheduling the operation of modern data centers, considering a multi-objective approach for simultaneously optimizing both efficiency objectives. Multiple data center scenarios are studied, such as scheduling a single data center and scheduling a federation of several geographically distributed data centers. Mathematical models are formulated for each scenario, covering their most relevant components, such as computing resources, computing workload, cooling system, networking, and green energy generators, among others. A set of accurate heuristic and metaheuristic algorithms is designed for addressing the scheduling problem. These scheduling algorithms are comprehensively studied and compared with each other, using statistical tools to evaluate their efficacy when addressing realistic workloads and scenarios. Experimental results show that the designed scheduling algorithms are able to significantly increase the energy efficiency of data centers when compared to traditional scheduling methods, while providing a diverse set of trade-off solutions regarding the computing efficiency of the data center. These results confirm the effectiveness of the proposed algorithmic approaches for data center infrastructures.
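    The multi-objective view described above can be illustrated with a toy example: each candidate schedule is scored on (energy, makespan), and the non-dominated candidates form the trade-off (Pareto) front. The machine and task parameters below are assumptions for the example, and the exhaustive enumeration stands in for the thesis's heuristic and metaheuristic algorithms, which it does not reproduce.

        # Toy multi-objective scheduling evaluation: score every assignment on
        # (energy, makespan) and keep the Pareto front. Parameters are assumed;
        # this is not the thesis's actual algorithms.

        from itertools import product
        from typing import Dict, List, Tuple

        TASKS = [40.0, 25.0, 60.0, 10.0]                     # task lengths (giga-operations)
        MACHINES = {"m1": (2.0, 120.0), "m2": (3.0, 220.0)}  # name -> (speed GOPS, power W)

        def evaluate(assignment: Tuple[str, ...]) -> Tuple[float, float]:
            """Return (busy-time energy in joules, makespan in seconds) of an assignment."""
            busy: Dict[str, float] = {m: 0.0 for m in MACHINES}
            for task, machine in zip(TASKS, assignment):
                speed, _ = MACHINES[machine]
                busy[machine] += task / speed
            energy = sum(busy[m] * MACHINES[m][1] for m in MACHINES)
            return energy, max(busy.values())

        def pareto_front(points: List[Tuple[Tuple[str, ...], Tuple[float, float]]]):
            """Keep assignments not dominated in both energy and makespan."""
            front = []
            for a, (ea, ma) in points:
                dominated = any(eb <= ea and mb <= ma and (eb, mb) != (ea, ma)
                                for _, (eb, mb) in points)
                if not dominated:
                    front.append((a, (ea, ma)))
            return front

        candidates = [(assign, evaluate(assign))
                      for assign in product(MACHINES, repeat=len(TASKS))]
        for assign, (energy, makespan) in pareto_front(candidates):
            print(assign, f"energy={energy:.0f} J", f"makespan={makespan:.1f} s")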

    Cloud computing: survey on energy efficiency

    Cloud computing is today's most emphasized Information and Communications Technology (ICT) paradigm, directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. These infrastructures account for between 1.1% and 1.5% of total worldwide electricity use, a share projected to rise even further. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to the infrastructure, an analysis of its current status is required. In this article, we perform a comprehensive analysis of the infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach to analyze the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.

    On energy consumption of switch-centric data center networks

    The data center network (DCN) is the core of cloud computing and accounts for about 40% of the energy spend when compared to the cooling system, power distribution, and conversion of the whole data center (DC) facility. It is essential to reduce the energy consumption of the DCN to ensure an energy-efficient (green) data center can be achieved. An analysis of DC performance and efficiency is presented, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of actual energy that is turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment (ITE) processes within the DCN. These metrics are acknowledged as a subset of the major metrics known to DCs, power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE). This study suggests that although FT consumes more energy overall, it spends less energy per transmitted bit of information, outperforming 3T.
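    For reference, the facility-level metrics mentioned above have the standard definitions below (well-known definitions, not specific to this paper); the per-bit energy figure used for the 3T/FT comparison is assumed here to be the DCN energy divided by the bits delivered:

        \mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}, \qquad
        \mathrm{DCIE} = \frac{P_{\text{IT equipment}}}{P_{\text{total facility}}} = \frac{1}{\mathrm{PUE}}, \qquad
        E_{\text{bit}} = \frac{E_{\text{DCN}}}{\text{bits transmitted}}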