183,200 research outputs found

    Adaptive runtime techniques for power and resource management on multi-core systems

    Energy-related costs are among the major contributors to the total cost of ownership of data centers and high-performance computing (HPC) clusters. As a result, future data centers must be energy-efficient to meet the continuously increasing computational demand. Constraining the power consumption of the servers is a widely used approach for managing energy costs and complying with power delivery limitations. In tandem, virtualization has become common practice, as it reduces hardware and power requirements by enabling consolidation of multiple applications onto a smaller set of physical resources. However, administration and management of data center resources have become more complex due to the growing number of virtualized servers. Therefore, designing autonomous and adaptive energy efficiency approaches is crucial to achieving sustainable and cost-efficient operation in data centers. Many modern data centers running enterprise workloads successfully implement energy efficiency approaches today. However, the nature of multi-threaded applications, which are becoming more common in all computing domains, brings additional design and management challenges. Tackling these challenges requires a deeper understanding of the interactions between the applications and the underlying hardware. Although cluster-level management techniques bring significant benefits, node-level techniques provide more visibility into application characteristics, which can be used to further improve overall energy efficiency. This thesis proposes adaptive runtime power and resource management techniques for multi-core systems. It demonstrates that taking multi-threaded workload characteristics into account during management significantly improves the energy efficiency of server nodes, which are the basic building blocks of data centers.
    The key distinguishing features of this work are as follows: we implement the proposed runtime techniques on state-of-the-art commodity multi-core servers and show that their energy efficiency can be significantly improved by (1) taking multi-threaded application-specific characteristics into account while making resource allocation decisions, (2) accurately tracking dynamically changing power constraints using low-overhead, application-aware runtime techniques, and (3) coordinating dynamic adaptive decisions at various layers of the computing stack, specifically at the system and application levels. Our results show that efficient resource distribution under power constraints yields energy savings of up to 24% compared to existing approaches, along with the ability to meet power constraints 98% of the time for a diverse set of multi-threaded applications.
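    The power-constraint tracking described above can be sketched as a simple feedback loop that lowers or raises a DVFS frequency until modeled power meets the cap. This is an illustrative toy with made-up constants, not the thesis's actual controller:

```python
# Illustrative sketch of application-aware power capping via a proportional
# DVFS controller. The power model and gains are made-up constants, not the
# thesis's implementation.

def power_model(freq_ghz, active_threads):
    """Toy server power: static floor plus dynamic power ~ threads * f^3."""
    return 20.0 + 4.0 * active_threads * freq_ghz ** 3

def cap_power(power_cap_w, active_threads, freq=3.0,
              f_min=1.0, f_max=3.0, gain=0.002, steps=50):
    """Nudge frequency down when over the cap and up when under it."""
    for _ in range(steps):
        error = power_cap_w - power_model(freq, active_threads)
        freq = min(f_max, max(f_min, freq + gain * error))
    return freq, power_model(freq, active_threads)

freq, power = cap_power(power_cap_w=120.0, active_threads=8)
```

In a real system the toy model would be replaced by power telemetry (e.g. Intel RAPL) and the gain tuned per platform.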

    Economic Analysis of a Data Center Virtual Power Plant Participating in Demand Response

    Data centers consume a significant amount of energy from the grid, and the number of data centers is increasing at a high rate. As demand on the transmission system increases, network congestion reduces the economic efficiency of the grid and begins to risk failure. Data centers have underutilized energy resources, such as backup generators and battery storage, which can be used for demand response (DR) to benefit both the electric power system and the data center. Therefore, data center energy resources, including renewable energy, are aggregated and controlled using an energy management system (EMS) to operate as a virtual power plant (VPP). The data center as a VPP participates in a day-ahead DR program to relieve network congestion and improve market efficiency. Data centers mostly use lead-acid batteries as the energy reserve in Uninterruptible Power Supply (UPS) systems that ride through power fluctuations and short-term power outages. These batteries are sized according to the power requirement of the data center and the backup duration required for its reliable operation. Most of the time these batteries remain on float charge, with infrequent charge-discharge cycles. Batteries have a limited float life; at the end of that life the battery is assumed dead and requires replacement. Therefore, the battery's unused energy can be allocated to DR under a daily energy budget limit without affecting its overall float life. This is incorporated as a soft constraint in the EMS model, and any use of battery energy beyond the daily budget is charged the wear cost of the battery. A case study places the data center on a modified version of the IEEE 30-bus test system to evaluate the potential economic savings from participating in the DR program, coordinated by the Independent System Operator (ISO).
    We show that the savings of the data center operating as a VPP and participating in the DR program far outweigh the additional expense of operating its own generators and batteries.
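    The battery soft constraint above can be illustrated with a toy cost function: discharge within the daily energy budget earns DR revenue at no wear cost, while energy beyond the budget is penalized at the battery's wear cost. All prices and limits below are made-up illustrative numbers, not the paper's model:

```python
def battery_dispatch_cost(energy_used_kwh, daily_budget_kwh,
                          wear_cost_per_kwh, dr_price_per_kwh):
    """Net benefit of a day's DR discharge: revenue on all energy delivered,
    minus a wear penalty only on the energy beyond the daily budget."""
    over_budget = max(0.0, energy_used_kwh - daily_budget_kwh)
    revenue = dr_price_per_kwh * energy_used_kwh
    wear_penalty = wear_cost_per_kwh * over_budget
    return revenue - wear_penalty

# Within budget: pure revenue. Over budget: wear cost eats into revenue.
within = battery_dispatch_cost(10.0, 12.0, 0.30, 0.10)
over = battery_dispatch_cost(15.0, 12.0, 0.30, 0.10)
```

An EMS optimizer would maximize this kind of objective across the day-ahead DR horizon, so the soft constraint discourages (rather than forbids) exceeding the budget.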

    A Secure Storage Management & Auditing Scheme for Cloud Storage

    Cloud computing is an evolving domain that provides many on-demand services used by businesses on a daily basis. Massive growth in cloud storage results in new data centers hosted by large numbers of servers. As the number of data centers grows, so does energy consumption, and cloud service providers are looking for environmentally friendly alternatives to reduce it. Data storage requires a huge amount of resources and management, so new frameworks are needed to store and manage data at low cost. To prevent unauthorized access, the cloud service provider must also provide data access control, an effective way to ensure data storage security within the cloud. For storage cost minimization we use the DCT compression technique, which compresses data without compromising its quality. For data access control and security, the asymmetric cryptographic algorithm RSA is used. For data auditing, MD5 is combined with RSA to generate digital signatures. The proposed work aims to cover efficiency, performance, and security in cloud computing.
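    The MD5-with-RSA auditing step can be sketched with textbook RSA over toy primes and Python's hashlib. This is illustrative only: the tiny key and raw (unpadded) signing are insecure, MD5 is broken for collision resistance, and a real deployment would use a vetted cryptography library with proper padding and key sizes:

```python
import hashlib

# Toy MD5-with-RSA signing, mirroring the scheme's auditing step. Textbook
# RSA with tiny primes and no padding -- illustrative only, never secure.

p, q = 61, 53                # toy primes; real keys use ~2048-bit moduli
n = p * q                    # modulus (3233)
phi = (p - 1) * (q - 1)      # Euler's totient (3120)
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def sign(data: bytes) -> int:
    """Hash with MD5, reduce mod n, and 'encrypt' the digest with the private key."""
    digest = int.from_bytes(hashlib.md5(data).digest(), "big") % n
    return pow(digest, d, n)

def verify(data: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare."""
    digest = int.from_bytes(hashlib.md5(data).digest(), "big") % n
    return pow(signature, e, n) == digest

signature = sign(b"audit me")    # verify(b"audit me", signature) is True
```

The auditor only needs the public pair (e, n) to check a signature, which is what makes third-party auditing of stored data possible.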

    Energy efficiency of dynamic management of virtual cluster with heterogeneous hardware

    Cloud computing is an essential part of today's computing world. A continuously increasing amount of computation with varying resource requirements is placed in large data centers. The variation among computing tasks, both in their resource requirements and time of processing, makes it possible to optimize the usage of physical hardware by applying cloud technologies. In this work, we develop a prototype system for load-based management of virtual machines in an OpenStack computing cluster. Our prototype is based on the idea of 'packing' idle virtual machines onto special park servers optimized for this purpose. We evaluate the method by running real high-energy physics analysis software in an OpenStack test cluster and by simulating the same principle using the CloudSim simulator. The results show a clear improvement of 9-48% in total energy efficiency when using our method together with resource overbooking and heterogeneous hardware. Peer reviewed.
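    The 'packing' idea can be sketched as a first-fit assignment of idle VMs onto park servers. Sizing VMs by a single memory dimension is an assumption made here for illustration; the prototype's actual placement logic may differ:

```python
# Illustrative first-fit packing of idle VMs onto park servers. VM sizes
# and server capacity are in arbitrary memory units, chosen for the example.

def pack_idle_vms(idle_vms, park_capacity):
    """Assign each idle VM (largest first) to the first park server with
    room, opening a new park server whenever none fits."""
    free = []         # remaining capacity per park server
    placement = {}    # vm name -> park server index
    for name, mem in sorted(idle_vms.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if mem <= cap:
                free[i] -= mem
                placement[name] = i
                break
        else:
            free.append(park_capacity - mem)
            placement[name] = len(free) - 1
    return placement, len(free)

placement, n_park = pack_idle_vms({"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 1}, 5)
```

Hosts vacated of idle VMs can then be overbooked with active workloads or powered down, which is where the energy saving comes from.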

    A comprehensive review of a data centre for a cooling system

    Cyber-Physical-Social Systems, commercial enterprises, and social networks use data centers to store, process, and distribute massive amounts of data; the data center serves as the foundation for all of these endeavors. Data center workload and power consumption are increasing rapidly due to the demand for remote data services. Mechanical refrigeration and terminal cooling are the most critical components of most cooling systems, and transferring heat from the data center to the outside environment is a complicated process. Air cooling systems and technology are most useful for room-level and rack-level cooling. Because of its superior cooling performance and higher energy efficiency, air cooling has attracted more attention than water cooling in most existing data centers. The chillers and fans consume the most power of all the cooling equipment in the system. Energy management methods for the cooling system can be divided into mechanism-based and data-driven methods. Operation management of cooling equipment is proposed to reduce power consumption, mainly using model predictive control and reinforcement-learning-based methods. This paper presents an overview of data center cooling systems, focusing on the most common cooling solutions, power consumption modeling methods, and optimization control strategies, and discusses current and future challenges of data center cooling.
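    As a toy mechanism-based example of why fan control dominates cooling power: fan power grows roughly with the cube of fan speed while heat removal grows roughly linearly, so an energy-aware controller picks the slowest speed that still removes the IT heat load. All coefficients here are made-up illustrative values:

```python
# Toy mechanism-based fan control: heat removal scales ~linearly with fan
# speed while fan power scales ~cubically, so the controller picks the
# slowest speed that still covers the IT heat load. Illustrative numbers.

def min_fan_speed(heat_load_kw, removal_kw_per_unit_speed, speeds):
    """Return the slowest allowed speed whose heat removal covers the load."""
    for s in sorted(speeds):
        if s * removal_kw_per_unit_speed >= heat_load_kw:
            return s
    return max(speeds)  # saturate at full speed if the load exceeds capacity

def fan_power_kw(speed, k=0.5):
    """Cubic fan-affinity law: power ~ k * speed^3."""
    return k * speed ** 3

speed = min_fan_speed(5.9, 10.0, [0.2, 0.4, 0.6, 0.8, 1.0])
```

Model predictive control and reinforcement learning generalize this idea by optimizing the same trade-off over predicted future loads rather than a single instant.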

    Virtual Machine Management for Efficient Cloud Data Centers with Applications to Big Data Analytics

    Infrastructure-as-a-Service (IaaS) cloud data centers offer computing resources in the form of virtual machine (VM) instances as a service over the Internet. This allows cloud users to lease and manage computing resources based on the pay-as-you-go model. In such a scenario, cloud users run their applications on the most appropriate VM instances and pay for the actual resources used. To support the growing service demands of end users, cloud providers are building an increasing number of large-scale IaaS cloud data centers consisting of many thousands of heterogeneous servers. The ever-increasing heterogeneity of both servers and VMs requires efficient management to balance the load in the data centers and, more importantly, to reduce the energy consumption due to underutilized physical servers. To achieve these goals, the key is to eliminate inefficiencies in the use of computing resources. This dissertation investigates the VM management problem for efficient IaaS cloud data centers. In particular, it considers VM placement and VM consolidation to achieve effective load balancing and energy efficiency in cloud infrastructures. VM placement allows cloud providers to allocate a set of requested or migrating VMs onto physical servers with the goal of balancing the load or minimizing the number of active servers. While addressing the VM placement problem is important, VM consolidation is even more important to enable continuous reorganization of already-placed VMs onto the smallest number of servers. It helps create idle servers during periods of low resource utilization by taking advantage of live VM migration provided by virtualization technologies. Energy consumption is then reduced by dynamically switching idle servers into a power-saving state. As VM migrations and server switches consume additional energy, their frequency needs to be limited as well.
    This dissertation concludes with a sample application of distributed computing to big data analytics.
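    A minimal consolidation pass in the spirit described above (not the dissertation's algorithm) tries to migrate every VM off the least-loaded host so that host can be switched into a power-saving state, aborting if the remaining hosts lack spare capacity:

```python
# Illustrative consolidation pass: empty the least-loaded host via live
# migration so it can be switched into a power-saving state; give up if the
# other hosts lack spare capacity. Loads and capacity are example percents.

def consolidate_once(hosts, capacity):
    """hosts maps host name -> list of VM loads. Returns (emptied_host,
    migration_plan) on success, or None if consolidation would overload."""
    src = min(hosts, key=lambda h: sum(hosts[h]))
    remaining = {h: sum(vms) for h, vms in hosts.items()}
    plan = []
    for vm in sorted(hosts[src], reverse=True):   # place biggest VMs first
        dst = next((h for h in hosts
                    if h != src and remaining[h] + vm <= capacity), None)
        if dst is None:
            return None                           # keep the source host running
        remaining[dst] += vm
        plan.append((vm, dst))
    for vm, dst in plan:                          # commit the live migrations
        hosts[dst].append(vm)
    hosts[src] = []                               # host is now idle
    return src, plan

hosts = {"h1": [50, 30], "h2": [20], "h3": [40]}  # loads in percent of capacity
result = consolidate_once(hosts, 100)
```

Returning the migration plan (rather than migrating eagerly) makes it easy to cap the number of migrations per pass, reflecting the point that migrations and server switches themselves cost energy.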

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide to the atmosphere, and this contribution is expected to increase in the following years. This has encouraged the development of techniques to reduce the energy consumption and environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of the hardware equipment of data centers (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced by means of a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as the driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements the aforementioned strategy, and we propose design guidelines to accomplish each step, referring to related achievements and enumerating the main challenges that must still be solved. Peer reviewed.

    Power Management Techniques for Data Centers: A Survey

    With the growing use of the internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers, so managing that consumption has become essential. In this paper, we highlight the need for energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing their large power dissipation.
    Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation
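    One of the levers surveyed in this literature, DVFS, exploits the dynamic CMOS power relation P = C·V²·f: since supply voltage can scale down roughly with frequency, dynamic power falls roughly cubically as frequency is lowered. A back-of-the-envelope check with illustrative constants (not measurements):

```python
# Back-of-the-envelope DVFS arithmetic: dynamic CMOS power is P = C * V^2 * f,
# so halving both voltage and frequency cuts dynamic power by ~8x. The
# capacitance and voltage values are illustrative, not measurements.

def dynamic_power(switched_cap_f, voltage_v, freq_hz):
    """Dynamic CMOS power: P = C * V^2 * f (in watts)."""
    return switched_cap_f * voltage_v ** 2 * freq_hz

p_full = dynamic_power(1.0e-9, 1.2, 3.0e9)   # full voltage and frequency
p_half = dynamic_power(1.0e-9, 0.6, 1.5e9)   # half voltage and frequency
```

This cubic sensitivity is why frequency scaling and consolidation (which lets whole servers idle or sleep) are the workhorse techniques in the surveyed literature.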