627 research outputs found

    Future Energy Efficient Data Centers With Disaggregated Servers

    The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centers can no longer support. This paper examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm: the disaggregated server (DS) data center design. The DS design arranges data center resources in physical pools, such as processing, memory, and IO module pools, rather than packing each subset of such resources into a single server box. In this paper, we study energy-efficient resource provisioning and virtual machine (VM) allocation in DS-based data centers compared to CS-based data centers. First, we present our new design for the photonic DS-based data center architecture, supplemented with a complete description of the architectural components. Second, we develop a mixed integer linear programming (MILP) model to optimize VM allocation for the DS-based data center, including the power consumption of the data center communication fabric. Our results indicate that, in DS data centers, optimum allocation of pooled resources and their communication power yields up to 42% average savings in total power consumption compared with the CS approach. Because of the MILP model's high computational complexity, we developed an energy-efficient resource provisioning heuristic for DS with communication fabric (EERP-DSCF), based on insights from the MILP model, with power efficiency comparable to that of the MILP model. With EERP-DSCF, we can extend the number of served VMs beyond the point where the MILP model's scalability becomes challenging. Furthermore, we assess the energy efficiency of the DS design under stringent conditions by increasing the CPU-to-memory traffic and by including high non-communication power consumption to determine the conditions at which the DS and CS designs become comparable in power consumption. Finally, we present a complete analysis of the communication patterns in our new DS design and some recommendations addressing design and implementation challenges.
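The abstract above contrasts an exact MILP formulation with a greedy provisioning heuristic. As a rough illustration of the heuristic idea (not the paper's actual EERP-DSCF algorithm), the sketch below greedily places each VM on the resource pool with the smallest incremental power cost; all pool names, capacities, and the linear power model are illustrative assumptions.

```python
# Hedged sketch of energy-aware VM placement over pooled resources.
# The placement rule and power model are assumptions for illustration,
# not the paper's MILP model or EERP-DSCF heuristic.

def allocate_vms(vms, pools):
    """Greedily place each VM on the pool with the smallest incremental
    power cost, activating a new pool only when needed.

    vms   : list of (cpu, mem) demands
    pools : list of dicts with 'cpu' and 'mem' capacities, an 'idle_w'
            fixed power draw, and a 'w_per_cpu' proportional power draw
    """
    placement = []
    used = [{'cpu': 0, 'mem': 0, 'on': False} for _ in pools]
    for cpu, mem in sorted(vms, reverse=True):  # largest VMs first
        best, best_cost = None, float('inf')
        for i, p in enumerate(pools):
            if used[i]['cpu'] + cpu > p['cpu'] or used[i]['mem'] + mem > p['mem']:
                continue
            # Incremental power: activation cost (if off) + proportional part.
            cost = (0 if used[i]['on'] else p['idle_w']) + cpu * p['w_per_cpu']
            if cost < best_cost:
                best, best_cost = i, cost
        if best is None:
            raise ValueError("no pool can host VM", (cpu, mem))
        used[best]['cpu'] += cpu
        used[best]['mem'] += mem
        used[best]['on'] = True
        placement.append(best)
    total_w = sum(p['idle_w'] + used[i]['cpu'] * p['w_per_cpu']
                  for i, p in enumerate(pools) if used[i]['on'])
    return placement, total_w
```

Consolidating load onto already-active pools captures, in miniature, why the DS layout can save power: idle pools can be left powered down instead of every server box paying its own idle cost.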

    System design approach to energy-efficient data centers

    Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 63-65). Green HPC is the new standard for High Performance Computing (HPC). It has become a primary interest among HPC researchers because of a renewed emphasis on Total Cost of Ownership (TCO) and the pursuit of higher performance. Quite simply, the cost of operating modern HPC equipment can rapidly outstrip the cost of acquisition. This phenomenon is recent and can be traced to inadequacies in modern CPU and data center systems design. This thesis analyzes the problem in its entirety and describes best-practice fixes to solve the problems of energy-inefficient HPC. By Kurt Keville. S.M. in Engineering and Management.

    Technology Assessment: NREL Provides Know-How for Highly Energy-Efficient Data Centers (Fact Sheet)


    Energy Efficient Service Delivery in Clouds in Compliance with the Kyoto Protocol

    Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable amounts of CO2 emissions and will very soon face legislative restrictions, such as the Kyoto Protocol, defining caps at different organizational levels (country, industry branch, etc.). Much work has addressed energy-efficient data centers, yet very little has been done on flexible models that consider CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto Protocol. We discuss a novel approach for trading credits for emission reductions across data centers so that they comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed with other computing commodities (e.g., computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
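The trading idea in this abstract can be sketched in a few lines: data centers under their CO2 cap sell surplus allowances to those over it. The matching rule (cheapest surplus first) and all names and figures below are assumptions for illustration, not the paper's model.

```python
# Hedged sketch of cap-and-trade between data centers. The greedy
# cheapest-seller-first matching is an illustrative assumption.

def trade_credits(centers):
    """centers: dict name -> {'cap': tonnes, 'emitted': tonnes, 'price': $/t}.
    Returns (trades, unmet): trades is a list of (buyer, seller, tonnes);
    unmet maps buyers that remain over cap to the shortfall."""
    surplus = {n: c['cap'] - c['emitted'] for n, c in centers.items()}
    sellers = sorted((n for n in centers if surplus[n] > 0),
                     key=lambda n: centers[n]['price'])  # cheapest first
    trades, unmet = [], {}
    for buyer in (n for n in centers if surplus[n] < 0):
        need = -surplus[buyer]
        for seller in sellers:
            if need <= 0:
                break
            amount = min(need, surplus[seller])
            if amount > 0:
                trades.append((buyer, seller, amount))
                surplus[seller] -= amount
                need -= amount
        if need > 0:
            unmet[buyer] = need  # cap still violated after trading
    return trades, unmet
```

A scheduler built on such a model could treat the residual `unmet` amount as a hard constraint, deferring or migrating load until every center is within its cap.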

    A QoS-aware workload routing and server speed scaling policy for energy-efficient data centers: a robust queueing theoretic approach

    Maintaining energy efficiency in large data centers depends on the ability to manage workload routing and to control server speeds according to fluctuating demand. Dynamic algorithms often require management to install the complicated software or expensive hardware needed to communicate with routers and servers. This paper proposes a static routing and server speed scaling policy that can achieve energy efficiency similar to dynamic algorithms while eliminating the need for frequent communication among resources, without compromising quality of service (QoS). We use a robust queueing approach to handle response time constraints, e.g., service level agreements (SLAs). We model each server as a G/G/1 processor sharing (PS) queue and use uncertainty sets to define the domain of the random variables. A comparison with a dynamic algorithm shows that the proposed static policy provides competitive solutions in terms of energy efficiency and satisfactory QoS.
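To see how a static speed-scaling policy can meet an SLA, consider a simpler model than the paper's robust G/G/1-PS formulation: for an M/G/1 processor-sharing queue the mean response time is 1/(s·mu − lambda), so the slowest speed factor s meeting a target T_sla satisfies s ≥ (1/T_sla + lambda)/mu. The cubic power model below is a common assumption for CPU dynamic power, not a result from the paper.

```python
# Minimal illustration of static speed scaling against an SLA target.
# Uses the M/G/1-PS mean response time 1/(s*mu - lam); the cubic power
# model and all parameter values are illustrative assumptions.

def min_speed(lam, mu, t_sla):
    """Smallest speed-scaling factor s so that the mean response time
    1/(s*mu - lam) does not exceed t_sla."""
    return (1.0 / t_sla + lam) / mu

def power(s, p_idle=50.0, p_dyn=100.0):
    """Assumed server power: an idle floor plus a term cubic in speed."""
    return p_idle + p_dyn * s ** 3
```

Because these formulas depend only on long-run rates, the resulting speed can be set once, offline, which is exactly the appeal of a static policy: no per-request coordination between routers and servers.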

    A series-stacked power delivery architecture with isolated converters for energy efficient data centers

    While the Internet is spreading far and wide, the data centers at the heart of maintaining uninterrupted access to its services continue to grow rapidly. The energy efficiency of data centers has become crucial in our energy-limited world, and therefore both power distribution and conversion methods for future data centers need to be reconsidered. In this thesis, alternative methods to achieve more power-efficient DC power distribution and voltage regulation for future data centers are investigated. The conventional way of delivering DC power to a server rack generally involves a central converter that regulates a DC bus, typically at 380V rectified grid voltage. This 380V bus voltage is fed to each server rack, and one DC-DC converter per server then converts the bus voltage down to a lower voltage, typically 12V. Since each DC-DC converter has to perform a large voltage step down, relatively high power losses are common. To avoid the large voltage step down and the corresponding power losses in each server's DC-DC converter, this work proposes electrically series-stacking the servers in a rack. Series stacking with active load balancing via differential power processing (DPP) ensures that only the power difference between servers needs to be processed. The amount of processed power, and hence the power lost during conversion, is therefore reduced compared to the conventional system, where each server's DC-DC converter must process all the power its server needs. This results in a significant reduction in power conversion losses. The concept of series stacking with active voltage balancing by DPP is experimentally validated: a DC power distribution system for a four-server rack is built, a control algorithm is developed for server-to-virtual-bus differential power processing, and the proposed solution is supported with experimental results. This thesis presents an experimental demonstration of a series-stacked server power delivery architecture with active voltage balancing for future data centers.
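The core accounting argument of this abstract is easy to check numerically: in a series stack, the string current delivers the average server power directly, so each DPP converter processes only each server's deviation from that average. The sketch below compares total conversion loss under a fixed converter efficiency; the efficiency figure and power values are illustrative assumptions, not measurements from the thesis.

```python
# Back-of-the-envelope comparison of conventional per-server conversion
# versus series stacking with differential power processing (DPP).
# The fixed-efficiency loss model is an illustrative assumption.

def conventional_loss(server_powers, eff=0.95):
    """Each server's DC-DC converter processes its full load, so the loss
    for output power p is p*(1/eff - 1)."""
    return sum(p * (1 - eff) / eff for p in server_powers)

def dpp_loss(server_powers, eff=0.95):
    """In a series stack the string carries the average power directly;
    each DPP converter processes only the deviation from the average."""
    avg = sum(server_powers) / len(server_powers)
    return sum(abs(p - avg) * (1 - eff) / eff for p in server_powers)
```

With well-balanced server loads the deviations are small, so the processed power, and with it the conversion loss, collapses toward zero, which is the intuition behind the thesis's claimed savings.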

    Energy Saving In Data Centers

    Globally, CO2 emissions attributable to Information Technology are on par with those resulting from aviation. Recent growth in cloud service demand has elevated the energy efficiency of data centers to a critical area within green computing. Cloud computing represents a backbone of IT services, and a recent increase in high-definition multimedia delivery has placed new burdens on energy resources. Hardware innovations together with energy-efficient techniques and algorithms are key to controlling power usage in an ever-expanding IT landscape. This special issue contains a number of contributions showing that data center energy efficiency should be addressed from diverse vantage points. © 2017 by the authors. Licensee MDPI, Basel, Switzerland.