18 research outputs found

    Energy Efficient Servers

    Get PDF
    Computer science

    Hierarchical Content Stores in High-speed ICN Routers: Emulation and Prototype Implementation

    Get PDF
    Recent work motivates the design of Information-centric routers that make use of hierarchies of memory to jointly scale the size and speed of content stores. The present paper advances this understanding by (i) instantiating a general-purpose two-layer packet-level caching system, (ii) investigating the solution design space via emulation, and (iii) introducing a proof-of-concept prototype. The emulation-based study reveals insights about the broad design space, the expected impact of workload, and gains due to multi-threaded execution. The full-blown system prototype experimentally confirms that, by exploiting both DRAM and SSD memory technologies, ICN routers can sustain cache operations in excess of 10 Gbps running on off-the-shelf hardware.
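
    As an illustration of the two-layer idea, the following Python fragment is a minimal sketch (assumed class and parameter names, not the paper's implementation): a small fast layer standing in for DRAM sits in front of a larger slow layer standing in for SSD, with LRU eviction demoting items from the fast layer into the slow one.

    from collections import OrderedDict

    class TwoLayerCache:
        """Illustrative two-layer content store: a small fast layer (DRAM-like)
        in front of a larger slow layer (SSD-like), with LRU eviction in each
        layer and demotion from the fast layer to the slow one."""

        def __init__(self, l1_capacity, l2_capacity):
            self.l1 = OrderedDict()          # fast layer (DRAM-like)
            self.l2 = OrderedDict()          # slow layer (SSD-like)
            self.l1_capacity = l1_capacity
            self.l2_capacity = l2_capacity

        def get(self, name):
            if name in self.l1:              # hit in the fast layer
                self.l1.move_to_end(name)
                return self.l1[name]
            if name in self.l2:              # hit in the slow layer: promote
                return self.put(name, self.l2.pop(name))
            return None                      # miss: fetch from upstream

        def put(self, name, chunk):
            self.l1[name] = chunk
            self.l1.move_to_end(name)
            if len(self.l1) > self.l1_capacity:
                victim, data = self.l1.popitem(last=False)   # demote LRU item
                self.l2[victim] = data
                if len(self.l2) > self.l2_capacity:
                    self.l2.popitem(last=False)              # evict entirely
            return chunk

    A real content store would bound each layer in bytes, batch writes to the slow device, and serve lookups from multiple threads; those are exactly the dimensions the emulation study above explores.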

    Architecting Efficient Data Centers.

    Full text link
    Data center power consumption has become a key constraint in continuing to scale Internet services. As our society’s reliance on “the Cloud” continues to grow, companies require an ever-increasing amount of computational capacity to support their customers. Massive warehouse-scale data centers have emerged, requiring 30 MW or more of total power capacity. Over the lifetime of a typical high-scale data center, power-related costs make up 50% of the total cost of ownership (TCO). Furthermore, the aggregate effect of data center power consumption across the country cannot be ignored. In total, data center energy usage has reached approximately 2% of aggregate consumption in the United States and continues to grow. This thesis addresses the need to increase computational efficiency to address this growing problem. It proposes a new class of power management techniques: coordinated full-system idle low-power modes that increase the energy proportionality of modern servers. First, we introduce the PowerNap server architecture, a coordinated full-system idle low-power mode which transitions in and out of an ultra-low-power nap state to save power during brief idle periods. While effective for uniprocessor systems, PowerNap relies on full-system idleness, and we show that such idleness disappears as the number of cores per processor continues to increase. We expose this problem in a case study of Google Web search, in which we demonstrate that coordinated full-system active power modes are necessary to reach energy proportionality and that PowerNap is ineffective because of a lack of idleness. To recover full-system idleness, we introduce DreamWeaver, architectural support for deep sleep. DreamWeaver allows a server to exchange latency for full-system idleness, allowing PowerNap-enabled servers to be effective, and provides a better latency-power savings tradeoff than existing approaches. Finally, this thesis investigates workloads which achieve efficiency through methodical cluster provisioning techniques. Using the popular memcached workload, this thesis provides examples of provisioning clusters for cost-efficiency given latency, throughput, and data set size targets. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91499/1/meisner_1.pd
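
    A back-of-the-envelope sketch of the core trade-off (all power figures and the transition time are assumed, not taken from the thesis): a coordinated full-system nap state pays for its entry and exit transitions, so it saves energy only when idle periods are long relative to the transition time, which is why scattered per-core idleness defeats it.

    def powernap_energy(busy_s, idle_s, p_active=300.0, p_nap=10.0, t_transition=0.01):
        """Energy (J) of one busy/idle cycle under a PowerNap-style policy:
        active power is paid during the busy period and the two transitions,
        nap power for whatever idleness remains (all figures assumed)."""
        overhead = min(2 * t_transition, idle_s)
        return p_active * (busy_s + overhead) + p_nap * (idle_s - overhead)

    def baseline_energy(busy_s, idle_s, p_active=300.0, p_idle=150.0):
        """Conventional server idling at roughly half of peak power."""
        return p_active * busy_s + p_idle * idle_s

    # Long idle periods (uniprocessor-like) yield large savings; millisecond-scale
    # idle gaps (many cores, fragmented idleness) are eaten by transition overhead.
    print(powernap_energy(0.05, 1.0), baseline_energy(0.05, 1.0))
    print(powernap_energy(0.005, 0.002), baseline_energy(0.005, 0.002))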

    Enhancing the programmability and energy efficiency of storage in HPC and virtualized environments

    Get PDF
    International Mention in the doctoral degree. A decade ago computing systems hit a clock and power ceiling that places the energy challenge among the most relevant issues in High Performance Computing (HPC). Motivated by the fact that computation is increasingly becoming cheaper than data movement in terms of power, our work studies and optimizes data movement across different levels of the software stack. We propose novel methodologies for analyzing, modeling, and optimizing the energy efficiency of data movement. More precisely, we propose methodologies to enhance the understanding of power consumption in the software I/O stack, and to optimize I/O energy efficiency in the operating system’s I/O stack, low-level CPU device drivers, and virtualized environments. Our experimental results show that, through an understanding of the different operating system layers and their interaction, it is possible to develop novel coordination techniques that optimize energy consumption and increase the performance of I/O workloads. First, we develop a methodology for data collection, power and performance characterization, and modeling of power usage in the I/O stack. Our work presents a detailed study of power and energy usage across all system components during various I/O-intensive workloads. We propose a data-gathering methodology that combines software- and hardware-based instrumentation in order to study I/O data movement, and develop novel power prediction models employing data analysis techniques. Second, this thesis presents novel CPU-level optimizations that improve the energy efficiency of I/O workloads. We address two issues present in modern processors: thermal imbalance causing performance variation, and an inefficient use of CPU resources during I/O workloads. We develop novel techniques for power optimization and thermal efficiency through cross-layer coordination of CPU and I/O management. Third, we also focus on optimizing data sharing among virtual domains. In our work we refer to this as virtualized data sharing, which mainly differs from existing solutions by coordinating data flows through the software I/O stack. We develop a virtualized data sharing solution in order to reduce data movement among virtual environments, introducing new abstractions and mechanisms to more efficiently coordinate storage I/O. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Chair: Laurent Lefevre. Committee member: Arturo González Escriban
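
    The power-modeling step can be pictured as fitting a simple predictive model to combined software and hardware instrumentation. The sketch below assumes a plain least-squares fit of system power against I/O throughput and CPU utilization; the sample values and variable names are purely illustrative and are not measurements from the thesis.

    import numpy as np

    # Illustrative samples (I/O throughput in MB/s, CPU utilization in %, watts);
    # real values would come from the combined software/hardware instrumentation.
    samples = np.array([
        [  0.0,  5.0,  92.0],
        [ 50.0, 12.0, 105.0],
        [120.0, 20.0, 121.0],
        [250.0, 35.0, 148.0],
        [400.0, 55.0, 178.0],
    ])

    X = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1]])
    y = samples[:, 2]

    # Least-squares fit of: power ~ base + a * io_throughput + b * cpu_util
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    base, a, b = coef

    def predict_power(io_mbs, cpu_util):
        return base + a * io_mbs + b * cpu_util

    print(predict_power(300.0, 40.0))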

    Thermal Energy Storage for Datacenters with Phase Change Materials

    Full text link
    Datacenters, vast warehouses containing millions of servers that run the internet and the cloud, have experienced double-digit growth for almost two decades. Datacenters cost hundreds of millions of dollars, with the largest now exceeding a billion dollars each, and consume enormous amounts of power: over 2% of all electricity in the US, projected to increase up to 10% by 2030. The impact of such high compute density, with thousands of individual compute nodes packed together in a small space, is heat: every watt of power used by servers must be removed from the datacenter. This requires active cooling. Air cooling is by far the most common, with an air conditioner or other form of heat exchanger cooling air in the datacenter room and then transporting heat outside the facility to a heat exchanger or similar fixture. Such a system is simple, common, and functional, but inherently inefficient due to the nature of datacenter workloads. Datacenters primarily serve user-facing workloads: the user requests a search or sends an email, and their query prompts load in the datacenter. The query is handled locally, on a relative geographic scale, to provide a low response time and a positive user experience. This necessitates globally distributed datacenter capacity, but also creates a diurnal load pattern whereby datacenters are most heavily loaded during the peak hours when users in their region of service are awake and active online, versus the off hours when users are offline or asleep and query requests are low. Because datacenter infrastructure must be provisioned for peak load, servers, power distribution, and cooling infrastructure are significantly underutilized most of the time. This dissertation investigates the cooling needs of datacenters and proposes to decouple the work and cooling needs. Specifically, we hypothesize that by storing thermal energy we can reshape the thermal profile of a datacenter to better balance cooling load throughout the day. We call this technique Thermal Time Shifting (TTS). First, we discuss how phase change materials (PCMs) enable TTS and evaluate the potential use scenarios of placing a small amount of PCM inside of servers for thermal energy storage. Next, we dive deeper into the potential of thermal energy storage and propose Virtual Melting Temperatures (VMT), a technique that uses active job placement to control the melting and cooling of PCM to enable a much greater degree of control over the behavior of the thermal profile. Finally, we propose and evaluate Thermal Gradient Transfer (TGT), a technique that uses direct water cooling to move heat straight from CPUs and GPUs to the wax for wider applicability and greater peak cooling load reduction. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/147726/1/skachm_1.pdf Description of skachm_1.pdf: Restricted to UM users only
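
    A toy simulation of the thermal-time-shifting idea (every constant below is an assumption for illustration, not a value from the dissertation): during peak hours the PCM absorbs heat above the chillers' capacity as latent heat while it melts, and during off hours the stored heat is released and removed while the PCM refreezes, flattening the load seen by the cooling plant.

    LATENT_HEAT_KJ_PER_KG = 200.0   # assumed paraffin-like wax
    PCM_MASS_KG = 500.0             # assumed amount of PCM installed
    CAPACITY_KJ = LATENT_HEAT_KJ_PER_KG * PCM_MASS_KG

    def thermal_time_shift(server_heat_kw, cooling_cap_kw, dt_s=3600.0):
        """For each hourly heat load (kW), return the load actually sent to
        the chillers after the PCM absorbs any excess above cooling capacity
        (while melting) and releases it later (while refreezing)."""
        stored_kj = 0.0
        chiller_load = []
        for q in server_heat_kw:
            if q > cooling_cap_kw and stored_kj < CAPACITY_KJ:
                absorb = min(q - cooling_cap_kw, (CAPACITY_KJ - stored_kj) / dt_s)
                stored_kj += absorb * dt_s
                chiller_load.append(q - absorb)
            elif q < cooling_cap_kw and stored_kj > 0.0:
                release = min(cooling_cap_kw - q, stored_kj / dt_s)
                stored_kj -= release * dt_s
                chiller_load.append(q + release)
            else:
                chiller_load.append(q)
        return chiller_load

    # Diurnal pattern (kW of heat): low overnight, peak during the day.
    print(thermal_time_shift([60, 60, 80, 120, 140, 140, 120, 80], cooling_cap_kw=100))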

    Green IS in Infrastructure Software

    Get PDF
    As the world is becoming a more connected place, organizations become more dependent on infrastructure software such as operating systems and middleware. Infrastructure software, and the hardware it operates on, consumes a lot of electricity, and in a world where the climate threat is increasingly imminent, aspects of Green IS are more relevant than ever. There is a lot of research on the characteristics of Green IS, but not much on what is practically adopted, especially within organizations whose main industry is not IT. In this study, we examine to what extent retail and manufacturing organizations adopt aspects of Green IS to increase their impact on environmental sustainability. Four infrastructure software platforms were surveyed through four group interviews with a total of 25 participants, on their platform’s adoption of five Green IS aspects. We found that virtualization and cloud computing as well as efficiency and optimization are well-adopted aspects, whereas automation and monitoring and KPIs are not as prominent. The last aspect, data growth management, was in all cases adopted very little or not at all.

    Consensus protocols exploiting network programmability

    Get PDF
    Services rely on replication mechanisms to be available at all times. A service demanding high availability is replicated on a set of machines called replicas. To maintain the consistency of replicas, a consensus protocol such as Paxos or Raft is used to synchronize the replicas' state. As a result, failures of a minority of replicas will not affect the service, as the other non-faulty replicas continue serving requests. A consensus protocol is a procedure to achieve an agreement among processors in a distributed system involving unreliable processors. Unfortunately, achieving such an agreement involves extra processing on every request, imposing a substantial performance degradation. Consequently, performance has long been a concern for consensus protocols. Although many efforts have been made to improve consensus performance, it continues to be an important problem for researchers. This dissertation presents a novel approach to improving consensus performance. Essentially, it exploits the programmability of a new breed of network devices to accelerate consensus protocols that traditionally run on commodity servers. The benefits of using programmable network devices to run consensus protocols are twofold: network switches process packets faster than commodity servers, and consensus messages travel fewer hops in the network. This means that system throughput is increased and the latency of requests is reduced. The evaluation of our network-accelerated consensus approach shows promising results. Individual components of our FPGA-based and switch-based consensus implementations can process 10 million and 2.5 billion consensus messages per second, respectively. Our FPGA-based system as a whole delivers 4.3 times the performance of a traditional software consensus implementation. The latency is also better for our system and is only one third of the latency of the software consensus implementation when both systems are at half of their maximum throughputs. In order to drive even higher performance, we apply a partition mechanism to our switch-based system, leading to 11 times better throughput and 5 times better latency. By dynamically switching between software-based and network-based implementations, our consensus systems not only improve performance but also use energy more efficiently. Encouraged by those benefits, we developed a fault-tolerant non-volatile memory system. A prototype using a software memory controller demonstrated reasonable overhead over local memory access, showing great promise as scalable main memory. Our network-based consensus approach would have a great impact in data centers. It not only improves the performance of replication mechanisms which rely on consensus, but also enhances the performance of services built on top of those replication mechanisms. Our approach also motivates others to move new functionality into the network, such as key-value stores and stream processing. We expect that in the near future, applications that typically run on traditional servers will be folded into networks for performance.
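
    As a point of reference for what is being offloaded, here is a minimal sketch of the acceptor role in single-decree Paxos, written in plain Python; it is not the FPGA- or switch-based implementation described above, and the message tuples are an illustrative format only.

    class PaxosAcceptor:
        """Minimal single-decree Paxos acceptor: the per-message logic that a
        network-accelerated design moves from a commodity server into a
        programmable switch or FPGA, removing a hop and its processing time."""

        def __init__(self):
            self.promised = -1          # highest ballot promised so far
            self.accepted_ballot = -1
            self.accepted_value = None

        def on_prepare(self, ballot):
            # Phase 1b: promise to ignore lower ballots, report any accepted value.
            if ballot > self.promised:
                self.promised = ballot
                return ("promise", ballot, self.accepted_ballot, self.accepted_value)
            return ("nack", self.promised)

        def on_accept(self, ballot, value):
            # Phase 2b: accept the value unless a higher ballot was promised.
            if ballot >= self.promised:
                self.promised = ballot
                self.accepted_ballot = ballot
                self.accepted_value = value
                return ("accepted", ballot, value)
            return ("nack", self.promised)

    # usage:
    a = PaxosAcceptor()
    print(a.on_prepare(1))        # ('promise', 1, -1, None)
    print(a.on_accept(1, "x"))    # ('accepted', 1, 'x')
    print(a.on_prepare(0))        # ('nack', 1)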