8 research outputs found

    Towards Energy-Proportional Computing for Enterprise-Class Server Workloads

    Get PDF
    Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Adding to the problem is the inability of the servers to exhibit energy proportionality, i.e., provide energy-efficient execution under all levels of utilization, which diminishes the overall energy efficiency of the data center. It is imperative that we realize effective strategies to control the power consumption of the server and improve the energy efficiency of data centers. With the advent of Intel Sandy Bridge processors, we have the ability to specify a limit on power consumption during runtime, which creates opportunities to design new power-management techniques for enterprise workloads and make the systems that they run on more energy-proportional. In this paper, we investigate whether it is possible to achieve energy proportionality for an enterprise-class server workload, namely the SPECpower_ssj2008 benchmark, by using Intel's Running Average Power Limit (RAPL) interfaces. First, we analyze the power consumption and characterize the instantaneous power profile of the SPECpower benchmark at a subsystem level using the on-chip energy meters exposed via the RAPL interfaces. We then analyze the impact of RAPL power limiting on the performance, per-transaction response time, power consumption, and energy efficiency of the benchmark under different load levels. Our observations and results shed light on the efficacy of the RAPL interfaces and provide guidance for designing power-management techniques for enterprise-class workloads.
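    The RAPL interfaces discussed above are exposed on Linux through model-specific registers and, more conveniently, through the powercap sysfs tree. As a rough illustration only, the sketch below samples the package energy counter and applies a power cap; the domain path and the example 50 W limit are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch: sampling package energy and applying a power cap through the
# Linux powercap sysfs view of Intel RAPL. The domain path and the 50 W cap
# are illustrative assumptions, not values from the paper.
import time

RAPL_DOMAIN = "/sys/class/powercap/intel-rapl:0"           # package-0 domain (assumed)
ENERGY_FILE = f"{RAPL_DOMAIN}/energy_uj"                   # cumulative energy, microjoules
LIMIT_FILE  = f"{RAPL_DOMAIN}/constraint_0_power_limit_uw" # long-term power limit, microwatts

def read_energy_uj() -> int:
    with open(ENERGY_FILE) as f:
        return int(f.read().strip())

def average_power_watts(interval_s: float = 1.0) -> float:
    """Estimate package power by differencing the RAPL energy counter."""
    e0 = read_energy_uj()
    time.sleep(interval_s)
    e1 = read_energy_uj()
    return (e1 - e0) / 1e6 / interval_s   # ignores counter wrap-around for brevity

def set_power_limit_watts(watts: float) -> None:
    """Write a package power cap; requires root privileges."""
    with open(LIMIT_FILE, "w") as f:
        f.write(str(int(watts * 1e6)))

if __name__ == "__main__":
    print(f"package power ~ {average_power_watts():.1f} W")
    # set_power_limit_watts(50.0)  # example cap; uncomment only on a test machine
```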

    On the Energy Proportionality of Distributed NoSQL Data Stores

    Full text link

    Server Workload Model Identification: Monitoring and Control Tools for Linux, Journal of Telecommunications and Information Technology, 2016, no. 2

    Get PDF
    Server power control in data centers is a coordinated process carefully designed to meet multiple data center management objectives. The main objectives include avoiding power capacity overloads and system overheating, as well as fulfilling service-level agreements (SLAs). In addition to these primary goals, the server control process aims to maximize various energy efficiency metrics subject to reliability constraints. Monitoring of data center performance is fundamental for its efficient management. In order to keep track of how well computing tasks are processed, cluster control systems need to collect accurate measurements of the activities of cluster components. This paper presents a brief overview of the performance and power consumption monitoring tools available in Linux systems.
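    As one small illustration of the low-level counters that such Linux monitoring tools build on, the sketch below derives CPU utilization from /proc/stat. It is a generic example, not one of the tools surveyed in the paper.

```python
# Illustrative sketch of one low-level monitoring primitive on Linux:
# deriving CPU utilization from the aggregate counters in /proc/stat.
import time

def read_cpu_times():
    """Return (idle, total) jiffies aggregated over all CPUs."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    return idle, sum(fields)

def cpu_utilization(interval_s: float = 1.0) -> float:
    idle0, total0 = read_cpu_times()
    time.sleep(interval_s)
    idle1, total1 = read_cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return busy / (total1 - total0)

if __name__ == "__main__":
    print(f"CPU utilization: {cpu_utilization():.1%}")
```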

    Data-Oriented Characterization of Application-Level Energy Optimization

    Full text link
    Empowering application programmers to make energy-aware decisions is a critical dimension of energy optimization for computer systems. In this paper, we first study the energy impact of alternative data management choices by programmers, such as data access patterns, data precision choices, and data organization. Second, we attempt to build a bridge between application-level energy management and hardware-level energy management, by elucidating how various application-level data management features respond to Dynamic Voltage and Frequency Scaling (DVFS). Finally, we apply our findings to real-world applications, demonstrating their potential for guiding application-level energy optimization. The empirical study is particularly relevant in the Big Data era, where data-intensive applications are large energy consumers, and their energy efficiency is strongly correlated to how data are maintained and handled in programs.
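    For readers unfamiliar with how a DVFS experiment of this kind is typically driven on Linux, the sketch below pins a core frequency through the cpufreq sysfs interface and times a placeholder workload at each available frequency. The workload, and the reliance on the userspace governor (not available under every cpufreq driver, e.g. intel_pstate), are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch of a DVFS sweep: pin a frequency via cpufreq sysfs, run a
# workload, record its runtime. Requires root and a driver that exposes the
# userspace governor; the workload is a stand-in, not one of the paper's cases.
import time

CPU = "/sys/devices/system/cpu/cpu0/cpufreq"

def set_frequency_khz(khz: int) -> None:
    with open(f"{CPU}/scaling_governor", "w") as f:
        f.write("userspace")                       # governor that honors a fixed speed
    with open(f"{CPU}/scaling_setspeed", "w") as f:
        f.write(str(khz))

def available_frequencies():
    with open(f"{CPU}/scaling_available_frequencies") as f:
        return [int(x) for x in f.read().split()]

def workload():
    # stand-in for a data-access-pattern experiment
    return sum(i * i for i in range(2_000_000))

if __name__ == "__main__":
    for khz in available_frequencies():
        set_frequency_khz(khz)
        t0 = time.perf_counter()
        workload()
        print(f"{khz} kHz: {time.perf_counter() - t0:.3f} s")
```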

    Energy Efficiency in High Throughput Computing: Tools, techniques and experiments

    Get PDF
    The volume of data to process and store in high throughput computing (HTC) and scientific computing continues to increase many-fold every year. Consequently, the energy consumption of data centers and similar facilities is raising economic and environmental concerns. Thus, it is of paramount importance to improve energy efficiency in such environments. This thesis focuses on understanding how to improve energy efficiency in scientific computing and HTC. For this purpose we conducted research on tools and techniques to measure power consumption. We also conducted experiments to understand whether low-energy processing architectures are suitable for HTC and compared the energy efficiency of ARM and Intel architectures under authentic scientific workloads. Finally, we used the results to develop an algorithm that schedules tasks among ARM and Intel machines in a dynamic electricity pricing market in order to optimally lower the overall electricity bill. Our contributions are three-fold: the results of the study indicate that ARM has potential for use in scientific computing and HTC from an energy efficiency perspective; we also outline a set of tools and techniques to accurately measure energy consumption at the different levels of the computing system; in addition, the developed scheduling algorithm shows potential savings in the electricity bill when applied to heterogeneous data centers operating under a dynamic electricity pricing market.
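    The abstract does not spell out the scheduling algorithm, so the sketch below only illustrates the general idea of cost-aware placement across ARM and Intel nodes under a spot electricity price; all power, runtime, and price figures are made-up placeholders, not measurements from the thesis.

```python
# Minimal greedy sketch of cost-aware task placement across heterogeneous
# nodes under a dynamic electricity price. All numbers are placeholders.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    power_watts: float      # average power drawn while running the task (assumed)
    runtime_s: float        # task runtime on this architecture (assumed)

def electricity_cost(machine: Machine, price_per_kwh: float) -> float:
    kwh = machine.power_watts * machine.runtime_s / 3_600_000  # joules -> kWh
    return kwh * price_per_kwh

def schedule(tasks, machines, price_per_kwh: float):
    """Assign each task to the machine with the lowest estimated cost."""
    plan = []
    for task in tasks:
        best = min(machines, key=lambda m: electricity_cost(m, price_per_kwh))
        plan.append((task, best.name))
    return plan

if __name__ == "__main__":
    arm   = Machine("arm-node",   power_watts=15.0, runtime_s=420.0)
    intel = Machine("intel-node", power_watts=95.0, runtime_s=60.0)
    print(schedule(["job-1", "job-2"], [arm, intel], price_per_kwh=0.30))
```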

    JTIT

    Get PDF
    Quarterly (kwartalnik)