9 research outputs found

    The treatment of over 600 severely burned children in Bochum over 10 years - a retrospective analysis

    No full text

    Fast energy estimation framework for long-running applications

    No full text
    The computation power in data center facilities is increasing significantly, bringing with it an increase in power consumption. Techniques such as power budgeting or resource management are used in data centers to increase energy efficiency. These techniques require knowing the energy consumption in advance, obtained through a full profiling of the applications, which is not feasible for long-running applications with long execution times. To tackle this problem we present a fast energy estimation framework for long-running applications. The framework estimates the dynamic CPU and memory energy of the application without the need to perform a complete execution. For that purpose, we leverage the concept of the application signature: a reduced version, in terms of execution time, of the original application. Our fast energy estimation framework is validated with a set of long-running applications and obtains RMS values of 11.4% and 12.8% for the CPU and memory energy estimation errors, respectively. We define the concept of Compression Ratio as an indicator of the acceleration of the energy estimation process. Our framework obtains Compression Ratio values in the range of 10.1 to 191.2.
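The signature-based estimation described in the abstract can be sketched as follows. This is a minimal illustration under the assumption that the signature reproduces the average power of the full run, so energy scales linearly with execution time; the function names and measurements are hypothetical, not part of the framework.

```python
# Sketch of the application-signature idea: instead of profiling the
# full run, execute a reduced "signature" of the application, measure
# its energy, and extrapolate. All numbers here are illustrative.

def estimate_energy(signature_energy_j, signature_time_s, full_time_s):
    """Extrapolate dynamic energy of the full run from the signature run.

    Assumes the signature preserves the average power of the full
    application, so energy scales linearly with execution time.
    """
    avg_power_w = signature_energy_j / signature_time_s
    return avg_power_w * full_time_s

def compression_ratio(full_time_s, signature_time_s):
    """Indicator of how much the estimation process is accelerated."""
    return full_time_s / signature_time_s

# Hypothetical measurements: a 30 s signature of a 3000 s application.
energy = estimate_energy(signature_energy_j=1500.0,
                         signature_time_s=30.0,
                         full_time_s=3000.0)
ratio = compression_ratio(full_time_s=3000.0, signature_time_s=30.0)
print(energy)  # 150000.0 (joules estimated for the full run)
print(ratio)   # 100.0
```

A Compression Ratio of 100 here means the estimate was obtained in one hundredth of the original execution time, in the spirit of the 10.1-191.2 range reported above.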

    Using grammatical evolution techniques to model the dynamic power consumption of enterprise servers

    No full text
    The increasing demand for computational resources has led to significant growth of data center facilities, and a major concern has arisen regarding energy efficiency and consumption in servers and data centers. Flexible and scalable server power models are a must in order to enable proactive energy optimization strategies. This paper proposes the use of Evolutionary Computation to obtain a model for server dynamic power consumption. To accomplish this, we collect a significant number of server performance counters for a wide range of sequential and parallel applications, and obtain a model via Genetic Programming techniques. Our methodology enables the unsupervised generation of models for arbitrary server architectures, in a way that is robust to the type of application being executed on the server. With our generated models, we are able to predict the overall server power consumption for arbitrary workloads, outperforming previous approaches in the state of the art.
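The evolutionary approach described above can be illustrated with a minimal Grammatical-Evolution-style sketch: a genome of integer codons is mapped through a grammar to a power-model expression over performance counters, and the population is evolved against measured power. The grammar, counter names, and training samples below are toy assumptions, not the paper's setup.

```python
import random

# Each nonterminal expansion is chosen by consuming one codon modulo
# the number of grammar options (classic Grammatical Evolution).
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<coef>", "*", "<var>"], ["<coef>"]],
    "<var>":  [["ipc"], ["mem_bw"], ["freq"]],
    "<coef>": [["0.5"], ["1.0"], ["2.0"], ["5.0"]],
}

def decode(codons):
    """Map a genome (list of ints) to a model expression string."""
    idx = 0
    def expand(symbol, depth):
        nonlocal idx
        if symbol not in GRAMMAR:
            return symbol
        options = GRAMMAR[symbol]
        if depth > 5:               # bound recursion: force the last rule
            options = options[-1:]  # ("<coef>" terminates "<expr>")
        rule = options[codons[idx % len(codons)] % len(options)]
        idx += 1
        return " ".join(expand(s, depth + 1) for s in rule)
    return expand("<expr>", 0)

def fitness(expr, samples):
    """Mean absolute error of the expression's power prediction (watts)."""
    err = 0.0
    for counters, measured_w in samples:
        err += abs(eval(expr, {}, counters) - measured_w)
    return err / len(samples)

# Hypothetical training data: (performance counters, measured dynamic power).
SAMPLES = [({"ipc": 1.2, "mem_bw": 3.0, "freq": 2.4}, 14.4),
           ({"ipc": 0.8, "mem_bw": 5.0, "freq": 2.4}, 16.0),
           ({"ipc": 1.5, "mem_bw": 1.0, "freq": 1.8}, 10.0)]

random.seed(0)
pop = [[random.randrange(256) for _ in range(16)] for _ in range(60)]
for gen in range(30):
    pop.sort(key=lambda g: fitness(decode(g), SAMPLES))
    survivors = pop[:20]            # truncation selection
    pop = survivors + [[c if random.random() > 0.1 else random.randrange(256)
                        for c in random.choice(survivors)]  # point mutation
                       for _ in range(40)]
best = min(pop, key=lambda g: fitness(decode(g), SAMPLES))
print(decode(best))
```

The appeal of the grammar-based encoding, as the abstract notes, is that model generation is unsupervised: the same loop can be rerun per server architecture with no hand-crafted model structure.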

    Unsupervised power modeling of co-allocated workloads for energy efficiency in data centers

    No full text
    Data centers are huge power consumers, and their energy consumption keeps rising despite efforts to increase energy efficiency. A great body of research is devoted to reducing the computational power of these facilities, applying techniques such as power budgeting and power capping in servers. Such techniques rely on models to predict the power consumption of servers. However, estimating overall server power for arbitrary applications running co-allocated in multithreaded servers is not a trivial task. In this paper, we use Grammatical Evolution techniques to predict the dynamic power of the CPU and memory subsystems of an enterprise server using the hardware counters of each application. On top of our dynamic power models, we use fan and temperature-dependent leakage power models to obtain the overall server power. To train and test our models we use real traces from a presently shipping enterprise server under a wide set of sequential and parallel workloads running at various frequencies. We prove that our model is able to predict the power consumption of two different tasks co-allocated in the same server, keeping the error below 8 W. For the first time in the literature, we develop a methodology able to combine the hardware counters of two individual applications and estimate overall server power consumption without running the co-allocated application. Our results show a prediction error below 12 W, which represents 7.3% of the overall server power, outperforming previous approaches in the state of the art.
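The composition step described above, combining the hardware counters of two individually profiled applications and adding a fan/temperature-dependent leakage term, might be sketched like this. The linear model forms and every coefficient here are invented for illustration; the actual models in the paper are fitted via Grammatical Evolution on real server traces.

```python
# Toy sketch: predict co-allocated server power from per-task counters
# without running the combined workload. All coefficients are assumptions.

def dynamic_power(counters, cpu_coef=4.0, mem_coef=1.5):
    """Toy dynamic CPU + memory power model from performance counters."""
    return cpu_coef * counters["ipc"] + mem_coef * counters["mem_bw"]

def combine_counters(a, b):
    """Merge the counters of two tasks as if they ran together.

    This sketch simply sums them; contention effects are ignored.
    """
    return {k: a[k] + b[k] for k in a}

def leakage_power(temp_c, fan_rpm, k0=20.0, k1=0.15, k2=0.001):
    """Temperature-dependent leakage plus fan power (toy model)."""
    return k0 + k1 * temp_c + k2 * fan_rpm

# Counters profiled for each task in isolation (hypothetical values).
task_a = {"ipc": 1.2, "mem_bw": 3.0}
task_b = {"ipc": 0.9, "mem_bw": 5.0}

merged = combine_counters(task_a, task_b)
total = dynamic_power(merged) + leakage_power(temp_c=55.0, fan_rpm=3000.0)
print(total)
```

The design choice mirrored here is the paper's key claim: overall power for the pair is estimated from the two individual profiles plus static/leakage terms, so the co-allocated combination never has to be executed.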