6 research outputs found

    Twisting Web Pages for Saving Energy


    Modeling Power Consumption of Applications Software Running on Servers

    Reducing power consumption in computational processes is important to software developers. Ideally, a great deal of software design effort would go into considerations critical to the power efficiency of computer systems. In practice, software is often designed by high-level developers unaware of the underlying physical components of the system architecture that could be exploited, and even developers who are aware tend to design for mass end-user adoption and therefore favor cross-compatibility. The challenge for the software designer is to utilize dynamic hardware adaptations, which make it possible to reduce power consumption and overall chip temperature by reducing the amount of available performance. However, these adaptations generally rely on input from temperature sensors, and due to thermal inertia in microprocessor packaging, the detection of temperature changes significantly lags the power events that caused them. This work provides an energy performance evaluation and power consumption estimation of applications running on a server using performance counters. Counter data for various performance indicators are collected using the CollectD tool. Simultaneously, during the test, a power meter (TED5000) monitors the actual power drawn by the server. Furthermore, stress tests are performed to examine power fluctuations in response to the performance counts of four hardware subsystems: CPU, memory, disk, and network interface. A neural network model (NNM) and a linear polynomial model (LPM) have been developed based on process count information gathered by CollectD. These two models have been validated by four different scenarios running on three different platforms (three real servers). Our experimental results show that system power consumption can be estimated with an average mean absolute error (MAE) between 11% and 15% on new servers, while on old servers the average MAE is between 1% and 4%. We also find that the NNM yields better estimates than the LPM, with a 1.5% reduction in the MAE of energy estimation. The detailed contributions of the thesis are as follows: (i) we develop a non-exclusive test bench to measure the power consumption of an application running on a server; (ii) we provide a practical approach to extracting system performance counters and simplifying them to obtain the model parameters; (iii) we propose and implement a modeling procedure for predicting the power cost of application software using performance counters. All of our contributions and the proposed procedure have been validated with numerous measurements on a real test bench. The results of this work can be used by application developers to make implementation-level decisions that affect the energy efficiency of software applications.
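    As a rough illustration of the linear-polynomial side of this approach, the sketch below fits server power as a linear function of four per-subsystem counters and reports the mean absolute error, the metric quoted above. The counter values, wattages, and the use of NumPy least squares are illustrative assumptions, not the thesis's actual CollectD data or fitting procedure.

```python
# Minimal sketch: fit server power as a linear function of per-subsystem
# performance counters, then score the fit with mean absolute error (MAE).
# All numbers below are made up for illustration.
import numpy as np

# Rows: samples; columns: CPU, memory, disk, network counter readings
# (hypothetical, normalized), logged while a power meter records watts.
X = np.array([
    [0.10, 0.20, 0.05, 0.01],
    [0.55, 0.40, 0.10, 0.05],
    [0.90, 0.70, 0.30, 0.20],
    [0.30, 0.25, 0.50, 0.10],
    [0.75, 0.60, 0.20, 0.40],
    [0.20, 0.15, 0.80, 0.05],
])
watts = np.array([95.0, 130.0, 175.0, 120.0, 160.0, 125.0])

# Add an intercept column for idle power, then solve the least-squares fit.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, watts, rcond=None)

predicted = A @ coef
mae_pct = np.mean(np.abs(predicted - watts) / watts) * 100
print(f"per-sample MAE: {mae_pct:.2f}%")
```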

    A quantitative survey of the power saving potential in IP-Over-WDM backbone networks

    The power consumption of Information and Communication Technology networks is growing year by year; this growth presents challenges from technical, economic, and environmental points of view. It has led to a great number of research publications on "green" telecommunication networks, and a number of survey works have appeared in response. With respect to backbone networks, however, most surveys 1) do not allow for easy cross-validation of the savings reported in the various works and 2) do not provide a clear overview of the individual and combined power saving potentials. Therefore, in this paper, we survey the reported saving potential in IP-over-WDM backbone telecommunication networks across the existing body of research in that area. We do this by mapping more than ten different approaches to a concise analytical model, which allows us to estimate the combined power reduction potential. Our estimates indicate that the power reduction potential of the once-only approaches is 2.3x in a Moderate Effort scenario and 31x in a Best Effort scenario. Factoring in the historic and projected yearly efficiency improvements ("Moore's law") roughly doubles both values on a ten-year horizon. The large difference between the Moderate Effort and Best Effort outcomes is explained by the disparity and lack of clarity of the reported saving results and by our (partly) subjective assessment of the feasibility of the proposed approaches. The Moderate Effort scenario will not be sufficient to counter the projected traffic growth, although the Best Effort scenario indicates that sufficient potential is likely available. The largest isolated power reduction potential lies in improving the power associated with cooling and power provisioning and in applying sleep modes to overdimensioned equipment.
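    The combination logic behind such a concise analytical model can be illustrated in a few lines: each once-only approach contributes an independent reduction factor, the factors multiply, and a yearly efficiency trend compounds over a ten-year horizon. The per-approach factors and the 7% yearly gain below are placeholder assumptions, not values from the paper.

```python
# Minimal sketch: multiply independent once-only power-reduction factors,
# then fold in a compounding yearly efficiency gain ("Moore's law").
from math import prod

moderate = [1.3, 1.2, 1.15, 1.1]  # hypothetical per-approach factors
best     = [2.0, 2.5, 2.0, 3.1]

def combined(factors, yearly_gain=1.07, years=10):
    once_only = prod(factors)            # combined once-only potential
    trended = once_only * yearly_gain**years  # ten-year horizon
    return once_only, trended

print(combined(moderate))  # ~2.0x once-only, roughly doubled with the trend
print(combined(best))      # ~31x once-only under these placeholder factors
```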

    Transferring big data across the globe

    Transmitting data via the Internet is a routine and common task for users today, and the amount of data transmitted by the average user has increased dramatically over the past few years. Transferring a gigabyte of data over an entire day was once normal; users now transmit multiple gigabytes in a single hour. With the influx of big data and massive scientific data sets measured in tens of petabytes, users have reason to transfer even larger amounts of data. When data sets of this magnitude are transferred on public or shared networks, the performance of all workloads in the system is affected. This dissertation addresses the issues and challenges inherent in transferring big data over shared networks. A survey of current transfer techniques is provided, and these techniques are evaluated in simulated, experimental, and live environments. The main contribution of this dissertation is the development of a new model for big data transfers, called nice, which is based on a store-and-forward methodology instead of an end-to-end approach. The nice model ensures that big data transfers occur only when there is idle bandwidth that can be repurposed for these large transfers. It improves overall performance and significantly reduces transmission time for big data transfers, and it allows for efficient transfers regardless of time zone differences or variations in bandwidth between sender and receiver. Nice is the first model to address the challenges of transferring big data across the globe.
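    A minimal sketch of the store-and-forward idea, under assumed details: an intermediate node buffers chunks and forwards them only while a placeholder utilization probe reports idle bandwidth, so foreground traffic is left undisturbed. The 50% threshold and the probe are illustrative, not the dissertation's actual mechanism.

```python
# Minimal sketch: a relay node stores chunks of a big transfer and forwards
# them downstream only when the outgoing link has idle bandwidth.
import time
from collections import deque

IDLE_THRESHOLD = 0.5  # forward only below 50% utilization (assumed)

def link_utilization() -> float:
    """Placeholder for a real utilization probe (e.g. interface counters)."""
    return 0.3

def send_downstream(chunk: bytes) -> None:
    print(f"forwarded {len(chunk)} bytes")

def relay(chunks) -> None:
    buffer = deque(chunks)          # store: hold chunks at the relay node
    while buffer:
        if link_utilization() < IDLE_THRESHOLD:
            send_downstream(buffer.popleft())  # forward on idle bandwidth
        else:
            time.sleep(1)           # back off; foreground traffic has priority

relay([b"x" * 1024] * 3)
```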

    Effective Grouping for Energy and Performance: Construction of Adaptive, Sustainable, and Maintainable Data Storage

    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e., online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement. We reduced the metadata needed by several orders of magnitude, reducing the required volume from more than 14% of total storage down to less than 12%. We also demonstrate how our collocation strategies outperform competing techniques. Finally, we present our complete model and evaluate a prototype implementation against real hardware. This model was demonstrated to be capable of reducing device-level accesses by up to 65%.
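    One way to picture static grouping is sketched below: count how often pairs of blocks are accessed together within a small window of a trace, then greedily collocate the most frequently co-accessed pairs. The trace, window size, and greedy pairing are illustrative assumptions rather than the thesis's actual grouping algorithm.

```python
# Minimal sketch: derive candidate groups for device-level collocation or
# replication from co-access counts in a (hypothetical) block-access trace.
from collections import Counter
from itertools import combinations

trace = [7, 3, 7, 3, 9, 7, 3, 2, 9, 2]  # hypothetical block IDs, in order
WINDOW = 3                               # assumed co-access window

pair_counts = Counter()
for i in range(len(trace) - WINDOW + 1):
    window = set(trace[i:i + WINDOW])
    pair_counts.update(combinations(sorted(window), 2))

# Greedily group the most frequently co-accessed pairs.
groups, placed = [], set()
for (a, b), _ in pair_counts.most_common():
    if a not in placed and b not in placed:
        groups.append([a, b])
        placed.update((a, b))

print(groups)  # blocks to place side by side at the device level
```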

    Energy and Performance Evaluation of Lossless File Data Compression on Server Systems

    Data compression has been claimed to be an attractive solution for saving energy in high-end servers and data centers, but there has been no study exploring this claim. In this paper, we present a comprehensive evaluation of the energy consumption of various file compression techniques implemented in software. We apply various compression tools available on Linux to a variety of data files, running them on server-class and workstation-class systems, and we compare their energy and performance results against raw reads and writes. Our results reveal that software-based data compression cannot be considered a universal solution for reducing energy consumption: the type of data file, the compression tool being used, the read-to-write ratio of the workload, and the hardware configuration of the system all affect the efficacy of the technique. In some cases, however, we found compression to save substantial energy and improve performance.
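    The evaluation method lends itself to a simple sketch: time a lossless compressor on a file and convert the elapsed time to energy using an average power draw, then weigh that energy against the achieved compression ratio. The power figure and the use of Python's zlib (rather than the Linux tools benchmarked in the paper) are assumptions for illustration.

```python
# Minimal sketch: estimate the energy cost of software compression as
# measured power multiplied by elapsed compression time.
import time
import zlib

AVG_POWER_W = 180.0  # assumed server power under load, from an external meter

data = b"example payload " * 100_000  # stand-in for a real data file

start = time.perf_counter()
compressed = zlib.compress(data, 6)   # gzip-like deflate at level 6
elapsed = time.perf_counter() - start

energy_j = AVG_POWER_W * elapsed      # energy = power x time
ratio = len(compressed) / len(data)
print(f"ratio={ratio:.3f}, time={elapsed:.3f}s, energy~{energy_j:.2f} J")
```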