28 research outputs found

    Evolutionary Neural Network Based Energy Consumption Forecast for Cloud Computing

    The success of Hadoop, an open-source framework for massively parallel and distributed computing, is expected to drive the energy consumption of cloud data centers to new highs as service providers continue to add new infrastructure, services and capabilities to meet market demand. While current research on data center airflow management, HVAC (Heating, Ventilation and Air Conditioning) system design, workload distribution and optimization, and energy-efficient computing hardware and software all contributes to improved energy efficiency, energy forecasting in cloud computing remains a challenge. This paper reports an evolutionary-computation-based modeling and forecasting approach to this problem. In particular, an evolutionary neural network is developed and structurally optimized to forecast the energy load of a cloud data center. The results, in terms of both forecasting speed and accuracy, suggest that the evolutionary neural network approach to energy consumption forecasting for cloud computing is highly promising.
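    As a rough illustration of the structural optimization the abstract describes, the sketch below evolves only the hidden-layer sizes of a hypothetical forecasting network. Everything here is an assumption for illustration: the fitness function is a stand-in for validation error on an energy-load dataset, and the mutation operators and population settings are invented, not taken from the paper.

```python
import random

# Hypothetical sketch of evolutionary structure search: each genome encodes
# the hidden-layer sizes of a forecasting network. The fitness below is a
# made-up proxy for validation error (in the paper's setting, each candidate
# network would instead be trained and validated on energy-load data).

def fitness(genome):
    # Toy proxy: penalize deviation from an arbitrary "ideal" capacity and
    # penalize model size, mimicking an accuracy/complexity trade-off.
    capacity = sum(genome)
    return abs(capacity - 48) + 0.1 * len(genome)

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = max(1, g[i] + random.choice([-4, -2, 2, 4]))  # resize a layer
    if random.random() < 0.2 and len(g) < 4:
        g.append(random.randrange(4, 32))                # grow a new layer
    elif random.random() < 0.2 and len(g) > 1:
        g.pop(random.randrange(len(g)))                  # prune a layer
    return g

def evolve(pop_size=20, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.randrange(4, 64)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                 # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

    Truncation selection keeps the better half of each generation; the structural mutations (grow/prune a layer) are what let the search optimize the network's shape rather than just its weights.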

    Energy policies for data-center monolithic schedulers

    Cloud computing and the data centers that support this paradigm are rapidly evolving to satisfy new demands. These ever-growing needs pose an energy-related challenge for sustainability and cost reduction. In this paper, we define an expert and intelligent system that applies various energy policies. These policies are employed to maximize the energy efficiency of data-center resources by simulating a realistic environment and heterogeneous workload in a trustworthy tool. Around 20% of energy consumption, with its corresponding environmental and economic impact, can be saved in high-utilization scenarios without any noticeable impact on data-center performance if an adequate policy is applied.
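    A minimal sketch of the kind of policy the abstract evaluates, not the authors' simulator: machines that have sat idle longer than a timeout are powered off, and the energy of a toy workload trace is compared with and without the policy. The per-machine power figures and the trace are made-up assumptions.

```python
# Illustrative sketch (not the paper's tool): compare the energy used by a
# cluster that keeps every machine powered on against one applying a simple
# policy that switches off machines idle for more than `idle_timeout` slots.
# All power figures and the workload trace are invented for illustration.

P_BUSY, P_IDLE, P_OFF = 200.0, 100.0, 10.0   # watts per machine (assumed)

def energy(demand_trace, n_machines, idle_timeout=None):
    idle_for = [0] * n_machines
    total = 0.0
    for demand in demand_trace:
        for m in range(n_machines):
            if m < demand:                       # machine is serving load
                idle_for[m] = 0
                total += P_BUSY
            else:
                idle_for[m] += 1
                if idle_timeout is not None and idle_for[m] > idle_timeout:
                    total += P_OFF               # policy: machine switched off
                else:
                    total += P_IDLE
    return total

trace = [8, 9, 10, 4, 3, 3, 2, 2, 6, 9]         # busy machines per time slot
baseline = energy(trace, n_machines=10)
with_policy = energy(trace, n_machines=10, idle_timeout=1)
saving = 1 - with_policy / baseline
```

    With these invented numbers the toy policy happens to save roughly 20% of the baseline energy, the same order of magnitude as the savings the abstract reports for high-utilization scenarios.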

    Making the case for reforming the I/O software stack of extreme-scale systems

    This work was supported in part by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract No. DE-AC02-05CH11231. This research has been partially funded by the Spanish Ministry of Science and Innovation under grant TIN2010-16497 “Input/Output techniques for distributed and high-performance computing environments”. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement number 328582.

    Energy-Aware Load Balancing in Content Delivery Networks

    Internet-scale distributed systems such as content delivery networks (CDNs) operate hundreds of thousands of servers deployed in thousands of data center locations around the globe. Since the energy costs of operating such a large IT infrastructure are a significant fraction of the total operating costs, we argue for redesigning CDNs to incorporate energy optimizations as a first-order principle. We propose techniques to turn off CDN servers during periods of low load while seeking to balance three key design goals: maximize energy reduction, minimize the impact on client-perceived service availability (SLAs), and limit the frequency of on-off server transitions to reduce wear-and-tear and its impact on hardware reliability. We propose an optimal offline algorithm and an online algorithm to extract energy savings both at the level of local load balancing within a data center and global load balancing across data centers. We evaluate our algorithms using real production workload traces from a large commercial CDN. Our results show that it is possible to reduce the energy consumption of a CDN by more than 55% while ensuring a high level of availability that meets customer SLA requirements and incurring an average of one on-off transition per server per day. Further, we show that keeping even 10% of the servers as hot spares helps absorb load spikes due to global flash crowds with little impact on availability SLAs. Finally, we show that redistributing load across proximal data centers can enhance service availability significantly, but has only a modest impact on energy savings.
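    The on-off trade-off described above can be sketched with a simple online rule. This is an assumption for illustration, not the paper's offline-optimal or online algorithm: scale up immediately when load rises, but scale down only after load has stayed low for a cooldown period, which limits server transitions at the cost of some extra energy.

```python
# Hedged sketch of the transition/energy trade-off (not the paper's
# algorithm): servers are turned on as soon as demand rises, but turned off
# only after demand has stayed below capacity for `cooldown` consecutive
# steps. A longer cooldown means fewer on-off transitions (less hardware
# wear) but more server-slots kept powered (more energy).

def plan_servers(load, cooldown):
    active = load[0] if load else 0
    plan, transitions, below = [], 0, 0
    for demand in load:
        if demand > active:
            transitions += demand - active   # servers switched on
            active, below = demand, 0
        elif demand < active:
            below += 1
            if below >= cooldown:            # sustained low load: scale down
                transitions += active - demand
                active, below = demand, 0
        else:
            below = 0
        plan.append(active)
    return plan, transitions

load = [5, 9, 5, 9, 5, 9, 5]                 # toy bursty demand trace
lazy_plan, lazy_switches = plan_servers(load, cooldown=3)
eager_plan, eager_switches = plan_servers(load, cooldown=1)
```

    On this toy trace the eager policy tracks demand exactly but makes six times as many server transitions as the lazy one, which is the wear-versus-energy tension the paper's design goals capture.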

    Limiting Global Warming by Improving Data-Centre Software

    Carbon emissions, greenhouse gases and pollution in general are usually associated with traditional factories, so the most modern of factories, computing data centres, have gone largely unnoticed by the general public. We empirically show through extensive and realistic simulation that: 1) energy consumption, and consequently CO2 emissions, could be reduced by ~15% to ~60% if the correct energy-efficiency policies are applied; and 2) such a reduction can be achieved without negatively impacting the correct operation of these infrastructures. To this end, this work focuses on the proposal and analysis of a set of energy-efficiency policies applied to traditional and hyper-scale data centres across numerous operating environments, including: 1) the top resource managers used in industry; 2) eight energy-efficiency policies, including aggressive, fine-tuned and adaptive models; and 3) three types of workload-arrival patterns. Finally, we present a realistic analysis of the environmental impact of applying such energy-efficiency policies to USA data centres. The presented results estimate that 11.5 million tons of CO2 could be saved, equivalent to the removal of 4.79 million combustion cars, that is, the entire car fleet of countries such as Portugal, Austria or Sweden. Ministerio de Ciencia e Innovación RTI2018-098062-A-I0.

    Characterizing the impact of the workload on the value of dynamic resizing in data centers

    Energy consumption imposes a significant cost on data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow-time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and fast-time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
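    The interplay the abstract describes can be made concrete with a deliberately simplified cost model, invented for illustration and not the paper's formulation: static provisioning pays for peak capacity in every slot, while dynamic resizing pays only for the load actually served plus a switching penalty that grows with burstiness.

```python
# Toy cost model (assumed, not the paper's): static provisioning costs
# peak * T capacity-slots; ideal dynamic resizing costs sum(load) plus a
# penalty per unit of capacity switched between consecutive slots. Savings
# therefore grow with the peak-to-mean ratio and shrink with burstiness.

def resizing_savings(load, switch_cost=0.5):
    peak = max(load)
    mean = sum(load) / len(load)
    static_cost = peak * len(load)
    # burstiness: total capacity switched on/off between adjacent slots
    switches = sum(abs(a - b) for a, b in zip(load, load[1:]))
    dynamic_cost = sum(load) + switch_cost * switches
    return peak / mean, 1 - dynamic_cost / static_cost

smooth = [2, 4, 6, 8, 10, 8, 6, 4, 2, 2]       # diurnal-like trace
bursty = [10, 2, 10, 2, 10, 2, 10, 2, 10, 2]   # same peak, very bursty
ptm_smooth, save_smooth = resizing_savings(smooth)
ptm_bursty, save_bursty = resizing_savings(bursty)
```

    In this toy model the smooth trace saves 40% of the static cost while the bursty trace, with the same peak, saves only 4%, mirroring the abstract's point that both the slow-time-scale peak-to-mean ratio and the fast-time-scale burstiness determine the value of resizing.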