2,227 research outputs found

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide into the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of data center hardware (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is addressed through a holistic approach that includes not only the aforementioned techniques but also intelligent, unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as the driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements this strategy, and we propose design guidelines to accomplish each step of the strategy, referring to related achievements and enumerating the main challenges that still must be solved.
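
    A minimal sketch of the kind of energy-driven control loop the abstract alludes to (this is not the paper's architecture; the class and function names, the PUE target, and the thresholds are all illustrative assumptions):

```python
# Illustrative sketch only: treat measured power as the signal that coordinates
# IT, cooling, and power-supply subsystems in one management loop.
from dataclasses import dataclass

@dataclass
class PowerReading:
    it_kw: float        # servers, storage, network
    cooling_kw: float   # CRAC/CRAH units, chillers
    power_kw: float     # UPS and distribution losses

def pue(r: PowerReading) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    total = r.it_kw + r.cooling_kw + r.power_kw
    return total / r.it_kw if r.it_kw > 0 else float("inf")

def manage_step(r: PowerReading, pue_target: float = 1.4) -> list[str]:
    """One iteration of a unified, energy-aware management loop (hypothetical)."""
    actions = []
    if pue(r) > pue_target:
        # Facility overhead dominates: act on the cooling/power side first.
        actions.append("raise cold-aisle setpoint / enable free cooling")
    else:
        # Overhead is under control: act on the workload side.
        actions.append("consolidate workloads and power down idle servers")
    return actions

if __name__ == "__main__":
    reading = PowerReading(it_kw=500.0, cooling_kw=300.0, power_kw=80.0)
    print(f"PUE = {pue(reading):.2f}", manage_step(reading))
```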

    Cutting the Electric Bill for Internet-Scale Systems

    Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational computation cost differences.
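
    A hedged sketch of the idea of exploiting geographic price variation (not the paper's algorithm; the site list, prices, capacities, and latency bound are made-up illustrations): dispatch work to the cheapest eligible location first, subject to a latency constraint and per-site capacity.

```python
# Greedy price-aware dispatch sketch: cheapest electricity first, within a
# latency bound and per-site capacity. All data below is illustrative.
def route_requests(requests: int, sites: list[dict], max_latency_ms: float) -> dict:
    """Return a per-site assignment of the given number of requests."""
    eligible = [s for s in sites if s["latency_ms"] <= max_latency_ms]
    eligible.sort(key=lambda s: s["price_usd_per_mwh"])  # cheapest energy first
    assignment, remaining = {}, requests
    for site in eligible:
        take = min(remaining, site["capacity"])
        if take:
            assignment[site["name"]] = take
            remaining -= take
        if remaining == 0:
            break
    return assignment

sites = [
    {"name": "VA", "price_usd_per_mwh": 62.0, "capacity": 4000, "latency_ms": 35},
    {"name": "TX", "price_usd_per_mwh": 41.0, "capacity": 3000, "latency_ms": 48},
    {"name": "CA", "price_usd_per_mwh": 88.0, "capacity": 5000, "latency_ms": 70},
]
print(route_requests(6000, sites, max_latency_ms=60))  # {'TX': 3000, 'VA': 3000}
```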

    Scenarios for the development of smart grids in the UK: literature review

    Smart grids are expected to play a central role in any transition to a low-carbon energy future, and much research is currently underway on practically every area of smart grids. However, it is evident that even basic aspects, such as theoretical and operational definitions, are yet to be agreed upon and clearly defined. Some aspects (efficient management of supply, including intermittent supply; two-way communication between the producer and user of electricity; use of IT technology to respond to and manage demand; and ensuring safe and secure electricity distribution) are more commonly accepted than others (such as smart meters) in defining what comprises a smart grid. It is clear that smart grid developments enjoy political and financial support at both UK and EU levels, and from the majority of related industries. The reasons for this vary and include the hope that smart grids will facilitate the achievement of carbon reduction targets, create new employment opportunities, and reduce costs related to energy generation (fewer power stations) and distribution (fewer losses and better stability). However, smart grid development depends on additional factors beyond the energy industry. These relate to issues of public acceptability of relevant technologies and associated risks (e.g. data safety, privacy, cyber security), pricing, competition, and regulation, implying the involvement of a wide range of players such as industry, regulators, and consumers. The above constitute a complex set of variables and actors, and interactions between them. To best explore possible ways of deploying smart grids, the use of scenarios is well suited, as they can incorporate several parameters and variables into a coherent storyline. Scenarios have been used previously in the context of smart grids, but have traditionally focused on factors such as economic growth or policy evolution. Important additional socio-technical aspects of smart grids emerge from the literature review in this report and therefore need to be incorporated into our scenarios. These can be grouped into four (interlinked) main categories: supply side aspects, demand side aspects, policy and regulation, and technical aspects.

    Managing server energy and reducing operational cost for online service providers

    The past decade has seen the energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for more energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
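
    A hedged sketch of the VM-to-PM probability-matrix idea described above (the dissertation's actual weighting terms are not reproduced here; the weight based on performance per watt and all host data are assumptions): each VM gets a probability distribution over PMs, with infeasible hosts weighted zero and more efficient hosts weighted higher, and a placement is drawn from that distribution.

```python
# Illustrative probabilistic VM placement: weight feasible, energy-efficient
# PMs more heavily, then sample a placement from the resulting distribution.
import random

def placement_probabilities(vm: dict, pms: list[dict]) -> list[float]:
    """Return a probability over PMs for one VM; infeasible PMs get weight 0."""
    weights = []
    for pm in pms:
        fits = pm["free_cpu"] >= vm["cpu"] and pm["free_mem_gb"] >= vm["mem_gb"]
        # Hypothetical weight: prefer hosts with better performance per watt.
        weights.append(pm["perf_per_watt"] if fits else 0.0)
    total = sum(weights)
    if total == 0:
        raise RuntimeError("no PM can host this VM")
    return [w / total for w in weights]

def place_vm(vm: dict, pms: list[dict], rng: random.Random) -> str:
    """Sample a PM for the VM and reserve its resources."""
    probs = placement_probabilities(vm, pms)
    idx = rng.choices(range(len(pms)), weights=probs, k=1)[0]
    pms[idx]["free_cpu"] -= vm["cpu"]
    pms[idx]["free_mem_gb"] -= vm["mem_gb"]
    return pms[idx]["name"]

pms = [
    {"name": "pm-old", "free_cpu": 16, "free_mem_gb": 64, "perf_per_watt": 1.0},
    {"name": "pm-new", "free_cpu": 16, "free_mem_gb": 64, "perf_per_watt": 2.5},
]
print(place_vm({"cpu": 4, "mem_gb": 8}, pms, random.Random(0)))  # likely 'pm-new'
```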

    Cloud Index Tracking: Enabling Predictable Costs in Cloud Spot Markets

    Cloud spot markets rent VMs for a variable price that is typically much lower than the price of on-demand VMs, which makes them attractive for a wide range of large-scale applications. However, applications that run on spot VMs suffer from cost uncertainty, since spot prices fluctuate based in part on supply and demand. The difficulty of predicting spot prices affects users and applications: the former cannot effectively plan their IT expenditures, while the latter cannot infer the availability and performance of spot VMs, which are a function of their variable price. To address the problem, we use properties of cloud infrastructure and workloads to show that prices become more stable and predictable as they are aggregated together. We leverage this observation to define an aggregate index price for spot VMs that serves as a reference for what users should expect to pay. We show that, even when the spot prices for individual VMs are volatile, the index price remains stable and predictable. We then introduce cloud index tracking: a migration policy that tracks the index price to ensure that applications running on spot VMs incur a predictable cost, by migrating to a new spot VM if the current VM's price deviates significantly from the index price.
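
    A minimal sketch in the spirit of the abstract (the choice of the mean as the aggregate, the 15% tolerance, and the pool names are assumptions, not the authors' policy): compute an index price over many spot pools and migrate when the current VM's price drifts too far above it.

```python
# Sketch of index-tracking migration: aggregate spot prices into an index and
# migrate when the current pool's price exceeds the index by a tolerance.
from statistics import mean

def index_price(spot_prices: dict[str, float]) -> float:
    """Aggregate index: here, simply the mean price across spot pools."""
    return mean(spot_prices.values())

def should_migrate(current_pool: str, spot_prices: dict[str, float],
                   tolerance: float = 0.15) -> bool:
    """Migrate if the current pool's price exceeds the index by > tolerance."""
    idx = index_price(spot_prices)
    return spot_prices[current_pool] > (1.0 + tolerance) * idx

prices = {"us-east-1a": 0.031, "us-east-1b": 0.027, "us-east-1c": 0.058}
print(round(index_price(prices), 4))        # 0.0387
print(should_migrate("us-east-1c", prices)) # True: well above the index
```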