4,200 research outputs found

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide to the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of data center hardware (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is tackled through a holistic approach that includes not only the aforementioned techniques but also intelligent, unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as the driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements this strategy, and we propose design guidelines for each step of the strategy, referring to related achievements and enumerating the main challenges that still must be solved.

    Online Algorithms for Geographical Load Balancing

    It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly. We therefore introduce variants of RHC that are guaranteed to perform well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand which portfolio of renewable energy is most effective.
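    The plain RHC baseline described above lends itself to a short sketch. The minimal implementation below chooses, at every step, the cheapest server counts over a prediction window (an energy cost plus a cost for switching servers on) and commits only the first decision before sliding the window forward. The cost model, parameters, and workload trace are our assumptions, not the paper's, and the paper's heterogeneity-aware variants are not shown.

```python
import math

def rhc_provision(load_trace, predict, window, cap, energy_price, switch_cost, x_max):
    """Receding horizon control (RHC): at each time step, optimize the number of
    active servers over a window of predicted loads, commit only the first
    decision, then slide the window forward."""
    x_prev = 0
    schedule = []
    for t in range(len(load_trace)):
        preds = predict(t, window)                  # predicted loads for t .. t+window-1
        mins = [math.ceil(l / cap) for l in preds]  # servers needed to meet each load
        INF = float("inf")
        cost = {x_prev: 0.0}                        # DP over server counts within the window
        first = {x_prev: None}                      # first-step choice leading to each state
        for m in mins:
            nxt_cost, nxt_first = {}, {}
            for x0, c0 in cost.items():
                for x in range(m, x_max + 1):
                    c = c0 + energy_price * x + switch_cost * max(0, x - x0)
                    f = x if first[x0] is None else first[x0]
                    if c < nxt_cost.get(x, INF):
                        nxt_cost[x], nxt_first[x] = c, f
            cost, first = nxt_cost, nxt_first
        best_end = min(cost, key=cost.get)
        x_t = first[best_end]                       # commit only the first-step decision
        schedule.append(x_t)
        x_prev = x_t
    return schedule

# illustrative workload trace and oracle predictor (assumptions, not from the paper)
trace = [3, 5, 9, 8, 4, 2, 6]
predict = lambda t, w: trace[t:t + w]
print(rhc_provision(trace, predict, window=3, cap=1.0, energy_price=1.0,
                    switch_cost=2.0, x_max=12))
```

    The switching cost is what makes the window useful: with look-ahead, RHC keeps servers on through short load dips instead of paying to turn them back on.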

    Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to produce forecasts with a forecasting error, which is represented by a chance constraint and reformulated into a deterministic form with a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the proposed method.
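    As a rough illustration of how a chance constraint of this kind becomes deterministic (our notation, not the paper's): if the hourly consumption d_t is modeled by a K-component Gaussian mixture and P_t is the power purchased for that hour, requiring the demand to be covered with probability at least 1 - epsilon can be written as

```latex
% Illustrative reformulation; d_t, P_t, w_k, \mu_k, \sigma_k, \epsilon are our symbols.
\Pr\!\left(d_t \le P_t\right)
  = \sum_{k=1}^{K} w_k \, \Phi\!\left(\frac{P_t - \mu_k}{\sigma_k}\right) \ge 1 - \epsilon,
\qquad
\text{which for } K = 1 \text{ reduces to } \; P_t \ge \mu + \sigma\,\Phi^{-1}(1 - \epsilon).
```

    Here Φ is the standard normal CDF; the mixture case remains a one-dimensional deterministic condition on P_t that a solver can handle numerically.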

    Managing server energy and reducing operational cost for online service providers

    The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DVFS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices; a carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
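    The price-driven dispatching idea in this abstract can be sketched in a few lines. The greedy cheapest-first rule below, with hypothetical prices and capacities, only illustrates the direction of the optimization; it is not the dissertation's optimal strategy, which also handles carbon limits and offset-market volatility.

```python
def dispatch_requests(total_requests, prices, capacities):
    """Toy energy-aware dispatcher: fill the cheapest locations first, capped
    at each site's capacity, and spill the remainder to the next-cheapest site."""
    order = sorted(range(len(prices)), key=lambda i: prices[i])  # cheapest first
    alloc = [0] * len(prices)
    remaining = total_requests
    for i in order:
        take = min(capacities[i], remaining)
        alloc[i] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return alloc

# hypothetical electricity prices ($/MWh) and per-site request capacities
print(dispatch_requests(1000, prices=[42.0, 35.5, 58.0], capacities=[400, 500, 600]))
```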

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers’ carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers have to adopt such measures and propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.

    Transport Energy Security. The Unseen Risk?

    The decline in significance given to energy security in recent years can be associated with increasing trust in the self-balancing security of a global trading economy. After the events of the first years of the 21st century, that framework now looks more problematic, at least for oil supplies. The underlying level of risk that characterised the oil market of the late 20th century has changed, exacerbated by the increasing inelasticity of demand for oil-based products in the transport sector of the world’s economies, which in turn reflects the strategic dominance of transport within economies. The prudent course for the international community is to reduce the underlying causes of possible geopolitical constraints by making them more manageable through normal channels. One such constraint that is within every nation’s capability (and self-interest) to reduce is the upward drift in the price inelasticity of domestic oil consumption. This could involve increasing the ability to divert oil used within the domestic economy to transport. Yet for many industrial economies, this option has largely been exhausted and a more radical approach of opening up new energy vectors to supply the transport sector may be needed. Taking preventative action after a security event is generally more straightforward than taking precautionary action to ensure that it never happens. The latter course may only be successful through a coincidence with other interests. The current environmental agenda is such a coincident interest with transport fuel security.

    A survey on energy efficiency in information systems

    Concerns about energy and sustainability are growing every day and involve a wide range of fields. Even Information Systems (ISs) are being influenced by the need to reduce pollution and energy consumption, and new fields are emerging to deal with this topic. One of these fields is Green Information Technology (IT), which addresses energy efficiency with a focus on IT. Researchers have approached this problem from several points of view. The purpose of this paper is to understand the trends and future development of Green IT by analyzing the state of the art and classifying existing approaches, in order to understand which components have an impact on energy efficiency in ISs and how this impact can be reduced. First, we explore some guidelines that can help assess the efficiency level of an organization and of an IS. Then, we discuss the measurement and estimation of energy efficiency and identify the components that contribute most to energy waste and how energy efficiency can be improved, both at the hardware and at the software level.
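    As a concrete, deliberately simple example of the measurement-based metrics such surveys discuss, energy per served request can be estimated from periodic power readings; the function and data below are hypothetical, not taken from the paper.

```python
def energy_per_request(power_samples_w, interval_s, requests_served):
    """Estimate software-level energy efficiency as joules per served request,
    using rectangle-rule integration of sampled power over time."""
    energy_j = sum(p * interval_s for p in power_samples_w)
    return energy_j / requests_served

# hypothetical 1-second power samples (watts) recorded while serving 1,500 requests
print(energy_per_request([210.0, 230.0, 250.0, 240.0], interval_s=1.0, requests_served=1500))
```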

    BS News September/October
