4,385 research outputs found

    Simple and Effective Dynamic Provisioning for Power-Proportional Data Centers

    Get PDF
    Energy consumption represents a significant cost in data center operation, yet a large fraction of that energy is used to power idle servers when the workload is low. Dynamic provisioning techniques aim to save this portion of the energy by turning off unnecessary servers. In this thesis we explore how much benefit knowing future workload information can bring to dynamic provisioning. In particular, we develop online dynamic provisioning solutions both with and without future workload information available. We first reveal an elegant structure of the offline dynamic provisioning problem that allows us to characterize the optimal solution in a "divide-and-conquer" manner. We then exploit this insight to design two online algorithms with competitive ratios 2 - α and e/(e - 1 + α), respectively, where 0 ≤ α ≤ 1 is the normalized size of a look-ahead window within which future workload information is available. A fundamental observation is that future workload information beyond the full-size look-ahead window (corresponding to α = 1) will not improve dynamic provisioning performance. Our algorithms are decentralized and easy to implement, and we demonstrate their effectiveness in simulations using real-world traces. We exploit future workload information in the online algorithm design because, in many modern systems, short-term future inputs can be predicted accurately by machine learning, time-series analysis, and similar techniques. We also test our algorithms in the presence of prediction errors in future workload information; the results show that they are robust to such errors. We believe that utilizing future information is a new and important degree of freedom in designing online algorithms. Traditional online algorithm design does not take future input information into account, and many online problems consequently have algorithms that are optimal yet carry large competitive ratios. Since future input information can, to some extent, be estimated accurately in many problems, we believe it should be exploited in online algorithm design to achieve better competitive ratios, providing a competitive edge in both practice and theory.
    Lu, Tan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 76-81). Abstracts also in Chinese.
    Contents: Abstract; Acknowledgement.
    Chapter 1: Introduction (1.1 Motivation; 1.2 Contributions; 1.3 Thesis Organization).
    Chapter 2: Related Work.
    Chapter 3: Problem Formulation (3.1 Settings and Models; 3.2 Problem Formulation).
    Chapter 4: Optimal Solution and Offline Algorithm (4.1 Structure of Optimal Solution; 4.2 Intuitions and Observations; 4.3 Offline Algorithm Achieving the Optimal Solution).
    Chapter 5: Online Dynamic Provisioning (5.1 Dynamic Provisioning without Future Workload Information; 5.2 Dynamic Provisioning with Future Workload Information; 5.3 Adapting the Algorithms to the Discrete-Time Fluid Workload Model; 5.4 Extending to the Case Where Servers Have Setup Time).
    Chapter 6: Experiments (6.1 Settings; 6.2 Performance of the Proposed Online Algorithms; 6.3 Impact of Prediction Error; 6.4 Impact of Peak-to-Mean Ratio (PMR); 6.5 Discussion; 6.6 Additional Experiments).
    Chapter 7: A New Degree of Freedom for Designing Online Algorithms (7.1 The Lost Cow Problem; 7.2 Secretary Problem without Future Information; 7.3 Secretary Problem with Future Information; 7.4 Summary).
    Chapter 8: Conclusion.
    Appendix A: Proofs (A.1 Proof of Theorem 4.1.1; A.2 Proof of Theorem 4.3.1; A.3 Least Idle vs. Last Empty; A.4 Proof of Theorem 5.2.2; A.5 Proof of Corollary 5.4.1; A.6 Proof of Lemma 7.1.1; A.7 Proof of Theorem 7.3.1).
    Bibliography.
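
    To make the model concrete, here is a minimal Python simulation sketch of a discrete-time fluid workload setting like the one described above. The break-even idle rule and the look-ahead rule below are classic illustrative heuristics, not the thesis's 2 - α and e/(e - 1 + α) algorithms; the unit energy cost, the switching cost beta, and all function names are assumptions made for illustration.

        def total_cost(active, beta):
            # Energy: 1 unit per active server per slot;
            # switching: beta per server powered up.
            cost, prev = 0.0, 0
            for a in active:
                cost += a + beta * max(0, a - prev)
                prev = a
            return cost

        def online_break_even(demand, beta):
            # No future information: meet demand immediately; shed surplus
            # servers only after beta consecutive idle slots, when the idle
            # energy spent equals one restart (ski-rental break-even).
            active, cur, idle = [], 0, 0
            for d in demand:
                if d >= cur:
                    cur, idle = d, 0
                else:
                    idle += 1
                    if idle >= beta:
                        cur, idle = d, 0
                active.append(cur)
            return active

        def online_look_ahead(demand, beta, w):
            # Look-ahead window of w slots: shed surplus servers as soon as
            # the visible future shows they would idle past break-even.
            active, cur = [], 0
            for t, d in enumerate(demand):
                cur = max(cur, d)
                window = demand[t : t + 1 + w]
                if len(window) > beta and max(window) < cur:
                    cur = max(window)
                active.append(cur)
            return active

        demand = [3, 5, 2, 2, 2, 2, 6, 1, 1, 1, 1, 1]
        for plan in (online_break_even(demand, beta=3),
                     online_look_ahead(demand, beta=3, w=4)):
            assert all(a >= d for a, d in zip(plan, demand))
            print(plan, total_cost(plan, beta=3))

    On this toy trace the look-ahead policy powers servers down earlier than the break-even rule once the window confirms a lull will last, which is the kind of gain the α-dependent ratios above capture.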

    Online Algorithms for Geographical Load Balancing

    Get PDF
    It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is "receding horizon control" (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly. So we introduce variants of RHC that are guaranteed to perform well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective.
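
    As a concrete illustration of the RHC idea, the sketch below applies receding horizon control to a toy homogeneous single-site provisioning problem in Python: at each slot it computes a cost-optimal plan over the prediction window by dynamic programming, commits only the first decision, and slides the window forward. The linear energy and switching costs and all names are illustrative assumptions, not the paper's model.

        def plan_window(x_prev, pred, beta, x_max):
            # Cheapest plan over the predicted loads `pred`, by dynamic
            # programming over the number of active servers. Per-slot cost:
            # x (energy) + beta * max(0, x - previous x) (switching).
            INF = float("inf")
            best = {x_prev: 0.0}      # state -> minimal cost so far
            first = {x_prev: None}    # state -> committed first-slot choice
            for s, load in enumerate(pred):
                nxt, nxt_first = {}, {}
                for x_old, c in best.items():
                    for x in range(load, x_max + 1):  # must cover the load
                        c_new = c + x + beta * max(0, x - x_old)
                        if c_new < nxt.get(x, INF):
                            nxt[x] = c_new
                            nxt_first[x] = x if s == 0 else first[x_old]
                best, first = nxt, nxt_first
            return first[min(best, key=best.get)]

        def rhc(loads, beta, window, x_max):
            # Receding horizon control: optimize over the next `window`
            # predicted loads, commit only the current slot, slide forward.
            x, schedule = 0, []
            for t in range(len(loads)):
                x = plan_window(x, loads[t : t + window], beta, x_max)
                schedule.append(x)
            return schedule

        print(rhc([3, 5, 2, 2, 6, 1, 1], beta=3, window=3, x_max=8))

    Committing only the first decision each slot is what distinguishes RHC from one-shot offline planning, and it is this reliance on a sliding window that the paper shows can perform badly once propagation delays, servers, and electricity prices are heterogeneous.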

    A Survey of Green Networking Research

    Full text link
    Reduction of unnecessary energy consumption is becoming a major concern in wired networking because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy awareness in the design, the devices, and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface Proxying, (iii) Energy-aware Infrastructures, and (iv) Energy-aware Applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for future research. Comment: Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications. 18 pages, 6 figures, 2 tables.

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Full text link
    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large environmental carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost savings under dynamic workload scenarios. Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010.
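
    One concrete flavor of such an energy-efficient allocation policy, sketched below in Python, is power-aware best fit: each VM is placed on the host whose estimated power draw increases the least under a linear host power model. This is a common policy in this research line, offered as a hedged illustration only; the power-model constants, the Host and place names, and the data structures are assumptions, not CloudSim's API.

        from dataclasses import dataclass

        @dataclass
        class Host:
            cpu_capacity: float    # e.g. MIPS
            p_idle: float          # watts when idle
            p_peak: float          # watts at full utilization
            used: float = 0.0

            def power(self, extra=0.0):
                # Linear model: P = P_idle + (P_peak - P_idle) * utilization.
                u = min(1.0, (self.used + extra) / self.cpu_capacity)
                return self.p_idle + (self.p_peak - self.p_idle) * u

        def place(vm_demand, hosts):
            # Power-aware best fit: pick the feasible host where adding the
            # VM causes the smallest increase in power draw.
            best, best_delta = None, float("inf")
            for h in hosts:
                if h.used + vm_demand > h.cpu_capacity:
                    continue
                delta = h.power(vm_demand) - h.power()
                if delta < best_delta:
                    best, best_delta = h, delta
            if best is not None:
                best.used += vm_demand
            return best

        hosts = [Host(1000, 150, 250), Host(2000, 200, 400)]
        for vm in [300, 700, 500]:
            print("placed" if place(vm, hosts) else "rejected", vm)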

    Online VNF Scaling in Datacenters

    Get PDF
    Network Function Virtualization (NFV) is a promising technology that can significantly reduce the operational costs of network services by deploying virtualized network functions (VNFs) on commodity servers in place of dedicated hardware middleboxes. The VNFs typically run on virtual machine instances in a cloud infrastructure, where the virtualization technology enables dynamic provisioning of VNF instances to process the fluctuating traffic that must pass through the network functions of a network service. In this paper, we target dynamic provisioning of enterprise network services, expressed as one or multiple service chains, in cloud datacenters, and design efficient online algorithms that require no information about future traffic rates. The key is to decide the number of instances of each VNF type to provision at each time, taking into consideration the server resource capacities and the traffic rates between adjacent VNFs in a service chain. In the case of a single service chain, we discover an elegant structure of the problem and design an efficient randomized algorithm achieving an e/(e-1) competitive ratio. For multiple concurrent service chains, we propose an online heuristic algorithm that is O(1)-competitive. We demonstrate the effectiveness of our algorithms through solid theoretical analysis and trace-driven simulations. Comment: 9 pages, 4 figures.
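
    The e/(e-1) ratio is the signature of randomized rent-or-buy trade-offs, so the textbook randomized ski-rental rule gives useful intuition here: each surplus VNF instance draws a random idle keep-alive threshold and is terminated once it idles past it. The Python sketch below is that classic construction, not the paper's algorithm; the per-instance capacity, the launch cost beta, and all names are illustrative assumptions.

        import math
        import random

        def sample_keep_alive(beta):
            # Classic randomized ski-rental threshold with density
            # f(z) = e^(z/beta) / (beta * (e - 1)) on [0, beta]; its expected
            # cost is within e/(e-1) of the offline rent-or-buy optimum.
            u = random.random()
            return beta * math.log(1.0 + (math.e - 1.0) * u)

        def scale_vnf(rates, capacity, beta):
            # rates: traffic entering this VNF per time slot. Instances are
            # launched on demand (cost beta each) and retired after idling
            # past their randomly drawn keep-alive thresholds.
            instances = []               # [idle_time, keep_alive] per instance
            schedule = []
            for r in rates:
                need = math.ceil(r / capacity)
                while len(instances) < need:
                    instances.append([0.0, sample_keep_alive(beta)])
                kept = []
                for i, inst in enumerate(instances):
                    if i < need:
                        inst[0] = 0.0    # busy again: reset the idle clock
                        kept.append(inst)
                    else:
                        inst[0] += 1.0   # idle: age it, retire if expired
                        if inst[0] < inst[1]:
                            kept.append(inst)
                instances = kept
                schedule.append(len(instances))
            return schedule

        print(scale_vnf([4, 9, 2, 2, 2, 8], capacity=2, beta=3.0))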

    Energy-aware Load Balancing Policies for the Cloud Ecosystem

    Full text link
    The energy consumption of computer and communication systems does not scale linearly with the workload; a system uses a significant amount of energy even when idle or lightly loaded. A widely reported solution to resource management in large data centers is to concentrate the load on a subset of servers and, whenever possible, switch the rest of the servers to one of the available sleep states. We propose a reformulation of the traditional concept of load balancing that aims to optimize the energy consumption of a large-scale system: distribute the workload evenly to the smallest set of servers operating at an optimal energy level, while observing QoS constraints such as the response time. Our model applies to clustered systems; it also requires that the demand for system resources increase at a bounded rate in each reallocation interval. In this paper we report the VM migration costs for application scaling. Comment: 10 pages.
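
    A minimal sketch of the quoted policy, under assumptions the abstract does not spell out: treat each server as an M/M/1 queue with service rate mu, pick the smallest number of servers that keeps each one at or below a target optimal utilization while the M/M/1 mean response time 1/(mu - lambda) meets the QoS bound, and split the arrival rate evenly across them. All parameter names are illustrative.

        import math

        def smallest_even_split(total_rate, mu, u_opt, r_max):
            # Smallest n such that each of n servers receives total_rate / n,
            # runs at or below the optimal utilization u_opt, and satisfies
            # the M/M/1 response-time bound 1 / (mu - lambda) <= r_max.
            n = max(1, math.ceil(total_rate / (u_opt * mu)))
            while True:
                lam = total_rate / n
                if lam < mu and 1.0 / (mu - lam) <= r_max and lam / mu <= u_opt:
                    return n, lam
                n += 1

        n, per_server = smallest_even_split(total_rate=42.0, mu=10.0,
                                            u_opt=0.8, r_max=0.6)
        print(n, "servers at", per_server, "requests/s each")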