9 research outputs found

    Extending Demand Response to Tenants in Cloud Data Centers via Non-intrusive Workload Flexibility Pricing

    Full text link
    Participating in demand response programs is a promising way for data centers to reduce energy costs by modulating their energy consumption. Toward this end, data centers can employ a rich set of resource management knobs, such as workload shifting and dynamic server provisioning. Nonetheless, these knobs may not be readily available in a cloud data center (CDC) that serves cloud tenants/users, because workloads in CDCs are managed by the tenants themselves, who are typically charged under usage-based or flat-rate pricing and often have no incentive to cooperate with the CDC operator on demand response and cost saving. To break this "split incentive" hurdle, a few recent studies have tried market-based mechanisms, such as dynamic pricing, inside CDCs. However, such mechanisms often rely on complex designs that are hard to implement and difficult for tenants to cope with. To address this limitation, we propose a novel incentive mechanism that is not dynamic, i.e., it keeps pricing for cloud resources unchanged over a long period. While it charges tenants under Usage-based Pricing (UP), as today's major cloud operators do, it rewards tenants in proportion to the deadlines they set for completing their workloads. We call this new mechanism Usage-based Pricing with Monetary Reward (UPMR). We demonstrate the effectiveness of UPMR both analytically and empirically, showing that it can reduce the CDC operator's energy cost by 12.9% while increasing its profit by 4.9%, compared to the state-of-the-art approaches used by today's CDC operators to charge their tenants.
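    To make the incentive structure concrete, here is a minimal sketch of how a tenant's net bill might be computed under a UPMR-style scheme. The parameter names (unit_price, reward_rate) and the linear reward form are illustrative assumptions; the abstract does not reproduce the paper's exact formula.

    ```python
    def upmr_bill(usage_hours: float, unit_price: float,
                  deadline_hours: float, reward_rate: float) -> float:
        """Tenant's net bill under a UPMR-style scheme (illustrative only).

        usage_hours    -- cloud resource usage actually consumed
        unit_price     -- fixed usage-based price, as in today's clouds
        deadline_hours -- slack the tenant grants for completing its workload
        reward_rate    -- assumed fixed reward per hour of declared deadline
        """
        charge = unit_price * usage_hours      # standard usage-based pricing
        reward = reward_rate * deadline_hours  # reward grows with flexibility
        return charge - reward
    ```

    Because both prices are fixed for a long period, a tenant only needs to decide how much deadline slack to declare, which is what keeps the mechanism simple to cope with.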

    A Sensor-Actuator Model for Data Center Optimization

    Full text link
    Cloud data centers commonly use virtualization technologies to provision compute capacity with a level of indirection between virtual machines and physical resources. In this paper we explore the use of that level of indirection as a means for autonomic data center configuration optimization, and we propose a sensor-actuator model to capture optimization-relevant relationships between data center events, monitored metrics (sensor data), and management actions (actuators). The model characterizes a wide spectrum of actions to help identify the suitability of different actions in specific situations, and outlines what data needs to be monitored, and how often, to capture, classify, and respond to events that affect the performance of data center operations.
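    A minimal sketch of how such a model might be represented in code, assuming a simple threshold-rule structure; the field names (period_s, actuation_delay_s) and the rule form are assumptions for illustration, not the paper's definitions.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Sensor:
        name: str                 # e.g. "cpu_utilization"
        period_s: int             # how often the metric is sampled
        read: Callable[[], float]

    @dataclass
    class Actuator:
        name: str                 # e.g. "vm_migration", "cpu_cap"
        actuation_delay_s: int    # time before the action takes effect
        apply: Callable[[float], None]

    @dataclass
    class Rule:
        """Links a sensed condition to a suitable management action."""
        sensor: Sensor
        threshold: float
        actuator: Actuator

    def evaluate(rules: list[Rule]) -> None:
        """Fire each actuator whose sensor reading crosses its threshold."""
        for r in rules:
            value = r.sensor.read()
            if value > r.threshold:
                r.actuator.apply(value)
    ```

    The point of the structure is that sampling periods and actuation delays are first-class data, so a controller can reason about which action suits a given event and how fresh its monitoring data must be.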

    Dynamic virtual network traffic engineering with energy efficiency in multi-location data center networks

    Get PDF
    For cloud enterprise customers that require services on demand, data centers must allocate and partition their resources in a dynamic fashion. We consider the problem in which a request from an enterprise customer is mapped to a virtual network (VN) requiring both bandwidth and compute resources, allocated by connecting an entry point of a data center to one or more servers, should this data center be selected from multiple geographically distributed data centers. We present a dynamic traffic engineering framework, for which we develop an optimization model based on a mixed-integer linear programming (MILP) formulation that a data center operator can use at each review point to optimally assign VN customers. Through a series of studies, we then present results on how different VN customers are treated in terms of request acceptance when each VN class has a different resource requirement. We found that a VN class with a low resource requirement experiences low blocking even under heavy traffic, while a VN class with a high resource requirement faces high service denial. On the other hand, the cost for the VN class with the highest resource requirement is not always the highest under heavy traffic, because of the significantly high service denial this class faces. (Presented at the 28th International Teletraffic Congress (ITC), September 12-16, 2016, University of Würzburg, Würzburg, Germany.)
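    To illustrate the flavor of such a formulation, here is a toy MILP written with the open-source PuLP library. The request/capacity data, the acceptance-maximizing objective, and the aggregation of each data center's resources into single bandwidth and compute capacities are simplifying assumptions; the paper's model also covers entry points, links, and per-class costs.

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

    # Hypothetical VN requests and data center capacities.
    requests = {"vn1": {"bw": 10, "cpu": 4}, "vn2": {"bw": 40, "cpu": 16}}
    dcs      = {"dc1": {"bw": 45, "cpu": 20}, "dc2": {"bw": 30, "cpu": 12}}

    prob = LpProblem("vn_assignment", LpMaximize)
    # x[v][d] = 1 if virtual network v is mapped to data center d.
    x = LpVariable.dicts("x", (requests, dcs), cat="Binary")

    # Objective: accept as many VN requests as possible.
    prob += lpSum(x[v][d] for v in requests for d in dcs)

    # Each request is placed in at most one data center (else it is blocked).
    for v in requests:
        prob += lpSum(x[v][d] for d in dcs) <= 1

    # Bandwidth and compute capacity constraints at each data center.
    for d in dcs:
        prob += lpSum(requests[v]["bw"]  * x[v][d] for v in requests) <= dcs[d]["bw"]
        prob += lpSum(requests[v]["cpu"] * x[v][d] for v in requests) <= dcs[d]["cpu"]

    prob.solve(PULP_CBC_CMD(msg=False))
    for v in requests:
        for d in dcs:
            if x[v][d].value() > 0.5:
                print(f"{v} -> {d}")
    ```

    Run at each review point over the currently pending requests, a model of this shape yields the per-class blocking behavior the paper studies: large requests are the first to become infeasible as capacity tightens.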

    Energy-based Cost Model of Virtual Machines in a Cloud Environment

    Get PDF
    The cost mechanisms employed by different service providers significantly influence the role of cloud computing within the IT industry. With the increasing cost of electricity, cloud providers consider power consumption one of the major cost factors to be managed within their infrastructures. Consequently, modelling a new cost mechanism for cloud services that can be adjusted to actual energy costs has attracted the attention of many researchers. This paper introduces an Energy-based Cost Model that treats energy consumption as a key parameter, alongside actual resource usage, in the total cost of Virtual Machines (VMs). A series of experiments conducted on a cloud testbed shows that this model is capable of estimating the actual cost of heterogeneous VMs based on their resource usage, taking their energy consumption into account.
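    A minimal sketch of an energy-aware VM cost estimate, assuming an additive structure and hypothetical rates (cpu_rate, mem_rate); the paper instead derives each VM's energy share from measurements on the testbed.

    ```python
    def vm_cost(cpu_hours: float, mem_gb_hours: float,
                vm_energy_kwh: float, price_per_kwh: float,
                cpu_rate: float = 0.02, mem_rate: float = 0.005) -> float:
        """Estimate a VM's total cost from resource usage plus its energy share.

        cpu_hours / mem_gb_hours -- the VM's actual resource usage
        vm_energy_kwh            -- energy attributed to this VM (assumed given)
        price_per_kwh            -- the provider's electricity price
        The additive form and the default rates are illustrative assumptions.
        """
        resource_cost = cpu_rate * cpu_hours + mem_rate * mem_gb_hours
        energy_cost = price_per_kwh * vm_energy_kwh
        return resource_cost + energy_cost
    ```

    Separating the energy term lets the charge for two identically sized VMs differ when one runs hotter, which is the adjustment to actual energy costs the model is after.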

    Simple and effective dynamic provisioning for power-proportional data centers.

    Get PDF
    Energy consumption represents a significant cost in data center operation, and a large fraction of that energy is used to power idle servers when the workload is low. Dynamic provisioning techniques aim to save this portion of the energy by turning off unnecessary servers. In this thesis we explore how much gain knowing future workload information can bring to dynamic provisioning. In particular, we develop online dynamic provisioning solutions with and without future workload information available. We first reveal an elegant structure of the offline dynamic provisioning problem that allows us to characterize the optimal solution in a "divide-and-conquer" manner. We then exploit this insight to design two online algorithms with competitive ratios 2 - α and e/(e - 1 + α), respectively, where 0 ≤ α ≤ 1 is the normalized size of a look-ahead window in which future workload information is available. A fundamental observation is that future workload information beyond the full-size look-ahead window (corresponding to α = 1) does not improve dynamic provisioning performance. Our algorithms are decentralized and easy to implement, and we demonstrate their effectiveness in simulations using real-world traces. When designing the online algorithms we utilize future input information, because in many modern systems short-term future inputs can be predicted accurately, e.g., by machine learning or time-series analysis. We also test our algorithms in the presence of prediction errors in future workload information, and the results show that they are robust to such errors. We believe that utilizing future information is a new and important degree of freedom in designing online algorithms: traditional online algorithm design does not take future input information into account, and many online problems have algorithms that are optimal yet have large competitive ratios. Since future input information can to some extent be estimated accurately in many problems, we believe it should be exploited in online algorithm design to achieve better competitive ratios and provide more of a competitive edge in both practice and theory.
    (M.Phil. thesis by Tan Lu, Chinese University of Hong Kong, 2012; includes bibliographical references; abstract also available in Chinese.)
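    For intuition, here is a sketch of a break-even idle-timeout policy of the kind underlying 2-competitive dynamic provisioning results (the ski-rental argument). This is not the thesis's exact algorithm: it ignores look-ahead and per-server state, and the cost parameters are assumptions for illustration.

    ```python
    import math

    def provision(workload, beta, idle_power=1.0):
        """Break-even idle-timeout provisioning (illustrative sketch).

        workload   -- per-slot number of servers needed (list of ints)
        beta       -- assumed cost of power-cycling one server
        idle_power -- idle energy cost of one running server per slot

        Excess servers are switched off only after beta/idle_power slots,
        so the energy wasted idling matches the cost of a power cycle:
        whatever the future workload does, we pay at most twice the
        better of "turn off immediately" and "stay on".
        """
        timeout = math.ceil(beta / idle_power)
        on, idle_slots, schedule = 0, 0, []
        for demand in workload:
            if demand >= on:
                on = demand          # scale up immediately to meet demand
                idle_slots = 0
            else:
                idle_slots += 1
                if idle_slots >= timeout:
                    on = demand      # scale down past the break-even point
                    idle_slots = 0
            schedule.append(on)
        return schedule

    print(provision([3, 5, 2, 2, 2, 2, 4], beta=3))  # -> [3, 5, 5, 5, 2, 2, 4]
    ```

    A look-ahead window of normalized size α lets such a policy turn servers off early when the predicted workload stays low, which is how the competitive ratios above improve toward 1 as α grows.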