4,857 research outputs found

    Extending Demand Response to Tenants in Cloud Data Centers via Non-intrusive Workload Flexibility Pricing

    Participating in demand response programs is a promising way to reduce energy costs in data centers by modulating energy consumption. Towards this end, data centers can employ a rich set of resource management knobs, such as workload shifting and dynamic server provisioning. Nonetheless, these knobs may not be readily available in a cloud data center (CDC) that serves cloud tenants/users, because workloads in CDCs are managed by the tenants themselves, who are typically charged under usage-based or flat-rate pricing and often have no incentive to cooperate with the CDC operator on demand response and cost saving. To break this "split-incentive" hurdle, a few recent studies have tried market-based mechanisms, such as dynamic pricing, inside CDCs. However, such mechanisms often rely on complex designs that are hard to implement and difficult for tenants to cope with. To address this limitation, we propose a novel incentive mechanism that is not dynamic, i.e., it keeps pricing for cloud resources unchanged over a long period. While it charges tenants under Usage-based Pricing (UP), as today's major cloud operators do, it rewards tenants in proportion to the deadlines they set for completing their workloads. This new mechanism is called Usage-based Pricing with Monetary Reward (UPMR). We demonstrate the effectiveness of UPMR both analytically and empirically, showing that UPMR can reduce the CDC operator's energy cost by 12.9% while increasing its profit by 4.9%, compared to the state-of-the-art approaches used by today's CDC operators to charge their tenants.
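
    The reward mechanism above is easy to illustrate in a few lines of code. The sketch below is a hypothetical reading of a UPMR-style bill, assuming a reward that grows linearly with the deadline slack a tenant grants; the parameter names, rates, and linear form are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the UPMR billing idea: tenants pay a fixed usage-based
# price but receive a monetary reward that grows with the deadline flexibility
# they accept. Rates and the linear reward form are illustrative assumptions.

def upmr_charge(cpu_hours: float,
                unit_price: float = 0.05,       # $ per CPU-hour (assumed)
                reward_rate: float = 0.002,     # $ per CPU-hour per hour of slack (assumed)
                deadline_slack_hours: float = 0.0) -> float:
    """Net charge to a tenant under a UPMR-style scheme."""
    base = cpu_hours * unit_price                            # usage-based pricing (UP)
    reward = cpu_hours * reward_rate * deadline_slack_hours  # reward for flexibility
    return max(base - reward, 0.0)                           # never charge below zero

# Example: a flexible tenant (12 h of slack) vs. an inflexible one.
print(upmr_charge(100, deadline_slack_hours=12))  # discounted bill
print(upmr_charge(100, deadline_slack_hours=0))   # plain usage-based bill
```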

    Eco-friendly Power Cost Minimization for Geo-distributed Data Centers Considering Workload Scheduling

    The rapid development of renewable energy in the energy Internet is expected to alleviate data centers' increasingly severe power problems, such as huge power costs and pollution. This paper focuses on eco-friendly power cost minimization for geo-distributed data centers supplied by multi-source power, where both the geographical scheduling of workload and the temporal scheduling of battery charging and discharging are considered. In particular, we propose a Pollution Index Function to model the pollution of different kinds of power, which encourages the use of cleaner power and improves power savings. We first formulate the eco-friendly power cost minimization problem as a multi-objective, mixed-integer programming problem and then simplify it into a single-objective problem with integer constraints. Second, we propose a Sequential Convex Programming (SCP) algorithm to find the globally optimal non-integer solution of the simplified problem, which is non-convex, and then propose a low-complexity searching method to find its quasi-optimal mixed-integer solution. Finally, simulation results reveal that our method can improve clean energy usage by up to 50%-60% and achieve power cost savings of up to 10%-30%, as well as reduce request delay.
    Comment: 14 pages, 19 figures
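
    To make the pollution-index idea concrete, the toy sketch below dispatches demand across power sources ranked by price plus a pollution penalty. The source data, the linear penalty, and the greedy dispatch are illustrative assumptions and far simpler than the paper's SCP-based optimization.

```python
# Toy multi-source dispatch: rank sources by price plus a pollution-index
# penalty and fill demand greedily. All figures are illustrative assumptions.

sources = {                     # price $/kWh, pollution index (assumed: 0 = clean)
    "solar": {"price": 0.04, "pollution": 0.0, "cap_kw": 300},
    "wind":  {"price": 0.05, "pollution": 0.0, "cap_kw": 200},
    "grid":  {"price": 0.08, "pollution": 1.0, "cap_kw": 10_000},
}

def dispatch(demand_kw: float, pollution_weight: float = 0.03):
    """Greedy dispatch: fill demand from the lowest effective-cost source first."""
    order = sorted(sources.items(),
                   key=lambda kv: kv[1]["price"] + pollution_weight * kv[1]["pollution"])
    plan, remaining = {}, demand_kw
    for name, s in order:
        take = min(remaining, s["cap_kw"])
        plan[name] = take
        remaining -= take
        if remaining <= 0:
            break
    cost = sum(sources[n]["price"] * p for n, p in plan.items())
    return plan, cost

print(dispatch(450))   # prefers clean sources, falls back to the grid
```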

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and devices' power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost savings under dynamic workload scenarios.
    Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
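
    As a concrete, simplified illustration of the kind of energy-aware allocation policy the paper evaluates in CloudSim, the sketch below places a VM on the host whose estimated power draw increases the least. The linear idle-to-peak power model and all host figures are assumptions, not the paper's validated policies.

```python
# Simplified power-aware VM placement: pick the host with the smallest marginal
# power increase. Power model and host parameters are illustrative assumptions.

hosts = [
    {"id": 0, "cpu_cap": 16, "cpu_used": 4,  "p_idle": 90.0, "p_max": 250.0},
    {"id": 1, "cpu_cap": 32, "cpu_used": 12, "p_idle": 90.0, "p_max": 250.0},
]

def power(host, extra_cpu=0):
    """Linear idle-to-peak power model (a common assumption, not the paper's exact model)."""
    util = (host["cpu_used"] + extra_cpu) / host["cpu_cap"]
    return host["p_idle"] + (host["p_max"] - host["p_idle"]) * util

def place_vm(vm_cpu):
    """Best fit by smallest power increase among hosts with enough spare CPU."""
    feasible = [h for h in hosts if h["cpu_cap"] - h["cpu_used"] >= vm_cpu]
    if not feasible:
        return None
    best = min(feasible, key=lambda h: power(h, vm_cpu) - power(h))
    best["cpu_used"] += vm_cpu
    return best["id"]

print(place_vm(4))  # chooses the host with the lowest marginal power cost
```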

    Proactive Demand Response for Data Centers: A Win-Win Solution

    In order to reduce the energy cost of data centers, recent studies suggest distributing computation workload among multiple geographically dispersed data centers by exploiting electricity price differences. However, the impact of data center load redistribution on the power grid is not yet well understood. This paper takes the first step towards tackling this important issue by studying how the power grid can proactively take advantage of the data centers' load distribution for power load balancing. We model the interactions between the power grid and the data centers as a two-stage problem, where the utility company chooses proper pricing mechanisms to balance the electric power load in the first stage, and the data centers seek to minimize their total energy cost by responding to the prices in the second stage. We show that the two-stage problem is a bilevel quadratic program, which is NP-hard and cannot be solved using standard convex optimization techniques. We introduce benchmark problems to derive upper and lower bounds for the solution of the two-stage problem. We further propose a branch-and-bound algorithm to attain the globally optimal solution, and a heuristic algorithm with low computational complexity to obtain an alternative close-to-optimal solution. We also study the impact of background load prediction error using the theoretical framework of robust optimization. The simulation results demonstrate that our proposed scheme can not only improve power grid reliability but also reduce the energy cost of data centers.
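
    The second stage of the model above, in which data centers respond to the utility's prices, can be sketched as a small linear program. The numbers and the purely linear cost model below are illustrative assumptions; the paper's full two-stage formulation is a bilevel quadratic program.

```python
# Second-stage sketch: given prices, split a total workload across locations to
# minimize energy cost. Capacities, prices, and the linear model are assumed.

import numpy as np
from scipy.optimize import linprog

prices = np.array([0.06, 0.09, 0.07])       # $/kWh at three locations (assumed)
energy_per_job = np.array([1.2, 1.0, 1.1])  # kWh per unit of workload (assumed)
capacity = np.array([500, 400, 300])        # max workload per location
total_workload = 900

res = linprog(
    c=prices * energy_per_job,                     # minimize total energy cost
    A_eq=np.ones((1, 3)), b_eq=[total_workload],   # all workload must be served
    bounds=list(zip([0, 0, 0], capacity)),
    method="highs",
)
print(res.x, res.fun)   # optimal split and its energy cost
```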

    A Minimum-Cost Flow Model for Workload Optimization on Cloud Infrastructure

    Recent technology advancements in compute, storage, and networking, along with the increasing pressure on organizations to cut costs while remaining responsive to growing service demands, have led to the growth in the adoption of cloud computing services. Cloud services promise improved agility, resiliency, scalability, and a lowered Total Cost of Ownership (TCO). This research introduces a framework for minimizing cost and maximizing resource utilization by using an Integer Linear Programming (ILP) approach to optimize the assignment of workloads to servers on Amazon Web Services (AWS) cloud infrastructure. The model is based on the classical minimum-cost flow model, known as the assignment model.
    Comment: 2017 IEEE 10th International Conference on Cloud Computing
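
    A minimal sketch of the underlying assignment model, using SciPy's Hungarian-algorithm solver: each workload is matched to the candidate instance that minimizes total cost. The cost matrix is made up for illustration; the paper derives its costs from AWS pricing and utilization data.

```python
# Assignment-model sketch: min-cost one-to-one matching of workloads to
# instances. The cost entries are illustrative assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

# rows = workloads, cols = candidate AWS instances, entries = $/hour (assumed)
cost = np.array([
    [0.096, 0.192, 0.384],
    [0.085, 0.170, 0.340],
    [0.120, 0.240, 0.480],
])

rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
print(list(zip(rows, cols)), cost[rows, cols].sum())
```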

    Open-Source Simulators for Cloud Computing: Comparative Study and Challenging Issues

    Resource scheduling in infrastructure as a service (IaaS) is one of the key problems for large-scale Cloud applications. Extensive research on all of these issues in a real environment is extremely difficult because it requires developers to consider the network infrastructure and environment, which may be beyond their control. In addition, network conditions cannot be controlled or predicted. Evaluating workload models and Cloud provisioning algorithms in a repeatable manner under different configurations is also difficult. Simulators have therefore been developed. To better understand, apply, and improve the state of the art in cloud computing simulators, we study four well-known open-source simulators. They are compared in terms of architecture, modeling elements, simulation process, performance metrics, and scalability of performance. Finally, a few challenging issues are outlined as future research trends.
    Comment: 15 pages, 11 figures, accepted for publication in Simulation Modelling Practice and Theory

    An NBDMMM Algorithm Based Framework for Allocation of Resources in Cloud

    Cloud computing is a technological advancement in the arena of computing that has taken the utility vision of computing a step further by providing computing resources such as network, storage, compute capacity, and servers as a service via an Internet connection. These services are provided in a pay-per-use manner, charged according to the amount of resources the cloud users consume. Since these resources are used elastically, on-demand provisioning is the driving force behind the entire cloud computing infrastructure, and maintaining these resources is therefore a decisive task; infrastructure-level performance monitoring and enhancement are equally important. This paper proposes a framework for allocation of resources in a cloud-based environment, leading to infrastructure-level performance enhancement. The framework is divided into four stages. Stage 1: the cloud service provider monitors the infrastructure-level pattern of resource usage and the behavior of cloud users. Stage 2: the monitoring results on usage are reported to the cloud service provider. Stage 3: the proposed Network Bandwidth Dependent DMMM (NBDMMM) algorithm is applied. Stage 4: resources are allocated or services are provided to cloud users, leading to infrastructure-level performance enhancement and efficient management of resources. Analysis of resource usage patterns is an important factor for proper allocation of resources by service providers; in this paper, a Google cluster trace is used to assess resource usage patterns in the cloud. Experiments have been conducted on the CloudSim simulation framework, and the results reveal that the NBDMMM algorithm improves the allocation of resources in a virtualized cloud.
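
    The abstract does not spell out the NBDMMM algorithm itself, so the sketch below is only a generic stand-in for Stage 3: a bandwidth-weighted, max-min-style allocation via progressive filling. The weights, demands, and the allocation rule are all assumptions made for illustration.

```python
# Generic bandwidth-weighted max-min allocation (a stand-in, not NBDMMM itself):
# repeatedly hand out remaining capacity to unsatisfied users in proportion to
# their network-bandwidth-derived weights.

def weighted_max_min(capacity, demands, bandwidth_weights):
    """Progressive filling until capacity is exhausted or all demands are met."""
    alloc = {u: 0.0 for u in demands}
    remaining = capacity
    active = set(demands)
    while remaining > 1e-9 and active:
        total_w = sum(bandwidth_weights[u] for u in active)
        for u in list(active):
            share = remaining * bandwidth_weights[u] / total_w
            alloc[u] += min(share, demands[u] - alloc[u])
        remaining = capacity - sum(alloc.values())
        active = {u for u in active if demands[u] - alloc[u] > 1e-9}
    return alloc

print(weighted_max_min(100, {"u1": 70, "u2": 40, "u3": 20},
                       {"u1": 1.0, "u2": 2.0, "u3": 1.0}))
```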

    Achieving Energy Efficiency in Cloud Brokering

    The proliferation of cloud providers has brought substantial interoperability complexity to the public cloud market, in which cloud brokering plays an important role. However, energy-related issues for public clouds have not been well addressed in the literature. In this paper, we argue that the broker is also in a perfect position to take the actions needed to achieve energy efficiency for public cloud systems, particularly through job assignment and scheduling. We formulate the problem as a mixed integer program and prove its NP-hardness. Based on the complexity analysis, we simplify the problem by introducing admission control on jobs. With admission control, optimal job assignment can be done straightforwardly, and the problem is transformed into improving the job admission rate by scheduling two coupled phases: data transfer and job execution. The two scheduling phases are further decoupled, and we develop an efficient scheduling algorithm for each of them. Experimental results show that the proposed solution achieves significant reductions in energy consumption while also improving admission rates, even in large-scale public cloud systems.
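
    The admission-control idea above can be sketched with a simple earliest-deadline-first feasibility check over the two coupled phases (data transfer, then execution). The single-link, single-provider bookkeeping below is an illustrative simplification, not the paper's decoupled scheduling algorithms.

```python
# Simplified broker-side admission control: admit a job only if its transfer
# and execution phases both finish before its deadline. Earliest-deadline-first
# ordering and the single link/CPU are illustrative assumptions.

def admit(jobs, link_free=0.0, cpu_free=0.0):
    """jobs: list of (transfer_time, exec_time, deadline); returns admitted jobs."""
    admitted = []
    for t_xfer, t_exec, deadline in sorted(jobs, key=lambda j: j[2]):  # EDF order
        xfer_end = link_free + t_xfer                  # transfer phase on the link
        exec_end = max(cpu_free, xfer_end) + t_exec    # execution starts after transfer
        if exec_end <= deadline:
            admitted.append((t_xfer, t_exec, deadline))
            link_free, cpu_free = xfer_end, exec_end   # commit the schedule
    return admitted

print(admit([(2, 5, 10), (1, 3, 6), (4, 4, 9)]))
```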

    Online Learning for Offloading and Autoscaling in Energy Harvesting Mobile Edge Computing

    Mobile edge computing (a.k.a. fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in energy harvesting mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run-time performance when compared to standard reinforcement learning algorithms such as Q-learning. We prove the convergence of the proposed algorithm and analytically show that the learned policy has a simple monotone structure amenable to practical implementation. Our simulation results validate the efficacy of our algorithm, which significantly improves the edge computing performance compared to fixed or myopic optimization schemes and conventional reinforcement learning algorithms.
    Comment: arXiv admin note: text overlap with arXiv:1701.01090 by other authors
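
    For contrast with the paper's decomposition-based learner, the sketch below shows the kind of standard tabular Q-learning baseline it is compared against, on a toy offload/provision environment. The state encoding, action set, and cost model are all assumptions made for illustration.

```python
# Standard tabular Q-learning on a toy offload/provision task (the conventional
# baseline, not the paper's decomposed algorithm). Environment is assumed.

import random

ACTIONS = [(offload, servers) for offload in (0, 1) for servers in (1, 2, 3)]
Q = {}          # (battery_level, workload_level) -> action-value estimates

def step(state, action):
    """Toy environment: cost grows with delay (few servers, no offload) and
    with energy drawn beyond the harvested budget."""
    battery, load = state
    offload, servers = action
    delay_cost = load / (servers + 3 * offload)
    energy_cost = max(0, servers - battery)
    next_state = (random.randint(0, 3), random.randint(1, 4))
    return next_state, -(delay_cost + energy_cost)

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    state = (2, 2)
    for _ in range(episodes):
        qs = Q.setdefault(state, {a: 0.0 for a in ACTIONS})
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(qs, key=qs.get))
        nxt, reward = step(state, action)
        nq = Q.setdefault(nxt, {a: 0.0 for a in ACTIONS})
        qs[action] += alpha * (reward + gamma * max(nq.values()) - qs[action])
        state = nxt

q_learning()
print(max(Q[(2, 2)], key=Q[(2, 2)].get))   # greedy action in one example state
```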

    Disaggregation for Improved Efficiency in Fog Computing Era

    This paper evaluates the impact of using disaggregated servers in the near-edge of telecom networks (metro central offices, radio cell sites, and enterprise branch offices, which form part of a Fog-as-a-Service system) to minimize the number of fog nodes required in the far-edge of telecom networks. We formulate a mixed integer linear programming (MILP) model to this end. Our results show that replacing traditional servers with disaggregated servers in the near-edge of the telecom network can reduce the number of far-edge fog nodes required by up to 50% if access to near-edge computing resources is not limited by network bottlenecks. This improved efficiency comes at the cost of a higher average hop count between workload sources and processing locations and marginal increases in overall metro and access network traffic and power consumption.
    Comment: Conference
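
    A toy version of the placement trade-off above can be written as a tiny MILP: each workload is served either at the capacity- and bandwidth-limited near-edge or at far-edge fog nodes, and the objective is to minimize the number of far-edge nodes. All capacities, demands, and the formulation below are illustrative assumptions; the paper's MILP is far richer.

```python
# Toy MILP: x_i = 1 if workload i is served at the near-edge, n = number of
# far-edge fog nodes needed to cover the rest. All figures are assumed.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

demand = np.array([4, 6, 3, 5])      # compute units per workload (assumed)
bw     = np.array([2, 1, 3, 2])      # network units per workload (assumed)
near_cap, link_cap, node_cap = 10, 5, 4

m = len(demand)
c = np.r_[np.zeros(m), 1.0]          # minimize n, the number of far-edge nodes

A = np.vstack([
    np.r_[demand, 0],                # near-edge compute capacity
    np.r_[bw, 0],                    # access-network bandwidth to the near-edge
    np.r_[demand, node_cap],         # node_cap * n must cover demand left at the far edge
])
cons = LinearConstraint(A,
                        lb=[-np.inf, -np.inf, demand.sum()],
                        ub=[near_cap, link_cap, np.inf])

res = milp(c, constraints=cons,
           integrality=np.ones(m + 1),
           bounds=Bounds(np.zeros(m + 1), np.r_[np.ones(m), np.inf]))
print(res.x, "far-edge nodes:", int(res.x[-1]))
```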