
    Dynamic resource scheduling in cloud radio access network with mobile cloud computing

    Nowadays, by integrating the cloud radio access network (C-RAN) with mobile cloud computing (MCC) technology, a mobile service provider (MSP) can efficiently handle increasing mobile traffic and enhance the capabilities of mobile users' devices to provide better quality of service (QoS). However, power consumption has been skyrocketing and gravely affects the MSP's profit. Previous work often studied power consumption in C-RAN and MCC separately, while little work has considered their integration. In this paper, we present a unifying framework for optimizing the power-performance tradeoff of the MSP by jointly scheduling network resources in C-RAN and computation resources in MCC, minimizing the MSP's power consumption while still guaranteeing QoS for mobile users. Our objective is to maximize the MSP's profit. To achieve this objective, we first formulate the resource scheduling issue as a stochastic problem and then propose a Resource onlIne sCHeduling (RICH) algorithm using the Lyapunov optimization technique to approach a time-average profit within a diminishing gap (1/V) of the optimum, while still maintaining strong system stability and low congestion to guarantee QoS for mobile users. With extensive simulations, we demonstrate that the profit of the RICH algorithm is 3.3× (18.4×) higher than that of the active (random) algorithm.
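    Several abstracts in this collection invoke the Lyapunov drift-plus-penalty method with a 1/V optimality gap. As a rough illustration of the generic per-slot rule (a hedged sketch, not the RICH algorithm itself; the function and its option list are hypothetical), each slot minimizes V·power − Q·service over the feasible actions and then updates the backlog queue:

```python
def drift_plus_penalty_step(Q, arrivals, options, V):
    """One slot of the generic drift-plus-penalty method: pick the
    (power, service_rate) pair minimizing V*power - Q*service_rate,
    then update the backlog queue. Larger V favors low power at the
    cost of a larger queue backlog (the 1/V tradeoff)."""
    power, rate = min(options, key=lambda pr: V * pr[0] - Q * pr[1])
    Q_next = max(Q - rate, 0) + arrivals  # standard queue dynamics
    return Q_next, power, rate
```

    With an empty queue the zero-power action wins; once the backlog grows large relative to V, the rule switches to faster, more power-hungry service, which is what keeps the queues stable.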

    Carbon-Aware Load Balancing for Geo-distributed Cloud Services


    MultiGreen: Cost-Minimizing Multi-source Datacenter Power Supply with Online Control

    Session 4: Data Center Energy Management. Full text of the conference paper: http://conferences.sigcomm.org/eenergy/2013/papers/p13.pdf
    Faced with soaring power costs, a large carbon-emission footprint, and unpredictable power outages, more and more modern Cloud Service Providers (CSPs) are mitigating these challenges by equipping their Datacenter Power Supply System (DPSS) with multiple sources: (1) the smart grid with time-varying electricity prices, (2) an uninterruptible power supply (UPS) of finite capacity, and (3) intermittent green or renewable energy. It remains a significant challenge to operate multiple power supply sources in a complementary manner, delivering reliable energy to datacenter users over time while minimizing a CSP's operational cost over the long run. This paper proposes an efficient online control algorithm for DPSS, called MultiGreen. MultiGreen is based on an innovative two-timescale Lyapunov optimization technique. Without requiring a priori knowledge of system statistics, MultiGreen allows CSPs to make online decisions on purchasing grid energy at two time scales (in the long-term market and in the real-time market), leveraging renewable energy, and opportunistically charging and discharging the UPS, in order to fully exploit the available green energy and low electricity prices for minimum operational cost. Our detailed analysis and trace-driven simulations based on one month of real-world data demonstrate the optimality (in terms of the tradeoff between minimization of DPSS operational cost and satisfaction of datacenter availability) and stability (performance guarantee under fluctuating energy demand and supply) of MultiGreen.
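    A heavily simplified, single-timescale sketch of the kind of per-slot decision MultiGreen automates (the threshold rule and all names here are illustrative assumptions, not the paper's two-timescale Lyapunov control):

```python
def dpss_step(demand, green, price, soc, cap, thresh):
    """One control slot for a multi-source power supply: serve demand from
    green energy first; when the grid price exceeds thresh, discharge the
    UPS; otherwise buy from the grid and recharge the UPS to capacity.
    Returns (grid energy purchased, new UPS state of charge)."""
    residual = max(demand - green, 0.0)   # demand not covered by green energy
    if price > thresh:                    # expensive slot: drain the UPS
        discharge = min(soc, residual)
        return residual - discharge, soc - discharge
    # cheap slot: buy the residual plus enough to top up the UPS
    return residual + (cap - soc), cap
```

    The actual algorithm replaces the fixed threshold with queue-derived quantities, which is what lets it guarantee cost close to the offline optimum without price forecasts.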

    Profit-aware distributed online scheduling for data-oriented tasks in cloud datacenters

    As there is an increasing trend to deploy geographically distributed (geo-distributed) cloud datacenters (DCs), the scheduling of data-oriented tasks in such cloud DC systems becomes an appealing research topic. Specifically, it is challenging to achieve distributed online scheduling that handles the tasks' acceptance, data transfers, and processing jointly and efficiently. In this paper, by considering store-and-forward and anycast schemes, we formulate an optimization problem to maximize the time-average profit from serving data-oriented tasks in a cloud DC system and then leverage Lyapunov optimization techniques to propose an efficient scheduling algorithm, i.e., GlobalAny. We also extend the proposed algorithm by designing a data-transfer acceleration scheme to reduce data-transfer latency. Extensive simulations verify that our algorithms can maximize the time-average profit in a distributed online manner. The results also indicate that GlobalAny and GlobalAnyExt (i.e., GlobalAny with data-transfer acceleration) outperform several existing algorithms in terms of both time-average profit and computation time.
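    The anycast idea, accepting a task only if some datacenter can serve it at positive profit, can be sketched greedily (the field names are hypothetical, and GlobalAny's actual rule comes from Lyapunov optimization rather than this one-shot heuristic):

```python
def anycast_admit(revenue, datacenters):
    """Greedy admission plus anycast routing: send the task to the
    datacenter maximizing revenue minus transfer and compute cost;
    reject it if no datacenter yields positive profit."""
    def profit(dc):
        return revenue - dc["transfer_cost"] - dc["compute_cost"]
    best = max(datacenters, key=profit)
    return (best["name"], profit(best)) if profit(best) > 0 else (None, 0.0)
```

    In the paper's online setting the costs would additionally depend on current queue backlogs, so an overloaded datacenter looks expensive even if its nominal costs are low.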

    Energy Saving in QoS Fog-supported Data Centers

    One of the most important challenges that cloud providers face amid the explosive growth of data is reducing the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the infrastructure-as-a-service (IaaS) model through resource virtualization, i.e., virtual machine and physical machine consolidation. However, current virtualized data centers do not support communication- and computing-intensive real-time applications such as big data stream computing (info-mobility applications, real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. Recently, Fog Computing centers have emerged as promising commodities in the Internet virtual computing platform, but they raise energy consumption into a critical issue for such platforms. It is therefore desirable to devise green solutions (i.e., energy-aware provisioning) that support fog-assisted delay-sensitive web applications, and to use traffic-engineering methods that dynamically adjust the number of active servers to match the current workload. Hence, a flexible, reliable technological paradigm and resource allocation algorithms are needed that account for the consumed energy, automatically adapt to time-varying workloads through joint reconfiguration and orchestration of the virtualized computing-plus-communication resources available at the computing nodes, and allow devices to operate under real-time constraints on the allowed computing-plus-communication delay and service latency.
The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic, adaptive energy-aware algorithm that models and manages the Fog Nodes (FNs) of virtualized networked data centers, so as to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) Fog Computing platform that integrates user applications over the FoE. The emerging use of SaaS Fog Computing centers as an Internet virtual computing commodity supports delay-sensitive applications. The virtualized Fog node operates at the Middleware layer of the underlying protocol stack and comprises: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated on the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection. The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it provides hard QoS guarantees, in terms of minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency.
The actual performance of the proposed scheduler is numerically tested and compared to that of several state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces, in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes in the transport quality of the available TCP/IP mobile connection.
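    The DVFS knob mentioned above trades delay for energy because dynamic CPU power grows superlinearly with frequency. A minimal sketch under a cubic-power assumption (the constant k and the cubic exponent are illustrative assumptions, not the thesis's calibrated model):

```python
def min_energy_frequency(cycles, deadline, freqs, k=1e-27):
    """Pick the lowest DVFS frequency that still meets the per-job deadline.
    With dynamic power ~ k * f**3, energy = power * time = k * f**2 * cycles,
    so the slowest feasible frequency minimizes energy."""
    feasible = [f for f in freqs if cycles / f <= deadline]
    if not feasible:
        return None, None                 # deadline unattainable at any speed
    f = min(feasible)
    return f, k * f**2 * cycles
```

    Because energy grows quadratically in f for a fixed cycle count, running just fast enough to meet the hard per-job delay limit is the energy-optimal static choice.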

    Analysis and optimization of resource control in high-speed railway wireless networks

    This paper considers a joint optimal design of admission control and resource allocation for multimedia service delivery in high-speed railway (HSR) wireless networks. A stochastic network optimization problem is formulated which aims at maximizing the system utility while stabilizing all transmission queues under an average power constraint. By introducing virtual queues, the original problem is equivalently transformed into a queue stability problem, which naturally decomposes into three separate subproblems: utility maximization, admission control, and resource allocation. A threshold-based admission control strategy is proposed for the admission control subproblem, and a distributed resource allocation scheme is developed for the mixed-integer resource allocation subproblem with guaranteed global optimality. Then a dynamic admission control and resource allocation algorithm is proposed, which is suitable for distributed implementation. Finally, the performance of the proposed algorithm is evaluated by theoretical analysis and numerical simulations under realistic conditions of HSR wireless networks.
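    The virtual-queue device used here to enforce the average power constraint has a one-line update: if the virtual queue stays bounded over time, the time-average power automatically satisfies the budget (the names below are illustrative):

```python
def virtual_queue_update(Z, power_used, power_budget):
    """Virtual queue for a time-average constraint: Z grows when a slot
    exceeds the power budget and drains otherwise. If Z remains bounded
    over the run, the time-average of power_used cannot exceed
    power_budget (standard Lyapunov-optimization argument)."""
    return max(Z + power_used - power_budget, 0.0)
```

    The controller then treats Z like a real backlog: a large Z penalizes power-hungry allocations, which is exactly how the constraint folds into the queue-stability problem described above.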