
    Fog-supported delay-constrained energy-saving live migration of VMs over multiPath TCP/IP 5G connections

    The incoming era of fifth-generation fog computing-supported radio access networks (shortly, 5G FOGRANs) aims at exploiting computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of the multiple wireless network interface cards of the wireless devices may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper, we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity is settable on-line on the basis of the target energy consumption versus implementation complexity tradeoff; 2) it minimizes the network energy consumed by the wireless device for sustaining the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it is capable of quickly reacting to (possibly unpredicted) fading and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy versus delay performance comparisons that cover: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
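    The constrained minimum-energy allocation the SCBM targets can be illustrated with a toy greedy splitter: given per-subflow capacities and per-megabyte energy costs, send the VM image over the cheapest subflows first, subject to a hard migration deadline. The linear cost model, the field names, and the numbers below are illustrative assumptions, not the paper's actual SCBM algorithm.

```python
def allocate_bandwidth(paths, data_mb, deadline_s):
    """Greedy energy-minimising split of a VM image across MPTCP subflows.

    paths: list of dicts with 'rate_mbps' (capacity) and 'joules_per_mb'
           (device energy cost per megabyte sent on that subflow).
    Returns per-path megabytes to send, or None if the deadline is infeasible.
    Illustrative sketch only; the SCBM solves a richer adaptive problem.
    """
    # Visit subflows from cheapest to most expensive energy cost.
    order = sorted(range(len(paths)), key=lambda i: paths[i]['joules_per_mb'])
    alloc = [0.0] * len(paths)
    remaining = data_mb
    for i in order:
        # Megabytes this subflow can move before the migration deadline.
        cap_mb = paths[i]['rate_mbps'] / 8.0 * deadline_s
        take = min(remaining, cap_mb)
        alloc[i] = take
        remaining -= take
        if remaining <= 1e-9:
            return alloc
    return None  # even using every subflow, the deadline cannot be met
```

    For this linear cost model the greedy order is optimal; the paper's SCBM additionally adapts on-line to fading and mobility, which this sketch ignores.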

    MobiThin management framework: design and evaluation

    In thin client computing, applications are executed on centralized servers. User input (e.g. keystrokes) is sent to a remote server, which processes the event and sends the audiovisual output back to the client. This enables execution of complex applications from thin devices. Adopting virtualization technologies on the thin client server brings several advantages, e.g. dedicated environments for each user and interesting facilities such as migration tools. In this paper, a mobile thin client service offered to a large number of mobile users is designed. Pervasive mobile thin client computing requires intelligent service management to guarantee a high-quality user experience. Due to the dynamic environment, the service management framework has to monitor the environment and intervene when necessary (e.g. adapt thin client protocol settings, move a session from one server to another). A detailed performance analysis of the implemented prototype is presented. It is shown that the prototype can handle up to 700 requests/s to start the mobile thin client service, and can make a decision for up to 700 monitor reports per second.
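    The monitor-and-intervene loop described above can be reduced to a per-report decision function. The sketch below uses simple threshold triggers; the field names, thresholds, and action labels are assumptions for illustration, not the MobiThin framework's actual decision logic.

```python
def decide(report, cpu_limit=0.85, rtt_limit_ms=120):
    """Toy intervention policy applied to one monitor report.

    report: {'server_cpu': load in 0..1, 'rtt_ms': client round-trip time}
    Returns 'migrate_session', 'tune_protocol', or 'ok'.
    Thresholds are illustrative assumptions, not MobiThin's values.
    """
    if report['server_cpu'] > cpu_limit:
        return 'migrate_session'   # overloaded server: move the session elsewhere
    if report['rtt_ms'] > rtt_limit_ms:
        return 'tune_protocol'     # degraded link: e.g. adapt protocol settings
    return 'ok'
```

    At 700 reports per second, each decision must complete well under 1.5 ms, which is why the prototype's decision path has to stay this cheap.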

    On the Energy Efficiency of Virtual Machines’ Live Migration in Future Cloud Mobile Broadband Networks

    In this chapter, a power consumption (PC) model for live migration of virtual machines (VMs) is introduced. The model offers a simple, parameterised method to evaluate the power cost of migrating VMs from one server to another. This work differs from other research works found in the literature: it is not based on software, utilisation ratios, or heuristic algorithms. Rather, it is based on converting and generalising the concepts of the live migration process and experimental results from other works that rely on the aforementioned tools. The resulting model converts the power cost of live migration from a function of utilisation ratio to a function of server PC. This means there is no need for additional hardware, separate software, or heuristics-based algorithms to measure utilisation, and the model provides simple, on-the-fly, and accurate PC evaluation. Furthermore, the latency cost of the live migration process, including the time it takes the VM to be completely transferred to the target server, alongside the link distance/delay between the two servers, is discussed.
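    The chapter's central idea, expressing migration cost as a function of server PC rather than of utilisation, can be sketched as a small estimator. The linear form, the `overhead` coefficient, and the decimal unit conversion are illustrative assumptions, not the chapter's actual model.

```python
def migration_cost(vm_gb, link_mbps, p_src_w, p_dst_w,
                   overhead=0.1, dist_delay_s=0.0):
    """Estimate live-migration latency and energy from server power draw.

    vm_gb      : VM image size in gigabytes (decimal: 1 GB = 8000 Mb)
    link_mbps  : migration link bandwidth
    p_src_w/p_dst_w : measured PC of source/destination servers, in watts
    overhead   : assumed fraction of each server's PC attributable to
                 the migration (illustrative, not the chapter's value)
    Returns (latency_s, energy_j).
    """
    # Transfer time plus link distance/delay between the two servers.
    latency = vm_gb * 8000.0 / link_mbps + dist_delay_s
    # Energy as a function of server PC rather than utilisation ratio.
    energy = overhead * (p_src_w + p_dst_w) * latency
    return latency, energy
```

    The appeal of a PC-based formulation is visible here: both inputs (`p_src_w`, `p_dst_w`) come straight from server power readings, with no utilisation probe in the loop.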

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers’ carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers can have to adopt such measures, and propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.

    Neural Network Prediction based Dynamic Resource Scheduling for Cloud System

    Cloud computing is known as an internet-based model for providing shared, on-demand access to resources (CPU, memory, processor, etc.). It acts as a dynamic service provider using very large, scalable, and virtualized resources over the Internet. With the help of cloud computing and virtualization technology, a large number of online services can run over virtual machines (VMs), which in turn reduces the number of physical servers. However, dynamically managing the resource demand of these virtual machines under changing workloads, while maintaining the service level agreement (SLA), is a challenging task for the cloud provider. Dynamic resource scheduling is a way to manage the resource demand of virtual machines so that variable workloads can be handled without SLA violations. In this paper, we introduce a neural-network-based prediction strategy to enable elastic scaling of resources for cloud systems. Unlike traditional static approaches, which do not take VM workload variability into account, and dynamic approaches, which sometimes underestimate or overestimate resource demand, we take both the workload fluctuations of VMs and the prediction-estimation problem into account. The neural prediction strategy first predicts the VM resource demand using an Artificial Neural Network (ANN) model, to achieve resource allocation for cloud applications on each VM. Once the prediction is done, we then apply dynamic resource scheduling to consolidate the virtual machines with adaptive resource allocation, to reduce the number of active physical servers while satisfying the SLA.
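    The predict-then-schedule idea can be sketched with a minimal time-series predictor: a single linear neuron trained by gradient descent over a sliding window of past demand stands in for the paper's ANN. The window size, learning rate, and normalised CPU-demand series are illustrative assumptions, not the paper's configuration.

```python
def train_predictor(series, window=3, lr=0.01, epochs=500):
    """Fit a one-neuron linear predictor of the next resource demand from
    the last `window` observations (a stand-in for the paper's ANN).

    series: normalised demand samples in [0, 1], e.g. CPU utilisation.
    Returns the model as (weights, bias).
    """
    w = [0.0] * window
    b = 0.0
    # Build (input window, next value) training pairs from the series.
    samples = [(series[i:i + window], series[i + window])
               for i in range(len(series) - window)]
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Stochastic gradient step on squared error.
            for j in range(window):
                w[j] -= lr * err * x[j]
            b -= lr * err
    return w, b

def predict(model, recent):
    """Predict the next demand sample from the most recent `window` samples."""
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, recent)) + b
```

    In the full scheme, the predicted demand per VM would then drive the consolidation step, packing VMs onto fewer active physical servers while keeping each predicted allocation within its SLA.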