
    Adaptive live VM migration over a WAN: modeling and implementation

    Recent advances in virtualization technology enable high mobility of virtual machines and resource provisioning at the data-center level. To streamline the migration process, various migration strategies have been proposed for VM live migration over a local-area network (LAN). The most common solution uses memory pre-copying and assumes that storage is shared on the LAN. When applied to a wide-area network (WAN), VM live migration algorithms need a new design philosophy to address the challenges of long latency, limited bandwidth, unstable network conditions and the movement of storage. This paper proposes a three-phase fractional hybrid pre-copy and post-copy solution for both memory and storage to achieve highly adaptive migration over a WAN. In this hybrid solution, we selectively migrate an important fraction of memory and storage in the pre-copy and freeze-and-copy phases, while the rest (the non-critical data set) is migrated during post-copying. We propose a new metric called performance restoration agility, which considers both the downtime and the VM speed degradation during the post-copy phase, to evaluate the migration process. We also develop a profiling framework and a novel probabilistic prediction model to adaptively find a predictably optimal combination of the memory and storage fractions to migrate. This model-based hybrid solution is implemented on Xen and evaluated in an emulated WAN environment. Experimental results show that our solution outperforms the alternatives in adaptiveness for various applications over a WAN, while retaining the responsiveness of post-copy algorithms.
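
    As a rough illustration of the fraction-selection idea described above, the sketch below enumerates candidate pre-copy fractions for memory and storage and keeps the pair with the best predicted score. The scoring function and the predict_downtime/predict_degradation callables are hypothetical placeholders standing in for the paper's profiling framework and probabilistic model, not its actual implementation.

        import itertools

        def restoration_agility(downtime, degradation, alpha=0.5):
            # Hypothetical scalarization of downtime and post-copy slowdown;
            # lower is better. The paper's actual metric may be defined differently.
            return alpha * downtime + (1 - alpha) * degradation

        def choose_fractions(predict_downtime, predict_degradation, step=0.1):
            """Enumerate pre-copy fractions for memory and storage and return
            the pair with the lowest predicted restoration-agility score."""
            grid = [i * step for i in range(int(round(1 / step)) + 1)]
            best, best_score = None, float("inf")
            for mem_frac, disk_frac in itertools.product(grid, grid):
                score = restoration_agility(
                    predict_downtime(mem_frac, disk_frac),
                    predict_degradation(mem_frac, disk_frac),
                )
                if score < best_score:
                    best, best_score = (mem_frac, disk_frac), score
            return best, best_score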

    Network Aware VM Migration using Community Recognition

    Cloud computing is a widely discussed paradigm in industry through which resources can be obtained on demand, much like electricity, with the required software and information also provided as a service. It offers large computing power at low cost and removes the hassle of storing and maintaining servers locally. It is commonly divided into three service models, i.e. Software as a Service, Platform as a Service and Infrastructure as a Service, which deliver services to end users efficiently. VM placement belongs to the Infrastructure as a Service model. Every application needs a certain amount of computing power, memory, storage, network bandwidth and power to function; this bundle of resources is abstracted as a Virtual Machine and provided by the data centers. Virtual machine migration is the method of transferring VMs to physical machines in such a way that energy, network bandwidth and other resources are used efficiently. I propose a new network-aware VM migration scheme using community recognition, which identifies the candidates for migration while taking into account other factors such as energy and migration criteria.
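
    The abstract does not spell out the community-recognition algorithm, so the following is only a hedged sketch of how communities in a VM communication graph might be used to pick migration candidates. It assumes networkx's greedy modularity community detection, a traffic matrix and a placement map; none of these names come from the thesis itself.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def migration_candidates(traffic, placement):
            """traffic: {(vm_a, vm_b): bytes exchanged}; placement: {vm: host}.
            Returns VMs whose detected community mostly resides on another host,
            i.e. VMs that would reduce cross-host traffic if migrated."""
            g = nx.Graph()
            for (a, b), volume in traffic.items():
                g.add_edge(a, b, weight=volume)
            candidates = []
            for community in greedy_modularity_communities(g, weight="weight"):
                hosts = [placement[vm] for vm in community]
                majority_host = max(set(hosts), key=hosts.count)
                # VMs placed away from their community are candidates to move.
                candidates += [vm for vm in community if placement[vm] != majority_host]
            return candidates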

    Fog-supported delay-constrained energy-saving live migration of VMs over multiPath TCP/IP 5G connections

    The incoming era of fifth-generation fog computing-supported radio access networks (shortly, 5G FOGRANs) aims at exploiting computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of the multiple wireless network interface cards of the wireless devices may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity is settable on-line on the basis of the target energy consumption versus implementation complexity tradeoff; 2) it minimizes the network energy consumed by the wireless device for sustaining the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it is capable of quickly reacting to (possibly unpredicted) fading and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy versus delay performance comparisons that cover: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
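
    As a toy illustration of energy-aware bandwidth allocation across aggregated wireless paths, the greedy sketch below fills the most energy-efficient paths first while respecting a migration deadline. It is not the paper's SCBM; the linear energy model, the path tuples and the deadline check are assumptions made for the example.

        def allocate_bandwidth(paths, volume, deadline):
            """paths: list of (name, bandwidth_bytes_per_s, power_watts);
            volume: bytes to migrate; deadline: seconds allowed.
            Greedily assigns traffic to paths with the lowest energy per byte.
            Returns {name: bytes} or None if the deadline is infeasible."""
            # Energy per byte on a path drawing `power` watts at `bandwidth` bytes/s.
            by_efficiency = sorted(paths, key=lambda p: p[2] / p[1])
            remaining, plan = volume, {}
            for name, bandwidth, _power in by_efficiency:
                if remaining <= 0:
                    break
                share = min(remaining, bandwidth * deadline)  # bytes this path can carry in time
                plan[name] = share
                remaining -= share
            return plan if remaining <= 0 else None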

    Green Cloud - Load Balancing, Load Consolidation using VM Migration

    Cloud computing is a recent trend in computer technology with massive demand from clients. To meet these requirements, many cloud data centers have been constructed since 2008, when Amazon published its cloud service. The rapid growth of data centers leads to the consumption of a tremendous amount of energy; even though cloud computing has improved in performance and energy efficiency, cloud data centers still consume an immense amount of energy. To raise their annual income, cloud providers have started considering green cloud concepts, which aim to optimize CPU usage while guaranteeing quality of service. Many cloud providers pay attention to both load balancing and load consolidation, which are two significant components of a cloud data center. Load balancing is a vital part of managing incoming demand and improving the cloud system's performance, and live virtual machine migration is a technique to perform dynamic load balancing. To optimize the cloud data center, three issues are considered. First, how does the cloud cluster distribute the virtual machine (VM) requests from clients to all physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: the rapidly rising CPU usage of a PM due to a sudden massive workload, which requires immediate VM migration, and resource expansion to respond to substantial growth in VM requests? In this chapter, we provide an approach to these issues, with implementation and results. The results indicate that the performance of the cloud cluster is improved significantly. Load consolidation is the reverse process of load balancing, which aims to provide just enough cloud servers to handle the client requests. Based on live VM migration, the cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to a power-saving mode to reduce energy consumption. This chapter provides a solution to load consolidation, including implementation and simulation of cloud servers.
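
    To make the balancing step concrete, here is a minimal sketch of a greedy heuristic that repeatedly moves a small VM from the busiest physical machine to the least busy one until their CPU utilizations are nearly equal. The data structures and the threshold are illustrative assumptions, not the chapter's actual algorithm.

        def plan_balancing_migrations(pm_load, vm_cpu, placement, threshold=0.05):
            """pm_load: {pm: CPU utilization in [0, 1]}; vm_cpu: {vm: CPU share};
            placement: {vm: pm}. Returns a list of (vm, source_pm, target_pm)."""
            load, place = dict(pm_load), dict(placement)
            migrations = []
            for _ in range(len(place)):                # bound the number of moves
                busiest = max(load, key=load.get)
                idlest = min(load, key=load.get)
                gap = load[busiest] - load[idlest]
                if gap <= threshold:
                    break
                # Smallest VM on the busiest PM whose move still narrows the gap.
                movable = [vm for vm, pm in place.items()
                           if pm == busiest and vm_cpu[vm] < gap]
                if not movable:
                    break
                vm = min(movable, key=vm_cpu.get)
                migrations.append((vm, busiest, idlest))
                place[vm] = idlest
                load[busiest] -= vm_cpu[vm]
                load[idlest] += vm_cpu[vm]
            return migrations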

    Interoperability in the Heterogeneous Cloud Environment: A Survey of Recent User-centric Approaches

    Cloud computing provides users the ability to access shared, online computing resources. However, providers often offer their own proprietary applications, interfaces, APIs and infrastructures, resulting in a heterogeneous cloud environment. This heterogeneous environment makes it difficult for users to change cloud service providers; exploring capabilities to support automated migration from one provider to another is an active, open research area. Many standards bodies (IEEE, NIST, DMTF and SNIA), industry (middleware) and academia have been pursuing approaches to reduce the impact of vendor lock-in by investigating the cloud migration problem at the level of the VM. However, migration downtime, decoupling the VM from underlying systems and the security of live migration channels remain open issues. This paper focuses on analysing recently proposed live cloud migration approaches for VMs at the infrastructure level of the cloud architecture. The analysis reveals issues with the flexibility, performance and security of these approaches, including additional load on the CPU and disk I/O drivers of the physical machine where the VM initially resides. The next steps of this research are to develop and evaluate a new approach, LibZam (Libya Zamzem), that works towards addressing the identified limitations.

    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is large-scale distributed computing that provides on-demand services for clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere over the network. As many companies shift their data to the cloud and more people become aware of the advantages of storing data there, the growing number of cloud computing infrastructures and the large amount of data lead to management complexity for cloud providers. We survey the state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing. We then put forward the major issues in the deployment of cloud infrastructure that must be addressed to avoid poor service delivery in cloud computing.

    ON OPTIMIZATIONS OF VIRTUAL MACHINE LIVE STORAGE MIGRATION FOR THE CLOUD

    Virtual Machine (VM) live storage migration is widely performed in the data centers of the Cloud, for the purposes of load balance, reliability, availability, hardware maintenance and system upgrade. It entails moving all the state information of the VM being migrated, including memory state, network state and storage state, from one physical server to another within the same data center or across different data centers. To minimize its performance impact, this migration process is required to be transparent to applications running within the migrating VM, meaning that applications will keep running inside the VM as if there were no migration operations at all. In this dissertation, a thorough literature review is conducted to provide a big picture of the VM live storage migration process, its problems and existing solutions. After an in-depth examination, we observe that severe IO interference between the VM IO threads and migration IO threads exists and causes both types of IO threads to suffer performance degradation. This interference stems from the fact that both types of IO threads share the same critical IO path by reading from and writing to the same shared storage system. Owing to IO resource contention and interference between the two different types of IO requests, not only does the IO request queue lengthen in the storage system, but the time-consuming disk seek operations also become more frequent. Based on this fundamental observation, this dissertation presents three related but orthogonal solutions that tackle the IO interference problem in order to improve VM live storage migration performance. First, we introduce the Workload-Aware IO Outsourcing scheme, called WAIO, to improve the VM live storage migration efficiency. Second, we address this problem by proposing a novel scheme, called SnapMig, to improve the VM live storage migration efficiency and eliminate its performance impact on user applications at the source server by effectively leveraging the existing VM snapshots in the backup servers. Third, we propose the IOFollow scheme to improve both the VM performance and migration performance simultaneously. Finally, we outline directions for future research. Advisor: Hong Jian
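
    The observation above suggests keeping the VM's own IO off the shared critical path while migration traffic is using it. The sketch below is only a conceptual illustration of that idea (writes redirected to a surrogate log and replayed later); it is not the WAIO, SnapMig or IOFollow implementation, and the shared_volume/surrogate_log/destination objects and their write/append methods are assumed interfaces.

        import queue

        class OutsourcedIO:
            """Redirect VM writes to a surrogate log during migration so they do
            not contend with migration reads on the shared volume; replay the log
            onto the destination once the migration completes."""

            def __init__(self, shared_volume, surrogate_log):
                self.shared_volume = shared_volume    # device the migration threads read
                self.surrogate_log = surrogate_log    # separate device absorbing VM writes
                self.migrating = False
                self.pending = queue.Queue()

            def vm_write(self, block, data):
                if self.migrating:
                    self.surrogate_log.append(block, data)   # off the critical IO path
                    self.pending.put((block, data))
                else:
                    self.shared_volume.write(block, data)

            def finish_migration(self, destination):
                # Replay outsourced writes so the destination image becomes consistent.
                while not self.pending.empty():
                    block, data = self.pending.get()
                    destination.write(block, data)
                self.migrating = False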