16,759 research outputs found

    An overview of virtual machine live migration techniques

    In cloud computing, live migration of virtual machines is the process of moving a running virtual machine from a source physical machine to a destination machine while transferring its CPU, memory, network, and storage state. Several performance metrics are affected when a virtual machine is migrated, such as downtime, total migration time, performance degradation, and the amount of migrated data. This paper presents an overview of virtual machine live migration techniques and of the different works in the literature that address this issue, which may help professionals and researchers to further explore the challenges and provide optimal solutions.
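
    To make these metrics concrete, the sketch below uses a simple analytical model of pre-copy migration in which each round retransmits the memory dirtied during the previous round; the model, its parameters (memory size, page dirty rate, link bandwidth), and the stopping threshold are illustrative assumptions rather than anything taken from the surveyed papers.

        # Illustrative sketch (not from the surveyed papers): estimate total
        # migration time, downtime, and migrated data for pre-copy migration,
        # assuming each round resends the memory dirtied during the previous round.

        def precopy_estimate(mem_bytes, dirty_rate, bandwidth,
                             stop_threshold=64 * 2**20, max_rounds=30):
            """Return (total_time_s, downtime_s, data_sent_bytes) under this model."""
            to_send = mem_bytes              # round 1 copies all of memory
            total_time = 0.0
            data_sent = 0.0
            for _ in range(max_rounds):
                round_time = to_send / bandwidth
                total_time += round_time
                data_sent += to_send
                dirtied = dirty_rate * round_time     # memory dirtied during this round
                to_send = dirtied
                if dirtied <= stop_threshold or dirty_rate >= bandwidth:
                    break                             # small enough (or not converging)
            downtime = to_send / bandwidth            # stop-and-copy with the VM paused
            return total_time + downtime, downtime, data_sent + to_send

        # Example: 4 GiB of memory, 100 MiB/s dirty rate, 1 GiB/s link
        print(precopy_estimate(4 * 2**30, 100 * 2**20, 2**30))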

    A Survey of Virtual Machine Migration Techniques in Cloud Computing

    Cloud computing is an emerging computing technology that maintains computational resources in large data centers accessed through the Internet, rather than on local computers. Virtualization technology is what gives cloud computing its power, and VM migration provides the capability to balance load, perform system maintenance, and so on. The process of moving running applications or VMs from one physical machine to another is known as VM migration; during migration, the processor state, storage, memory, and network connections are moved from one host to another. Migration techniques can be divided into two categories: the pre-copy approach and the post-copy approach. Two important performance metrics that users care about most are downtime and total migration time, because they capture service degradation and the time during which the service is unavailable. This paper focuses on the analysis of live VM migration techniques in cloud computing. Keywords: Cloud Computing, Virtualization, Virtual Machine, Live Virtual Machine Migration.
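
    As a rough illustration of the pre-copy/post-copy distinction summarized above, the toy simulation below contrasts the two control flows; it is a hypothetical sketch over plain numbers, not a real hypervisor API, and the dirty fraction and thresholds are assumed values.

        # Hypothetical sketch contrasting pre-copy and post-copy for a VM with
        # n_pages memory pages; no real hypervisor calls are involved.

        def pre_copy(n_pages, dirty_fraction=0.1, stop_at=16, max_rounds=10):
            """Iteratively copy memory while the VM runs, then stop-and-copy."""
            to_copy = n_pages                         # round 1 copies every page
            for _ in range(max_rounds):
                sent = to_copy                        # copied while the VM keeps running
                to_copy = int(sent * dirty_fraction)  # pages re-dirtied in the meantime
                if to_copy <= stop_at:
                    break
            return to_copy                            # pages copied while paused (downtime)

        def post_copy(n_pages):
            """Move CPU/network state first, resume on the destination, then pull
            each memory page on demand when the resumed VM first touches it."""
            pages_copied_while_paused = 0             # no bulk memory copy during the pause
            remote_page_faults = n_pages              # every first access faults to the source
            return pages_copied_while_paused, remote_page_faults

        print("pre-copy pages sent during downtime:", pre_copy(100_000))
        print("post-copy pages sent during downtime, remote faults:", post_copy(100_000))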

    Distributed Shared Memory based Live VM Migration

    Cloud computing is the new trend in computing services and the IT industry; this computing paradigm offers numerous benefits in utilizing IT infrastructure resources and reducing service costs. A key feature of cloud computing is the mobility and scalability of computing resources, achieved by managing virtual machines. Virtualization decouples the software from the hardware and manages software and hardware resources easily and without interrupting services. Live virtual machine migration is an essential tool for dynamic resource management in current data centers. It is defined as the process of moving a running virtual machine or application between different physical machines without disconnecting the client or application. Many techniques have been developed to achieve this goal, and their performance is measured with several metrics (total migration time, downtime, amount of data sent, and application performance). These metrics reflect the quality of the VM services that clients care about, because the main goal of clients is to preserve application performance with minimum service interruption.

    Pre-copy live VM migration proceeds in four phases: preparation, iterative migration, stop-and-copy, and resume-and-commitment. During the preparation phase, the source and destination physical servers are selected, resources on the destination server are reserved, and the critical VM to be migrated is chosen; the cloud manager is responsible for all of these decisions. During the iterative migration phase, the VM state migration takes place and the memory state is transferred to the target node while the migrated VM continues to execute and dirties its memory. In the stop-and-copy phase, the VM's virtual CPU is stopped and the processor and network states are transferred to the destination host; service downtime results from stopping VM execution and moving these states. Finally, in the resume-and-commitment phase, the migrated VM resumes running on the destination physical host, the remaining memory pages are pulled by the destination machine from the source machine, and the source machine's resources are released.

    In this thesis, pre-copy live VM migration using a Distributed Shared Memory (DSM) computing model is proposed. The setup is built on two identical computation nodes that host all of the proposed environment's services: the virtualization infrastructure (XenServer 6.2 hypervisor), the shared storage server (a network file system), and the DSM and High Performance Computing (HPC) cluster. The custom DSM framework is based on Grappa, a low-latency memory-update system. The HPC cluster parallelizes the workload across the CPUs of the computation nodes, using OpenMPI and MPI libraries for parallelization and auto-parallelization. The DSM allows the cluster CPUs to access the same memory pages, resulting in fewer memory data updates and thus reducing the amount of data transferred over the network.

    The proposed model achieves a good improvement in the live VM migration metrics: downtime is reduced by 50% for an idle Windows VM workload and by 66.6% for an idle Ubuntu Linux workload. In general, the proposed model reduces the downtime and the total amount of data sent without degrading other metrics such as the total migration time and application performance.
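
    The four pre-copy phases described in this abstract can be laid out as a small orchestration sketch, shown below; the Host class and all of its methods are toy stand-ins, not XenServer, Grappa, or MPI interfaces, and the page counts and threshold are assumed values.

        # Hypothetical skeleton of the four pre-copy phases described above.

        class Host:
            def __init__(self, name):  self.name = name
            def reserve(self, vm):     print(f"{self.name}: reserved resources for {vm}")
            def receive(self, items):  print(f"{self.name}: received {len(items)} items")
            def resume(self, vm):      print(f"{self.name}: resumed {vm}")
            def release(self, vm):     print(f"{self.name}: released resources of {vm}")

        def migrate(vm, source, destination, memory_pages, dirty_per_round, threshold=16):
            # Phase 1 -- preparation: the cloud manager selects source and destination,
            # reserves destination resources, and picks the VM to migrate.
            destination.reserve(vm)

            # Phase 2 -- iterative migration: copy memory while the VM keeps running
            # and dirtying pages, until the dirty set is small enough.
            dirty = list(memory_pages)
            while len(dirty) > threshold:
                destination.receive(dirty)
                dirty = dirty_per_round(len(dirty))

            # Phase 3 -- stop and copy: pause the vCPU; downtime covers sending the
            # remaining dirty pages plus the processor and network state.
            destination.receive(dirty)
            destination.receive(["cpu_state", "network_state"])

            # Phase 4 -- resume and commitment: resume on the destination, then
            # release the source's resources (pulling leftover pages is omitted here).
            destination.resume(vm)
            source.release(vm)

        migrate("vm0", Host("source"), Host("destination"),
                memory_pages=range(1024),
                dirty_per_round=lambda sent: list(range(sent // 10)))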

    Machine Learning Models for Live Migration Metrics Prediction

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2019. 2. Egger, Bernhard.์˜ค๋Š˜๋‚  ๋ฐ์ดํ„ฐ ์„ผํ„ฐ์—์„œ ๊ฐ€์ƒ๋จธ์‹ ์˜ ๋ผ์ด๋ธŒ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ๊ธฐ์ˆ ์€ ๋งค์šฐ ์ค‘์š”ํ•˜๊ฒŒ ์‚ฌ์šฉ๋œ๋‹ค. ํ˜„์กดํ•˜๋Š” ๋ฐ์ดํ„ฐ ์„ผํ„ฐ ๊ด€๋ฆฌ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ๋Š” ๋ณต์žกํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•˜์—ฌ ์–ธ์ œ, ์–ด๋””์„œ, ์–ด๋””๋กœ ๊ฐ€์ƒ๋จธ์‹ ์˜ ๋งˆ์ด๊ทธ๋ ˆ์…˜์„ ์‹คํ–‰ํ• ์ง€๋ฅผ ๊ฒฐ์ •ํ•œ๋‹ค. ํ•˜์ง€๋งŒ ์–ด๋–ค ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๋Š”์ง€์— ๋”ฐ๋ผ์„œ ์„ฑ๋Šฅ์ด ํฌ๊ฒŒ ์ฐจ์ด๊ฐ€ ๋‚  ์ˆ˜ ์žˆ์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์ด์— ๋Œ€ํ•œ ๋…ผ์˜๋Š” ์ฃผ์š”ํ•˜๊ฒŒ ๋‹ค๋ค„์ง€์ง€ ์•Š์•˜๋‹ค. ์ด๋Ÿฌํ•œ ์„ฑ๋Šฅ์˜ ์ฐจ์ด๋Š” ๋ผ์ด๋ธŒ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์ฐจ์ด๋‚˜ ๊ฐ€์ƒ๋จธ์‹ ์— ํ• ๋‹น๋œ ์›Œํฌ๋กœ๋“œ์˜ ์–‘์˜ ์ฐจ์ด ๊ทธ๋ฆฌ๊ณ  ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜์„ ํ•˜๋Š” ๊ณณ๊ณผ ๋ชฉ์  host์˜ ์ƒํƒœ ์ฐจ์ด์— ์˜ํ•˜์—ฌ ์ผ์–ด๋‚œ๋‹ค. ๋น ๋ฅด๊ณ  ์ •ํ™•ํ•˜๊ฒŒ ์˜ฌ๋ฐ”๋ฅธ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ๋ฐฉ๋ฒ•์„ ์ •ํ•˜๋Š” ๊ฒƒ์€ ํ•„์ˆ˜์ ์ธ ๊ณผ์ œ์ด๋‹ค. ์ด๋Ÿฌํ•œ ๊ณผ์ œ๋ฅผ performance model์„ ์ด์šฉํ•˜์—ฌ ํ•ด๊ฒฐํ•  ๊ฒƒ์ด๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š”, ๊ฐ€์ƒ๋จธ์‹ ์˜ ๋ผ์ด๋ธŒ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์„ฑ๋Šฅ์„ ์˜ˆ์ธกํ•˜๋Š” ์—ฌ๋Ÿฌ ๋จธ์‹  ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ์ œ์‹œํ•œ๋‹ค. ์—ฌ๊ธฐ์„œ 12๊ฐœ์˜ ์„œ๋กœ ๋‹ค๋ฅธ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๋Œ€ํ•ด 7๊ฐ€์ง€์˜ ๋‹ค๋ฅธ metric๋“ค์„ ์˜ˆ์ธกํ•œ๋‹ค. ์ด ๋ชจ๋ธ์€ ๊ธฐ์กด ์—ฐ๊ตฌ์— ๋น„ํ•ด ํ›จ์”ฌ ์ •ํ™•ํ•œ ์˜ˆ์ธก์„ ์„ฑ๊ณตํ•˜์˜€๋‹ค. ๊ฐ๊ฐ์˜ target metric๊ณผ ์—ฌ๋Ÿฌ ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค์— ๋Œ€ํ•˜์—ฌ input feature evaluation์„ ์ˆ˜ํ–‰ํ•˜์˜€๊ณ  ๊ฐ๊ฐ์˜ ํŠน์„ฑ์— ๋งž๋Š” ๋ชจ๋ธ์„ ๋งŒ๋“ค์–ด 84๊ฐœ์˜ ์„œ๋กœ๋‹ค๋ฅธ ๋จธ์‹  ๋Ÿฌ๋‹ ๋ชจ๋ธ๋“ค์„ ํ›ˆ๋ จ์‹œ์ผฐ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ๋“ค์€ ์‹ค์ œ ๋ผ์ด๋ธŒ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ํ”„๋ ˆ์ž„์›Œํฌ์— ์‰ฝ๊ฒŒ ์ ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค. ๊ฐ๊ฐ์˜ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๋Œ€ํ•˜์—ฌ target metric ์˜ˆ์ธก์„ ์‚ฌ์šฉํ•จ์œผ๋กœ์จ ์˜ฌ๋ฐ”๋ฅธ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‰ฝ๊ฒŒ ๊ฒฐ์ •ํ•  ์ˆ˜ ์žˆ๊ณ  ์ด๋Š” ๊ฒฐ๊ณผ์ ์œผ๋กœ ๋‹ค์šดํƒ€์ž„๊ณผ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜์— ์†Œ์š”๋˜๋Š” ์ด ์‹œ๊ฐ„์˜ ๊ฐ์†Œ ํšจ๊ณผ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๋‹ค.Live migration of Virtual Machines (VMs) is an important technique in today's data centers. In existing data center management frameworks, complex algorithms are used to determine when, where, and to which host a migration of a VM is to be performed. However, very little attention is paid to the selection of the right migration technique depending on which the migration performance can vary greatly. This performance fluctuation is caused by the different live migration algorithms, the different workloads that each VM is executing, and the state of the destination and the source host. Choosing the right migration technique is a crucial task that has to be made quickly and precisely. Therefore, a performance model is the best and the right candidate for such a task. In this thesis, we propose various machine learning models for predicting live migration metrics of virtual machines. We predict seven different metrics for twelve distinct migration algorithms. Our models achieve a much higher accuracy compared to existing work. For each target metric and algorithm, an input feature evaluation is conducted and a strictly specific model is generated, leading to 84 different trained machine learning models. These models can easily be integrated into a live migration framework. 
Using the target metric predictions for each migration algorithm, a framework can easily choose the right migration algorithm, which can lead to downtime and total migration time reduction and less service-level agreement violations.Abstract Contents List of Figures List of Tables Chapter 1 Introduction and Motivation Chapter 2 Background 2.1 Virtualization 2.2 Live Migration 2.3 SLA and SLO 2.4 Live Migration Techniques 2.4.1 Pre-copy (PRE) 2.4.2 Post-copy (POST) 2.4.3 Hybrid Migration Techniques 2.5 Live Migration Performance Metrics 2.6 Artificial Neural Networks 2.6.1 Feedforward Neural Network (FNN) 2.6.2 Deep Neural Network (DNN) 2.6.3 Convolution Neural Network (CNN) Chapter 3 Related Work Chapter 4 Overview and Design Chapter 5 Implementation 5.1 Deep Neural Network design 5.2 Convolutional Neural Network design Chapter 6 Evaluation metrics 6.1 Geometric Mean Absolute Error (GMAE) 6.2 Geometric Mean Relative Error (GMRE) 6.3 Mean Absolute Error (MAE) 6.4 Weighted Absolute Percentage Error (WAPE) Chapter 7 Results 7.1 Deep Neural Network 7.2 SVR with bagging 7.3 DNN vs. SVR comparison 7.4 Overhead Chapter 8 Conclusion and Future Work 8.1 Conclusion 8.2 Future Work AppendicesMaste
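
    As a rough sketch of how per-metric, per-algorithm predictors of this kind could be trained and scored, the code below fits one bagged SVR regressor (one of the model families the thesis evaluates) per (algorithm, metric) pair and reports WAPE; the synthetic data, the feature set, and the shortened algorithm and metric lists are illustrative assumptions, not the thesis setup.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.ensemble import BaggingRegressor
        from sklearn.model_selection import train_test_split

        # Illustrative sketch: one bagged-SVR model per (algorithm, metric) pair,
        # scored with WAPE. All data here is synthetic.
        rng = np.random.default_rng(0)
        algorithms = ["pre-copy", "post-copy", "hybrid"]          # the thesis covers 12
        metrics = ["total_time", "downtime", "transferred_data"]  # the thesis covers 7

        def wape(y_true, y_pred):
            # Weighted Absolute Percentage Error: sum(|error|) / sum(|true|)
            return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()

        models = {}
        for algo in algorithms:
            # toy features: memory size, page dirty rate, available bandwidth, CPU load
            X = rng.uniform(size=(500, 4))
            for metric in metrics:
                y = X @ rng.uniform(size=4) + 0.05 * rng.normal(size=500)  # fake target
                X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
                model = BaggingRegressor(SVR(kernel="rbf"), n_estimators=10, random_state=0)
                model.fit(X_tr, y_tr)
                models[(algo, metric)] = model
                print(algo, metric, f"WAPE = {wape(y_te, model.predict(X_te)):.3f}")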

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially different specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and in turn lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed with load balancing strategies, and the underlying allocation problem has been proved to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is presented, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements. Comment: 22 pages, 4 figures, 4 tables, in press.
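
    As a concrete baseline from the simplest family of heuristics such surveys cover, the sketch below greedily places each VM on the physical machine that would remain least loaded, subject to capacity limits; the two-resource model, the bottleneck-utilization load measure, and the example capacities are illustrative assumptions, not an algorithm taken from this paper.

        # Illustrative greedy baseline: place each VM on the PM whose bottleneck
        # utilization stays lowest, subject to CPU and memory capacity limits.

        def place_vms(vms, pms):
            """vms: list of (cpu, mem) demands; pms: list of (cpu, mem) capacities.
            Returns {vm_index: pm_index}; raises if some VM fits nowhere."""
            used = [(0.0, 0.0) for _ in pms]
            placement = {}
            # consider the largest VMs first to reduce fragmentation
            order = sorted(range(len(vms)), key=lambda i: sum(vms[i]), reverse=True)
            for i in order:
                cpu, mem = vms[i]
                best, best_load = None, None
                for j, (cap_cpu, cap_mem) in enumerate(pms):
                    new_cpu, new_mem = used[j][0] + cpu, used[j][1] + mem
                    if new_cpu > cap_cpu or new_mem > cap_mem:
                        continue                                      # VM does not fit here
                    load = max(new_cpu / cap_cpu, new_mem / cap_mem)  # bottleneck usage
                    if best is None or load < best_load:
                        best, best_load = j, load
                if best is None:
                    raise RuntimeError(f"VM {i} does not fit on any PM")
                used[best] = (used[best][0] + cpu, used[best][1] + mem)
                placement[i] = best
            return placement

        # Example: four VMs placed across two heterogeneous PMs
        print(place_vms([(2, 4), (4, 8), (1, 2), (8, 16)], [(16, 32), (8, 16)]))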

    Energy-aware dynamic virtual machine consolidation for cloud datacenters
