
    An overview of virtual machine live migration techniques

    In cloud computing, live migration of virtual machines is the process of moving a running virtual machine from a source physical machine to a destination machine, including its CPU, memory, network, and storage state. Several performance metrics are affected when a virtual machine is migrated, such as downtime, total migration time, performance degradation, and the amount of migrated data. This paper presents an overview of virtual machine live migration techniques and of the works in the literature that address this issue, which may help professionals and researchers to further explore the challenges and provide optimal solutions.
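    As a rough illustration of the iterative pre-copy process and the metrics discussed above, the Python sketch below models repeated memory copying and reports total migration time, downtime, and the amount of migrated data; the bandwidth, dirty rate, and stopping threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (illustrative values, not from the paper) of iterative pre-copy
# migration and the metrics it affects: total migration time, downtime, and the
# amount of migrated data.
def simulate_pre_copy(memory_mb, dirty_rate_mb_s, bandwidth_mb_s,
                      max_rounds=30, stop_threshold_mb=32):
    """Return (total_time_s, downtime_s, transferred_mb) for a simple model."""
    transferred = 0.0
    total_time = 0.0
    to_send = float(memory_mb)                 # the first round copies all of RAM
    for _ in range(max_rounds):
        round_time = to_send / bandwidth_mb_s
        transferred += to_send
        total_time += round_time
        # pages dirtied while this round was being sent must be re-sent
        to_send = min(memory_mb, dirty_rate_mb_s * round_time)
        if to_send <= stop_threshold_mb:
            break
    # stop-and-copy phase: the VM is paused while the last dirty pages are sent
    downtime = to_send / bandwidth_mb_s
    transferred += to_send
    total_time += downtime
    return total_time, downtime, transferred

# Example: a 4 GB VM dirtying 100 MB/s over a 1000 MB/s link.
print(simulate_pre_copy(4096, dirty_rate_mb_s=100, bandwidth_mb_s=1000))
```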

    An Approach to Improve the Live Migration Using Asynchronized Cache and Prioritized IP Packets

    Live migration of a virtual machine is a method of moving virtual machines across hosts within a virtualized data center. Two main parameters should be considered when evaluating live migration: total migration duration and downtime. This paper focuses on optimizing live migration in a Xen environment where memory pages are dirtied rapidly. An approach is proposed that manages dirty pages in a cache during migration and prioritizes migration packets at the network level. The evaluations show that when the system is under heavy workload or running a stress tool, the virtual machines write to memory intensively; in this setting the proposed approach outperforms the default method in terms of the number of transferred pages, total migration time, and downtime. Experimental results showed that, as the workload increases, the proposed approach reduces the number of sent pages by 47.4%, total migration time by 10%, and downtime by 27.7% during live migration.
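    The sketch below illustrates the two ideas at a high level; it is an assumption-laden Python model, not the paper's Xen implementation. The cache coalesces repeated writes so each dirtied page is sent at most once per round, and the migration socket is marked with an assumed high-priority DSCP value so migration packets can be favoured at the network level.

```python
# Illustrative sketch (assumptions, not the paper's Xen implementation): a cache
# that coalesces rapidly re-dirtied pages so each page is transferred at most once
# per round, plus a migration socket marked with an assumed high-priority DSCP value.
import socket

class DirtyPageCache:
    """Keep only the latest copy of each dirtied page until the next flush."""
    def __init__(self):
        self._pages = {}              # page_number -> latest contents

    def record_write(self, page_no, contents):
        # overwriting coalesces repeated writes: a page is sent at most once per round
        self._pages[page_no] = contents

    def flush(self):
        """Return the pages to transfer this round and clear the cache."""
        batch = list(self._pages.items())
        self._pages.clear()
        return batch

def make_migration_socket(host, port):
    """Open a TCP connection for migration traffic and mark it high priority."""
    sock = socket.create_connection((host, port))
    # DSCP 46 (expedited forwarding) is an assumed marking; IP_TOS takes the
    # DSCP value shifted into the upper six bits of the TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
    return sock
```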

    S-memV: Split Migration of Large-Memory Virtual Machines in IaaS Clouds

    Infrastructure-as-a-Service clouds now provide virtual machines (VMs) with large amounts of memory. Such large-memory VMs make VM migration difficult because it is costly to reserve large-memory hosts as the destination. Using virtual memory is a remedy for this problem, but virtual memory is incompatible with the memory access pattern of VM migration; consequently, large performance degradation occurs during and after VM migration due to excessive paging. This paper proposes split migration of large-memory VMs with S-memV. Split migration migrates a VM to one main host and one or more sub-hosts. It divides the memory of a VM and transfers the memory likely to be accessed to the main host. Since it transfers the rest of the memory directly to the sub-hosts, no paging occurs during VM migration. After split migration, remote paging is performed between the main host and the sub-hosts, but its frequency is kept low thanks to memory splitting that is aware of remote paging. We have implemented S-memV in KVM and show that the performance of split migration and application performance after VM migration are comparable to those of traditional VM migration with sufficient memory.
    IEEE International Conference on Cloud Computing (IEEE Cloud 2018), July 2-7, 2018, San Francisco, CA, US
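    A minimal Python sketch of the splitting idea follows, under assumed inputs (per-page access counts and host capacities) rather than S-memV's actual mechanism: the most frequently accessed pages go to the main host and the remainder are spread across the sub-hosts.

```python
# Rough sketch under assumed inputs (not the S-memV implementation): split a VM's
# pages between a main host and sub-hosts by access frequency, so the pages most
# likely to be touched land on the main host and remote paging stays infrequent.
def split_memory(page_access_counts, main_capacity_pages, sub_hosts):
    """page_access_counts: page_no -> access count.
    Returns (pages_for_main_host, {sub_host: [pages]})."""
    hot_first = sorted(page_access_counts, key=page_access_counts.get, reverse=True)
    main_pages = hot_first[:main_capacity_pages]
    remaining = hot_first[main_capacity_pages:]
    # distribute the colder pages round-robin across the sub-hosts
    per_sub = {host: [] for host in sub_hosts}
    for i, page_no in enumerate(remaining):
        per_sub[sub_hosts[i % len(sub_hosts)]].append(page_no)
    return main_pages, per_sub

# Example: 8 pages, the main host holds 4, two sub-hosts share the rest.
counts = {0: 90, 1: 75, 2: 60, 3: 50, 4: 10, 5: 5, 6: 2, 7: 1}
main, subs = split_memory(counts, main_capacity_pages=4, sub_hosts=["subA", "subB"])
print(main, subs)
```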

    Optimizing Resource allocation while handling SLA violations in Cloud Computing platforms

    In this paper we study a resource allocation problem in the context of cloud computing, where a set of virtual machines (VMs) has to be placed on a set of physical machines (PMs). Each VM has a given demand (e.g. CPU demand), and each PM has a capacity; however, each VM only uses a fraction of its demand. The aim is to exploit the difference between a VM's demand and its real resource utilization in order to use the capacity of the PMs as fully as possible. Moreover, the real consumption of a VM can change over time (while staying under its original demand), which sometimes implies expensive "SLA violations", corresponding to VM consumption that cannot be satisfied because of overloaded PMs. Thus, while optimizing the global resource utilization of the PMs, it is necessary to ensure that whenever a VM's need evolves, a small number of migrations (moving a VM from one PM to another) is sufficient to find a new configuration in which all the VMs' consumptions are satisfied. We model this problem using a fully dynamic bin packing approach and present an algorithm ensuring a global resource utilization of 66%. Moreover, each time a PM is overloaded, at most one migration is necessary to fall back to a configuration with no overloaded PM, and only 3 different PMs are involved in the migrations required to keep the global resource utilization correct. This makes the platform highly resilient to a large number of changes.
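    The sketch below is a simplified Python illustration of the general approach, not the paper's algorithm or its 66% utilization guarantee: VMs are packed by their current consumption rather than their declared demand, and an overloaded PM is repaired with a single migration when possible. All inputs are illustrative.

```python
# Simplified sketch, not the paper's algorithm or its 66% guarantee: pack VMs by
# current consumption rather than declared demand, and clear an overloaded PM
# with a single migration when possible.
def first_fit(usage, capacity):
    """usage: vm -> current consumption; capacity: pm -> capacity.
    Packs by real usage (not demand) to keep global utilization high."""
    load = {pm: 0.0 for pm in capacity}
    placement = {}
    for vm, use in sorted(usage.items(), key=lambda kv: -kv[1]):
        for pm in capacity:
            if load[pm] + use <= capacity[pm]:
                placement[vm] = pm
                load[pm] += use
                break
    return placement, load

def resolve_overload(pm, placement, load, capacity, usage):
    """After a VM's consumption grows, clear an overloaded PM with one migration."""
    if load[pm] <= capacity[pm]:
        return None
    # smallest VM on pm whose removal restores feasibility
    for vm in sorted((v for v, p in placement.items() if p == pm), key=usage.get):
        if load[pm] - usage[vm] > capacity[pm]:
            continue
        for target in capacity:
            if target != pm and load[target] + usage[vm] <= capacity[target]:
                load[pm] -= usage[vm]
                load[target] += usage[vm]
                placement[vm] = target
                return vm, target                  # exactly one migration
    return None
```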

    PIASA: A power and interference aware resource management strategy for heterogeneous workloads in cloud data centers

    Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimizing energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance-deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
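    The following Python sketch illustrates the general shape of such a mechanism; the interference penalty, the linear power model, and the weight alpha are assumptions for illustration and do not reproduce PIASA's estimator or scheduler.

```python
# Illustrative sketch only: score candidate hosts by combining an estimated
# performance deviation (interference with co-hosted workloads) and the marginal
# power cost of the placement. Penalty, power model, and weight are assumptions.
def interference_penalty(host_vms, new_vm):
    """Crude deviation estimate: same-type workloads contend more with each other."""
    same = sum(1 for vm in host_vms if vm["type"] == new_vm["type"])
    return 0.05 * same                 # assumed 5% slowdown per same-type neighbour

def marginal_power(host_load, new_load, idle_w=100.0, peak_w=250.0):
    """Linear power model: utilization-proportional cost, plus idle power if the host was off."""
    return (peak_w - idle_w) * new_load + (idle_w if host_load == 0 else 0.0)

def choose_host(hosts, new_vm, alpha=0.5):
    """hosts: host -> {"vms": [...], "load": float}. The lowest score wins."""
    def score(h):
        info = hosts[h]
        return (alpha * interference_penalty(info["vms"], new_vm)
                + (1 - alpha) * marginal_power(info["load"], new_vm["load"]) / 250.0)
    return min(hosts, key=score)

# Example: two hosts, one already running a CPU-intensive VM.
hosts = {"h1": {"vms": [{"type": "cpu", "load": 0.4}], "load": 0.4},
         "h2": {"vms": [], "load": 0.0}}
print(choose_host(hosts, {"type": "cpu", "load": 0.3}))
```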

    Post-Copy Based Live Virtual Machine Migration Using Adaptive Pre-Paging and Dynamic Self-Ballooning

    We present the design, implementation, and evaluation of post-copy based live migration for virtual machines (VMs) across a Gigabit LAN. Live migration is an indispensable feature in today's virtualization technologies. Post-copy migration defers the transfer of a VM's memory contents until after its processor state has been sent to the target host. This deferral is in contrast to the traditional pre-copy approach, which first copies the memory state over multiple iterations followed by a final transfer of the processor state. The post-copy strategy can provide a "win-win" by reducing total migration time closer to the time achieved by non-live VM migration, while maintaining the liveness benefits of the pre-copy approach. We compare post-copy extensively against the traditional pre-copy approach on top of the Xen hypervisor. Using a range of VM workloads, we show improvements in several migration metrics including pages transferred, total migration time, and network overhead. We facilitate the use of post-copy with adaptive pre-paging in order to eliminate all duplicate page transmissions. Our implementation is able to reduce the number of network-bound page faults to within 21% of the VM's working set for large workloads. Finally, we eliminate the transfer of free memory pages in both migration schemes through a dynamic self-ballooning (DSB) mechanism. DSB periodically releases free pages in a guest VM back to the hypervisor and significantly speeds up migration with negligible performance degradation.
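    A high-level Python sketch of the post-copy order of operations follows; the access trace, page ranges, and pre-paging window size are illustrative assumptions, not the paper's Xen implementation.

```python
# High-level sketch (not the paper's Xen implementation) of post-copy migration:
# processor state is transferred first, then memory is demand-fetched with a simple
# pre-paging window, after free pages have been ballooned out so they are never sent.
def post_copy_simulation(all_pages, free_pages, access_trace, window=8):
    """Return the number of network-bound page faults for a given guest access trace."""
    resident = set()                               # pages already at the target
    remaining = set(all_pages) - set(free_pages)   # self-ballooning: skip free pages
    faults = 0
    # the processor state would be transferred here and the VM resumed at the target
    for page in access_trace:
        if page in remaining and page not in resident:
            faults += 1                            # network-bound page fault
            # adaptive pre-paging: push a window of pages around the faulting one,
            # hoping to stay ahead of the guest's access pattern
            for p in range(page, page + window):
                if p in remaining:
                    resident.add(p)
                    remaining.discard(p)
    # any pages never touched would be pushed in the background afterwards
    resident |= remaining
    return faults

# Example: 64-page VM, pages 48..63 are free, guest touches a short sequential run.
print(post_copy_simulation(range(64), range(48, 64), access_trace=[0, 1, 2, 9, 20]))
```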