
    Maximizing hypervisor scalability using minimal virtual machines

    The smallest instance offered by Amazon EC2 comes with 615 MB of memory and a 7.9 GB disk image. While small by today's standards, embedded web servers with memory footprints well under 100 kB indicate that there is much to be saved. In this work we investigate how large a VM population the OpenStack hypervisor can be made to sustain by tuning it for scalability and minimizing virtual machine images. Request-driven QEMU images of 512 bytes are written in assembly, and more than 110 000 such instances are successfully booted on a 48-core host before memory is exhausted. Other factors are shown to dramatically improve scalability, to the point where 10 000 virtual machines consume no more than 2.06% of the hypervisor CPU.
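
    The scale reported above requires a heavily tuned host, but the basic shape of the experiment, booting many independent QEMU processes from one minimal raw image, can be sketched as follows. This is an illustration only: the image name, memory size, flags and instance count are placeholders rather than the paper's actual configuration.

        import shlex, subprocess

        # Launch a handful of tiny QEMU guests from a minimal raw image and keep
        # handles to them. Values are placeholders; the paper's 512-byte assembly
        # images and 110 000 instances need a tuned hypervisor and a 48-core host.
        QEMU_CMD = ("qemu-system-x86_64 -m 2 -nographic -enable-kvm "
                    "-snapshot -drive format=raw,file=tiny.img")

        def boot_instances(n):
            guests = []
            for _ in range(n):
                guests.append(subprocess.Popen(shlex.split(QEMU_CMD),
                                               stdout=subprocess.DEVNULL,
                                               stderr=subprocess.DEVNULL))
            return guests

        if __name__ == "__main__":
            vms = boot_instances(4)        # keep the count small on a desktop
            print(f"spawned {len(vms)} QEMU processes")
            for vm in vms:
                vm.terminate()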

    A distributed approach to dynamic VM management

    Computing today is increasingly moving into large-scale virtualized data centres, offering computing resources in the form of virtual machines (VMs) on a pay-per-usage basis. In order to minimize costs, VMs should be consolidated onto as few physical machines (PMs) as possible, switching idle PMs into a power-saving mode. It may be necessary to dynamically allocate and reallocate VMs to PMs in order to meet highly dynamic VM resource requirements. The problem of assigning VMs to PMs is known to be NP-hard. Most solutions focus on a centralized approach, with a single management node making allocation decisions periodically. This approach suffers from poor scalability and the existence of a single point of failure. We present a fully distributed approach to dynamic VM management and evaluate it using a simulation tool. Results indicate that the distributed approach can achieve performance similar to the centralized solution, while eliminating the single point of failure and reducing the network bandwidth required for management.
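
    The VM-to-PM assignment problem mentioned above is a variant of bin packing. As a point of reference, the kind of centralized heuristic that such distributed schemes aim to replace can be sketched as a first-fit-decreasing pass; the data model and CPU-only capacities below are hypothetical and are not the authors' algorithm.

        # First-fit-decreasing sketch: place VMs (by normalized CPU demand) onto
        # PMs of fixed capacity, opening a new PM only when nothing else fits.
        def first_fit_decreasing(vm_demands, pm_capacity=1.0):
            placements = []                      # one dict per active PM
            for demand in sorted(vm_demands, reverse=True):
                for pm in placements:
                    if pm["free"] >= demand:
                        pm["vms"].append(demand)
                        pm["free"] -= demand
                        break
                else:                            # no existing PM fits: power one on
                    placements.append({"vms": [demand], "free": pm_capacity - demand})
            return placements

        pms = first_fit_decreasing([0.6, 0.3, 0.8, 0.2, 0.5])
        print(len(pms), "PMs used:", [pm["vms"] for pm in pms])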

    Evolutionary computing based QoS oriented energy efficient VM consolidation scheme for large scale cloud data centers using random workload bench

    In order to assess the performance of an approach, it is necessary to inspect its performance on distinct datasets with diverse characteristics. In this paper we assess system performance with random workload bench datasets. An Adaptive Genetic Algorithm (A-GA) based consolidation technique is compared with other consolidation techniques, including dynamic CPU utilization techniques and VM (Virtual Machine) selection and placement policies. The proposed consolidation system exhibits better results in terms of energy conservation, minimal Service Level Agreement (SLA) violations and Quality of Service (QoS) assurance.
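
    As a rough illustration of what a genetic-algorithm-style placement search looks like, the sketch below evolves VM-to-PM assignment vectors and scores them by the number of active PMs plus an overload penalty. The operators, weights and parameters are invented for illustration and do not reproduce the paper's A-GA.

        import random

        def fitness(chromosome, vm_cpu, pm_capacity):
            load = {}
            for vm, pm in enumerate(chromosome):
                load[pm] = load.get(pm, 0.0) + vm_cpu[vm]
            overload = sum(max(0.0, u - pm_capacity) for u in load.values())
            return -(len(load) + 10.0 * overload)    # fewer PMs, no overload

        def evolve(vm_cpu, num_pms, pm_capacity=1.0, pop=30, generations=200):
            population = [[random.randrange(num_pms) for _ in vm_cpu]
                          for _ in range(pop)]
            for _ in range(generations):
                population.sort(key=lambda c: fitness(c, vm_cpu, pm_capacity),
                                reverse=True)
                parents = population[: pop // 2]
                children = []
                while len(children) < pop - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(vm_cpu))
                    child = a[:cut] + b[cut:]              # one-point crossover
                    if random.random() < 0.1:              # mutation: move one VM
                        child[random.randrange(len(vm_cpu))] = random.randrange(num_pms)
                    children.append(child)
                population = parents + children
            return max(population, key=lambda c: fitness(c, vm_cpu, pm_capacity))

        print("VM -> PM:", evolve(vm_cpu=[0.3, 0.2, 0.5, 0.4, 0.1], num_pms=5))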

    Low SLA Violation and Low Energy Consumption Using VM Consolidation in Green Cloud Data Centers

    Virtual Machine (VM) consolidation is an efficient way to conserve energy in cloud data centers. The VM consolidation technique migrates VMs onto a smaller number of active Physical Machines (PMs), so that PMs hosting no VMs can be switched to a sleep state. This reduces the energy consumption of cloud data centers because a PM in sleep state consumes far less energy than an active one. However, because VMs share the underlying physical resources, aggressive consolidation can lead to performance degradation. Furthermore, an application may encounter an unexpected resource requirement, which may lead to increased response times or even failures. Before providing cloud services, cloud providers sign Service Level Agreements (SLAs) with customers, so providing reliable Quality of Service (QoS) is an important consideration in this research topic. To strike a trade-off between energy and performance, we consider minimizing energy consumption on the premise of meeting the SLA. One of the optimization challenges is to decide which VMs to migrate, when to migrate them, where to migrate them, and when and which servers to turn on or off. To achieve this goal optimally, it is important to predict the future host state accurately and to plan VM migrations based on that prediction. For example, if a host will be overloaded at the next time unit, some VMs should be migrated from it to keep it from overloading; if a host will be underloaded at the next time unit, all of its VMs should be migrated so that the host can be turned off to save power. The design goal of the controller is to balance server energy consumption against application performance. Because of the heterogeneity of cloud resources and the variety of applications in the cloud environment, the workload on hosts changes dynamically over time, so it is essential to develop accurate workload prediction models for effective resource management and allocation. A drawback of existing VM consolidation processes in cloud data centers is that they concentrate only on primitive system characteristics such as CPU utilization, memory and the number of active hosts as the decisive factors in their models and approaches, ignoring the discrepancy in performance-to-power efficiency between heterogeneous infrastructures. This can lead to unreasonable consolidation that causes a redundant number of VM migrations and wastes energy. Advanced artificial intelligence techniques such as reinforcement learning can learn a management strategy without prior knowledge, which enables the design of a model-free resource allocation control system. For example, VM consolidation decisions could be made using artificial intelligence rather than being based only on current resource utilization.
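
    The overload/underload rule described above can be made concrete with a small sketch: predict each host's next utilization and decide whether to migrate some VMs, migrate all of them and sleep the host, or do nothing. The thresholds and the trivial extrapolation-based predictor are illustrative assumptions, not taken from a particular system.

        # Prediction-driven migration planning with illustrative thresholds.
        def predict_next(history):
            if len(history) < 2:
                return history[-1]
            return history[-1] + (history[-1] - history[-2])   # linear extrapolation

        def plan_migrations(host_histories, over=0.8, under=0.2):
            plan = {}
            for host, history in host_histories.items():
                u = predict_next(history)
                if u > over:
                    plan[host] = "migrate some VMs (predicted overload)"
                elif u < under:
                    plan[host] = "migrate all VMs and switch host to sleep"
                else:
                    plan[host] = "leave as is"
            return plan

        print(plan_migrations({"pm1": [0.60, 0.75],
                               "pm2": [0.30, 0.15],
                               "pm3": [0.50, 0.50]}))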

    Towards cooperative management of large-scale virtualized infrastructures (the case of scheduling)

    The increasing need for computing power has been satisfied by federating more and more computers (or nodes) to build so-called distributed infrastructures. Over the past few years, system virtualization has been introduced in these infrastructures (software is decoupled from the underlying nodes by packaging it in virtual machines), which has led to the development of software managers in charge of operating these virtualized infrastructures. Most of these managers are highly centralized (management tasks are performed by a restricted set of dedicated nodes). This restricts their scalability, in other words their ability to manage reactively the large-scale infrastructures that are more and more common. During this Ph.D., we studied how to mitigate this concern; one solution is to decentralize the processing of management tasks when appropriate. Our work focused in particular on the dynamic scheduling of virtual machines, resulting in the DVMS (Distributed Virtual Machine Scheduler) proposal. We implemented a prototype that was validated by means of simulations (notably with the SimGrid tool) and through experiments on the Grid'5000 testbed. We observed that DVMS was very reactive in scheduling tens of thousands of virtual machines distributed over thousands of nodes. We then took an interest in the perspectives for improving and extending DVMS; the final goal is to build a fully decentralized manager, a goal that should be reached through the Discovery initiative, which follows on from this work.

    MAGNETIC: Multi-Agent Machine Learning-Based Approach for Energy Efficient Dynamic Consolidation in Data Centers

    Improving the energy efficiency of data centers while guaranteeing Quality of Service (QoS), together with detecting performance variability of servers caused by either hardware or software failures, are two of the major challenges for efficient resource management of large-scale cloud infrastructures. Previous works in the area of dynamic Virtual Machine (VM) consolidation are mostly focused on addressing the energy challenge, but fall short of proposing comprehensive, scalable, and low-overhead approaches that jointly tackle energy efficiency and performance variability. Moreover, they usually assume over-simplistic power models and fail to accurately consider all the delay and power costs associated with VM migration and host power mode transitions. These assumptions are no longer valid for modern servers executing heterogeneous workloads and lead to unrealistic or inefficient results. In this paper, we propose a centralized-distributed, low-overhead, failure-aware dynamic VM consolidation strategy to minimize energy consumption in large-scale data centers. Our approach selects the most adequate power mode and frequency of each host at runtime using a distributed multi-agent Machine Learning (ML) based strategy, and migrates the VMs accordingly using a centralized heuristic. Our Multi-AGent machine learNing-based approach for Energy efficienT dynamIc Consolidation (MAGNETIC) is implemented in a modified version of the CloudSim simulator, considers the energy and delay overheads associated with host power mode transitions and VM migrations, and is evaluated using power traces collected from various workloads running on real servers together with resource utilization logs from cloud data center infrastructures. Results show that our strategy reduces data center energy consumption by up to 15% compared to other works in the state-of-the-art (SoA), while guaranteeing the same QoS and reducing the number of VM migrations and host power mode transitions by up to 86% and 90%, respectively. Moreover, it shows better scalability than all other approaches, taking less than 0.7% time overhead to execute for a data center with 1500 VMs. Finally, our solution is capable of detecting host performance variability due to failures, automatically migrating VMs away from failing hosts and draining them of workload.
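
    A minimal sketch of the per-host learning idea, selecting a power action from a discretized utilization state with tabular Q-learning, is shown below. The states, actions, rewards and hyper-parameters are invented for illustration and are not the MAGNETIC formulation.

        import random
        from collections import defaultdict

        ACTIONS = ["sleep", "low_freq", "high_freq"]

        class HostAgent:
            def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
                self.q = defaultdict(float)          # (state, action) -> value
                self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

            def act(self, state):
                if random.random() < self.epsilon:   # occasional exploration
                    return random.choice(ACTIONS)
                return max(ACTIONS, key=lambda a: self.q[(state, a)])

            def learn(self, state, action, reward, next_state):
                best_next = max(self.q[(next_state, a)] for a in ACTIONS)
                target = reward + self.gamma * best_next
                self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

        def bucket(utilization):                     # discretize CPU utilization
            return min(int(utilization * 10), 9)

        agent = HostAgent()
        s = bucket(0.35)
        a = agent.act(s)
        r = -0.5 - (1.0 if a == "sleep" else 0.0)    # toy power/QoS trade-off
        agent.learn(s, a, r, bucket(0.40))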

    Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods to reduce energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality of Service (QoS) between data centers and their users is critical for satisfying users' performance expectations. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent QoS constraints. These approaches fall into three main categories: heuristic, meta-heuristic and machine learning. Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers from historical utilization data, and then migrates some VMs away from the overloaded servers to avoid further performance degradation. Moreover, the algorithm consolidates VMs onto a smaller number of servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments, so we use it to solve two main sub-problems in VM consolidation: selecting the server power mode (sleep or active) and detecting the server status (overloaded or not overloaded). The fourth contribution is an online optimization meta-heuristic called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces solutions close to the optimum, and has polynomial worst-case time complexity. Simulation results show that ACS-PO provides a substantial improvement over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on the three-tier data center topology that is very commonly used in data centers; HiVM can scale across many thousands of servers while remaining energy efficient. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm, which avoids SLA violations and needless migrations by taking both current and predicted future resource requirements into account when allocating, consolidating and placing VMs. Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture partially inspired by HiVM; moreover, it provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server in the data center.
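
    The first contribution's overload detection can be illustrated with a small sketch: fit a least-squares trend to a server's recent utilization samples and flag the server if the extrapolated value crosses a threshold. The window and threshold are illustrative assumptions, not the thesis's exact model.

        # Linear-regression-based overload detection over a short utilization window.
        def predicted_utilization(history, horizon=1):
            n = len(history)
            mean_x = (n - 1) / 2
            mean_y = sum(history) / n
            var_x = sum((x - mean_x) ** 2 for x in range(n))
            slope = sum((x - mean_x) * (y - mean_y)
                        for x, y in enumerate(history)) / var_x
            intercept = mean_y - slope * mean_x
            return slope * (n - 1 + horizon) + intercept

        def is_overloaded(history, threshold=0.8):
            return predicted_utilization(history) > threshold

        print(is_overloaded([0.55, 0.62, 0.70, 0.78]))   # rising trend -> True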

    A case for fully decentralized dynamic VM consolidation in clouds

    One way to conserve energy in cloud data centers is to transition idle servers into a power-saving state during periods of low utilization. Dynamic virtual machine (VM) consolidation (VMC) algorithms create such idle times by periodically repacking VMs onto the least number of physical machines (PMs). Existing works mostly apply VMC on top of centralized, hierarchical, or ring-based system topologies, which results in poor scalability and/or packing efficiency as the number of PMs and VMs increases. In this paper, we propose a novel fully decentralized dynamic VMC schema based on an unstructured peer-to-peer (P2P) network of PMs. The proposed schema is validated using three well-known VMC algorithms, First-Fit Decreasing (FFD), Sercon and V-MAN, as well as a novel migration-cost-aware ACO-based algorithm. Extensive experiments performed on the Grid'5000 testbed show that, once integrated in our fully decentralized VMC schema, traditional VMC algorithms achieve a global packing efficiency very close to that of a centralized system. Moreover, the system remains scalable as the number of PMs and VMs increases. Finally, the migration-cost-aware ACO-based algorithm outperforms FFD and Sercon in the number of released PMs and requires fewer migrations than FFD and V-MAN.
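
    A toy gossip-style round over an unstructured overlay gives the flavour of fully decentralized consolidation: each PM contacts a random neighbour and pushes its load there when it fits, so emptied PMs could be put to sleep. Loads are aggregated per PM for brevity, and none of this reproduces the schema or the algorithms (FFD, Sercon, V-MAN, ACO) evaluated in the paper.

        import random

        def consolidation_round(loads, neighbours, capacity=1.0):
            for pm in list(loads):
                peer = random.choice(neighbours[pm])
                if 0.0 < loads[pm] and loads[peer] + loads[pm] <= capacity:
                    loads[peer] += loads[pm]       # "migrate" all VMs to the peer
                    loads[pm] = 0.0                # this PM can now go to sleep
            return loads

        loads = {"pm1": 0.2, "pm2": 0.3, "pm3": 0.6, "pm4": 0.1}
        neighbours = {"pm1": ["pm2", "pm3"], "pm2": ["pm1", "pm4"],
                      "pm3": ["pm4"], "pm4": ["pm2", "pm3"]}
        for _ in range(3):                         # a few gossip rounds
            consolidation_round(loads, neighbours)
        print(loads, "-> idle PMs:", [pm for pm, u in loads.items() if u == 0.0])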