167 research outputs found

    Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a new computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods for reducing energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality-of-Service (QoS) between data centers and their users is critical for satisfying users' expectations concerning performance. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent quality-of-service constraints. These approaches fall into three main categories: heuristic, meta-heuristic, and machine learning. Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers based on historical utilization data. It then migrates some VMs from the overloaded servers to avoid further performance degradation. Moreover, the algorithm consolidates VMs onto a smaller number of servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments. For this reason, we use RL to solve two main sub-problems in VM consolidation: the first is server power mode detection (sleep or active), and the second is server status detection (overloaded or non-overloaded). The fourth contribution of this thesis is an online optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces solutions close to the optimum, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on a three-tier data center topology, which is very commonly used in data centers. HiVM can scale across many thousands of servers with energy efficiency. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm. UP-BFD can avoid SLA violations and needless migrations by taking into consideration the current and predicted future resource requirements for the allocation, consolidation, and placement of VMs. Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture that is partially inspired by HiVM. Moreover, SARMS provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server in the data center.
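
    The first contribution relies on a linear regression-based prediction model to flag overloaded servers from historical utilization data. The abstract does not give the implementation, so the following is only a minimal sketch under assumed parameters (a fixed 0.8 threshold, a ten-sample window, and the illustrative function name `predict_overload`, none of which come from the thesis):

```python
# Minimal, illustrative sketch (not the thesis implementation): flag a likely
# overloaded server by fitting a least-squares line to its recent CPU utilization
# history and extrapolating one step ahead. Threshold, window size, and the
# function name are assumptions for illustration only.
import numpy as np

def predict_overload(cpu_history, threshold=0.8, window=10):
    """Return True if the extrapolated next utilization exceeds the threshold."""
    recent = np.asarray(cpu_history[-window:], dtype=float)
    if recent.size < 2:                        # not enough samples to fit a trend
        return bool(recent.size and recent[-1] > threshold)
    x = np.arange(recent.size)
    slope, intercept = np.polyfit(x, recent, 1)    # linear regression fit
    predicted = slope * recent.size + intercept     # one-step-ahead forecast
    return bool(predicted > threshold)

# Example: a steadily rising host is flagged before it actually saturates.
print(predict_overload([0.55, 0.60, 0.66, 0.71, 0.77]))   # -> True
```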

    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is a large-scale distributed computing paradigm that provides on-demand services to clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere through the network. As more companies shift their data to the cloud and more people become aware of the advantages of storing data there, the number of cloud computing infrastructures and the amount of data they hold keep growing, which increases the management complexity for cloud providers. We surveyed the state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing. We then put forward the major issues in the deployment of cloud infrastructure that must be addressed in order to avoid poor service delivery in cloud computing.

    Heuristic Algorithms for Energy and Performance Dynamic Optimization in Cloud Computing

    Cloud computing has become increasingly popular for hosting all kinds of applications, not only because of its ability to support dynamic provisioning of virtualized resources to handle workload fluctuations but also because of its usage-based pricing. This has driven the adoption of data centers that store, process, and present data in a seamless, efficient, and easy way. However, these data centers also consume an enormous amount of electrical energy, which leads to high operating costs and carbon dioxide emissions. Therefore, we need a green computing solution that can not only minimize operating costs and reduce environmental impact but also improve performance. Dynamic consolidation of Virtual Machines (VMs), using live migration of VMs and switching idle servers to sleep mode or shutting them down, optimizes energy consumption. We propose an adaptive host underload detection method, a VM migration selection method, and a heuristic algorithm for the dynamic consolidation of VMs based on the analysis of historical data. Through extensive simulations based on random data and real workload data, we show that our method and algorithm noticeably reduce energy consumption while allowing the system to meet its Service Level Agreements (SLAs).
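
    The abstract names an adaptive host underload detection step but does not spell it out; the sketch below is one plausible, simplified reading, in which the least-utilized active host is drained only if its VMs fit in the spare capacity of the other hosts. The data layout and the name `find_underloaded_host` are assumptions, not the paper's code.

```python
# Illustrative sketch only: pick the least-utilized active host as an underload
# candidate and check whether all of its VMs can be absorbed by the remaining
# hosts, so the candidate can be switched to sleep mode. Data layout is assumed.
def find_underloaded_host(hosts):
    """hosts: dict host_id -> {'capacity': float, 'vms': {vm_id: demand}}."""
    def used(h):
        return sum(h['vms'].values())

    active = {hid: h for hid, h in hosts.items() if h['vms']}
    if len(active) < 2:
        return None
    candidate = min(active, key=lambda hid: used(active[hid]) / active[hid]['capacity'])
    spare = {hid: h['capacity'] - used(h)
             for hid, h in active.items() if hid != candidate}
    # Greedily fit the candidate's VMs (largest first) into the spare capacity.
    for demand in sorted(active[candidate]['vms'].values(), reverse=True):
        target = max(spare, key=spare.get)
        if spare[target] < demand:
            return None               # draining would overload another host
        spare[target] -= demand
    return candidate                  # safe to drain and put to sleep
```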

    Towards green computing in wireless sensor networks: controlled mobility-aided balanced tree approach

    Virtualization technology has revolutionized the mobile network and is widely used in 5G innovation. It enables a way of computing that allows dynamic leasing of server capabilities in the form of services such as SaaS, PaaS, and IaaS. The proliferation of these services among users has led to the establishment of large-scale cloud data centers that consume an enormous amount of electrical energy, resulting in high metered bill costs and a large carbon footprint. In this paper, we propose three heuristic models, namely Median Migration Time (MeMT), Smallest Void Detection (SVD), and Maximum Fill (MF), that can reduce energy consumption with minimal variation in the negotiated SLAs. Specifically, we derive the cost of running a cloud data center, the cost optimization problem, and the resource utilization optimization problem. A power consumption model is developed for the cloud computing environment, focusing on the linear relationship between power consumption and resource utilization. A virtual machine migration technique is considered that focuses on a synchronization-oriented, shorter stop-and-copy phase. Complete operational steps are developed as algorithms for the energy-aware heuristic models, including MeMT, SVD, and MF. To evaluate the proposed heuristic models, we conduct experiments using PlanetLab server data over ten days and synthetic workload data collected randomly from a similar number of VMs to those employed in the PlanetLab servers. Through this evaluation, we conclude that the proposed approaches can significantly reduce energy consumption, the number of VM migrations, and host shutdowns while maintaining high system performance.
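
    The paper's power consumption model is described only as a linear relationship between power draw and resource utilization; a common form of such a model is sketched below, with the idle and peak wattages and the sampling interval being assumed example values rather than figures from the paper.

```python
# Illustrative linear power model: power grows linearly from the idle draw to the
# peak draw with CPU utilization. The wattages and the 5-minute sampling interval
# are assumed example values, not taken from the paper.
def host_power(utilization, p_idle=70.0, p_max=250.0):
    """Instantaneous power (W) at a CPU utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

def energy_kwh(utilization_trace, interval_s=300):
    """Integrate the power model over a trace sampled every interval_s seconds."""
    joules = sum(host_power(u) * interval_s for u in utilization_trace)
    return joules / 3.6e6                      # joules -> kWh

# Example: energy for four 5-minute utilization samples.
print(round(energy_kwh([0.2, 0.5, 0.9, 0.0]), 4))   # -> 0.0473
```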

    Energy-efficient resource allocation scheme based on enhanced flower pollination algorithm for cloud computing data center

    Cloud Computing (CC) has rapidly emerged as a successful paradigm for providing ICT infrastructure. Efficient and environmentally friendly resource allocation mechanisms, responsible for allocating Cloud data center resources to execute user applications in the form of requests, are undoubtedly required. One of the promising nature-inspired techniques for addressing virtualization, consolidation, and energy-aware problems is the Flower Pollination Algorithm (FPA). However, FPA suffers from entrapment, and its static control parameters cannot maintain a balance between local and global search, which can lead to high energy consumption and inadequate resource utilization. This research developed an enhanced FPA-based energy-efficient resource allocation scheme for Cloud data centers that provides efficient resource utilization and energy efficiency with fewer Service Level Agreement (SLA) violations. Firstly, an Enhanced Flower Pollination Algorithm for Energy-Efficient Virtual Machine Placement (EFPA-EEVMP) was developed. In this algorithm, a Dynamic Switching Probability (DSP) strategy was adopted to balance the local and global search in FPA, minimizing energy consumption and maximizing resource utilization. Secondly, a Multi-Objective Hybrid Flower Pollination Resource Consolidation (MOH-FPRC) algorithm was developed. In this algorithm, Local Neighborhood Search (LNS) and Pareto optimization strategies were combined with a clustering algorithm to avoid local trapping and to address Cloud service providers' conflicting objectives, such as energy consumption and SLA violations. Lastly, an Energy-Aware Multi-Cloud Flower Pollination Optimization (EAM-FPO) scheme was developed for distributed multi-Cloud data center environments. In this scheme, Power Usage Effectiveness (PUE) and a migration controller were utilized to obtain the optimal solution in the larger search space of the CC environment. The scheme was tested on the MultiRecCloudSim simulator, and the simulation results were compared with OEMACS, ACS-VMC, and EA-DP. The scheme improved data center energy consumption by 20.5%, resource utilization by 23.9%, and SLA violations by 13.5%. The combined algorithms reduced entrapment and maintained a balance between local and global search. Based on these findings, the developed scheme has proven to be efficient in minimizing energy consumption while improving data center resource allocation with minimal SLA violations.
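
    The abstract credits the Dynamic Switching Probability (DSP) strategy with balancing FPA's local and global search but does not state the schedule it uses; the sketch below shows one simple, assumed form (a linear decay from mostly global to mostly local pollination) purely to illustrate the idea, not the rule actually used in EFPA-EEVMP.

```python
# Illustrative sketch only: a dynamic switching probability that starts with
# mostly global (exploratory) pollination and shifts toward local (exploitative)
# pollination as iterations progress. The linear schedule and its bounds are
# assumptions, not the paper's rule.
import random

def switching_probability(iteration, max_iterations, p_start=0.9, p_end=0.4):
    """Probability of choosing global pollination at a given iteration."""
    frac = iteration / max(1, max_iterations - 1)
    return p_start + (p_end - p_start) * frac

def choose_move(iteration, max_iterations):
    p = switching_probability(iteration, max_iterations)
    return "global" if random.random() < p else "local"

# Early iterations favour global search; later ones favour local refinement.
for it in (0, 50, 99):
    print(it, round(switching_probability(it, 100), 2))   # 0.9, 0.65, 0.4
```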

    Anti load-balancing for energy-aware distributed scheduling of virtual machines

    The growth of Cloud computing has resulted in the establishment of large-scale data centers around the world, each containing thousands of compute nodes. However, Clouds consume huge amounts of energy: the energy consumption of data centers worldwide is estimated at more than 1.5% of global electricity use and is expected to grow further. A problem usually studied in distributed systems is how to distribute the load evenly. But when the goal is to reduce energy consumption, this type of algorithm can leave machines largely under-loaded and therefore consuming energy unnecessarily. This thesis presents novel techniques, algorithms, and software for the distributed dynamic consolidation of Virtual Machines (VMs) in the Cloud. The main objective of this thesis is to provide energy-aware scheduling strategies in cloud computing for energy saving. To achieve this goal, we use both centralized and decentralized approaches, and the methodological contributions are presented along these two axes. The objective of our approach is to reduce the total energy consumed by the data center by controlling the overall energy consumption of cloud applications while ensuring their service level agreements. Energy consumption is reduced by dynamically deactivating and reactivating physical nodes to meet the current resource demand. The key contributions are the following. First, we present an energy-aware cloud scheduling technique using an anti load-balancing algorithm, which concentrates the load on a minimum number of servers; the goal is to turn off the machines that are freed and thereby minimize the energy consumption of the system. The second axis proposes a centralized algorithm that works by associating a credit value with each node; the credit of a node depends on its affinity to its jobs, its current workload, and its communication behavior. Energy savings are achieved by continuous consolidation of VMs according to the current utilization of resources, the virtual network topologies established between VMs, and the thermal state of the computing nodes. Experimental results obtained with a simulator that extends CloudSim (EnerSim) show that both the energy consumed by cloud applications and the energy efficiency are improved. The third axis is dedicated to a decentralized dynamic scheduling approach called the Cooperative scheduling Anti load-balancing Algorithm for clouds, which allows cooperation between different sites. To validate this algorithm, we extended the MaGateSim simulator. Through an extensive experimental evaluation with a real workload dataset, we conclude that both the centralized and decentralized algorithms can reduce the energy consumed by data centers.
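
    The anti load-balancing idea, concentrating load instead of spreading it, can be made concrete with a small greedy sketch. The version below repeatedly drains the least-loaded active node onto the most-loaded nodes that still have room; the data layout and the function name are assumptions, and the credit-based refinement described in the thesis is not modeled here.

```python
# Illustrative sketch of anti load-balancing: repeatedly drain the least-loaded
# active node onto the most-loaded nodes that can still host its VMs, so the
# emptied node can be switched off. Data structures are assumed for illustration.
def anti_load_balance(nodes):
    """nodes: dict node_id -> {'capacity': float, 'vms': {vm_id: demand}}.
    Mutates nodes and returns a list of (vm_id, source, target) migrations."""
    def load(n):
        return sum(n['vms'].values())

    migrations = []
    while True:
        active = [nid for nid, n in nodes.items() if n['vms']]
        if len(active) < 2:
            break
        source = min(active, key=lambda nid: load(nodes[nid]))
        for vm_id, demand in sorted(nodes[source]['vms'].items(),
                                    key=lambda kv: kv[1], reverse=True):
            # Prefer the most-loaded target that still has room for this VM.
            targets = [nid for nid in active if nid != source and
                       nodes[nid]['capacity'] - load(nodes[nid]) >= demand]
            if not targets:
                return migrations          # source cannot be fully drained; stop
            target = max(targets, key=lambda nid: load(nodes[nid]))
            nodes[target]['vms'][vm_id] = nodes[source]['vms'].pop(vm_id)
            migrations.append((vm_id, source, target))
    return migrations
```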

    Virtual Machine Management for Efficient Cloud Data Centers with Applications to Big Data Analytics

    Infrastructure-as-a-Service (IaaS) cloud data centers offer computing resources in the form of virtual machine (VM) instances as a service over the Internet. This allows cloud users to lease and manage computing resources based on the pay-as-you-go model. In such a scenario, the cloud users run their applications on the most appropriate VM instances and pay for the actual resources that are used. To support the growing service demands of end users, cloud providers are now building an increasing number of large-scale IaaS cloud data centers, consisting of many thousands of heterogeneous servers. The ever-increasing heterogeneity of both servers and VMs requires efficient management to balance the load in the data centers and, more importantly, to reduce the energy consumption due to underutilized physical servers. To achieve these goals, the key is to eliminate inefficiencies in the use of computing resources. This dissertation investigates the VM management problem for efficient IaaS cloud data centers. In particular, it considers VM placement and VM consolidation to achieve effective load balancing and energy efficiency in cloud infrastructures. VM placement allows cloud providers to allocate a set of requested or migrating VMs onto physical servers with the goal of balancing the load or minimizing the number of active servers. While addressing the VM placement problem is important, VM consolidation is even more important, as it enables continuous reorganization of already-placed VMs onto the smallest number of servers. It helps create idle servers during periods of low resource utilization by taking advantage of the live VM migration provided by virtualization technologies. Energy consumption is then reduced by dynamically switching idle servers into a power-saving state. As VM migrations and server switches consume additional energy, their frequency needs to be limited as well. This dissertation concludes with a sample application of distributed computing to big data analytics.
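
    As a concrete illustration of the placement step discussed above, the sketch below uses the classic best-fit-decreasing bin-packing heuristic: the largest pending VMs are placed first, each on the active server whose remaining capacity fits it most tightly, and a new server is switched on only when none fits. This is a generic heuristic, not the dissertation's specific algorithm.

```python
# Generic best-fit-decreasing placement sketch (a classic bin-packing heuristic,
# not the dissertation's specific algorithm). Single-dimensional CPU demand is
# assumed for simplicity; real placement is usually multi-dimensional.
def best_fit_decreasing(vm_demands, server_capacity):
    """vm_demands: dict vm_id -> demand. Returns dict server_index -> [vm_id]."""
    free = []                               # remaining capacity per opened server
    placement = {}
    for vm_id, demand in sorted(vm_demands.items(),
                                key=lambda kv: kv[1], reverse=True):
        # Choose the open server with the tightest fit that can still host the VM.
        candidates = [i for i, cap in enumerate(free) if cap >= demand]
        if candidates:
            best = min(candidates, key=lambda i: free[i])
        else:
            free.append(server_capacity)    # switch on a new server
            best = len(free) - 1
        free[best] -= demand
        placement.setdefault(best, []).append(vm_id)
    return placement

# Example: five VMs packed onto unit-capacity servers (two servers suffice).
print(best_fit_decreasing({'a': 0.6, 'b': 0.5, 'c': 0.4, 'd': 0.3, 'e': 0.2}, 1.0))
```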