259 research outputs found

    Development of a virtualization systems architecture course for the information sciences and technologies department at the Rochester Institute of Technology (RIT)

    Get PDF
    Virtualization is a revolutionary technology that has changed the way computing is performed in data centers. By converting traditionally siloed computing assets into shared pools of resources, virtualization provides many advantages: more efficient use of physical server resources, more efficient use of datacenter space, reduced energy consumption, simplified system administration, and simplified backup and disaster recovery, among others. Because of these advantages, companies and organizations of various sizes have either migrated their workloads to virtualized environments or are considering doing so. According to the 2013 Gartner Magic Quadrant for x86 Server Virtualization Infrastructure, roughly two-thirds of x86 server workloads are virtualized [1]. This demand for virtualization solutions has increased the need for qualified professionals who can plan, design, implement, and maintain virtualized infrastructures of different scales. Although universities are the main source of IT professionals, the field of information technology changes so rapidly that not all universities can keep pace, so offering the latest technologies used in industry within university curricula is a significant advantage. Given the trend toward virtualization in computing environments and the strong industry demand for virtualization professionals, the faculty of the Information Sciences and Technologies department at RIT decided to prepare a graduate course, entitled Virtualization Systems Architecture, in the master's program in Networking and System Administration, to better prepare students for careers in enterprise computing. This research is composed of five chapters. It begins with a brief history of computer virtualization, exploring when and why it came into existence and how it evolved. The second chapter covers the challenges of virtualizing the x86 platform architecture and the solutions used to overcome them. The third chapter examines the various types of hypervisors, along with the advantages and disadvantages of each. The fourth chapter explores the architecture and features of the two leading virtualization solutions. Finally, the last chapter presents the contents of the Virtualization Systems Architecture course.

    Hüperviisorite ja virtuaalmasinate mäluhalduse analüüs

    Get PDF
    The goal of this thesis is to test memory optimization and reclamation tools in the most widely used hypervisors: VMware ESXi, Microsoft Hyper-V, KVM, and Xen. The aim is to measure how much memory can be reclaimed and optimized by the different memory management algorithms of these hypervisors. The dedicated monitoring tools Zabbix and collectd gather the data to be analyzed. As a result, Hyper-V proves the most effective, with ESXi second and KVM falling somewhat behind in third place. Xen failed to meet a specific criterion (automated memory optimization), which made it impractical to include in the testing process.
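
    As a rough illustration of the kind of reclamation these tools expose, the sketch below queries and shrinks a guest's memory balloon through libvirt on a KVM host. It is a minimal sketch, assuming a local libvirt daemon, a guest named "vm1" (a placeholder), and a working virtio balloon driver in the guest; which counters memoryStats() reports depends on that driver.

        # Minimal sketch: read balloon statistics and reclaim part of the
        # memory the guest reports as free. "vm1" is a placeholder name.
        import libvirt

        conn = libvirt.open("qemu:///system")    # local KVM/QEMU hypervisor
        dom = conn.lookupByName("vm1")

        stats = dom.memoryStats()                # counters are in KiB
        actual = stats.get("actual", 0)          # current balloon target
        unused = stats.get("unused", 0)          # guest-reported free memory
        print(f"balloon: {actual} KiB, guest-free: {unused} KiB")

        # Reclaim half of the guest-reported free memory by shrinking
        # the balloon; the hypervisor gets that memory back.
        if unused:
            dom.setMemory(actual - unused // 2)
        conn.close()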

    Welcome to Zombieland: Practical and Energy-efficient Memory Disaggregation in a Datacenter

    Get PDF
    In this paper, we propose an effortless way of disaggregating the CPU-memory couple, two of the most important resources in cloud computing. Instead of redesigning each resource board, the disaggregation is done at the power supply domain level. In other words, CPU and memory still share the same board, but their power supply domains are separated. Besides this disaggregation, we make the two following contributions: (1) the prototyping of a new ACPI sleep state (called zombie and noted Sz) which allows a server to be suspended (thus saving energy) while keeping its memory remotely accessible; and (2) the prototyping of a rack-level system software which allows transparent utilization of the entire rack's resources (avoiding resource waste). We experimentally evaluate the effectiveness of our solution and show that it can improve the energy efficiency of state-of-the-art consolidation techniques by up to 86%, with minimal additional complexity.
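
    The zombie state Sz itself requires the paper's hardware and firmware prototype, so no stock interface exists for it. As a loose analogy only, the sketch below shows how a rack-level controller might suspend an idle host using the standard ACPI suspend-to-RAM (S3) state that Linux exposes; unlike Sz, S3 keeps the RAM powered but not remotely accessible. The host name is a placeholder.

        # Loose analogy only: standard ACPI S3 via /sys/power/state, not
        # the paper's Sz state (which keeps RAM remotely accessible).
        import subprocess

        def suspend_host(host: str) -> None:
            """Suspend an idle host to RAM over SSH (requires privileges)."""
            subprocess.run(
                ["ssh", host, "sudo sh -c 'echo mem > /sys/power/state'"],
                check=True,
            )

        suspend_host("rack1-node07")    # placeholder node name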

    Multi-elastic Datacenters: Auto-scaled Virtual Clusters on Energy-Aware Physical Infrastructures

    Full text link
    [EN] Computer clusters are widely used platforms to execute different computational workloads. Indeed, the advent of virtualization and Cloud computing has paved the way to deploy virtual elastic clusters on top of Cloud infrastructures, which are typically backed by physical computing clusters. In turn, the advances in Green computing have fostered the ability to dynamically power on the nodes of physical clusters as required. Therefore, this paper introduces an open-source framework to deploy elastic virtual clusters running on elastic physical clusters, where the computing capabilities of the virtual clusters are dynamically changed both to satisfy the user applications' computing requirements and to minimise the amount of energy consumed by the underlying physical cluster that supports an on-premises Cloud. For that, we integrate: i) an elasticity manager both at the infrastructure level (power management) and at the virtual infrastructure level (horizontal elasticity); ii) an automatic Virtual Machine (VM) consolidation agent that reduces the number of powered-on physical nodes using live migration; and iii) a vertical elasticity manager to dynamically and transparently change the memory allocated to VMs, thus fostering enhanced consolidation. A case study based on real datasets executed on a production infrastructure is used to validate the proposed solution. The results show that a multi-elastic virtualized datacenter provides users with the ability to deploy customized scalable computing clusters while reducing its energy footprint. The results of this work have been partially supported by ATMOSPHERE (Adaptive, Trustworthy, Manageable, Orchestrated, Secure, Privacy-assuring Hybrid, Ecosystem for Resilient Cloud Computing), funded by the European Commission under the Cooperation Programme, Horizon 2020 grant agreement No 777154. Alfonso Laguna, C.D.; Caballer Fernández, M.; Calatrava Arroyo, A.; Moltó, G.; Blanquer Espert, I. (2018). Multi-elastic Datacenters: Auto-scaled Virtual Clusters on Energy-Aware Physical Infrastructures. Journal of Grid Computing 17(1):191-204. https://doi.org/10.1007/s10723-018-9449-z
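
    To make the consolidation step concrete, here is a minimal sketch of the policy described in contribution ii): empty the least-loaded node by (simulated) live migration and report it as a candidate for powering off. The data model and function names are illustrative assumptions, not the framework's actual API.

        # Toy consolidation pass: drain the least-loaded node if its VMs
        # fit elsewhere, so the node can be powered off.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            capacity_mb: int
            vms: dict = field(default_factory=dict)   # vm name -> MB

            def used(self) -> int:
                return sum(self.vms.values())

            def free(self) -> int:
                return self.capacity_mb - self.used()

        def consolidate(nodes: list[Node]) -> Node | None:
            donor = min(nodes, key=lambda n: n.used())
            others = [n for n in nodes if n is not donor]
            for vm, mem in list(donor.vms.items()):
                target = max(others, key=lambda n: n.free())
                if target.free() < mem:
                    return None                     # no room: keep node on
                target.vms[vm] = donor.vms.pop(vm)  # "live-migrate" the VM
            return donor                            # drained, can power off

        nodes = [Node("wn1", 16384, {"vm1": 4096}),
                 Node("wn2", 16384, {"vm2": 2048})]
        idle = consolidate(nodes)
        print(f"power off: {idle.name}" if idle else "nothing to consolidate")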

    Automatic memory-based vertical elasticity and oversubscription on cloud platforms

    Full text link
    Hypervisors and Operating Systems support vertical elasticity techniques such as memory ballooning to dynamically adjust the memory of Virtual Machines (VMs). However, current Cloud Management Platforms (CMPs), such as OpenNebula or OpenStack, do not support dynamic vertical elasticity. This paper describes a system that integrates with the CMP to provide automatic vertical elasticity to adapt the memory size of the VMs to their current memory consumption, featuring live migration to prevent overload scenarios, without downtime for the VMs. This enables an enhanced VM-per-host consolidation ratio while maintaining the Quality of Service for VMs, since their memory is dynamically increased as necessary. The feasibility of the development is assessed via two case studies based on OpenNebula featuring (i) horizontal and vertical elastic virtual clusters on a production Grid infrastructure and (ii) elastic multi-tenant VMs that run Docker containers coupled with live migration techniques. The results show that memory oversubscription can be integrated in CMPs to deliver automatic memory management without severely impacting the performance of the VMs. This results in a memory management framework for on-premises Clouds that features live migration to safely enable transient oversubscription of physical resources in a CMP. © 2015 Elsevier B.V. All rights reserved. The authors would like to thank the Spanish "Ministerio de Economia y Competitividad" for the project CLUVIEM (TIN2013-44390-R) and the European Commission for the project INDIGO-DataCloud, grant number 653549. Moltó, G.; Caballer Fernández, M.; Alfonso Laguna, C.D. (2016). Automatic memory-based vertical elasticity and oversubscription on cloud platforms. Future Generation Computer Systems 56:1-10. https://doi.org/10.1016/j.future.2015.10.002
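
    A minimal sketch of the ballooning mechanism the paper builds on, assuming a libvirt/KVM guest with a working balloon driver (the CMP integration itself is not shown): resize the balloon so the VM keeps a fixed headroom above its measured usage, never exceeding its configured maximum.

        # Fit a VM's memory to its usage plus headroom via the balloon.
        import libvirt

        HEADROOM_KIB = 512 * 1024                 # keep ~512 MiB spare

        def fit_memory(dom) -> None:
            stats = dom.memoryStats()             # KiB; keys depend on driver
            if "available" not in stats or "unused" not in stats:
                return                            # balloon stats unavailable
            used = stats["available"] - stats["unused"]
            target = min(used + HEADROOM_KIB, dom.maxMemory())
            dom.setMemory(int(target))            # adjust balloon target

        conn = libvirt.open("qemu:///system")
        fit_memory(conn.lookupByName("vm1"))      # placeholder guest name
        conn.close()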

    Cooperative Resource Management in a IaaS

    Get PDF
    Virtualized IaaS platforms generally rely on a server consolidation system to pack virtual machines (VMs) onto as few servers as possible, for energy saving. However, two situations are not taken into account and could enhance consolidation. First, since the managed VMs can be of various sizes (small, medium, large, etc.), VM packing can be obstructed when sizes don't fit the available spaces on servers. Therefore, we would need to "split" such VMs. Second, two VMs which host replicas of the same application server (for scalability) could be "fused" when they are located on the same physical server, in order to reduce virtualization overhead and the VMs' memory footprint. Split and fusion operations lead to the management of elastic VMs and require cooperation between the application level and the provider level, as they impact management at both levels. In this paper, we propose an IaaS resource management system which implements elastic VMs based on split/fusion operations and cooperative management. We show its benefit with a set of experiments.
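
    The following toy sketch illustrates the split idea only (it is not the paper's system, and the sizes are abstract units): when a VM does not fit any hole on the servers, split it into two replicas that do.

        # Place a VM into per-server "holes", splitting it once if needed.
        def place(vm_size: int, holes: list[int]) -> list[int]:
            for i, h in enumerate(holes):
                if h >= vm_size:                  # fits unsplit
                    holes[i] -= vm_size
                    return [vm_size]
            placed = []
            half = vm_size // 2
            for piece in (half, vm_size - half):  # split into two replicas
                for i, h in enumerate(holes):
                    if h >= piece:
                        holes[i] -= piece
                        placed.append(piece)
                        break
            return placed

        print(place(8, [6, 5]))   # -> [4, 4]: split across two holes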

    Efficient and elastic management of computing infrastructures

    Full text link
    Thesis by compendium. [EN] Modern data centers integrate a large number of computing and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and the mean usage of each server is between 10% and 50%. A great deal of energy is spent powering hardware that remains idle most of the time. Therefore, it would be possible to save energy simply by powering off those parts of the data center that are not actually used, and powering them on again as they are needed. Most data centers have computing clusters that are used for intensive computing, recently evolving towards an on-premises Cloud service model. Beyond the use of low-power components, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach in this case is the use of energy-saving criteria when scheduling jobs or virtual machines onto the working nodes, with the aim of powering off idle servers automatically. But it is necessary to schedule the power management of the servers in order to minimize the impact on the end users and their applications. The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and also by proactively stimulating the load distribution, by means of memory overcommitment and live migration of virtual machines, to obtain idle resources that can be powered off. Moreover, this automation is of interest for virtual clusters, as they suffer from the same problem: while in physical clusters idle working nodes waste energy, in virtual clusters built from virtual machines, idle working nodes waste money in commercial Clouds or computational resources in an on-premises Cloud. Alfonso Laguna, C.D. (2015). Efficient and elastic management of computing infrastructures [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187
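
    One standard building block for the automated power management described above is Wake-on-LAN for switching nodes back on; the sketch below is a generic illustration under that assumption (the thesis' own tooling is not shown), with a placeholder MAC address.

        # Send a Wake-on-LAN "magic packet": 6 x 0xFF then the MAC 16 times.
        import socket

        def wake(mac: str, broadcast: str = "255.255.255.255") -> None:
            payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(payload, (broadcast, 9))  # UDP discard port

        wake("00:11:22:33:44:55")                  # placeholder node MAC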

    Improving energy efficiency of virtualized datacenters

    Get PDF
    Nowadays, many organizations choose to increasingly adopt the cloud computing approach. More specifically, as customers, these organizations are outsourcing the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020, the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured in such a way that they achieve high energy efficiency at high utilizations. Thus, to achieve a low cost per computation, all datacenter servers should push their utilization as high as possible. In order to fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) on a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits/fuses VMs such that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem. Over time, cloud applications demand more and more memory, but the physical servers provide more and more CPU. In today's datacenters, the two resources are strongly coupled since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server. Thereby, the two resources can vary independently, depending on their demand. My third and fourth contributions present practical systems that exploit the second contribution. The underutilization observed on physical servers is also true for virtual machines. It has been shown that VMs consume only a small fraction of the allocated resources because the cloud customers are not able to correctly estimate the resource amount necessary for their applications. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM, with low overhead and high accuracy. Thereby, we can now consolidate the VMs based on their working set size (not the booked memory). However, the drawback of this approach is the risk of memory starvation: if one or multiple VMs have a sharp increase in memory demand, the physical server may run out of memory. This event is undesirable because the cloud platform is then unable to provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by a different rack server. Thereby, in the case of a peak memory demand, my system allows the VM to allocate memory on a remote physical server.
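
    For the working-set-size estimate in the third contribution, one classic host-side technique (shown here as a hedged stand-in, not the thesis' actual system) is to clear the referenced bits of the VM's QEMU process and, after an interval, sum the re-referenced memory from /proc/<pid>/smaps; this needs root, and the pid is a placeholder.

        # Estimate a process's working set over an interval (Linux, root).
        import time

        def working_set_kib(pid: int, interval: float = 30.0) -> int:
            with open(f"/proc/{pid}/clear_refs", "w") as f:
                f.write("1")                    # reset page-referenced bits
            time.sleep(interval)                # let the guest touch pages
            wss = 0
            with open(f"/proc/{pid}/smaps") as f:
                for line in f:
                    if line.startswith("Referenced:"):
                        wss += int(line.split()[1])   # kB per mapping
            return wss

        print(working_set_kib(12345))           # placeholder QEMU pid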

    CloudScope: diagnosing and managing performance interference in multi-tenant clouds

    Get PDF
    © 2015 IEEE. Virtual machine consolidation is attractive in cloud computing platforms for several reasons, including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training, which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference for multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov Chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g. the CPU share a VM can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU, disk, and network intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.
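
    To illustrate the discrete-time Markov chain at the core of the predictor, here is a toy sketch with made-up states and probabilities (not CloudScope's fitted model): propagate the current interference state through the transition matrix to forecast a few intervals ahead.

        # Predict the distribution over interference levels k steps ahead.
        import numpy as np

        # States: 0 = low, 1 = medium, 2 = high interference.
        P = np.array([[0.80, 0.15, 0.05],    # rows: from-state -> to-state
                      [0.20, 0.60, 0.20],
                      [0.10, 0.30, 0.60]])

        state = np.array([1.0, 0.0, 0.0])    # currently low interference
        for _ in range(3):                   # three intervals ahead
            state = state @ P

        print("P(high in 3 steps) =", round(float(state[2]), 3))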

    An innovative approach to performance metrics calculus in cloud computing environments: a guest-to-host oriented perspective

    Get PDF
    In virtualized systems, the task of profiling and resource monitoring is not straightforward. Many datacenters perform CPU overcommitment using hypervisors, running multiple virtual machines on a single computer where the total number of virtual CPUs exceeds the total number of physical CPUs available. From a customer's point of view, it is indeed interesting to know whether the purchased service levels are effectively respected by the cloud provider. The innovative approach to performance profiling described in this work is based on the use of virtual performance counters, only recently made available by some hypervisors to their virtual machines, to implement guest-wide profiling. Although the virtual machine cannot access the Virtual Machine Monitor, with this method it is able to gather enough information to deduce the state of resource overcommitment of the virtualization host where it is executed. Tests have been carried out inside the compute nodes of the FIWARE Genoa Node, an instance of a widely distributed federated community cloud, based on OpenStack and KVM. AgiLab-DITEN, the laboratory I belonged to and where I conducted my studies, together with TnT-Lab–DITEN and CNIT-GE-Unit, designed, installed and configured the whole Genoa Node, which was hosted in DITEN-UniGE equipment rooms. All the measuring software, operating systems and programs used in this research are publicly available and free, and can be easily installed in a micro virtual machine instance, rapidly deployable also in public clouds.
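
    In the same guest-to-host spirit (though not using the virtual performance counters this work is based on), a VM can already read its CPU "steal" time from /proc/stat; a persistently high steal share is a hint of CPU overcommitment on the host. A minimal sketch, to be run inside the guest:

        # Measure the share of CPU time "stolen" by the hypervisor.
        import time

        def cpu_times():
            with open("/proc/stat") as f:
                fields = f.readline().split()[1:]   # aggregate "cpu" line
            values = list(map(int, fields))
            return sum(values), values[7]           # (total, steal) jiffies

        t0, s0 = cpu_times()
        time.sleep(5)
        t1, s1 = cpu_times()
        print(f"CPU steal over 5 s: {100.0 * (s1 - s0) / max(t1 - t0, 1):.1f}%")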