230 research outputs found

    Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications

    Elasticity is one of the key features of cloud computing. It allows applications to scale computing and storage resources dynamically, avoiding both over- and under-provisioning. In high performance computing (HPC), elasticity initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely-coupled set of virtual machine (VM) instances. For tightly-coupled Message Passing Interface (MPI) applications, however, exploiting cloud elasticity typically requires rewriting the source code, prior knowledge of the application, and/or stop-reconfigure-and-go approaches. Moreover, profiting from elasticity is difficult in the HPC scope, since MPI 2.0 programmers must manage communicators themselves, and the sudden consolidation of a VM, together with its process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning in cloud infrastructures without major modifications. AutoElastic introduces a concept denoted here as asynchronous elasticity: a framework that allows applications to increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed savings of about 3 minutes at each scaling-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
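    A minimal sketch of the asynchronous-elasticity idea described above (the names cpu_load, launch_vm, the thresholds, and the loop structure are illustrative assumptions, not AutoElastic's actual API): a monitor thread provisions VMs in the background while the iterative workers keep computing, and new resources are only attached at a safe synchronization point between iterations.

        # Illustrative sketch of "asynchronous elasticity": slow scaling actions
        # run in a background thread so the iterative loop is never blocked.
        # All names (cpu_load, launch_vm, thresholds) are hypothetical.
        import threading
        import time

        UPPER = 0.85                        # assumed scale-out load threshold
        pending_nodes, lock = [], threading.Lock()

        def cpu_load():                     # placeholder for real monitoring
            return 0.5

        def launch_vm():                    # placeholder for a cloud API call
            return "vm-new"

        def monitor():
            while True:
                if cpu_load() > UPPER:
                    vm = launch_vm()        # slow operation, off the hot path
                    with lock:
                        pending_nodes.append(vm)
                time.sleep(30)

        threading.Thread(target=monitor, daemon=True).start()

        for iteration in range(1000):       # the iterative HPC loop keeps running
            # ... compute one iteration ...
            with lock:                      # safe point: attach new resources
                while pending_nodes:
                    node = pending_nodes.pop()
                    # reconfigure communicators to include `node` here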

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide into the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of data center hardware (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced by means of a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as a driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements this strategy, and we propose design guidelines to accomplish each of its steps, referring to related achievements and enumerating the main challenges that remain to be solved.
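    One way to make "energy as a driver of management procedures" concrete is an energy-aware placement rule (a sketch under assumed inputs and a linear power model, not the paper's architecture): rank candidate servers by the marginal power a new load would add, which naturally favors consolidation onto already-active machines.

        # Hypothetical energy-driven placement: choose the server whose power
        # draw increases least when the new load is added. Power is assumed to
        # scale linearly between idle and peak draw (a first-order model).
        def marginal_power(server, load):
            # server: dict with keys util (0..1), idle_w, peak_w, capacity
            new_util = min(1.0, server["util"] + load / server["capacity"])
            delta = (new_util - server["util"]) * (server["peak_w"] - server["idle_w"])
            # placing on a powered-off server also pays its idle power
            return delta + (server["idle_w"] if server["util"] == 0 else 0.0)

        def place(servers, load):
            return min(servers, key=lambda s: marginal_power(s, load))

        servers = [
            {"util": 0.0, "idle_w": 100, "peak_w": 250, "capacity": 16},
            {"util": 0.5, "idle_w": 100, "peak_w": 250, "capacity": 16},
        ]
        print(place(servers, load=4))  # picks the already-active server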

    Efficient and elastic management of computing infrastructures

    Modern data centers integrate large numbers of computers and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and that the mean usage of each server is between 10% and 50%. A great deal of energy is thus spent powering hardware that remains idle most of the time. It would therefore be possible to save energy simply by powering off the parts of the data center that are not actually in use, and powering them on again as they are needed. Most data centers have computing clusters that are used for intensive computing, recently evolving towards an on-premises Cloud service model. Beyond the use of low-consumption components, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach in this case is to apply energy-saving criteria when scheduling jobs or virtual machines onto the working nodes, with the aim of powering off idle servers automatically. However, the power management of the servers must be scheduled so as to minimize the impact on the end users and their applications. The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and by proactively shaping the load distribution, by means of memory overcommitment and live migration of virtual machines, so as to obtain idle resources that can be powered off. This automation is also of interest for virtual clusters, as they suffer from the same problem: while idle working nodes in physical clusters waste energy, idle working nodes in virtual clusters built from virtual machines waste money in a commercial Cloud or computational resources in an on-premises Cloud.
    Alfonso Laguna, CD. (2015). Efficient and elastic management of computing infrastructures [Unpublished doctoral thesis, by compendium]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187
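    A toy sketch of the automated power-management idea (the slot-based demand model, thresholds, and action names are assumptions for illustration): keep just enough nodes powered on to serve the queued load plus a small spare margin, and power the rest off only when they are empty.

        # Hypothetical cluster power manager: keep enough nodes on for current
        # demand plus one spare, and power off the remaining idle nodes.
        import math

        def reconcile(nodes, demand_slots, slots_per_node, spare=1):
            """nodes: list of dicts {"name": str, "on": bool, "busy_slots": int}"""
            needed = math.ceil(demand_slots / slots_per_node) + spare
            on = [n for n in nodes if n["on"]]
            actions = []
            if len(on) < needed:                       # scale up: boot idle nodes
                for n in [n for n in nodes if not n["on"]][: needed - len(on)]:
                    actions.append(("power_on", n["name"]))
            else:                                      # scale down: empty nodes only
                for n in [n for n in on if n["busy_slots"] == 0][: len(on) - needed]:
                    actions.append(("power_off", n["name"]))
            return actions

        nodes = [{"name": f"wn{i}", "on": i < 3, "busy_slots": 0} for i in range(5)]
        print(reconcile(nodes, demand_slots=4, slots_per_node=4))
        # -> powers off one empty node, keeping ceil(4/4) + 1 = 2 nodes on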

    New cross-layer techniques for multi-criteria scheduling in large-scale systems

    The global information technology (IT) ecosystem is in transition to a new generation of applications that require increasingly intensive data acquisition, processing, and storage. As a result of this shift towards data-intensive computing, there is a growing overlap between high performance computing (HPC) and Big Data techniques, since many HPC applications produce large volumes of data and Big Data needs HPC capabilities. The hypothesis of this PhD thesis is that the interoperability and convergence of HPC and Big Data systems are crucial for the future, the unification of both paradigms being essential to address a broad spectrum of research domains. The main objective of this thesis is therefore to propose and develop a monitoring system that enables HPC and Big Data convergence by providing information about the behavior of applications in a system that executes both kinds, in order to improve scalability, exploit data locality, and allow adaptability on large-scale computers. To achieve this goal, this work focuses on the design of resource monitoring and discovery to exploit parallelism at all levels. The collected data are disseminated to facilitate global improvements across the whole system and thus avoid mismatches between layers. The result is a two-level monitoring framework (at both node and application level) that is scalable, imposes a low computational load, and can communicate with different modules through an API provided for this purpose. Combined with the techniques applied to deal with fault tolerance, this makes the system robust and highly available. In addition, the framework includes a task scheduler capable of managing the launch of applications and their migration between nodes, as well as dynamically increasing or decreasing their number of processes, in cooperation with other modules integrated into LIMITLESS whose objective is to optimize the execution of a stack of applications based on multi-criteria policies. This scheduling mode is called coarse-grain scheduling based on monitoring. To further reduce the monitoring overhead, optimizations have been applied at different levels to reduce communication between components while avoiding loss of information, using data filtering techniques, Machine Learning (ML) algorithms, and Neural Networks (NN). Finally, to improve the scheduling process and to design new multi-criteria scheduling policies, the monitoring information has been combined with ML classification algorithms to identify applications and their execution phases through offline profiling. Thanks to this feature, LIMITLESS can detect which phase an application is executing and try to share the computational resources with other compatible applications (those that suffer no mutual performance degradation when running at the same time). This feature is called fine-grain scheduling, and it can reduce the makespan of the use cases while making efficient use of computational resources that other applications do not use.
    This PhD dissertation has been partially supported by the Spanish Ministry of Science and Innovation under an FPI fellowship associated with the National Project TIN2016-79637-P (from July 1, 2018 to October 10, 2021).
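    A sketch of the phase-aware co-scheduling idea (the phase labels, compatibility table, and function names are hypothetical stand-ins, not the LIMITLESS API): classify each application's current phase from monitored metrics, then co-locate only applications whose phases are marked compatible.

        # Hypothetical fine-grain co-scheduling: classify each application's
        # current phase from monitored metrics; co-locate compatible pairs only.
        def classify_phase(metrics):
            # crude stand-in for the ML classifier: label by dominant resource
            if metrics["cpu"] > 0.7:
                return "compute-bound"
            if metrics["mem_bw"] > 0.7:
                return "memory-bound"
            return "io-bound"

        # assumed compatibility: pairs that do not degrade each other
        COMPATIBLE = {("compute-bound", "io-bound"),
                      ("memory-bound", "io-bound")}

        def can_colocate(m1, m2):
            p1, p2 = classify_phase(m1), classify_phase(m2)
            return (p1, p2) in COMPATIBLE or (p2, p1) in COMPATIBLE

        app_a = {"cpu": 0.9, "mem_bw": 0.2}   # compute-bound phase
        app_b = {"cpu": 0.1, "mem_bw": 0.1}   # io-bound phase
        print(can_colocate(app_a, app_b))     # True: share the node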

    A survey on elasticity management in PaaS systems

    Elasticity is a goal of cloud computing. An elastic system should manage its resources autonomically, adapting to dynamic workloads by allocating additional resources when the workload increases and deallocating them when it decreases. PaaS providers should manage the resources of customer applications with the aim of converting those applications into elastic services. This survey identifies the requirements that such management imposes on a PaaS provider: autonomy, scalability, adaptivity, SLA awareness, composability, and upgradeability. This document delves into the variety of mechanisms that have been proposed to deal with all these requirements. Although there are multiple approaches to address these concerns, a provider's main goal is the maximisation of profits, which compels providers to balance two opposing goals: maximising quality of service and minimising costs. Because of this, several aspects still deserve additional research in order to find optimal adaptability strategies. These open issues are also discussed.
    This work has been partially supported by EU FEDER and Spanish MINECO under research Grant TIN2012-37719-C03-01.
    Muñoz-Escoí, FD.; Bernabeu Aubán, JM. (2017). A survey on elasticity management in PaaS systems. Computing. 99(7):617-656. https://doi.org/10.1007/s00607-016-0507-8
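    To make the adaptivity requirement concrete, here is a minimal threshold-based auto-scaling rule of the reactive kind such surveys classify (the metric names, thresholds, and cooldown are assumptions): scale out when observed utilisation breaches an upper bound, scale in below a lower bound, with a cooldown to avoid oscillation.

        # Minimal reactive auto-scaler: threshold rules plus a cooldown period
        # to damp oscillations. All parameters are illustrative.
        import time

        class AutoScaler:
            def __init__(self, lo=0.3, hi=0.8, cooldown_s=120):
                self.lo, self.hi, self.cooldown_s = lo, hi, cooldown_s
                self.last_action = 0.0

            def decide(self, utilisation, instances, now=None):
                now = time.time() if now is None else now
                if now - self.last_action < self.cooldown_s:
                    return instances                      # still cooling down
                if utilisation > self.hi:
                    self.last_action = now
                    return instances + 1                  # scale out
                if utilisation < self.lo and instances > 1:
                    self.last_action = now
                    return instances - 1                  # scale in
                return instances

        scaler = AutoScaler()
        print(scaler.decide(0.9, instances=2, now=1000))  # -> 3
        print(scaler.decide(0.9, instances=3, now=1010))  # -> 3 (cooldown)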

    Scheduling Problems

    Scheduling is defined as the process of assigning operations to resources over time in order to optimize a criterion. Scheduling problems comprise both a set of resources and a set of consumers, so managing them involves managing the use of resources by several consumers. This book presents new applications and trends related to task and data scheduling. In particular, its chapters focus on data science, big data, high-performance computing, and cloud computing environments. In addition, the book presents novel algorithms and literature reviews that will guide current and new researchers who work on load balancing, scheduling, and allocation problems.
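    As a concrete instance of "assigning operations to resources to optimize a criterion", here is the classic greedy list-scheduling heuristic for minimizing makespan on identical machines (a textbook example, not taken from the book itself):

        # Greedy list scheduling: assign each task to the currently least-loaded
        # machine; a classic 2-approximation for makespan on identical machines.
        import heapq

        def list_schedule(task_times, n_machines):
            heap = [(0.0, m) for m in range(n_machines)]  # (load, machine id)
            heapq.heapify(heap)
            assignment = []
            for t in task_times:
                load, m = heapq.heappop(heap)
                assignment.append((t, m))
                heapq.heappush(heap, (load + t, m))
            makespan = max(load for load, _ in heap)
            return assignment, makespan

        tasks = [3, 5, 2, 7, 4, 1]
        print(list_schedule(tasks, n_machines=2))
        # -> makespan 12 on this instance (the optimum, 11, shows the
        #    heuristic is approximate, not exact)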

    Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds

    Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, the adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it designs resource utilization models based on step-wise multiple linear regression and artificial neural networks that support prediction of better-performing component compositions; the total number of possible compositions is governed by Bell's number, which yields a combinatorially explosive search space. Second, it includes algorithms to improve VM placement, mitigating resource heterogeneity and contention with a load-aware VM placement scheduler and autonomous detection of under-performing VMs to spur their replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support the determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs across multiple workloads.
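    To illustrate why component composition explodes combinatorially, the number of ways to partition n components into groups is the Bell number B(n), which can be computed with the Bell triangle (standard mathematics, not code from the dissertation):

        # Bell numbers via the Bell triangle: B(n) counts the ways to partition
        # a set of n elements, i.e., possible component compositions across VMs.
        def bell(n):
            row = [1]                         # B(0)
            for _ in range(n):
                nxt = [row[-1]]               # next row starts with previous row's end
                for value in row:
                    nxt.append(nxt[-1] + value)
                row = nxt
            return row[0]

        print([bell(n) for n in range(1, 9)])
        # [1, 2, 5, 15, 52, 203, 877, 4140]
        # already 4140 possible compositions for just 8 components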

    Dynamic energy-aware scheduling for parallel task-based application in cloud computing

    Green computing is a recent trend in computer science that tries to reduce the energy consumption and carbon footprint produced by computers on distributed platforms such as clusters, grids, and clouds. Traditional scheduling solutions attempt to minimize processing times without taking the energy cost into account. One method for reducing energy consumption is to provide scheduling policies that allocate tasks to specific resources, which affects both processing times and energy consumption. In this paper, we propose a real-time dynamic scheduling system to execute task-based applications efficiently on distributed computing platforms while minimizing energy consumption. Since scheduling tasks on multiprocessors is a well-known NP-hard problem and computing optimal solutions is not feasible, we present a polynomial-time algorithm that combines a set of heuristic rules with a resource allocation technique in order to obtain good solutions on an affordable time scale. The proposed algorithm minimizes a multi-objective function that combines energy consumption and execution time according to an energy-performance importance factor provided by the resource provider or user, while also taking into account sequence-dependent setup times between tasks, setup and down times for virtual machines (VMs), and energy profiles for different architectures. A prototype implementation of the scheduler has been tested with different kinds of randomly generated DAGs as well as with real task-based COMPSs applications. We have tested the system with instances of different sizes and importance factors, and evaluated which combination provides a better solution and greater energy savings. Moreover, we have evaluated the introduced overhead by measuring the time needed to obtain scheduling solutions for different numbers of tasks, kinds of DAG, and resources, concluding that our method is suitable for run-time scheduling.
    This work has been supported by the Spanish Government (contracts TIN2015-65316-P, TIN2012-34557, CSD2007-00050, CAC2007-00052 and SEV-2011-00067), by the Generalitat de Catalunya (contract 2014-SGR-1051), by the European Commission (EuroServer project, contract 610456), and by the Consejo Nacional de Ciencia y Tecnología of Mexico (special program for postdoctoral positions, BSC-CNS-CONACYT contract 290790, grant number 265937).
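    A sketch of the kind of weighted multi-objective score described above (the exact formula and normalisation are assumptions, not the paper's): with importance factor alpha, each candidate assignment is scored as alpha * normalised energy + (1 - alpha) * normalised time, and the lowest-scoring resource wins, so alpha near 1 favors energy and alpha near 0 favors speed.

        # Hypothetical weighted energy/time objective: with importance factor
        # alpha, score = alpha * norm_energy + (1 - alpha) * norm_time.
        def best_resource(task_len, resources, alpha=0.5):
            # resources: list of dicts {"name", "speed" (ops/s), "power" (watts)}
            times = [task_len / r["speed"] for r in resources]
            energies = [t * r["power"] for t, r in zip(times, resources)]
            t_max, e_max = max(times), max(energies)
            scores = [alpha * e / e_max + (1 - alpha) * t / t_max
                      for t, e in zip(times, energies)]
            return resources[scores.index(min(scores))]["name"]

        resources = [{"name": "fast", "speed": 10.0, "power": 200.0},
                     {"name": "slow", "speed": 4.0, "power": 60.0}]
        print(best_resource(100, resources, alpha=0.9))  # energy-first -> "slow"
        print(best_resource(100, resources, alpha=0.1))  # time-first   -> "fast"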