70 research outputs found

    Towards An Efficient Cloud Computing System: Data Management, Resource Allocation and Job Scheduling

    Get PDF
    Cloud computing is an emerging technology in distributed computing, and it has proved to be an effective infrastructure for providing services to users. The cloud is developing day by day and faces many challenges. One challenge is to build a cost-effective data management system that can ensure high data availability while maintaining consistency. Another challenge is efficient resource allocation that ensures high resource utilization and high SLO availability. A third challenge is scheduling for high throughput, where scheduling refers to a set of policies that control the order in which work is performed by a computer system. In this dissertation, we study how to manage data and improve data availability while reducing cost (i.e., consistency maintenance cost and storage cost); how to efficiently manage resources for processing jobs and increase resource utilization with high SLO availability; and how to design an efficient scheduling algorithm that provides high throughput and low overhead while satisfying the demands on job completion times.

    Replication is a common approach to enhancing data availability in cloud storage systems. Previously proposed replication schemes cannot effectively handle both correlated and non-correlated machine failures while increasing data availability with limited resources. The schemes for correlated machine failures create a constant number of replicas for each data object, which neglects diverse data popularities and cannot utilize the resources to maximize the expected data availability. These schemes also neglect the consistency maintenance cost and the storage cost caused by replication. It is critical for cloud providers to maximize data availability, and hence minimize SLA (Service Level Agreement) violations, while minimizing the cost caused by replication, in order to maximize revenue. In this dissertation, we build a nonlinear programming model that maximizes data availability under both types of failures and minimizes the cost caused by replication (a generic formulation of this kind of model is sketched at the end of this abstract). Based on the model's solution for the replication degree of each data object, we propose a low-cost multi-failure resilient replication scheme (MRR). MRR effectively handles both correlated and non-correlated machine failures, considers data popularities to enhance data availability, and also tries to minimize consistency maintenance and storage costs.

    In current clouds, providers still need to reserve resources to allow users to scale on demand, and the capacity they offer comes in the form of pre-defined virtual machine (VM) configurations. This incurs resource wastage and results in low resource utilization when users actually consume much less resource than the VM capacity. Existing works either reallocate the unused resources with no Service Level Objectives (SLOs) for availability (availability here refers to the probability that an allocated resource remains operational and accessible during the validity of the contract [CarvalhoCirne14]), or consider SLOs when reallocating the unused resources for long-running service jobs. The latter approach increases the allocated resources whenever it detects an SLO violation in order to achieve the SLO in the long term, neglecting the frequent fluctuations of jobs' resource requirements in real-time applications, especially for short-term jobs that require fast responses and quick resource allocation decisions.
    Thus, this approach cannot fully utilize the resources to process data, because it cannot quickly adjust the resource allocation strategy to deal with the fluctuations of jobs' resource requirements. Moreover, the previous opportunistic resource allocation approach aims at providing long-term availability SLOs with good QoS for long-running jobs, ensuring that such jobs finish within weeks or months by providing slightly degraded resources with moderate availability guarantees; however, it ignores deadline constraints when defining Quality of Service (QoS) for short-lived jobs that require online responses in real-time applications, and thus it cannot truly guarantee QoS and long-term availability SLOs. To overcome the drawbacks of previous works, we explicitly consider the fluctuations of unused resources caused by bursts of jobs' resource demands, and present a cooperative opportunistic resource provisioning (CORP) scheme to dynamically allocate resources to jobs. CORP leverages the complementarity of jobs' requirements on different resource types and uses job packing to reduce resource wastage and increase resource utilization.

    An increasing number of large-scale data analytics frameworks move towards larger degrees of parallelism, aiming at high throughput. Scheduling, which assigns tasks to workers, and preemption, which suspends low-priority tasks to run high-priority tasks, are two important functions in such frameworks. Many existing works on scheduling and preemption aim to provide high throughput, but they do not substantially consider task dependency, even though considering dependency is crucial to increasing overall throughput. In addition, extensive task evictions during preemption increase context switches, which may decrease throughput. To address these problems, we propose an efficient scheduling system, Dependency-aware Scheduling and Preemption (DSP), to achieve high throughput in scheduling and preemption. First, we build a mathematical model that minimizes the makespan while accounting for task dependency, and derive the target workers for tasks that minimize the makespan; second, we utilize task dependency information to determine tasks' priorities for preemption; finally, we present a probability-based preemption mechanism to reduce the number of preemptions while satisfying the demands on job completion times.

    We conduct trace-driven simulations based on a real cluster trace and real-world experiments on Amazon S3/EC2 to demonstrate the efficiency and effectiveness of our proposed system in comparison with other systems; the experimental results show its superior performance. In the future, we will further consider data update frequency to reduce consistency maintenance cost, and we will consider the effects of node joining and leaving. We will also consider the energy consumption of machines and design an optimal replication scheme that improves data availability while saving power. For resource allocation, we will consider using a greedy approach for deep learning to reduce the computation overhead caused by the deep neural network. We will also consider the heterogeneity of jobs (i.e., short jobs and long jobs), and use a hybrid resource allocation strategy to provide SLO availability customization for different job types while increasing resource utilization.
    For scheduling, we will aim to handle tasks with partial dependency and worker failures, and to make DSP fully distributed to increase its scalability. Finally, we plan to use different workloads and real-world experiments to fully test the performance of our methods and make our preliminary system design more mature.
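    To make the kind of replication model described above concrete, the following is a generic, illustrative availability-versus-cost formulation; the symbols and the objective are assumptions for exposition, not the dissertation's actual model.

```latex
% Illustrative (assumed) replication-degree optimization, not the dissertation's exact model.
% r_i       : replication degree of data object i (decision variable)
% p_i       : popularity (fraction of requests) of object i
% A_i(r_i)  : expected availability of object i under correlated and
%             non-correlated failures, given r_i replicas (nonlinear in r_i)
% c_s, c_m  : per-replica storage and consistency-maintenance costs
% B         : total replication budget
\begin{align}
  \max_{r_1,\dots,r_n}\; & \sum_{i=1}^{n} p_i \, A_i(r_i) \\
  \text{s.t.}\; & \sum_{i=1}^{n} (c_s + c_m)\, r_i \le B,
  \qquad r_i \in \mathbb{Z},\; r_i \ge 1 .
\end{align}
```

    The abstract indicates that MRR derives each object's replication degree from the solution of such a model; that step is not shown here.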

    Variability in Behavior of Application Service Workload in a Utility Cloud

    Get PDF
    Using the elasticity feature of a utility cloud, users can acquire and release resources as required and pay for what they use. Applications with time-varying workloads can request variable resources over time, which makes the cloud a convenient option for such applications. The elasticity of current IaaS clouds mainly provides two options to users: horizontal and vertical scaling. In both forms of scaling, the basic resource allocation unit is a fixed-sized VM; this forces cloud users to characterize their workload in terms of VM sizes, which might lead to under-utilization or over-allocation of resources. This turns out to be an inefficient model for both cloud users and providers. In this paper we discuss and calculate the variability in different kinds of application service workloads. We also discuss different dynamic provisioning approaches proposed by researchers. We conclude with a brief introduction to the issues and limitations of existing solutions and to our approach for resolving them in a way that is suitable and economical for both cloud users and providers.
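    As a rough illustration of the kind of variability calculation the paper refers to, the sketch below computes the coefficient of variation of a hypothetical per-interval CPU demand trace and the average utilization implied by provisioning a fixed-size VM for the peak; the metric choice and the numbers are assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical CPU demand trace for one application service (cores used per interval).
demand = np.array([1.2, 0.8, 3.5, 2.1, 0.6, 4.0, 1.0, 2.8])

# Coefficient of variation: one common way to quantify workload variability.
cv = demand.std() / demand.mean()

# With fixed-size VMs the user must provision for the peak, so the gap between
# peak and average demand becomes idle, but paid-for, capacity.
vm_size = np.ceil(demand.max())              # e.g. a 4-core VM
avg_utilization = demand.mean() / vm_size

print(f"coefficient of variation: {cv:.2f}")
print(f"average utilization of the fixed-size VM: {avg_utilization:.0%}")
```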

    Optimizing Virtual Resource Management in Cloud Datacenters

    Get PDF
    Datacenter clouds (e.g., Microsoft's Azure, Google's App Engine, and Amazon's EC2) are emerging as a popular infrastructure for computing and storage due to their high scalability and elasticity. More and more companies and organizations shift their services (e.g., online social networks, Dropbox file hosting) to clouds to avoid large capital expenditures. Cloud systems employ virtualization technology to provide the resources of physical machines (PMs) in the form of virtual machines (VMs). Users create VMs deployed on the cloud, and each VM consumes resources (e.g., CPU, memory and bandwidth) from its host PM. Cloud providers supply services by signing a Service Level Agreement (SLA) with cloud customers, which serves as both the blueprint and the warranty for the cloud service. Under-provisioning of resources leads to SLA violations, while over-provisioning leads to resource underutilization and hence a revenue decrease for the cloud provider. Thus, a formidable challenge is the effective management of virtual resources to maximize energy efficiency and resource utilization while satisfying the SLA. This proposal is devoted to tackling this challenge by addressing three fundamental and essential issues: i) initial VM allocation, ii) VM migration for load balance, and iii) proactive VM migration for long-term load balance. Accordingly, this proposal consists of three innovative components.

    (1) Initial Complementary VM Consolidation. Previous resource provisioning strategies either allocate physical resources to VMs based on static VM resource demands or dynamically handle the variations in VM resource requirements through live VM migrations. However, the former fail to maximize energy efficiency and resource utilization, while the latter produce high migration overhead. To handle these problems, we propose an initial VM allocation mechanism that consolidates complementary VMs with spatial/temporal awareness. Complementary VMs are VMs whose total demand in each resource dimension (the spatial dimension) nearly reaches their host's capacity over the VM lifetime period (the temporal dimension). Based on our observation that VM resource utilization exhibits patterns, the mechanism predicts the lifetime resource utilization patterns of short-term VMs or the periodic resource utilization patterns of long-term VMs. Based on the predicted patterns, it coordinates the requirements of different resources and consolidates complementary VMs in the same physical machine (PM). This mechanism reduces the number of PMs needed to provide VM service, hence increasing energy efficiency and resource utilization, and also reduces the number of VM migrations and SLA violations.

    (2) Resource Intensity Aware VM Migration for Load Balance (RIAL). The unique features of clouds pose formidable challenges to achieving effective and efficient load balancing. First, VMs in clouds use different resources (e.g., CPU, bandwidth, memory) to serve a variety of services (e.g., high performance computing, web services, file services), resulting in different overutilized resources in different PMs. Also, the overutilized resources in a PM may vary over time due to time-varying heterogeneous service requests. Second, there is intensive network communication between VMs. However, previous load balancing methods statically assign equal or predefined weights to different resources, which leads to degraded performance in terms of the speed and cost of achieving load balance.
    They also do not strive to minimize VM communication between PMs. Our proposed mechanism dynamically assigns different weights to different resources according to their usage intensity in the PM, which significantly reduces the time and cost needed to achieve load balance and avoids future load imbalance (a toy sketch of such intensity-based weighting appears after this abstract). It also tries to keep frequently communicating VMs in the same PM to reduce bandwidth cost, and to migrate VMs to PMs with minimum VM performance degradation.

    (3) Proactive VM Migration for Long-Term Load Balance. Previous reactive load balancing algorithms migrate VMs upon the occurrence of load imbalance, while previous proactive load balancing algorithms predict PM overload and then conduct VM migration. However, neither kind of method can maintain long-term load balance, and both produce high overhead and delay due to the selection of VMs to migrate and of destination PMs. To overcome these problems, we propose a proactive Markov Decision Process (MDP)-based load balancing algorithm. We handle the challenges of applying MDPs to virtual resource management in cloud datacenters, which allows a PM to proactively find an optimal action for transitioning to a lightly loaded state that will be maintained for a longer period of time. We also apply the MDP to determine destination PMs so as to achieve a long-term PM load balance state. Our algorithm reduces the number of SLA violations through long-term load balance maintenance, and also reduces load balancing overhead (e.g., CPU time, energy) and delay by quickly identifying the VMs to migrate and their destination PMs.

    Finally, we conducted extensive experiments to evaluate the three proposed mechanisms. i) We conducted simulation experiments based on two real traces, as well as real-world testbed experiments, to show that the initial complementary VM consolidation mechanism significantly reduces the number of PMs used, SLA violations and VM migrations compared with previous resource provisioning strategies. ii) We conducted trace-driven simulations and real-world testbed experiments to show that RIAL outperforms other load balancing approaches with regard to the number of VM migrations, VM performance degradation and VM communication cost. iii) We conducted trace-driven experiments to show that the MDP-based load balancing algorithm outperforms previous reactive and proactive load balancing algorithms in terms of SLA violations, load balancing efficiency and long-term load balance maintenance.
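    As an illustration of the intensity-based weighting idea in component (2), here is the minimal sketch referenced above; the scoring formula is an assumption for exposition and is not necessarily the weighting RIAL actually uses.

```python
# Sketch of resource-intensity-aware load scoring for one PM (assumed formula,
# not necessarily the weighting used by RIAL).
def weighted_load(utilization: dict) -> float:
    """utilization maps resource name -> fraction of PM capacity currently in use."""
    total = sum(utilization.values())
    # A resource closer to saturation gets a proportionally larger weight, so
    # migration decisions are driven by whichever resource is the current bottleneck.
    weights = {res: util / total for res, util in utilization.items()}
    return sum(weights[res] * utilization[res] for res in utilization)

# Example: a bandwidth-bound PM scores higher than a PM with the same total
# utilization spread evenly, so it is the more urgent migration source.
print(weighted_load({"cpu": 0.35, "memory": 0.50, "bandwidth": 0.90}))
print(weighted_load({"cpu": 0.58, "memory": 0.58, "bandwidth": 0.59}))
```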

    A Tool for Managing the X1.V1 Platform on the Cloud

    Full text link

    A survey on elasticity management in PaaS systems

    Full text link
    Elasticity is a goal of cloud computing. An elastic system should manage its resources in an autonomic way, adapting to dynamic workloads by allocating additional resources when the workload increases and deallocating resources when the workload decreases. PaaS providers should manage the resources of customer applications with the aim of converting those applications into elastic services. This survey identifies the requirements that such management imposes on a PaaS provider: autonomy, scalability, adaptivity, SLA awareness, composability and upgradeability. This document delves into the variety of mechanisms that have been proposed to deal with all those requirements. Although there are multiple approaches to addressing those concerns, providers' main goal is the maximisation of profits. This compels providers to balance two opposing goals: maximising quality of service and minimising costs. Because of this, there are still several aspects that deserve additional research in order to find optimal adaptability strategies. Those open issues are also discussed. This work has been partially supported by EU FEDER and Spanish MINECO under research grant TIN2012-37719-C03-01. Published as: Muñoz-Escoí FD, Bernabéu-Aubán JM (2017) A survey on elasticity management in PaaS systems. Computing 99(7):617-656. https://doi.org/10.1007/s00607-016-0507-8

    The Making of Cloud Applications: An Empirical Study on Software Development for the Cloud

    Full text link
    Cloud computing is gaining more and more traction as a deployment and provisioning model for software. While a large body of research already covers how to optimally operate a cloud system, we still lack insights into how professional software engineers actually use clouds, and how the cloud impacts development practices. This paper reports on the first systematic study of how software developers build applications in the cloud. We conducted a mixed-method study, consisting of qualitative interviews with 25 professional developers and a quantitative survey with 294 responses. Our results show that adopting the cloud has a profound impact throughout the software development process, as well as on how developers utilize tools and data in their daily work. Among other things, we found that (1) developers need better means to anticipate runtime problems and to rigorously define metrics for improved fault localization, and (2) although the cloud offers an abundance of operational data, developers still often rely on their experience and intuition rather than on metrics. From our findings, we extracted a set of guidelines for cloud development and identified challenges for researchers and tool vendors.

    Towards Agile Scalability Engineering

    Get PDF
    Scalability engineering is currently not well integrated into agile development techniques. This paper extends agile development techniques so that scalability can be handled in an incremental and iterative development process. By scalability we mean the ability of a system to handle an increasing workload. We propose the ScrumScale Method, which includes scalability engineering in Scrum. This extension should also be applicable to other agile techniques. For scalability testing, we indicate how quality thresholds should be scaled up or down according to the degree of completeness of the product, the test hardware, the test software, the test data and the test workload. Using action research, we have conducted three pilots in three Norwegian software organizations. These three pilots have different architectures and operate in different markets, yet they have scalability challenges in common.
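    Purely as a speculative illustration of scaling a quality threshold for a scalability test, the sketch below adjusts a production response-time target by simple factors for completeness, hardware and workload; the multiplicative model and the chosen factors are assumptions, not ScrumScale's actual rules.

```python
# Speculative sketch: adjust a production response-time threshold for a scalability
# test run on a scaled-down environment (assumed model, not ScrumScale's rules).
def scaled_threshold(production_threshold_ms: float,
                     completeness: float,     # fraction of the product implemented (0..1]
                     hardware_factor: float,  # test hardware capacity / production capacity
                     workload_factor: float   # test workload / expected production workload
                     ) -> float:
    # Less hardware and a less complete product make the target looser; a smaller
    # test workload makes it tighter, so the factors pull in opposite directions.
    return production_threshold_ms * workload_factor / (hardware_factor * completeness)

# Example: 200 ms production target, half-complete product, 25% of the hardware,
# run at 10% of the expected production workload.
print(scaled_threshold(200.0, completeness=0.5, hardware_factor=0.25, workload_factor=0.1))
```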

    Virtualized application performance prediction using system metrics

    Get PDF
    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 79-80).

    Virtualized datacenter administrators would like to consolidate virtual machines (VMs) onto as few physical hosts as possible in order to decrease costs, but they must leave enough physical resources for each VM to meet application service level objectives (SLOs). The threshold between good and bad performance in terms of resource settings, however, is hard to determine and rarely static due to changing workloads and resource usage. Thus, in order to avoid SLO violations, system administrators must err on the side of caution by running fewer VMs per host than necessary or by setting reservations, which prevents resources from being shared. To ameliorate this situation, we are working to design and implement a system that automatically monitors VM-level metrics to predict impending application SLO violations and takes appropriate action to prevent the violation from occurring. So far we have implemented the performance prediction, which is detailed in this document, while the preventative actions are left as future work. We created a three-stage pipeline in order to achieve scalable performance prediction. The three stages are prediction, which predicts future VM ESX performance counter values based on current time-series data; aggregation, which aggregates the predicted VM metrics into a single set of global metrics; and finally classification, which for each VM classifies its performance as good or bad based on the predicted VM counters and the predicted global state. Prediction of each counter is performed by a least-squares linear fit, aggregation is performed simply by summing each counter across all VMs, and classification is performed using a support vector machine (SVM) for each application. In addition, we created an experimental system running a MongoDB instance in order to test and evaluate our pipeline implementation. Our results on this experimental system are promising, but further research will be necessary before applying these techniques to a production system. By Skye A. Wanderman-Milne (M.Eng.).
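    A minimal sketch of the three-stage pipeline described above (least-squares prediction, sum aggregation, per-application SVM classification); the counter layout, window sizes and the synthetic training data are assumptions, not the thesis's actual setup.

```python
import numpy as np
from sklearn.svm import SVC

def predict_counters(history: np.ndarray, horizon: int = 1) -> np.ndarray:
    """Stage 1: per-VM, per-counter least-squares linear fit, extrapolated `horizon` steps ahead.
    history has shape (n_vms, n_counters, n_samples)."""
    n_vms, n_counters, n_samples = history.shape
    t = np.arange(n_samples)
    predicted = np.empty((n_vms, n_counters))
    for v in range(n_vms):
        for c in range(n_counters):
            slope, intercept = np.polyfit(t, history[v, c], deg=1)
            predicted[v, c] = slope * (n_samples - 1 + horizon) + intercept
    return predicted

def aggregate(predicted: np.ndarray) -> np.ndarray:
    """Stage 2: global state = each counter summed across all VMs."""
    return predicted.sum(axis=0)

# Stage 3: one SVM per application classifies each VM's performance as good (1) or
# bad (0) from its predicted counters plus the predicted global state.
rng = np.random.default_rng(0)
history = rng.random((4, 3, 20))                 # 4 VMs, 3 counters, 20 samples (synthetic)
pred = predict_counters(history)
features = np.hstack([pred, np.tile(aggregate(pred), (len(pred), 1))])
labels = np.array([1, 0, 1, 0])                  # placeholder good/bad training labels
clf = SVC().fit(features, labels)
print(clf.predict(features))
```

    In the thesis's setting, the features would come from VMware ESX performance counters rather than synthetic data, and a separate SVM would be trained per application.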

    RHAS: robust hybrid auto-scaling for web applications in cloud computing

    Get PDF
    • …