1,290 research outputs found

    Application-centric Resource Provisioning for Amazon EC2 Spot Instances

    Full text link
    In late 2009, Amazon introduced spot instances to offer its unused resources at lower cost with reduced reliability. Spot instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current spot price. The spot price changes periodically based on supply and demand, and customers whose bids exceed it gain access to the available spot instances. Customers can thus obtain their services at lower cost with spot instances than with on-demand or reserved instances. However, reliability is compromised, since the instances (IaaS) providing the service (SaaS) may become unavailable at any time without notice to the customer. Checkpointing and migration schemes are of great use in coping with such situations. In this paper we study various checkpointing schemes that can be used with spot instances. We also devise algorithms for a checkpointing scheme on top of an application-centric resource provisioning framework that increases reliability while reducing cost significantly.
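A quick way to see why checkpointing matters here is to compare the work lost at an out-of-bid revocation with and without periodic checkpoints. The sketch below is illustrative only; the revocation time and the hourly checkpoint policy are assumptions, not taken from the paper:

```python
# Minimal sketch: computation lost when a spot instance is revoked,
# with and without periodic checkpointing. All numbers are illustrative
# assumptions, not results or parameters from the paper.
from typing import Optional

def lost_work(revocation_time_h: float,
              checkpoint_interval_h: Optional[float]) -> float:
    """Hours of computation lost at revocation."""
    if checkpoint_interval_h is None:      # no checkpoints: all progress lost
        return revocation_time_h
    # with checkpoints, only progress since the last checkpoint is lost
    return revocation_time_h % checkpoint_interval_h

revocation = 7.4   # hypothetical: bid drops below the spot price after 7.4 h
print(lost_work(revocation, None))               # 7.4 h lost
print(round(lost_work(revocation, 1.0), 2))      # 0.4 h lost with hourly checkpoints
```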

    Optimization of Cloud Task Processing with Checkpoint-Restart Mechanism

    Get PDF
    In this paper, we aim at optimizing fault-tolerance techniques based on a checkpointing/restart mechanism in the context of cloud computing. Our contribution is three-fold. (1) We derive a fresh formula to compute the optimal number of checkpoints for cloud jobs with varied distributions of failure events. Our analysis is not only generic, with no assumption on the failure probability distribution, but also attractively simple to apply in practice. (2) We design an adaptive algorithm to optimize the impact of checkpointing with respect to various costs such as checkpointing/restart overhead. (3) We evaluate our optimized solution in a real cluster environment with hundreds of virtual machines and the Berkeley Lab Checkpoint/Restart tool. Task failure events are emulated via a production trace from a large-scale Google data center. Experiments confirm that our solution is well suited to Google systems. Our optimized formula outperforms Young's formula by 3-10 percent, reducing wall-clock lengths by 50-100 seconds per job on average.
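For context, Young's formula, the baseline the abstract compares against, approximates the optimal checkpoint interval from the checkpoint overhead and the mean time between failures:

```latex
% Young's first-order approximation of the optimal checkpoint interval,
% where C is the time to write one checkpoint and M is the mean time
% between failures (MTBF):
\tau_{\mathrm{opt}} \approx \sqrt{2\,C\,M}
```

Intuitively, a longer interval wastes more recomputation per failure, while a shorter one pays the checkpoint overhead more often; the square-root form balances the two.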

    Predictive Reliability and Fault Management in Exascale Systems: State of the Art and Perspectives

    Get PDF
    Performance and power constraints come together with Complementary Metal Oxide Semiconductor technology scaling in future Exascale systems. Technology scaling makes each individual transistor more prone to faults and, due to the exponential increase in the number of devices per chip, leads to higher system fault rates. Consequently, High-Performance Computing (HPC) systems need to integrate prediction, detection, and recovery mechanisms to cope with faults efficiently. This article reviews fault detection, fault prediction, and recovery techniques in HPC systems, from electronics to system level. We analyze their strengths and limitations. Finally, we identify the promising paths to meet the reliability levels of Exascale systems. This work has received funding from the European Union's Horizon 2020 (H2020) research and innovation program under the FET-HPC Grant Agreement No. 801137 (RECIPE). Jaume Abella was also partially supported by the Ministry of Economy and Competitiveness of Spain under Contract No. TIN2015-65316-P and under Ramon y Cajal Postdoctoral Fellowship No. RYC-2013-14717, as well as by the HiPEAC Network of Excellence. Ramon Canal is partially supported by the Generalitat de Catalunya under Contract No. 2017SGR0962.

    A survey on elasticity management in PaaS systems

    Full text link
    Elasticity is a goal of cloud computing. An elastic system should manage its resources in an autonomic way, adapting to dynamic workloads by allocating additional resources when the workload increases and deallocating resources when the workload decreases. PaaS providers should manage the resources of customer applications with the aim of turning those applications into elastic services. This survey identifies the requirements that such management imposes on a PaaS provider: autonomy, scalability, adaptivity, SLA awareness, composability and upgradeability. This document delves into the variety of mechanisms that have been proposed to deal with all those requirements. Although there are multiple approaches to address those concerns, providers' main goal is the maximisation of profits. This compels providers to balance two opposing goals: maximising quality of service and minimising costs. Because of this, there are still several aspects that deserve additional research to find optimal adaptability strategies. Those open issues are also discussed. This work has been partially supported by EU FEDER and Spanish MINECO under research Grant TIN2012-37719-C03-01.
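To make the allocate-on-increase / deallocate-on-decrease behaviour described above concrete, here is a minimal threshold-based scaling rule. This is an illustrative sketch, not a mechanism from any surveyed system; the thresholds and step size are assumptions:

```python
# Minimal sketch of threshold-based elasticity: scale out when the load per
# instance is high, scale in when it is low. Thresholds are assumptions.

def autoscale(instances: int, load: float,
              high: float = 0.8, low: float = 0.3,
              min_instances: int = 1) -> int:
    """Return the new instance count for the observed total load."""
    if load / instances > high:          # workload increased: allocate
        return instances + 1
    if load / instances < low and instances > min_instances:
        return instances - 1             # workload decreased: deallocate
    return instances

n = 2
for load in [1.0, 1.9, 2.6, 1.2, 0.4]:   # hypothetical load samples
    n = autoscale(n, load)
    print(load, n)
```

Real providers layer SLA awareness, prediction, and cost models on top of such rules, which is precisely the design space the survey maps out.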

    A dynamic task scheduler tolerant to multiple hibernations in cloud environments

    Get PDF
    Cloud platforms usually offer several types of Virtual Machines (VMs) with different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per use for on-demand VMs, while spot VMs are instances available at lower prices. However, a spot VM can be terminated or hibernated by EC2 at any moment. In this work, we propose the Hibernation-Aware Dynamic Scheduler (HADS), which schedules Bag-of-Tasks (BoT) applications with deadline constraints on both hibernation-prone spot VMs and on-demand VMs. HADS aims at minimizing the monetary cost of executing BoT applications on clouds while ensuring that their deadlines are respected even in the presence of multiple hibernations. Results collected from experiments on Amazon EC2 VMs, using synthetic applications and a NAS benchmark application, show the effectiveness of HADS in terms of monetary cost when compared to solutions that use only on-demand VMs.
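The core idea, running tasks on cheap but interruptible spot VMs and falling back to on-demand VMs only when the deadline is at risk, can be sketched as follows. This is an illustrative simplification, not the HADS algorithm itself; the safety margin is an assumption:

```python
# Illustrative sketch (not the HADS algorithm): keep tasks on spot VMs, but
# migrate the remaining work to on-demand VMs once the slack before the
# deadline no longer covers the remaining work plus a safety margin.

def must_migrate(now_h: float, deadline_h: float,
                 remaining_work_h: float, margin_h: float = 0.5) -> bool:
    """True when the remaining work no longer fits in the time left on spot."""
    slack = deadline_h - now_h
    return slack < remaining_work_h + margin_h   # margin absorbs restart overhead

# hypothetical BoT execution: 10 h deadline, 6 h of work left
print(must_migrate(now_h=5.0, deadline_h=10.0, remaining_work_h=6.0))  # True
print(must_migrate(now_h=2.0, deadline_h=10.0, remaining_work_h=6.0))  # False
```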

    Cost and fault-tolerant aware resource management for scientific workflows using hybrid instances on clouds

    Get PDF
    Cloud service providers offer computing resources at reasonable prices under a pay-per-use model. Further, cloud service providers have also introduced pricing models such as spot, spot block and spot fleet instances that are cost-effective, where users have to bid in order to balance reliability against monetary cost. Hence, Scientific Workflows (SWf), which are used to model high-throughput, computation-heavy and complex large-scale data-analysis applications, are increasingly adopting these computing resources. Nevertheless, spot instances are terminated when the market spot price exceeds the user's bid price. Moreover, failures are inevitable in such large distributed systems and often pose a challenge for designing a fault-tolerant scheduling algorithm for SWf. This paper presents an efficient, low-cost and fault-tolerant scheduling algorithm and a bidding strategy to minimize the
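The reliability/cost trade-off behind such bidding can be made concrete with a small sketch: higher bids keep the instance alive through more price spikes, at a higher total cost. The price trace and bid values below are invented, this is not the paper's bidding strategy, and each hour is treated independently for simplicity:

```python
# Illustrative sketch: effect of the bid on spot-instance availability.
# The instance survives an hour only while its price is at or below the bid;
# the user is billed the spot price, not the bid. All numbers are invented.

def uptime_and_cost(prices: list[float], bid: float) -> tuple[float, float]:
    """Fraction of hours the instance survives, and the cost of those hours."""
    alive = [p for p in prices if p <= bid]
    return len(alive) / len(prices), sum(alive)

trace = [0.10, 0.12, 0.35, 0.11, 0.50, 0.13]   # hypothetical hourly spot prices
for bid in (0.12, 0.40, 0.60):
    print(bid, uptime_and_cost(trace, bid))
```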

    Big Data Application and System Co-optimization in Cloud and HPC Environment

    Get PDF
    The emergence of big data requires powerful computational resources and memory subsystems that can be scaled efficiently to accommodate its demands. The cloud is a well-established computing paradigm that can offer customized computing and memory resources to meet the scalable demands of big data applications. In addition, the flexible pay-as-you-go pricing model offers opportunities for using resources at large scale with low cost and no infrastructure maintenance burden. High performance computing (HPC), on the other hand, also has powerful infrastructure with the potential to support big data applications. In this dissertation, we explore application and system co-optimization opportunities to support big data in both cloud and HPC environments. Specifically, we explore the unique features of both application and system to seek overlooked optimization opportunities or tackle challenges that are difficult to address by looking at the application or system individually. Based on the characteristics of the workloads and their underlying systems, from which we derive optimized deployment and runtime schemes, we divide the workloads into four categories: 1) memory-intensive applications; 2) compute-intensive applications; 3) applications that are both memory- and compute-intensive; 4) I/O-intensive applications.

When deploying memory-intensive big data applications to public clouds, one important yet challenging problem is selecting a specific instance type whose memory capacity is large enough to prevent out-of-memory errors while the cost is minimized without violating performance requirements. In this dissertation, we propose two techniques for efficient deployment of big data applications with dynamic and intensive memory footprints in the cloud. The first approach builds a performance-cost model that can accurately predict how, and by how much, virtual memory size would slow down the application and, consequently, impact the overall monetary cost. The second approach employs a lightweight memory usage prediction methodology based on dynamic meta-models adjusted by the application's own traits. The key idea is to eliminate periodic checkpointing and migrate the application only when the predicted memory usage exceeds the physical allocation.

When applying compute-intensive applications to the clouds, it is critical to make the applications scalable so that they can benefit from the massive cloud resources. In this dissertation, we first use Kirchhoff's law, one of the most widely used physical laws in many engineering principles, as an example workload for our study. The key challenge of applying Kirchhoff's law to real-world applications at scale lies in the high, if not prohibitive, computational cost of solving a large number of nonlinear equations. We propose a high-performance deep-learning-based approach for Kirchhoff analysis, namely HDK. HDK employs two techniques to improve performance: (i) early pruning of unqualified input candidates, which simplifies the equations and selects a meaningful input data range; and (ii) parallelization of forward labelling, which executes steps of the problem in parallel.

For applications that are both memory- and compute-intensive in clouds, we use a blockchain system as a benchmark. Existing blockchain frameworks exhibit a technical barrier for many users wishing to modify or test new research ideas in blockchains. To make it worse, many advantages of blockchain systems can be demonstrated only at large scales, which are not always available to researchers. In this dissertation, we develop an accurate and efficient emulation system to replay the execution of large-scale blockchain systems on tens of thousands of nodes in the cloud.

For I/O-intensive applications, we observe one important yet often neglected side effect of lossy scientific data compression. Lossy compression techniques have demonstrated promising results in significantly reducing scientific data size while guaranteeing compression error bounds, but the compressed data sizes are often highly skewed and thus degrade the performance of parallel I/O. Therefore, we believe it is critical to pay more attention to the unbalanced parallel I/O caused by lossy scientific data compression.
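The instance-selection problem described above for memory-intensive workloads, picking the cheapest type whose memory covers the predicted peak, reduces to a filter-and-minimize step once a usage prediction is available. The sketch below is illustrative; the catalog, prices, and headroom factor are made-up assumptions, not the dissertation's model:

```python
# Illustrative sketch: cheapest instance whose memory covers the predicted
# peak usage. The catalog and the predicted peak are invented numbers.

CATALOG = {                       # name: (memory_gib, usd_per_hour)
    "m.large":   (8,   0.10),
    "m.xlarge":  (16,  0.20),
    "r.xlarge":  (32,  0.27),
    "r.2xlarge": (64,  0.54),
}

def pick_instance(predicted_peak_gib: float, headroom: float = 1.2) -> str:
    """Cheapest type whose memory >= predicted peak times a safety headroom."""
    need = predicted_peak_gib * headroom      # headroom absorbs prediction error
    fits = {name: cost for name, (mem, cost) in CATALOG.items() if mem >= need}
    if not fits:
        raise ValueError("no instance type is large enough")
    return min(fits, key=fits.get)            # minimize hourly cost

print(pick_instance(12.0))   # -> "m.xlarge" under these assumptions
```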

    Ambiguity and legal compliance

    Get PDF
    Research Summary: This study examines the independent and joint effects of ambiguity and perceived certainty of apprehension on law-breaking decision-making. Data come from a survey of experienced drivers (N = 1147) who viewed videos depicting a car speeding on an interstate highway under experimentally manipulated circumstances. The sampled drivers were generally ambiguity averse, opting to reduce speeding as ambiguity about the perceived certainty of apprehension increased. However, perceived ambiguity interacted with perceived certainty such that increases in ambiguity strengthened the deterrent effect at low certainty probabilities and weakened it at high probabilities. Policy Implications: Ambiguity may serve as a valuable tool for increasing the efficacy of crime-prevention strategies, especially for crimes with naturally low levels of risk. However, researchers should think carefully about the effects of ambiguity when analyzing the efficacy of certainty-based policies, because the injection of ambiguity can both increase and decrease legal compliance. Implications for a key function of policing, traffic safety, are also discussed.
