2 research outputs found

    Modeling and Testing Implementations of Protocols with Complex Messages

    This paper presents a new language called APSL for formally describing protocols to facilitate automated testing. Many real-world communication protocols exchange messages whose structures are not trivial: they may consist of multiple, nested fields, some of which are optional and some of which have values that depend on other fields. To properly test implementations of such a protocol, it is not sufficient to only explore different orders of sending and receiving messages. We also need to investigate whether the implementation produces correctly formatted messages, and whether it responds correctly when it receives different variations of every message type. APSL's main contribution is its sublanguage, which is expressive enough to describe complex message formats, both text-based and binary. As an example, this paper also presents a case study where APSL is used to model and test a subset of the Courier IMAP email server.
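
    The abstract does not show APSL notation, but the kind of message structure it describes can be sketched in ordinary code. The Python sketch below is a hypothetical illustration (it is not APSL and is not taken from the paper): a binary message whose header carries a length field that must agree with the payload, and an optional checksum whose presence depends on a flag bit, i.e. the nested, optional, and value-dependent fields that the paper argues a tester must generate and validate in many variations.

        # Hypothetical message layout (not APSL): flags byte, 2-byte payload
        # length (depends on the payload), payload, optional CRC-32 trailer
        # (present only when the flag bit is set).
        import struct
        import zlib

        FLAG_HAS_CRC = 0x01

        def encode(payload: bytes, with_crc: bool = True) -> bytes:
            flags = FLAG_HAS_CRC if with_crc else 0
            # Header fields: flags and a length that depends on the payload.
            msg = struct.pack(">BH", flags, len(payload)) + payload
            if with_crc:
                # Optional trailing field, gated by the flag bit.
                msg += struct.pack(">I", zlib.crc32(payload))
            return msg

        def decode(msg: bytes) -> bytes:
            flags, length = struct.unpack_from(">BH", msg, 0)
            payload = msg[3:3 + length]
            if len(payload) != length:
                raise ValueError("length field does not match payload size")
            if flags & FLAG_HAS_CRC:
                (crc,) = struct.unpack_from(">I", msg, 3 + length)
                if crc != zlib.crc32(payload):
                    raise ValueError("checksum mismatch")
            return payload

        if __name__ == "__main__":
            assert decode(encode(b"LOGIN user pass")) == b"LOGIN user pass"
            assert decode(encode(b"NOOP", with_crc=False)) == b"NOOP"

    A format-aware test generator in the spirit of the paper would mutate such fields systematically (wrong length, missing checksum, flipped flag) and check that the implementation under test both emits well-formed messages and handles malformed variants correctly.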

    The FITTEST Tool Suite for Testing Future Internet Applications

    Future Internet applications are expected to be much more complex and powerful, exploiting various dynamic capabilities. For testing, this is very challenging, as it means that the range of possible behavior to test is much larger; moreover, it may change quite frequently and significantly at run time with respect to the assumed behavior tested prior to the release of such an application. The traditional way of testing will not be able to keep up with such dynamics. The Future Internet Testing (FITTEST) project (http://crest.cs.ucl.ac.uk/fittest/), a research project funded by the European Commission (grant agreement n. 257574) from 2010 till 2013, was set up to explore new testing techniques that improve our capacity to deal with the challenges of testing Future Internet applications. Such techniques should not be seen as a replacement for traditional testing, but rather as a way to complement it. This paper gives an overview of the set of tools produced by the FITTEST project, implementing those techniques. This work has been funded by the European Union FP7 project FITTEST (grant agreement n. 257574).

    Vos, T.E.; Tonella, P.; Prasetya, W.; Kruse, P.; Shehory, O.; Bagnato, A.; Harman, M. (2014). The FITTEST Tool Suite for Testing Future Internet Applications. In: Future Internet Testing. First International Workshop, FITTEST 2013, Istanbul, Turkey, November 12, 2013. Springer, pp. 1-31. https://doi.org/10.1007/978-3-319-07785-7_1