6,071 research outputs found

    Automated Test Input Generation for Android: Are We There Yet?

    Mobile applications, often simply called "apps", are increasingly widespread, and we use them daily to perform a number of activities. Like all software, apps must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate app testing. In particular, because of Android's open-source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating system. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and to gain general insight into ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: code coverage, ability to detect faults, ability to work on multiple platforms, and ease of use. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.
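    As a rough illustration of how results over the four metrics might be aggregated and compared (the paper reports each metric separately), the sketch below normalises hypothetical per-tool scores and ranks the tools; the tool names and numbers are placeholders, not the study's data.

```python
# Hypothetical aggregation over the four metrics used in the comparison:
# code coverage, faults detected, platforms supported, and ease of use.
# All tool names and numbers are made up for illustration.
results = {
    "ToolA": {"coverage": 0.46, "faults": 12, "platforms": 3, "ease_of_use": 4},
    "ToolB": {"coverage": 0.31, "faults": 18, "platforms": 1, "ease_of_use": 2},
    "ToolC": {"coverage": 0.52, "faults": 9,  "platforms": 2, "ease_of_use": 5},
}
METRICS = ["coverage", "faults", "platforms", "ease_of_use"]

def normalise(metric):
    # Rescale each metric to [0, 1] so the four metrics are comparable.
    values = [r[metric] for r in results.values()]
    lo, hi = min(values), max(values)
    return {tool: (r[metric] - lo) / (hi - lo) if hi > lo else 1.0
            for tool, r in results.items()}

norm = {m: normalise(m) for m in METRICS}
scores = {tool: sum(norm[m][tool] for m in METRICS) / len(METRICS)
          for tool in results}

# Equal-weight ranking (a simplification: the paper discusses the metrics
# separately rather than collapsing them into a single score).
for tool, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tool}: {score:.2f}")
```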

    Automatically Discovering, Reporting and Reproducing Android Application Crashes

    Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports. Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16), Chicago, IL, April 10-15, 2016, pp. 33-4
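    This is not CRASHSCOPE itself, but a minimal sketch of the crash-detection idea described above, assuming an attached device or emulator with adb on the PATH: send generated tap events, watch logcat for an uncaught exception, and dump the executed events as a replayable script. The package name and screen coordinates are placeholders.

```python
# Minimal sketch (not CRASHSCOPE) of crash detection during GUI exploration:
# send generated tap events, then check logcat for an uncaught Java exception
# and dump the events executed so far as a replay script.
import random
import subprocess

PACKAGE = "com.example.app"   # placeholder app under test

def adb(*args):
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

def crashed():
    # 'adb logcat -d' dumps the current log buffer; a FATAL EXCEPTION entry
    # mentioning the package indicates an uncaught exception (i.e., a crash).
    log = adb("logcat", "-d", "*:E")
    return "FATAL EXCEPTION" in log and PACKAGE in log

adb("logcat", "-c")                      # clear the log before exploring
executed = []                            # events executed so far
for _ in range(200):                     # naive random exploration
    x, y = random.randint(0, 1080), random.randint(0, 1920)
    adb("shell", "input", "tap", str(x), str(y))
    executed.append(("tap", x, y))
    if crashed():
        # Persist a replayable script of the event sequence that led here.
        with open("repro.sh", "w") as f:
            f.write("#!/bin/sh\n")
            for _, ex, ey in executed:
                f.write(f"adb shell input tap {ex} {ey}\n")
        print("Crash detected; reproduction script written to repro.sh")
        break
```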

    Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool

    Traversal-based automated software testing involves testing an application via its graphical user interface (GUI), thereby taking the user's point of view and executing actions in a human-like manner. These actions are decided on the fly, as the software under test (SUT) is being run, as opposed to being set up as a sequence prior to testing, a sequence that is then used to exercise the SUT. In practice, random choice is commonly used to decide which action to execute at each state (a procedure commonly referred to as monkey testing), but a number of alternative mechanisms have also been proposed in the literature. Here we propose using genetic programming (GP) to evolve such an action selection strategy, defined as a list of IF-THEN rules. Genetic programming has proved to be suited to evolving all sorts of programs, and rules in particular, provided adequate primitives (functions and terminals) are defined. These primitives must aim to extract the most relevant information from the SUT and the dynamics of the testing process. We introduce a number of such primitives suited to the problem at hand and evaluate their usefulness based on various metrics. We carry out experiments and compare the results with those obtained by random selection and also by Q-learning, a reinforcement learning technique. Three applications are used as SUTs in the experiments. The analysis shows the potential of GP to evolve action selection strategies. Esparcia Alcázar, AI.; Almenar-Pedrós, F.; Vos, TE.; Rueda Molina, U. (2018). Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool. Memetic Computing. 10(3):257-265. https://doi.org/10.1007/s12293-018-0263-8
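    The sketch below is not TESTAR's actual primitive set, but a minimal illustration of the idea: represent an action selection strategy as a list of IF-THEN rules over simple state features (here, hypothetical "unexecuted" and widget-type predicates), score candidate actions with the first matching rule, fall back to random choice, and improve rule lists with a tiny mutation-only evolutionary loop. The fitness function is a placeholder rather than measurements taken from a running SUT.

```python
# Simplified sketch of evolving IF-THEN action selection rules with a GP-style
# loop. Conditions, state features, and fitness are illustrative placeholders.
import random

CONDITIONS = {
    "unexecuted": lambda a: a["times_executed"] == 0,
    "is_button":  lambda a: a["widget"] == "button",
    "is_text":    lambda a: a["widget"] == "text",
}

def random_rule():
    # A rule is (condition name, preference weight for actions that match it).
    return (random.choice(list(CONDITIONS)), random.uniform(0.0, 1.0))

def select_action(rules, actions):
    # Score each action with its first matching rule; unmatched actions get a
    # random score, which degenerates to monkey testing when no rule applies.
    scored = []
    for action in actions:
        weight = next((w for cond, w in rules if CONDITIONS[cond](action)), None)
        scored.append((weight if weight is not None else random.random(), action))
    return max(scored, key=lambda s: s[0])[1]

def mutate(rules):
    rules = list(rules)
    rules[random.randrange(len(rules))] = random_rule()
    return rules

def fitness(rules):
    # Placeholder fitness: reward preferring unexecuted actions. In reality the
    # rules would be scored by exercising the SUT (e.g., states or code covered).
    sample = [{"times_executed": random.randint(0, 3),
               "widget": random.choice(["button", "text"])} for _ in range(50)]
    return sum(select_action(rules, sample)["times_executed"] == 0
               for _ in range(20))

population = [[random_rule() for _ in range(4)] for _ in range(10)]
for _ in range(20):                         # tiny mutation-only evolutionary loop
    population.sort(key=fitness, reverse=True)
    population = population[:5] + [mutate(r) for r in population[:5]]
print("best rule list:", population[0])
```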

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. To ensure high-quality software, it must be tested. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues add to the effort, time, and cost of testing. Different techniques and methodologies have been proposed for taking care of these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA approaches for addressing the various issues encountered during software testing. Comment: 13 pages
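    As a toy illustration of the basic idea surveyed here (GA-based test data generation), the sketch below evolves an integer input towards covering a specific branch using a simple distance-based fitness; the function under test and all parameters are invented for illustration only.

```python
# Toy GA-based test data generation: evolve an input that covers the target
# branch of a small function under test, guided by a branch-distance fitness.
import random

def under_test(x):
    if x == 4242:          # target branch we want a test input to reach
        return "target"
    return "other"

def fitness(x):
    # Branch distance: how far the input is from satisfying the condition.
    return abs(x - 4242)

population = [random.randint(0, 10_000) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:        # target branch covered
        break
    parents = population[:10]              # truncation selection
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        child = (a + b) // 2               # crude "crossover": average
        if random.random() < 0.3:          # mutation: small perturbation
            child += random.randint(-100, 100)
        children.append(child)
    population = parents + children

best = min(population, key=fitness)
print(f"generation {generation}: input {best} -> {under_test(best)}")
```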

    Sapienz: Multi-objective automated testing for android applications

    We introduce Sapienz, an approach to Android testing that uses multi-objective search-based testing to automatically explore and optimise test sequences, minimising length while simultaneously maximising coverage and fault revelation. Sapienz combines random fuzzing, systematic and search-based exploration, exploiting seeding and multi-level instrumentation. Sapienz significantly outperforms (with large effect size) both the state-of-the-art technique Dynodroid and the widely used tool Android Monkey, in 7/10 experiments for coverage, 7/10 for fault detection, and 10/10 for fault-revealing sequence length. When applied to the top 1,000 Google Play apps, Sapienz found 558 unique, previously unknown crashes. So far we have managed to make contact with the developers of 27 crashing apps. Of these, 14 have confirmed that the crashes are caused by real faults. Of those 14, six already have developer-confirmed fixes.
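    As a loose sketch of the multi-objective comparison underlying this kind of search, the snippet below applies Pareto dominance over the three objectives named in the abstract (coverage and crashes to maximise, sequence length to minimise). The candidate suites and their scores are invented, and the paper's search is evolutionary rather than this brute-force front extraction.

```python
# Pareto dominance over three objectives: maximise coverage, maximise crashes
# revealed, minimise test sequence length. Candidates and scores are made up.
from typing import NamedTuple

class Candidate(NamedTuple):
    name: str
    coverage: float   # fraction of code covered (higher is better)
    crashes: int      # distinct crashes revealed (higher is better)
    length: int       # total events in the test sequence (lower is better)

def dominates(a: Candidate, b: Candidate) -> bool:
    at_least_as_good = (a.coverage >= b.coverage and a.crashes >= b.crashes
                        and a.length <= b.length)
    strictly_better = (a.coverage > b.coverage or a.crashes > b.crashes
                       or a.length < b.length)
    return at_least_as_good and strictly_better

candidates = [
    Candidate("suite-1", 0.42, 3, 540),
    Candidate("suite-2", 0.40, 3, 310),   # shorter, slightly less coverage
    Candidate("suite-3", 0.55, 1, 900),
    Candidate("suite-4", 0.38, 2, 800),   # dominated by suite-2
]

pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]
print([c.name for c in pareto_front])     # suite-4 is dominated and dropped
```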

    Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing

    Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, making testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps has limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on current key challenges that restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary and Large-scale (CEL). Comment: 12 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17)

    Finite State Testing of Graphical User Interface using Genetic Algorithm

    Graphical user interfaces are key components of any software. Nowadays, the popularity of software depends on how easily the user can interact with the system. However, as the system becomes more complex, this interaction also becomes more complicated, involving many states. The testing of graphical user interfaces is an important phase of modern software development. Testing a GUI is possible only by interacting with the system, which can be a time-consuming process and is generally automated based on a test suite. The test suite generation proposed in this paper is based on a genetic algorithm in which test cases are generated heuristically. To validate the performance of the proposed approach, it was compared with a variant of PSO, and GA was found to perform slightly better than the PSO variant.
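    The snippet below is a minimal sketch of the kind of search this implies: model the GUI as a finite state machine and let a GA evolve event sequences that maximise transition coverage. The FSM, events, and GA parameters are illustrative placeholders, not the paper's model.

```python
# GA-based test sequence generation over a GUI modelled as a finite state
# machine; fitness is the number of distinct transitions a sequence covers.
import random

# states: login -> menu -> {settings, editor}; events label the transitions
FSM = {
    ("login", "submit"): "menu",
    ("menu", "open_settings"): "settings",
    ("menu", "open_editor"): "editor",
    ("settings", "back"): "menu",
    ("editor", "save"): "menu",
}
EVENTS = sorted({event for _, event in FSM})

def covered_transitions(sequence, start="login"):
    state, covered = start, set()
    for event in sequence:
        nxt = FSM.get((state, event))
        if nxt is not None:
            covered.add((state, event))
            state = nxt
    return covered

def fitness(sequence):
    return len(covered_transitions(sequence))

def crossover(a, b):
    cut = random.randrange(1, len(a))      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(sequence):
    seq = list(sequence)
    seq[random.randrange(len(seq))] = random.choice(EVENTS)
    return seq

population = [[random.choice(EVENTS) for _ in range(8)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]

best = max(population, key=fitness)
print("events:", best, "transitions covered:", fitness(best))
```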