Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool
[EN] Traversal-based automated software testing involves testing an application via its graphical user interface (GUI), thereby taking the user's point of view and executing actions in a human-like manner. These actions are decided on the fly, as the software under test (SUT) is being run, rather than being set up beforehand as a fixed sequence that is then used to exercise the SUT. In practice, random choice is commonly used to decide which action to execute at each state (a procedure commonly referred to as monkey testing), but a number of alternative mechanisms have also been proposed in the literature. Here we propose using genetic programming (GP) to evolve such an action selection strategy, defined as a list of IF-THEN rules. Genetic programming has proved to be well suited to evolving all sorts of programs, and rules in particular, provided adequate primitives (functions and terminals) are defined. These primitives must aim to extract the most relevant information from the SUT and the dynamics of the testing process. We introduce a number of such primitives suited to the problem at hand and evaluate their usefulness based on various metrics. We carry out experiments and compare the results with those obtained by random selection and also by Q-learning, a reinforcement learning technique. Three applications are used as SUTs in the experiments. The analysis shows the potential of GP to evolve action selection strategies.

Esparcia Alcázar, AI.; Almenar-Pedrós, F.; Vos, TE.; Rueda Molina, U. (2018). Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool. Memetic Computing. 10(3):257-265. https://doi.org/10.1007/s12293-018-0263-8
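The evolved strategies described above take the form of an ordered list of IF-THEN rules evaluated over the current GUI state. As a minimal sketch only (the feature names and rule encoding below are hypothetical illustrations, not the primitives defined in the paper), such a rule list can be interpreted like this, falling back to random choice (monkey testing) on ties:

```python
import random

# Hypothetical per-action state features a primitive set might expose:
# times_executed (how often the action was already run) and widget_is_new
# (whether the target widget was never seen before).
def make_rule_list():
    """One evolved individual: an ordered list of (condition, score) rules."""
    return [
        (lambda a: a["widget_is_new"], 2.0),        # prefer unexplored widgets
        (lambda a: a["times_executed"] == 0, 1.0),  # then never-executed actions
    ]

def select_action(actions, rules, rng=random):
    """Score each candidate action with the first matching rule;
    break ties randomly (degenerating to monkey testing)."""
    def score(a):
        for cond, s in rules:
            if cond(a):
                return s
        return 0.0  # no rule fires: neutral score
    best = max(score(a) for a in actions)
    return rng.choice([a for a in actions if score(a) == best])

actions = [
    {"id": "btn_ok", "times_executed": 3, "widget_is_new": False},
    {"id": "menu_file", "times_executed": 0, "widget_is_new": True},
]
chosen = select_action(actions, make_rule_list())
print(chosen["id"])  # the new, unexplored widget wins: menu_file
```

GP would then evolve the conditions and scores of such rule lists, using the coverage-style metrics mentioned in the abstract as fitness.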
Reconocimiento de widgets automático para aplicaciones Java/Swing en TESTAR
[EN] The Software Testing and Quality (STaQ) group of the PROS research center at the
Polytechnic University of Valencia (UPV) has developed a tool, called TESTAR
(www.testar.org), for automated testing at the user interface (UI) level. TESTAR generates and
executes test cases automatically, based on a tree model derived automatically from the UI of the
application under test. This tree is built with the help of the operating system's accessibility API,
which recognizes all graphical UI elements (widgets). The tool is neither a capture/replay tool,
nor does it use image recognition. Companies that have deployed the tool are very positive and
see it as a paradigm shift for testing: they believe that TESTAR has the potential to solve many
problems with existing tools.
The aim of this project is to extend the widget-recognition capability of the TESTAR tool
(i.e., its ability to detect the graphical elements of the user interface) for Java applications on
Microsoft Windows operating systems. TESTAR has a limitation in the recognition of widgets
when the Java technology used is Swing (it runs smoothly for AWT and SWT).
TESTAR relies on accessibility technologies that expose the widgets of the software
application under test. The "lightweight" implementation of Swing means that Swing widgets
are not correctly identified by these accessibility technologies. To support accessibility for
Swing applications there is a bridge, called Java Access Bridge, that exposes the Java
Accessibility API in a Windows dynamic-link library (DLL):
http://www.oracle.com/technetwork/articles/javase/index-jsp-136191.html
Therefore, the work of the project will be:
• Study the Java Access Bridge to facilitate automatic recognition of widgets in existing
Java/Swing applications.
• Implement a plug-in for TESTAR to enrich the tool with recognition of Swing widgets, in
addition to the current support for AWT and SWT technologies.
• Assess the capability of TESTAR on Java/Swing with two case studies involving industrial
applications. (Currently EVERIS and Clearone are companies that have shown interest in
having this capability available in TESTAR.)
• Document the results.

Pastor Ricos, F. (2017). Reconocimiento de widgets automático para aplicaciones Java/Swing en TESTAR (TFG). http://hdl.handle.net/10251/88838
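Conceptually, the plug-in described above must walk the accessible hierarchy exposed by the Java Access Bridge and flatten it into the widget tree that TESTAR derives actions from. The following is a language-neutral sketch of that traversal; the `Widget` structure and its fields are invented for illustration (the real plug-in talks to the Access Bridge DLL and the Java Accessibility API):

```python
from dataclasses import dataclass, field

@dataclass
class Widget:
    """Toy stand-in for an accessible node reported by the Java Access
    Bridge (role/name fields are illustrative, not the real API)."""
    role: str
    name: str
    children: list = field(default_factory=list)

def collect_widgets(root):
    """Depth-first walk of the accessible hierarchy, flattening it into
    the list of widgets a tool like TESTAR would derive actions from."""
    found = [root]
    for child in root.children:
        found.extend(collect_widgets(child))
    return found

# A minimal Swing-like window: a frame containing a panel with two buttons.
ui = Widget("frame", "Main", [
    Widget("panel", "Form", [
        Widget("button", "OK"),
        Widget("button", "Cancel"),
    ]),
])
print([w.name for w in collect_widgets(ui)])  # ['Main', 'Form', 'OK', 'Cancel']
```

The limitation the project addresses is precisely that, for Swing applications without the bridge enabled, this hierarchy comes back empty.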
Testing Basado en la Busqueda en TESTAR
[EN] Graphical User Interfaces (GUIs) are a main entry point to test an application.
Different automated tools to test at the GUI level exist. Those that automate the design
of test cases usually use random algorithms to choose the action that should be
executed next in the test sequence. This technique is quite useful in applications that
are immature, have been poorly tested, or present many errors. Giving more
"intelligence" to this action selection mechanism would be a major step towards wider
adoption of automated testing tools, which in turn would improve software quality.
This is precisely the goal of this work.
To achieve this, we use search-based techniques to transform the problem into an
optimization one. Our starting point is TESTAR, a tool developed during
an EU research project called FITTEST. Two different methods have been implemented
and evaluated: Q-learning and genetic programming. Another result of our work is the
definition of metrics that properly guide the optimization; four new
metrics have been introduced.
The combination of metrics and search-based algorithms has been assessed, and
promising results have been obtained that will enhance TESTAR's capabilities.

Almenar Pedrós, F. (2016). Testing Basado en la Busqueda en TESTAR (TFG). http://hdl.handle.net/10251/71699
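One of the two action-selection methods evaluated above, Q-learning, maintains a value Q(s, a) per state-action pair and applies the standard one-step update Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)). A minimal tabular sketch follows; the reward shown (favouring never-executed actions) is an illustrative choice, not necessarily one of the metrics introduced in the work:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount (illustrative values)
Q = defaultdict(float)   # Q-table over (state, action) pairs, default 0.0

def update(state, action, reward, next_state, next_actions):
    """Standard one-step Q-learning update."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def greedy(state, actions):
    """Pick the action with the highest learned value in this state."""
    return max(actions, key=lambda a: Q[(state, a)])

# One simulated step: executing 'click_new' in state 's0' reveals state 's1'
# and is rewarded because the action had never been executed before.
update("s0", "click_new", reward=1.0, next_state="s1", next_actions=["back"])
print(greedy("s0", ["click_new", "click_old"]))  # 'click_new'
```

With Q[("s0","click_new")] now 0.5, the greedy policy steers future test sequences towards the rewarded action; in the tool this replaces purely random choice.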
TESTAR para testing IoT
[EN] As the number of devices connected to the Internet is increasing, the so-called
Internet of Things (IoT) is becoming a reality. It even has the required potential to
change both the way we live and the way we work.
In order to take advantage of the benefits that the IoT can bring us, ensuring the quality
of massively interconnected devices becomes a pressing necessity. A means of satisfying
this need would be automated testing of IoT devices. However, this presents many
difficulties such as the lack of standards and limitations in battery and memory.
In this work we start from an automated testing tool at the user interface level that has
already been successfully applied in several industrial cases. Maintaining its philosophy
and approach, a new tool is developed which is applicable to the IoT environment. The
tool is evaluated by testing a smart home, and the results are presented.

Martínez Murillo, MO. (2016). TESTAR para testing IoT (TFG). http://hdl.handle.net/10251/71683
Evaluating Software Testing Techniques: A Systematic Mapping Study
Software testing techniques are crucial for detecting faults in software and reducing the risk of using it. As such, it is important that we have a good understanding of how to evaluate these techniques for their efficiency, scalability, applicability, and effectiveness at finding faults. This thesis enhances our understanding of testing-technique evaluations by providing an overview of the state of the art in research. To accomplish this, we conduct a systematic mapping study, structuring the field and identifying research gaps and publication trends. We then present a small case study demonstrating how our mapping study can be used to assist researchers in evaluating their own software testing techniques. We find that a majority of evaluations are empirical evaluations in the form of case studies and experiments, that most of these evaluations are of low quality when judged against established methodology guidelines, and that relatively few papers in the field discuss how testing techniques should be evaluated.
Evaluating the TESTAR tool in an industrial case study
[Context] Automated test case design and execution at the GUI level of applications is not yet established in industrial practice: tests are still mainly designed and executed manually. In previous work we described TESTAR, a tool that makes it possible to set up fully automatic testing at the GUI level of applications in order to find severe faults such as crashes or non-responsiveness. [Method] This paper aims at evaluating TESTAR with an industrial case study. The case study was conducted at SOFTEAM, a French software company, while testing their Modelio SaaS system, a cloud-based system to manage virtual machines that run their popular graphical UML editor Modelio. [Goal] The goal of the study was to evaluate how the tool would perform within the context of SOFTEAM and on their software application. We were also interested in how easy or difficult it is to learn and deploy our academic prototype within an industrial setting. [Results] The effectiveness and efficiency of the automated tests generated with TESTAR can definitely compete with those of the manual test suite. [Conclusions] The training materials as well as the user and installation manual of TESTAR need to be improved using the feedback received during the study. Finally, the need to program Java code to create sophisticated oracles for testing caused some initial problems and some resistance. However, it became clear that this could be solved by explaining the need for these oracles and comparing them to the alternative of more expensive and complex human oracles. Raising awareness that automated testing means programming solved most of the initial problems. Copyright 2014 ACM
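The oracles mentioned in the conclusions are predicates over the observed GUI state that flag a failure without a human in the loop. A minimal sketch of that idea follows; the state fields and the suspicious-title patterns are invented for illustration (in TESTAR such oracles are written in Java), but the checks mirror the fault classes named in the abstract, crashes and non-responsiveness:

```python
import re

# Illustrative failure patterns a tester might configure: window titles
# that indicate a crash dialog or an unhandled exception.
SUSPICIOUS_TITLES = re.compile(r"(error|exception|has stopped working)", re.I)

def oracle(state):
    """Return a verdict for one observed GUI state: 'FAIL' on a crash,
    a frozen (non-responsive) UI, or a suspicious dialog title; else 'OK'."""
    if state["crashed"] or not state["responsive"]:
        return "FAIL"
    if any(SUSPICIOUS_TITLES.search(t) for t in state["window_titles"]):
        return "FAIL"
    return "OK"

print(oracle({"crashed": False, "responsive": True,
              "window_titles": ["Modelio SaaS console"]}))  # OK
print(oracle({"crashed": False, "responsive": True,
              "window_titles": ["Unhandled exception"]}))   # FAIL
```

The "automated testing means programming" point above is visible even in this toy: the oracle's value comes entirely from the checks the tester encodes.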