8 research outputs found

    Does Automated Unit Test Generation Really Help Software Testers? A Controlled Empirical Study

    Work on automated test generation has produced several tools capable of generating test data that achieves high structural coverage over a program. In the absence of a specification, developers are expected to manually construct or verify the test oracle for each test input. Nevertheless, it is assumed that these generated tests ease the task of testing for the developer, as testing is reduced to checking the results of tests. While this assumption has persisted for decades, there has been no conclusive evidence to date confirming it. Indeed, the limited adoption in industry indicates this assumption may not be correct, and calls into question the practical value of test generation tools. To investigate this issue, we performed two controlled experiments comparing a total of 97 subjects split between writing tests manually and writing tests with the aid of an automated unit test generation tool, EvoSuite. We found that, on one hand, tool support leads to clear improvements in commonly applied quality metrics such as code coverage (up to 300% increase). On the other hand, there was no measurable improvement in the number of bugs actually found by developers. Our results not only cast some doubt on how the research community evaluates test generation tools, but also point to improvements and future work necessary before automated test generation tools will be widely adopted by practitioners.
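    To make the oracle problem concrete, the sketch below shows the shape of a regression-style unit test that a generator in this family might emit. The class under test (java.util.Stack here) and the asserted value are illustrative stand-ins, not actual EvoSuite output: such a tool can only assert whatever the current implementation returns, so the developer still has to judge whether that value is the intended one.

        // Illustrative sketch only: a regression-style generated test. The target
        // class and values are stand-ins; real generated tests capture whatever the
        // implementation currently returns, and a human must confirm it is correct.
        import static org.junit.Assert.assertEquals;

        import java.util.Stack;

        import org.junit.Test;

        public class Stack_GeneratedTest {

            @Test
            public void testPushThenPeek() {
                Stack<Integer> stack = new Stack<>();
                stack.push(42);
                int top = stack.peek();
                // The observed value is recorded as the expected value; only the
                // developer can say whether 42 is actually the intended result.
                assertEquals(42, top);
            }
        }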

    Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins

    Different techniques to automatically generate unit tests for object-oriented classes have been proposed, but how to integrate these tools into the daily activities of software development is a little-investigated question. In this paper, we report on our experience in supporting industrial partners in introducing the EVOSUITE automated JUnit test generation tool into their software development processes. The first step consisted of providing a plugin for the Apache Maven build infrastructure. The move from a research-oriented point-and-click tool to an automated step of the build process has implications for how developers interact with the tool and the generated tests, and therefore we produced a plugin for the popular IntelliJ Integrated Development Environment (IDE). As build automation is a core component of Continuous Integration (CI), we provide a further plugin for the Jenkins CI system, which allows developers to monitor the results of EVOSUITE and integrate generated tests into their source tree. In this paper, we discuss the resulting architecture of the plugins and the challenges arising when building such plugins. Although the plugins described target the EVOSUITE tool, they can be adapted and their architecture can be reused for other test generation tools as well.
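    As a rough illustration of what such build integration amounts to, the following sketch drives the EvoSuite command-line tool from a small Java program and fails the step on a non-zero exit code, roughly what a Maven or Jenkins plugin automates. The jar path, target class and classpath are placeholders, and the exact invocation is an assumption based on EvoSuite's documented command line, not the plugins' real internals.

        // Minimal sketch of a CI-style build step that runs EvoSuite and fails the
        // step if generation fails. Paths and class names are placeholders.
        import java.io.IOException;

        public class GenerateTestsStep {

            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder evosuite = new ProcessBuilder(
                        "java", "-jar", "evosuite.jar",      // placeholder jar location
                        "-class", "com.example.Stack",       // hypothetical class under test
                        "-projectCP", "target/classes");     // compiled production code
                evosuite.inheritIO();                        // stream tool output into the build log

                int exitCode = evosuite.start().waitFor();
                if (exitCode != 0) {
                    throw new IllegalStateException("EvoSuite exited with code " + exitCode);
                }
                // Generated tests are written to evosuite-tests/ by default and can then
                // be compiled and run like hand-written JUnit tests.
            }
        }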

    A Fitness Function for Search-based Testing of Java Classes, which is Based on the States Reached by the Object under Test

    Genetic Algorithms are among the most efficient search-based techniques for automatically generating unit test cases today. The search is guided by a fitness function which evaluates how close an individual is to satisfying a given coverage goal. Several coverage criteria exist, but the default criterion today is branch coverage. Nevertheless, achieving high or full branch coverage does not imply that the generated test suite has good quality. In object-oriented programs, the state of the object affects its behavior; therefore, test cases that put the object under test into new states are of interest in the testing context. In this article we propose a new fitness function which takes three factors into consideration: the approach level, the branch distance, and the new states reached by a test case. The coverage targets are still the branches, but during the search the state of the object under test evolves with the aim of producing individuals that discover interesting features of the class and, as a consequence, can discover errors. We implemented this fitness function in the eToc tool. In our experiments, using the proposed fitness function instead of the original one results in a relative increase of 15.6% in the achieved average mutation score, at the cost of a relative increase of 12.6% in the average test suite size.
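    The abstract leaves the exact combination of the three factors open, but the classic part of such a fitness function is standard in search-based testing: the approach level plus a branch distance normalised into [0, 1). The sketch below reproduces that baseline and adds an assumed, bounded bonus for test cases that reach previously unseen object states; the way the novelty term is folded in is an illustration, not the paper's actual formula.

        // Classic branch-coverage fitness (lower is better, 0 = branch covered),
        // extended with an ASSUMED bonus for reaching new object states.
        import java.util.Set;

        public class StateAwareFitness {

            // Standard normalisation mapping a branch distance into [0, 1).
            static double normalize(double branchDistance) {
                return branchDistance / (branchDistance + 1.0);
            }

            // Well-established baseline: approach level + normalised branch distance.
            static double branchFitness(int approachLevel, double branchDistance) {
                return approachLevel + normalize(branchDistance);
            }

            // Assumed combination: subtract a small, bounded bonus for test cases
            // that drive the object under test into states not seen before.
            static double stateAwareFitness(int approachLevel,
                                            double branchDistance,
                                            Set<String> statesReachedByTest,
                                            Set<String> statesSeenSoFar,
                                            double noveltyWeight) {
                long newStates = statesReachedByTest.stream()
                                                    .filter(s -> !statesSeenSoFar.contains(s))
                                                    .count();
                double noveltyBonus = noveltyWeight * normalize(newStates);
                // Never push the fitness below zero; 0 is reserved for "branch covered".
                return Math.max(0.0, branchFitness(approachLevel, branchDistance) - noveltyBonus);
            }
        }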

    Branch coverage prediction in automated testing

    Software testing is crucial in continuous integration (CI). Ideally, at every commit, all the test cases should be executed and, moreover, new test cases should be generated for the new source code. This is especially true in a Continuous Test Generation (CTG) environment, where the automatic generation of test cases is integrated into the continuous integration pipeline. In this context, developers want to achieve a certain minimum level of coverage for every software build. However, executing all the test cases and, moreover, generating new ones for all the classes at every commit is not feasible. As a consequence, developers have to select which subset of classes has to be tested and/or targeted by test-case generation. We argue that knowing a priori the branch coverage that can be achieved with test-data generation tools can help developers make informed decisions about these issues. In this paper, we investigate the possibility of using source-code metrics to predict the coverage achieved by test-data generation tools. We use four different categories of source-code features and assess the prediction on a large data set involving more than 3,000 Java classes. We compare different machine learning algorithms and conduct a fine-grained feature analysis aimed at investigating the factors that most impact the prediction accuracy. Moreover, we extend our investigation to four different search budgets. Our evaluation shows that the best model achieves an average MAE of 0.15 and 0.21 under nested cross-validation across the different budgets on EVOSUITE and RANDOOP, respectively. Finally, the discussion of the results demonstrates the relevance of coupling-related features for the prediction accuracy.
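    To make the prediction setup and the reported MAE concrete, the sketch below trains a deliberately naive 1-nearest-neighbour regressor on a few made-up (metrics, achieved-coverage) pairs and reports the mean absolute error on a held-out set. The feature names, toy numbers and model choice are assumptions for illustration; the paper evaluates several real machine-learning algorithms under nested cross-validation.

        // Toy illustration of predicting branch coverage from source-code metrics
        // and measuring MAE. All data and the model are made up for illustration.
        public class CoveragePredictionSketch {

            // Each row: {lines of code, number of branches, coupling}; coverage in [0, 1].
            static final double[][] TRAIN_FEATURES = {
                    {120, 14, 3}, {560, 80, 12}, {45, 4, 1}, {300, 40, 7}
            };
            static final double[] TRAIN_COVERAGE = {0.85, 0.30, 0.95, 0.55};

            static final double[][] TEST_FEATURES = {{100, 10, 2}, {500, 70, 10}};
            static final double[] TEST_COVERAGE = {0.80, 0.35};

            // 1-nearest-neighbour regression: copy the coverage of the closest training class.
            static double predict(double[] x) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int i = 0; i < TRAIN_FEATURES.length; i++) {
                    double d = 0;
                    for (int j = 0; j < x.length; j++) {
                        double diff = TRAIN_FEATURES[i][j] - x[j];
                        d += diff * diff;
                    }
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                return TRAIN_COVERAGE[best];
            }

            public static void main(String[] args) {
                double absErrorSum = 0;
                for (int i = 0; i < TEST_FEATURES.length; i++) {
                    absErrorSum += Math.abs(predict(TEST_FEATURES[i]) - TEST_COVERAGE[i]);
                }
                // Mean absolute error: average gap between predicted and achieved coverage.
                System.out.println("MAE = " + absErrorSum / TEST_FEATURES.length);
            }
        }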

    Comparing the effectiveness of automated test generation tools "EVOSUITE" and "Tpalus"

    University of Minnesota M.S. thesis. July 2015. Major: Computer Science. Advisor: Andrew Brooks. 1 computer file (PDF); vii, 71 pages.
    Automated testing has been evolving over the years, and the main reason behind the growth of these tools is to reduce the manual effort in checking the correctness of software. Writing test cases to check the correctness of software is very time consuming and requires a great deal of patience. A lot of the time and effort spent on writing manual test cases can be saved, and in turn we can focus on improving the performance of the application. Statistics show that 50% of the total cost of software development is devoted to software testing, even more in the case of critical software. The growth of these automated test generation tools leads us to a big question: "How effective are these tools in checking the correctness of the application?" There are several challenges associated with developing automated test generation tools, and currently there is no particular tool or metric to check their effectiveness. In my thesis, I aim to measure the effectiveness of two automated test generation tools, Tpalus and EVOSUITE. Both are capable of generating test cases for any program written in Java and are specifically designed to work on Java. Several metrics have to be considered in measuring the effectiveness of a tool. I use the results obtained from these tools on several open source subjects to evaluate both tools. The metrics chosen for comparing the tools include code coverage, mutation scores, and the size of the test suite. Code coverage tells us how well the source code is covered by the test cases; a better test suite generally aims to cover most of the source code, so that every statement is exercised as part of testing. A mutation score indicates how many mutants the test suite detects and kills, where a mutant is a new version of the program created by making a small syntactic change to the original program. The higher the mutation score, the higher the number of mutants detected and killed. Results obtained during the experiment include branch coverage, line coverage, raw kill score and normalized kill score. These results help us decide how effective these tools are when testing critical software.
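    The mutation-score arithmetic behind these metrics is simple enough to show directly. In the sketch below, the raw kill score is the standard killed-over-total ratio; the thesis's "normalized kill score" is not defined in the abstract, so the version shown, which discounts mutants the tests never reach, is an assumption for illustration only, with made-up numbers.

        // Small sketch of mutation-score arithmetic. The "normalized" variant below
        // is an ASSUMED definition (killed / covered mutants), not the thesis's own.
        public class MutationScoreSketch {

            static double rawKillScore(int killed, int totalMutants) {
                return (double) killed / totalMutants;
            }

            static double normalizedKillScore(int killed, int coveredMutants) {
                // Assumption: normalize by the mutants the tests actually reach.
                return (double) killed / coveredMutants;
            }

            public static void main(String[] args) {
                int total = 200;    // mutants generated for the class under test (made-up)
                int covered = 160;  // mutants actually reached by the generated tests
                int killed = 120;   // mutants whose behavioural change a test detected

                System.out.printf("raw kill score        = %.2f%n", rawKillScore(killed, total));          // 0.60
                System.out.printf("normalized kill score = %.2f%n", normalizedKillScore(killed, covered)); // 0.75
            }
        }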

    Exploring means to facilitate software debugging

    In this thesis, several aspects of software debugging, from automated crash reproduction to bug report analysis and the use of contracts, have been studied.