Leveraging Generated Tests

Abstract

The main goal of automated test generation is to improve the reliability of a program by exposing faults to developers. To this end, testing should cover the largest possible portion of the program, as frequently as possible, within a given test budget (i.e., time and resources). Coverage of a program entity during testing increases our confidence in the correctness of that entity. Generating varied tests to cover a program entity is a particularly hard problem for large software systems, because the test inputs are complex and often exhibit sophisticated feature interactions. As a result, current test generation techniques, such as symbolic execution or search-based testing, do not scale well to complex, large-scale systems. This dissertation presents a test generation technique that aims to increase the frequency of coverage in large, complex software systems. It leverages information from existing test cases to direct automated testing. We report results from applying this technique to large systems such as the GCC compiler (~850K lines of code) and Mozilla's JavaScript engine (~120K lines of code), where it increases the frequency of coverage by up to a factor of 9 compared to the state-of-the-art technique. The dissertation also proposes non-adequate test-case reduction, which reduces the size of test cases under coverage and mutant-detection criteria. The C%-coverage reduction technique reduces a test case while preserving at least C% of the coverage of the original test case. The N-mutant reduction technique reduces a test case while preserving the detection of N mutants detected by the original test case. We evaluate the effectiveness of these test reduction techniques on different attributes of test cases. This research suggests that generated test cases should be treated as first-class artifacts in software development and that they can be leveraged for interesting testing tasks.
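
To make the C%-coverage criterion concrete, the sketch below shows a minimal greedy reduction loop that only accepts a candidate reduction if it still retains at least C% of the coverage achieved by the original test case. This is an illustrative assumption rather than the dissertation's actual algorithm; in particular, the `run_and_measure_coverage` helper and the line-granular reduction step are hypothetical placeholders.

```python
# Hypothetical sketch of C%-coverage-preserving test reduction.
# `run_and_measure_coverage` is an assumed helper that executes a
# test case and returns the set of covered program entities.

from typing import Callable, List, Set


def reduce_with_coverage_threshold(
    test_lines: List[str],
    run_and_measure_coverage: Callable[[List[str]], Set[str]],
    c_percent: float,
) -> List[str]:
    """Greedily drop lines from the test while keeping at least
    c_percent of the coverage achieved by the original test case."""
    original_coverage = run_and_measure_coverage(test_lines)
    required = (c_percent / 100.0) * len(original_coverage)

    reduced = list(test_lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            covered = run_and_measure_coverage(candidate)
            # Accept the smaller test only if it still covers enough
            # of the entities covered by the original test case.
            if len(covered & original_coverage) >= required:
                reduced = candidate
                changed = True
                break
    return reduced
```

The N-mutant criterion could be sketched analogously by replacing the coverage check with a check that the candidate test still kills at least N of the mutants killed by the original test case.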
