Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
Testing in Continuous Integration (CI) involves test case prioritization, selection, and execution at each cycle. Selecting the most promising test cases to detect bugs is hard when the impact of committed code changes is uncertain or when traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI, with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. The Retecs method uses reinforcement learning to select and prioritize test cases according to their duration, time since last execution, and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, Retecs learns to prioritize error-prone test cases higher, guided by a reward function and by observing previous CI cycles. By applying Retecs to data extracted from three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic, adaptive test case selection and prioritization in CI and regression testing.
Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017). Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration. In Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA'17) (pp. 12--22). ACM.
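A minimal sketch can make the reward-driven learning loop concrete. The agent below is an illustrative reading of the abstract, not the authors' implementation: it scores each test case from the three features named above (duration, time since last execution, recent failure history) using a simple epsilon-greedy value estimate over discretized features, and is rewarded when a test it scheduled fails. All field names and the discretization are assumptions.

```python
import random
from collections import defaultdict

class PrioritizationAgent:
    """Illustrative value-based agent; not the Retecs implementation."""

    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = defaultdict(float)   # discretized state -> learned priority
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate

    def state(self, test):
        # Discretize the three features named in the abstract: duration,
        # time since last execution, and recent failure history.
        return (test["duration"] // 10,
                min(test["cycles_since_run"], 5),
                tuple(test["recent_failures"]))

    def priority(self, test):
        if random.random() < self.epsilon:    # occasionally explore
            return random.random()
        return self.q[self.state(test)]

    def learn(self, test, reward):
        s = self.state(test)
        self.q[s] += self.alpha * (reward - self.q[s])

def run_ci_cycle(agent, tests, time_budget):
    # Rank by learned priority, execute within the time budget, and reward
    # the agent when a scheduled test case fails (one possible reward
    # function; the paper evaluates several alternatives).
    spent = 0
    for test in sorted(tests, key=agent.priority, reverse=True):
        if spent + test["duration"] > time_budget:
            continue
        spent += test["duration"]
        failed = test["run"]()                # hypothetical: True on failure
        agent.learn(test, reward=1.0 if failed else 0.0)
```

Run over successive CI cycles, such an agent shifts priority toward feature combinations that historically produced failures, which is the adaptive behavior the abstract describes.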
Prioritization of combinatorial test cases by incremental interaction coverage
Combinatorial testing is a well-recognized testing method that has been widely applied in practice. To facilitate analysis, a common approach is to assume that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, the order in which the test cases are executed is critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter-value combinations at a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strengths, we propose a new strategy that prioritizes combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than random prioritization and prioritization by test case generation order, and outperforms the interaction-coverage-based test prioritization technique in most cases.
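To make the incremental-strength idea concrete, here is a small sketch under the assumption that each test case is a tuple of parameter values: test cases are greedily ordered by their gain in uncovered parameter-value combinations, starting at strength 1 and raising the strength once no candidate adds coverage. This illustrates the strategy described above; it is not the paper's algorithm.

```python
from itertools import combinations

def interactions(test, strength):
    # All parameter-value combinations of the given strength that one
    # test case (a tuple of parameter values) covers.
    return {(idx, tuple(test[i] for i in idx))
            for idx in combinations(range(len(test)), strength)}

def prioritize_incremental(suite, max_strength):
    ordered, remaining = [], list(suite)
    for t in range(1, max_strength + 1):      # start small, raise strength
        covered = set()
        # Account for what the already-ordered tests cover at strength t.
        for test in ordered:
            covered |= interactions(test, t)
        while remaining:
            best = max(remaining,
                       key=lambda tc: len(interactions(tc, t) - covered))
            gain = interactions(best, t) - covered
            if not gain:                      # strength t exhausted; move on
                break
            covered |= gain
            ordered.append(best)
            remaining.remove(best)
    ordered.extend(remaining)                 # append any leftovers
    return ordered
```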
The Effectiveness of t-Way Test Data Generation
Modern society is increasingly dependent on the correct functioning of software, increasingly so in areas considered safety-related or safety-critical. Therefore, there is a growing need to verify and validate that the software is in fact correct and will perform its intended function. Many approaches to this problem have been proposed; however, none seems likely to supplant the role of testing in the near future.
If we accept that there is, and will be, a continuing need to test software, then the question becomes how this can be done effectively, both in terms of the ability to detect errors and in terms of cost. One avenue of research that offers the prospect of improving both of these aspects is the automatic generation of test data.
There has recently been a large amount of work conducted in this area. One particularly promising direction has been the application of ideas from the field of experimental design, in particular t-way adequate factorial designs.
The area, however, is not without issues: there is evidence that the technique is capable of detecting errors, but that evidence is not unequivocal. Moreover, as with almost all work in the area of automatic test generation, there has been very little comparative work comparing the technique with other test data generation techniques. Worse, there has been effectively no work comparing any automatic test data generation technique with the effectiveness of tests generated by humans. Another major issue is the number of tests that applying the technique can produce, which implies the need for an automated oracle if the technique is to be applied successfully. The flaw with this is, of course, that in most situations the oracle is the human conducting the tests, a point often ignored in testing research.
The work presented here addresses both of these points. To do this I have used a code base taken from an industrial engine control system that has an existing set of high-quality unit tests developed by hand. To complement this, several other techniques for automatically generating test data have been applied, namely random testing, random experimental designs, and a technique for generating single-factor experiments. To compare the error detection ability of all of the sets of test vectors, rather than the usual effectiveness surrogate of code coverage, I have used mutation analysis on the code base to directly measure the ability of each set of test vectors to discover common coding errors. The results presented here show that test data generation techniques based on t-way factorial designs are at least as effective as hand-generated tests and superior to random testing and the single-factor experimental technique.
The oracle problem associated with the factorial design techniques was addressed using a test set minimisation approach. The mutation tool monitored which vectors could “kill” which code mutants. After a subset of the test vectors had been run, the most effective vectors were retained and the rest discarded. Likewise, mutants that were killed were removed from further consideration, and the process was repeated. Experimental results show that this minimisation procedure is effective at reducing computational overhead and is capable of producing final sets of test vectors that are comparable in size to the sets of hand-generated tests, and so amenable to final hand checking.
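The minimisation procedure lends itself to a short sketch. Assuming the mutation tool's output is available as a kill matrix (a hypothetical mapping from each test vector to the set of mutants it kills), the greedy loop below retains the currently most effective vector, drops the mutants it kills, and repeats, approximating the iterative retain-and-discard process described above.

```python
def minimise(kills):
    """Greedy minimisation over a kill matrix.

    kills: dict mapping test vector id -> set of mutant ids it kills.
    Returns a reduced list of vector ids covering the same mutants.
    """
    live = set().union(*kills.values())       # mutants still to be killed
    kept = []
    remaining = dict(kills)
    while live and remaining:
        # Retain the vector that kills the most still-live mutants.
        best = max(remaining, key=lambda v: len(remaining[v] & live))
        gained = remaining.pop(best) & live
        if not gained:
            break                             # nothing left contributes
        kept.append(best)
        live -= gained                        # killed mutants leave the pool
    return kept
```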
Aggregate-strength interaction test suite prioritization
Combinatorial interaction testing is a widely used approach. In testing, it is often assumed that all combinatorial test cases have equal fault detection capability; however, it has been shown that the execution order of an interaction test suite's test cases may be critical, especially when testing resources are limited. To improve testing cost-effectiveness, test cases in the interaction test suite can be prioritized. One of the best-known categories of prioritization approaches is “fixed-strength prioritization”, which prioritizes an interaction test suite by choosing new test cases with the highest uncovered interaction coverage at a fixed strength (level of interaction among parameters). A drawback of these approaches, however, is that when selecting each test case they consider only a single fixed strength, not multiple strengths. To overcome this, we propose a new “aggregate-strength prioritization” that combines interaction coverage at different strengths. Experimental results show that in most cases our method performs better than the test-case-generation, reverse test-case-generation, and random prioritization techniques. The method also usually outperforms “fixed-strength prioritization” while maintaining a similar time cost.
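A brief sketch of how an aggregate-strength score might work, assuming again that test cases are tuples of parameter values: each candidate is scored by summing its uncovered-interaction gains across several strengths rather than at one fixed strength. The equal weighting across strengths is an illustrative assumption; the paper's actual combination scheme may differ.

```python
from itertools import combinations

def gain(test, covered, strength):
    # Uncovered parameter-value combinations this test adds at `strength`.
    combos = {(idx, tuple(test[i] for i in idx))
              for idx in combinations(range(len(test)), strength)}
    return combos - covered[strength]

def aggregate_prioritize(suite, strengths=(2, 3)):
    covered = {t: set() for t in strengths}
    ordered, remaining = [], list(suite)
    while remaining:
        # Score each candidate by its combined gain across all strengths.
        best = max(remaining, key=lambda tc: sum(
            len(gain(tc, covered, t)) for t in strengths))
        for t in strengths:
            covered[t] |= gain(best, covered, t)
        ordered.append(best)
        remaining.remove(best)
    return ordered
```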
Test & Evaluation Best Practices for Machine Learning-Enabled Systems
Machine learning (ML)-based software systems are rapidly gaining adoption across various domains, making it increasingly essential to ensure they perform as intended. This report presents best practices for the Test and Evaluation (T&E) of ML-enabled software systems across their lifecycle. We categorize the lifecycle of ML-enabled software systems into three stages: component; integration and deployment; and post-deployment. At the component level, the primary objective is to test and evaluate the ML model as a standalone component. Next, in the integration and deployment stage, the goal is to evaluate an integrated ML-enabled system consisting of both ML and non-ML components. Finally, once the ML-enabled software system is deployed and operationalized, the T&E objective is to ensure the system performs as intended. Maintenance activities for ML-enabled software systems span the lifecycle and involve maintaining their various assets.
Given their unique characteristics, the T&E of ML-enabled software systems is challenging. While significant research has been reported on T&E at the component level, limited work is reported on T&E in the remaining two stages. Furthermore, in many cases, there is a lack of systematic T&E strategies throughout the ML-enabled system's lifecycle. This leads practitioners to resort to ad hoc T&E practices, which can undermine user confidence in the reliability of ML-enabled software systems. New systematic testing approaches, adequacy measurements, and metrics are required to address the T&E challenges across all stages of the ML-enabled system lifecycle.
Feedback driven adaptive combinatorial testing
The configuration spaces of modern software systems are too large to test exhaustively. Combinatorial interaction testing (CIT) approaches, such as covering arrays, systematically sample the configuration space and test only the selected configurations. The basic justification for CIT approaches is that they can cost-effectively exercise all system behaviors caused by the settings of t or fewer options. We conjecture, however, that in practice many such behaviors are not actually tested because of masking effects: failures that perturb execution so as to prevent some behaviors from being exercised. In this work we present a feedback-driven, adaptive, combinatorial testing approach aimed at detecting and working around masking effects. At each iteration we detect potential masking effects, heuristically isolate their likely causes, and then generate new covering arrays that allow previously masked combinations to be tested in the subsequent iteration. We empirically assess the effectiveness of the proposed approach on two large, widely used open-source software systems. Our results suggest that masking effects do exist and that our approach provides a promising and efficient way to work around them.
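The iteration described above can be summarized in a short sketch. The three callables (covering-array generation, configuration execution, and heuristic isolation of masking causes) are hypothetical stand-ins for the paper's components, injected as parameters so the loop itself stays self-contained.

```python
def adaptive_cit(generate_array, run_configs, isolate_causes, max_iter=10):
    """Feedback loop: sample, test, detect masking, constrain, resample.

    generate_array(constraints) -> iterable of configurations (hypothetical)
    run_configs(array)          -> dict config -> passed? (hypothetical)
    isolate_causes(failing, results) -> set of suspected maskers (hypothetical)
    """
    constraints = set()        # option settings suspected of masking failures
    all_results = {}
    for _ in range(max_iter):
        array = generate_array(constraints)     # avoid suspected maskers
        results = run_configs(array)            # execute sampled configs
        all_results.update(results)
        failing = [cfg for cfg, passed in results.items() if not passed]
        if not failing:
            break
        # Newly isolated causes become constraints, so previously masked
        # combinations can be exercised in the next iteration.
        new = isolate_causes(failing, results)
        if not new - constraints:
            break                               # fixed point reached
        constraints |= new
    return all_results
```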