Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
Testing in Continuous Integration (CI) involves test case prioritization, selection, and execution at each cycle. Selecting the most promising test cases to detect bugs is hard when the impact of committed code changes is uncertain or when traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI, with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. The Retecs method uses reinforcement learning to select and prioritize test cases according to their duration, time since last execution, and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, the Retecs method learns to prioritize error-prone test cases higher, guided by a reward function and by observing previous CI cycles. By applying Retecs to data extracted from three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing.

Published as: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017). Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration. In Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA'17), pp. 12-22. ACM.
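A minimal sketch, assuming nothing beyond what the abstract states: the Python code below is not the Retecs implementation, only a toy illustration of an agent that scores test cases from duration, time since last execution, and failure history, and adjusts its scoring from a per-cycle reward. All class and field names (TestCase, PrioritizationAgent, recent_failures) are invented for illustration.

import random
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    duration: float                          # expected runtime in minutes
    cycles_since_last_run: int               # staleness of the test case
    recent_failures: list = field(default_factory=list)   # 1 = failed, 0 = passed

class PrioritizationAgent:
    """Scores test cases with a linear value function and nudges the
    weights using a reward observed after each CI cycle."""

    def __init__(self, learning_rate=0.05):
        self.weights = [0.0, 0.0, 0.0]       # short duration, staleness, failure rate
        self.lr = learning_rate

    def features(self, tc):
        fail_rate = sum(tc.recent_failures) / max(len(tc.recent_failures), 1)
        return [1.0 / (1.0 + tc.duration), float(tc.cycles_since_last_run), fail_rate]

    def score(self, tc):
        return sum(w * f for w, f in zip(self.weights, self.features(tc)))

    def prioritize(self, suite):
        # Higher score = executed earlier; small noise keeps exploration alive.
        return sorted(suite, key=lambda tc: self.score(tc) + random.gauss(0, 0.01),
                      reverse=True)

    def update(self, executed, verdicts, reward):
        # Reinforce the features of failing (bug-revealing) test cases and
        # slightly penalize passing ones, scaled by the cycle's reward.
        for tc, failed in zip(executed, verdicts):
            target = reward if failed else -0.1 * reward
            self.weights = [w + self.lr * target * f
                            for w, f in zip(self.weights, self.features(tc))]

# One simulated CI cycle with made-up data.
suite = [TestCase("t_login", 2.0, 3, [1, 0, 1]), TestCase("t_report", 10.0, 1, [0, 0, 0])]
agent = PrioritizationAgent()
ordered = agent.prioritize(suite)
verdicts = [1, 0]                            # outcomes observed after execution
agent.update(ordered, verdicts, reward=1.0)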
Test case prioritization using test case diversification and fault-proneness estimations
Context: Regression testing activities greatly reduce the risk of faulty software releases. However, the size of the test suite grows throughout the development process, resulting in time-consuming execution of the test suite and delayed feedback to the software development team. This has motivated approaches such as test case prioritization (TCP) and test-suite reduction that achieve better results with limited resources. In this regard, approaches that use auxiliary sources of data, such as bug history, are of particular interest.
Objective: Our aim is to propose an approach for TCP that takes into account test case coverage data, bug history, and test case diversification. To evaluate this approach, we study its performance on real-world open-source projects.
Method: The bug history is used to estimate the fault-proneness of source code areas. Test case diversification is preserved by incorporating fault-proneness into a clustering-based prioritization scheme (a sketch of such a scheme follows this abstract).
Results: The proposed methods are evaluated on datasets collected from the development history of five real-world projects, comprising 357 versions in total. The experiments show that the proposed methods outperform coverage-based TCP methods.
Conclusion: The proposed approach shows that coverage-based and fault-proneness-based methods can be improved by combining diversification with fault-proneness incorporation.
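As the sketch referenced in the Method section above: a minimal Python illustration, not the paper's algorithm, in which tests are greedily clustered by coverage similarity (for diversification), each test is scored by the estimated fault-proneness of the files it covers, and the final order is a round-robin over clusters, highest score first. The Jaccard threshold, the greedy clustering heuristic, and the data are all assumptions made for the example.

def jaccard(a, b):
    """Coverage similarity between two sets of covered source files."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_tests(coverage, threshold=0.5):
    """Greedy clustering: a test joins the first cluster whose representative
    it is similar enough to; otherwise it starts a new cluster."""
    clusters = []                      # list of lists of test names
    for test, covered in coverage.items():
        for cluster in clusters:
            if jaccard(covered, coverage[cluster[0]]) >= threshold:
                cluster.append(test)
                break
        else:
            clusters.append([test])
    return clusters

def prioritize(coverage, fault_proneness, threshold=0.5):
    # Score a test by the estimated fault-proneness of the code it covers.
    score = {t: sum(fault_proneness.get(f, 0.0) for f in files)
             for t, files in coverage.items()}
    clusters = [sorted(c, key=score.get, reverse=True)
                for c in cluster_tests(coverage, threshold)]
    order = []
    # Round-robin over clusters preserves diversity; within a cluster the
    # test covering the most fault-prone code goes first.
    while any(clusters):
        for cluster in clusters:
            if cluster:
                order.append(cluster.pop(0))
    return order

# Illustrative data: per-test coverage and per-file fault-proneness estimates.
coverage = {
    "t1": {"auth.py", "db.py"},
    "t2": {"auth.py", "session.py"},
    "t3": {"report.py"},
}
fault_proneness = {"auth.py": 0.8, "db.py": 0.3, "session.py": 0.5, "report.py": 0.1}
print(prioritize(coverage, fault_proneness))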
Search-Based Information Systems Migration: Case Studies on Refactoring Model Transformations
Information systems are built to last for decades; however, the reality suggests otherwise. Companies are often pushed to modernize their systems to reduce costs, meet new policies, improve security, or remain competitive. Model-driven engineering (MDE) approaches are used in several successful projects to migrate systems. MDE raises the level of abstraction for complex systems by relying on models as first-class entities. These models are maintained and transformed using model transformations (MT), which are expressed by means of transformation rules that transform models from source to target metamodels. The migration process for information systems may take years for large systems. Thus, many changes will be introduced to the transformations to reflect new business requirements, fix bugs, or meet updated metamodels. Therefore, the quality of MT should be continually checked and improved during the evolution process to avoid future technical debt. Most MT programs are written as one large module due to the lack of refactoring/modularization and regression-testing tool support.

In object-oriented systems, composition and modularization are used to tackle the issues of maintainability and testability. Moreover, refactoring is used to improve the non-functional attributes of the software, making it easier and faster for developers to understand and manipulate the code. Thus, we proposed an intelligent computational search approach to automatically modularize MT. Furthermore, we took inspiration from a well-defined quality assessment model for object-oriented design to propose a quality assessment model specifically for MT. The results showed a 45% improvement in the developers' speed in detecting or fixing bugs, and developers made 40% fewer errors when performing a task with the optimized version.

Since refactoring operations change the transformation, it is important to apply regression testing to check their correctness and robustness. Thus, we proposed a multi-objective test case selection technique to find the best trade-off between coverage and computational cost. Results showed a drastic speed-up of the testing process while maintaining good testing performance. The survey with practitioners highlighted the need for such a maintenance and evolution framework to improve the quality and efficiency of the existing migration process.

Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/149153/1/Bader Alkhazi Final Dissertation.pdf (restricted to UM users only)
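The multi-objective test case selection mentioned above trades off coverage against computational cost. The Python sketch below only illustrates that trade-off as a Pareto front over test subsets, using exhaustive enumeration on a tiny made-up example; the dissertation's actual technique is search-based, and the rule identifiers, test names, and costs here are invented.

from itertools import combinations

# test name -> (covered transformation-rule ids, execution cost in seconds)
tests = {
    "tc_a": ({"r1", "r2"}, 4.0),
    "tc_b": ({"r2", "r3"}, 2.0),
    "tc_c": ({"r4"}, 1.0),
    "tc_d": ({"r1", "r3", "r4"}, 6.0),
}

def evaluate(subset):
    """Objectives of a subset: covered rules (maximize) and total cost (minimize)."""
    covered = set().union(*(tests[t][0] for t in subset)) if subset else set()
    cost = sum(tests[t][1] for t in subset)
    return len(covered), cost

def dominates(a, b):
    """a dominates b if it covers at least as much for no more cost and differs somewhere."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

candidates = [frozenset(c) for r in range(len(tests) + 1)
              for c in combinations(tests, r)]
scored = {c: evaluate(c) for c in candidates}

# Keep only non-dominated subsets: the coverage/cost Pareto front.
pareto = [c for c in candidates
          if not any(dominates(scored[o], scored[c]) for o in candidates)]

for subset in sorted(pareto, key=lambda c: scored[c][1]):
    print(sorted(subset), scored[subset])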