Test Case Selection and Prioritization Using Machine Learning: A Systematic Literature Review
Regression testing is an essential activity to assure that software code
changes do not adversely affect existing functionalities. With the wide
adoption of Continuous Integration (CI) in software projects, which increases
the frequency of running software builds, running all tests can be
time-consuming and resource-intensive. To alleviate this problem, Test Case
Selection and Prioritization (TSP) techniques have been proposed to improve
regression testing by selecting and prioritizing test cases in order to provide
early feedback to developers. In recent years, researchers have relied on
Machine Learning (ML) techniques to achieve effective TSP (ML-based TSP). Such
techniques help combine information about test cases, from partial and
imperfect sources, into accurate prediction models. This work conducts a
systematic literature review focused on ML-based TSP techniques, aiming to
perform an in-depth analysis of the state of the art, thus gaining insights
regarding future avenues of research. To that end, we analyze 29 primary
studies published from 2006 to 2020, which have been identified through a
systematic and documented process. This paper addresses five research
questions covering the variations among ML-based TSP techniques, the feature
sets used to train and test ML models, the alternative metrics used to
evaluate the techniques, the performance of the techniques, and the
reproducibility of the published studies.
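The surveyed techniques differ widely, but the common core is learning a
scoring function over test-case features and ordering tests by it. A minimal
sketch of that idea in Python (the feature names and fixed weights below are
hypothetical, standing in for a trained model's output; they are not taken
from any of the reviewed studies):

```python
# Sketch of ML-style test case prioritization: order tests by a score
# combining historical signals. The hand-picked weights stand in for a
# trained model; real ML-based TSP would learn them from build history.

def prioritize(test_cases):
    """Return test cases ordered so the most fault-prone run first."""
    def score(tc):
        return (3.0 * tc["recent_failure_rate"]      # failed lately?
                + 2.0 * tc["changed_code_coverage"]  # covers changed code?
                - 0.5 * tc["duration_minutes"])      # prefer fast feedback
    return sorted(test_cases, key=score, reverse=True)

tests = [
    {"name": "t1", "recent_failure_rate": 0.1,
     "changed_code_coverage": 0.9, "duration_minutes": 2.0},
    {"name": "t2", "recent_failure_rate": 0.6,
     "changed_code_coverage": 0.2, "duration_minutes": 1.0},
    {"name": "t3", "recent_failure_rate": 0.0,
     "changed_code_coverage": 0.1, "duration_minutes": 5.0},
]
print([t["name"] for t in prioritize(tests)])  # → ['t2', 't1', 't3']
```

Running the prioritized prefix of such an ordering is what turns
prioritization into selection under a CI time budget.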
Requirements Prioritization Based on Benefit and Cost Prediction: An Agenda for Future Research
In the early phases of the software life cycle, requirements
prioritization necessarily relies on the specified
requirements and on predictions of the benefit and cost of
individual requirements. This paper presents the results of
a systematic literature review investigating
how existing methods approach the problem of
requirements prioritization based on benefit and cost.
From this review, it derives a set of under-researched
issues which warrant future effort and sketches an
agenda for future research in this area.
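In its simplest form, the benefit-and-cost framing reduces to ranking
requirements by their predicted benefit-to-cost ratio. A minimal sketch (the
requirement data is invented for illustration; real methods differ mainly in
how the benefit and cost predictions are obtained):

```python
def prioritize_requirements(reqs):
    """Rank requirements by predicted benefit-to-cost ratio, highest first."""
    return sorted(reqs, key=lambda r: r["benefit"] / r["cost"], reverse=True)

reqs = [
    {"id": "R1", "benefit": 8, "cost": 4},  # ratio 2.0
    {"id": "R2", "benefit": 5, "cost": 1},  # ratio 5.0
    {"id": "R3", "benefit": 9, "cost": 9},  # ratio 1.0
]
print([r["id"] for r in prioritize_requirements(reqs)])  # → ['R2', 'R1', 'R3']
```

The review's point is precisely that the inputs to this ratio are predictions
made early, when both benefit and cost are highly uncertain.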
Cancer gene prioritization by integrative analysis of mRNA expression and DNA copy number data: a comparative review
A variety of genome-wide profiling techniques are available to probe
complementary aspects of genome structure and function. Integrative analysis of
heterogeneous data sources can reveal higher-level interactions that cannot be
detected based on individual observations. A standard integration task in
cancer studies is to identify altered genomic regions that induce changes in
the expression of the associated genes based on joint analysis of genome-wide
gene expression and copy number profiling measurements. In this review, we
provide a comparison among various modeling procedures for integrating
genome-wide profiling data of gene copy number and transcriptional alterations
and highlight common approaches to genomic data integration. A transparent
benchmarking procedure is introduced to quantitatively compare the cancer gene
prioritization performance of the alternative methods. The benchmarking
algorithms and data sets are available at http://intcomp.r-forge.r-project.org
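A common baseline for this integration task is to score each gene by how
strongly its copy number tracks its expression across samples, for instance
via Pearson correlation, and to rank genes by that score. A minimal sketch
(the gene data is synthetic; the methods compared in the review use
considerably more elaborate models than per-gene correlation):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_genes(copy_number, expression):
    """Rank genes by copy number vs. expression correlation across samples."""
    return sorted(copy_number,
                  key=lambda g: pearson(copy_number[g], expression[g]),
                  reverse=True)

copy_number = {"GENE_A": [1, 2, 3, 4],
               "GENE_B": [1, 2, 3, 4],
               "GENE_C": [1, 2, 1, 2]}
expression  = {"GENE_A": [2.0, 4.1, 5.9, 8.2],   # tracks copy number
               "GENE_B": [8.0, 6.1, 4.2, 2.0],   # anti-correlated
               "GENE_C": [4.9, 5.1, 5.1, 4.9]}   # unrelated
print(rank_genes(copy_number, expression))  # → ['GENE_A', 'GENE_C', 'GENE_B']
```

A benchmarking procedure like the one in the review would replace this score
with each candidate method's output and compare the resulting gene rankings.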
Test case prioritization approaches in regression testing: A systematic literature review
Context: Software quality can be assured through the software testing process. However, the testing phase is expensive because it is time-consuming. By scheduling the execution order of test cases through a prioritization approach, software testing efficiency can be improved, especially during regression testing. Objective: Prioritization is a notable step in constructing an effective software testing environment and can increase a system's commercial value. The main idea of this review is to examine and classify current test case prioritization (TCP) approaches based on the articulated research questions. Method: A set of search keywords, applied to appropriate repositories, was used to extract the studies that fulfilled all the defined criteria; the studies were classified into journal, conference, symposium, and workshop categories. The review strategy yielded 69 primary studies. Results: The primary studies comprised 40 journal articles, 21 conference papers, five symposium articles, and three workshop articles. Overall, TCP approaches remain broadly open to improvement; each has its own potential value, advantages, and limitations. Additionally, we found that variations in the starting point of the TCP process across approaches give project managers different timelines and benefits when choosing an approach that suits the project schedule and available resources. Conclusion: Test case prioritization has already been considerably discussed in the software testing domain. However, quite a number of existing prioritization techniques can still be improved, especially in the data used and the execution process of each approach.
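Orderings in this literature are typically compared with the Average
Percentage of Faults Detected (APFD) metric, which rewards schedules that
expose faults early. A minimal sketch of APFD (the test and fault names are
invented for illustration):

```python
def apfd(ordering, faults_detected):
    """APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n),
    where n is the number of tests and m the number of faults.
    faults_detected maps each test name to the set of fault ids it reveals."""
    n = len(ordering)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    # 1-based position of the first test that reveals each fault.
    first_pos = {}
    for i, test in enumerate(ordering, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, i)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

faults = {"A": {1, 2}, "B": set(), "C": {3}}
# Moving the fault-revealing test C before the non-revealing B raises APFD.
print(round(apfd(["A", "C", "B"], faults), 4))  # → 0.7222
```

Higher APFD means faults are, on average, detected earlier in the run.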
Where Are My Intelligent Assistant's Mistakes? A Systematic Testing Approach
Intelligent assistants are handling increasingly critical tasks, but until now, end users have had no way to systematically assess where their assistants make mistakes. For some intelligent assistants, this is a serious problem: if the assistant is doing work that is important, such as assisting with qualitative research or monitoring an elderly parent’s safety, the user may pay a high cost for unnoticed mistakes. This paper addresses the problem with WYSIWYT/ML (What You See Is What You Test for Machine Learning), a human/computer partnership that enables end users to systematically test intelligent assistants. Our empirical evaluation shows that WYSIWYT/ML helped end users find assistants’ mistakes significantly more effectively than ad hoc testing. Not only did it allow users to assess an assistant’s work on an average of 117 predictions in only 10 minutes, but it also scaled to a much larger data set, assessing an assistant’s work on 623 out of 1,448 predictions using only the users’ original 10 minutes’ testing effort.