5 research outputs found

    Comparative Analysis of Test Case Prioritization Using Ant Colony Optimization Algorithm and Genetic Algorithm

    Get PDF
    After release, every software system receives upgrades to meet ever-changing client needs, so regression testing becomes one of the most important operations in a software system's lifecycle. Because it is too expensive to re-execute all the test cases from a previous version of the system, numerous ways to optimize the regression test suite have evolved, one of which is test case prioritization (TCP). This study tests and compares the effectiveness of evolutionary algorithms and swarm intelligence algorithms, represented by the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Both were implemented and measured on the Average Percentage of Faults Detected (APFD), execution time, and Big O complexity, as these are critical aspects of software testing for delivering high-quality products on time. The study employs data from two separate investigations of comparable problems, denoted Case Study One and Case Study Two. Although TCP has been used extensively in recent years, little research has analyzed and evaluated the performance of GA and ACO in a test case prioritization context. The algorithms were compared on APFD and execution time, measured at 50, 100, 150, and 200 iterations for both datasets. Both algorithms were determined to have the same Big O complexity, which indicates that they should scale similarly across input sizes. Both performed well in their respective roles: ACO proved more valuable than GA in terms of APFD, while GA proved more valuable than ACO in terms of execution time.
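    For reference, APFD has a standard closed form: given a suite of n test cases and m faults, with TF_i the position in the ordering of the first test that detects fault i, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n). A minimal Python sketch of the computation; the fault positions below are illustrative, not data from the study:

```python
def apfd(fault_positions, n_tests):
    """Average Percentage of Faults Detected for one test ordering.

    fault_positions: for each fault, the 1-based position in the
    ordering of the first test case that detects it (TF_i).
    n_tests: total number of test cases in the suite (n).
    """
    m = len(fault_positions)  # number of faults
    return 1 - sum(fault_positions) / (n_tests * m) + 1 / (2 * n_tests)

# Illustrative example: 5 tests; 3 faults first detected by tests 1, 3, and 2.
print(apfd([1, 3, 2], n_tests=5))  # 0.7 (higher = faults found earlier)
```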

    SOFTWARE UNDER TEST IN SOFTWARE TESTING RESEARCH: A REVIEW

    Get PDF
    The software under test (SUT) is an essential element of software testing research. Preparing an SUT is not simple: it requires accuracy and completeness, and it affects the quality of the research conducted. Currently, there are several ways to obtain an SUT for software testing research: building one's own SUT, building an SUT on top of open-source software, and using a ready-made SUT from a repository. This article discusses the results of identifying the SUTs used across many software testing studies. The research was conducted as a systematic literature review (SLR) following the Kitchenham protocol. The review covers 86 articles published in 2017-2020, selected in two stages: inclusion and exclusion criteria, followed by quality assessment. The results show that the use of open source is strongly dominant. Some researchers use open source as the basis for developing an SUT, while others take a ready-to-use SUT from a repository. In this context, the Software-artifact Infrastructure Repository (SIR) and Defects4J are the most common choices.

    Cost-effective simulation-based test selection in self-driving cars software

    Get PDF
    Simulation environments are essential for the continuous development of complex cyber-physical systems such as self-driving cars (SDCs). Previous results on simulation-based testing of SDCs have shown that many automatically generated tests do not contribute strongly to identifying SDC faults and hence do not help increase SDC quality. Because running such "uninformative" tests wastes computational resources and drastically increases the testing cost of SDCs, testers should avoid them. However, identifying "uninformative" tests before running them remains an open challenge. Hence, this paper proposes SDC-Scissor, a framework that leverages Machine Learning (ML) to identify SDC tests that are unlikely to detect faults in the SDC software under test, thus enabling testers to skip their execution and drastically increase the cost-effectiveness of simulation-based SDC testing. Our evaluation of six ML models on two large datasets comprising 22,652 tests showed that SDC-Scissor achieved a classification F1-score of up to 96%. Moreover, our results show that SDC-Scissor outperformed a randomized baseline in identifying more failing tests per unit of time. Webpage & video: https://github.com/ChristianBirchler/sdc-scissor
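    The core idea, classifying a test as likely fault-revealing from static features before running the costly simulation, can be sketched with any off-the-shelf binary classifier. In this minimal Python sketch the features, labels, and model choice are hypothetical stand-ins, not the paper's actual road features or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical static features of generated test roads (e.g. length,
# curvature statistics); label 1 means the simulated test failed.
rng = np.random.default_rng(0)
X = rng.random((500, 4))                    # 500 tests, 4 road features
y = (X[:, 1] + X[:, 3] > 1.1).astype(int)   # synthetic pass/fail labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Skip tests predicted to pass; execute only the likely-failing ones.
pred = clf.predict(X_test)
print("F1:", round(f1_score(y_test, pred), 3))
```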

    Optimizing regression testing with AHP-TOPSIS metric system for effective technical debt evaluation

    Get PDF
    Regression testing is essential to ensure that the modified software product still conforms to the expected requirements. However, it can be costly and time-consuming. To address this issue, various approaches have been proposed for selecting test cases that provide adequate coverage of the modified software. Nonetheless, problems related to omitting necessary test cases and/or rerunning unnecessary ones continue to pose challenges, particularly with regard to technical debt (TD) resulting from code coverage shortcomings and/or overtesting. In the case of testing-related shortcomings, incurring TD may save cost and time in the short run, but it leads to future maintenance and testing expenses. Most prior studies have treated test case selection as a single-objective or two-objective optimization problem. This study introduces a multi-objective decision-making approach to quantify and evaluate TD in regression testing. The proposed approach combines the analytic hierarchy process (AHP) with the technique for order preference by similarity to ideal solution (TOPSIS) to select the best test cases with respect to objectives defined by test cost, code coverage, and test risk. The AHP method eliminates subjective bias when optimizing the objective weights, while TOPSIS evaluates and selects test-case alternatives based on TD. The effectiveness of this approach was compared against a specific multi-objective optimization method and a standard coverage methodology. Unlike other approaches, the proposed approach always accepts solutions based on balanced decisions, weighing modifications, risk analysis, and testing costs against potential technical debt. The results demonstrate that the proposed approach reduces both TD and regression testing effort.
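    The TOPSIS step described above can be sketched compactly: normalize the decision matrix, apply the AHP-derived weights, and rank test cases by closeness to the ideal solution. In this minimal Python sketch the decision matrix and weights are made-up placeholders; in the paper the weights come from AHP pairwise comparisons:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Score alternatives; benefit[j] is True if criterion j is maximized."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)              # closeness, higher = better

# Columns: test cost (minimize), code coverage (maximize), test risk (minimize).
tests = np.array([[3.0, 0.80, 0.2],
                  [5.0, 0.95, 0.1],
                  [2.0, 0.60, 0.4]])
weights = np.array([0.3, 0.5, 0.2])        # placeholders for AHP output
scores = topsis(tests, weights, np.array([False, True, False]))
print(np.argsort(-scores))                 # test-case indices, best first
```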

    A Test Case Prioritization Genetic Algorithm Guided by the Hypervolume Indicator

    No full text
    Regression testing is performed during maintenance activities to assess whether the unchanged parts of a software system behave as intended. To reduce its cost, test case prioritization techniques can be used to schedule the execution of the available test cases so as to reveal regression faults earlier. Optimal test orderings can be determined using various techniques, such as greedy algorithms and meta-heuristics, optimizing multiple fitness functions such as the average percentage of statement and branch coverage. These fitness functions condense the cumulative coverage scores achieved when incrementally running the test cases in a given ordering into Area Under Curve (AUC) metrics. In this paper, we observe that AUC metrics represent a bi-dimensional (simplified) version of the hypervolume metric, which is widely used in many-objective optimization. Thus, we propose a Hypervolume-based Genetic Algorithm, HGA, to solve the Test Case Prioritization problem with multiple test coverage criteria. An empirical study against five state-of-the-art techniques shows that (i) HGA is more cost-effective, (ii) HGA improves the efficiency of Test Case Prioritization, and (iii) HGA exerts stronger selective pressure when dealing with more than three criteria.
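    The AUC fitness referred to above condenses a cumulative-coverage curve into a single number; the hypervolume generalizes it beyond two dimensions. A minimal Python sketch of the bi-dimensional case, with illustrative coverage data:

```python
def coverage_auc(order, covers):
    """Area under the cumulative-coverage curve for one test ordering.

    covers[t] is the set of statements covered by test t. x is the
    fraction of the suite executed so far, y the fraction of all
    statements covered so far; this 2-D AUC is the special case of the
    hypervolume that HGA generalizes to several coverage criteria.
    """
    total = set().union(*covers.values())
    area, seen, prev_x, prev_y = 0.0, set(), 0.0, 0.0
    for i, t in enumerate(order, start=1):
        seen |= covers[t]
        x, y = i / len(order), len(seen) / len(total)
        area += (prev_y + y) / 2 * (x - prev_x)   # trapezoidal rule
        prev_x, prev_y = x, y
    return area

# Illustrative suite: three tests with overlapping statement coverage.
covers = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}}
print(coverage_auc(["t1", "t2", "t3"], covers))   # ~0.63
print(coverage_auc(["t3", "t2", "t1"], covers))   # ~0.43, worse ordering
```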
