
    A hybrid weight-based and string distances using particle swarm optimization for prioritizing test cases

    Regression testing is concerned with testing the modified version of a software system. However, re-running the entire test suite requires significant cost and time. Reducing both calls for a higher Average Percentage of Faults Detected (APFD) and faster execution to kill fault mutants. To achieve these two requirements, an improvement to existing Test Case Prioritization (TCP) techniques for more effective regression testing is offered: a weight-hybrid string distance technique with prioritization using particle swarm optimization (PSO). The distance between test cases and the weight of each test case are computed and then hybridized into a single weight-hybrid string distance value. The approach was evaluated on the Siemens dataset. The results show that the weight-hybrid string distance improves APFD values: the hybrid TFIDF-JC variant reaches 97.37%, the largest improvement (4.74%) over non-hybrid JC. For the percentage of test cases needed to kill 100% of the fault mutants, hybrid TFIDF-M yields the lowest value, 22.88%, a 76% improvement over its non-hybrid string distance counterpart.
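
    As an illustration of the metric and scoring described above, the following Python sketch implements the standard APFD formula, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), a Jaccard string distance between test cases, and a hypothetical equal-weight hybridization of a TF-IDF weight with that distance. The equal weighting and the function names are illustrative assumptions, not the paper's exact formulation.

        def apfd(ordering, faults_detected_by, num_faults):
            """Average Percentage of Faults Detected for a given test ordering.

            Assumes every fault is detected by at least one test in `ordering`.
            e.g. apfd(['t3', 't1', 't2'], {'t1': {0}, 't2': {1}, 't3': {0, 1}}, 2) == 5/6
            """
            n = len(ordering)
            first_detection_positions = []
            for fault in range(num_faults):
                for position, test in enumerate(ordering, start=1):
                    if fault in faults_detected_by[test]:
                        first_detection_positions.append(position)
                        break
            return 1 - sum(first_detection_positions) / (n * num_faults) + 1 / (2 * n)

        def jaccard_distance(tokens_a, tokens_b):
            """Jaccard distance between the token sets of two test cases."""
            a, b = set(tokens_a), set(tokens_b)
            return 1 - len(a & b) / len(a | b)

        def hybrid_score(tfidf_weight, string_distance, alpha=0.5):
            """Hypothetical linear hybridization of weight and string distance."""
            return alpha * tfidf_weight + (1 - alpha) * string_distance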

    Model-based test case prioritization using selective and even-spread count-based methods with scrutinized ordering criterion

    Regression testing is crucial in ensuring that modifications do not introduce adverse effects into the software being modified. However, regression testing suffers from high execution cost and time consumption. Test case prioritization (TCP) is one of the techniques used to overcome these issues by re-ordering test cases based on their priorities. Model-based TCP (MB-TCP) is an approach to TCP in which software models are manipulated to perform the prioritization. The issue with MB-TCP is that most existing approaches do not provide satisfactory fault detection capability. In addition, the granularity of their test selection criteria is coarse, which can reduce prioritization effectiveness. This study proposes an MB-TCP approach that can improve the fault detection performance of regression testing. It combines two existing approaches from the literature and incorporates an additional ordering criterion to boost prioritization efficacy. A detailed empirical study was conducted to evaluate and compare the performance of the proposed approach against selected existing approaches from the literature using the Average Percentage of Faults Detected (APFD) metric. Three web applications were used as the objects of study to obtain the test suites containing the tests to be prioritized. The proposed approach yields the highest APFD values among the compared approaches: 91%, 86%, and 91% for the three web applications, respectively. These higher APFD values signify that the proposed approach is effective in revealing faults early during testing and that it can improve the fault detection performance of regression testing.
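
    For illustration only, a generic count-based, model-based prioritization can be sketched in Python as a greedy selection of the test covering the most not-yet-covered model transitions, with the raw coverage count as a tie-breaker. The selective and even-spread refinements and the scrutinized ordering criterion of the proposed approach are not reproduced here; this is an assumed baseline.

        def prioritize_by_model_coverage(coverage):
            """coverage: dict mapping test id -> set of covered model transitions."""
            remaining = dict(coverage)
            covered = set()
            ordering = []
            while remaining:
                # Pick the test adding the most new transitions; tie-break on total count.
                best = max(remaining,
                           key=lambda t: (len(remaining[t] - covered), len(remaining[t])))
                ordering.append(best)
                covered |= remaining.pop(best)
            return ordering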

    Test case prioritization using test similarities

    A classical heuristic in software testing is to reward diversity, which implies that a higher priority should be assigned to test cases that differ the most from those already prioritized. This approach is commonly known as similarity-based test prioritization (SBTP) and can be realized using a variety of techniques. The objective of our study is to investigate whether SBTP is more effective at finding defects than random permutation, as well as to determine which SBTP implementations lead to better results. To achieve this objective, we implemented five different techniques from the literature and conducted an experiment using the Defects4J dataset, which contains 395 real faults from six real-world open-source Java programs. Findings indicate that running the most dissimilar test cases early in the process is largely more effective than random permutation (Vargha–Delaney A [VDA]: 0.76–0.99 observed using normalized compression distance). No technique was found to be superior with respect to effectiveness. Locality-sensitive hashing was, to a small extent, less effective than the other SBTP techniques (VDA: 0.38 observed in comparison to normalized compression distance), but it largely outperformed the other techniques in speed (approximately 5–111 times faster). Our results bring to mind the well-known adage, “don’t put all your eggs in one basket”: to effectively consume a limited testing budget, one should spread it evenly across different parts of the system by running the most dissimilar test cases early in the testing process.
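
    A minimal Python sketch of diversity-driven ordering with normalized compression distance, one of the SBTP techniques evaluated: at each step the test most dissimilar (maximum of the minimum NCD) to those already prioritized is selected. The arbitrary seed and the max-min selection rule are common choices, not necessarily the exact configuration used in the study.

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance between two byte strings."""
            cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        def prioritize_by_dissimilarity(tests):
            """tests: dict mapping test id -> textual representation (bytes)."""
            candidates = list(tests)
            ordering = [candidates.pop(0)]  # arbitrary seed
            while candidates:
                most_dissimilar = max(
                    candidates,
                    key=lambda t: min(ncd(tests[t], tests[p]) for p in ordering))
                ordering.append(most_dissimilar)
                candidates.remove(most_dissimilar)
            return ordering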

    Inspecting Code Churns to Prioritize Test Cases

    In the context of software evolution, time-to-market pressure means it is not uncommon for a company to lack the time and/or resources to re-execute the whole test suite on the new software version to check for non-regression. To face this issue, many Regression Test Prioritization techniques have been proposed, aimed at ranking test cases so that tests more likely to expose faults receive higher priority. Some of these techniques exploit code churn metrics, i.e., quantifications of the code changes between two subsequent versions of a software artifact, which have proven to be effective indicators of defect-prone components. In this paper, we first present three new Regression Test Prioritization strategies, based on a novel code churn metric, which we empirically assessed on an open-source software system. The results highlight that the proposal is promising, but that it might be further improved by a more detailed analysis of the nature of the changes introduced between two subsequent code versions. To this aim, we also sketch a more refined approach we are currently investigating, which quantifies changes in a code base at a finer-grained level. Intuitively, we seek to prioritize tests that stress more fault-prone changes (e.g., structural changes in the control flow) over those that are less likely to introduce errors (e.g., the renaming of a variable). To do so, we propose to exploit the Abstract Syntax Tree (AST) representation of source code and to quantify differences between ASTs by means of specifically designed Tree Kernel functions, a type of similarity measure for tree-based data structures that has proven very effective in other domains thanks to its customizability.
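
    A simple line-level illustration of the code churn idea in Python: churn is approximated as the number of added and removed lines between two versions of a file, and tests are ranked by the total churn of the files they exercise. The paper's own churn metric and the AST/Tree Kernel refinement are not reproduced here; the names and the ranking rule are assumptions for illustration.

        import difflib

        def line_churn(old_source: str, new_source: str) -> int:
            """Added plus removed lines between two versions of a file."""
            diff = difflib.ndiff(old_source.splitlines(), new_source.splitlines())
            return sum(1 for line in diff if line.startswith(('+ ', '- ')))

        def prioritize_by_churn(tests_to_files, file_churn):
            """Rank tests by the total churn of the files they cover (descending)."""
            total = lambda test: sum(file_churn.get(f, 0) for f in tests_to_files[test])
            return sorted(tests_to_files, key=total, reverse=True)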