
    Faster Mutation Analysis via Equivalence Modulo States

    Mutation analysis has many applications, such as assessing the quality of test suites and localizing faults. One important bottleneck of mutation analysis is scalability. Recent work explores the possibility of reducing redundant execution via split-stream execution. However, split-stream execution can only remove redundant execution before the first mutated statement. In this paper we also reduce redundant execution after the first mutated statement. We observe that, although many mutated statements are not equivalent to the original, their execution results may still be equivalent to the result of the original statement under the current program state; in other words, the statements are equivalent modulo the current state. We propose a fast mutation analysis approach, AccMut, which automatically detects equivalence modulo states among a statement and its mutations, groups the statements into equivalence classes modulo states, and uses only one process to represent each class. In this way, we significantly reduce the number of split processes. Our experiments show that our approach further accelerates mutation analysis on top of split-stream execution, with a speedup of 2.56x on average.
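    A minimal sketch of the equivalence-modulo-states idea described above, not the AccMut implementation itself: statements are modelled as functions from a program state (a dict of variables) to a new state, and mutants whose result on the current state coincides are placed in one class so a single process can represent them. The variable names and mutants are hypothetical.

```python
from collections import defaultdict

def cluster_modulo_state(original, mutants, state):
    """Group the original statement and its mutants by the program state
    they produce when executed on the *current* state; each class needs
    only one process to carry the execution forward."""
    classes = defaultdict(list)
    for name, stmt in [("original", original)] + list(mutants.items()):
        result = stmt(dict(state))              # execute on a copy of the state
        key = tuple(sorted(result.items()))     # hashable snapshot of the state
        classes[key].append(name)
    return list(classes.values())

# Original statement y = abs(x) and two hypothetical mutants of it.
original = lambda s: {**s, "y": abs(s["x"])}
mutants = {
    "y = x":  lambda s: {**s, "y":  s["x"]},
    "y = -x": lambda s: {**s, "y": -s["x"]},
}

# With x == 3 the mutant 'y = x' is equivalent *modulo this state* to the
# original, so both can share a single process; only 'y = -x' must split.
print(cluster_modulo_state(original, mutants, {"x": 3}))
# [['original', 'y = x'], ['y = -x']]
```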

    Early Failure Prediction in Software Programs: Dimensionality Reduction Kernel

    The aim of this paper is to build an online failure prediction classifier for monitoring the behavior of programs. The classifier predicts whether an execution path will terminate in a failing or a passing state. This is achieved by mapping each execution path, as a vector, into a feature space whose dimensions represent sub-paths common to failing and passing execution paths. The main contribution of this paper is to treat failure prediction as a classification task over execution paths in a customized feature space. The main difficulty is the size and number of the space dimensions, which affect the speed of the classifier. The size of the dimensions can be reduced by shortening the common sub-paths used as dimensions. The length of common sub-paths is affected by repeated patterns in program executions: replacing consecutively repeated patterns in execution paths with a single iteration shortens the common sub-paths. The number of dimensions can be reduced by removing dimensions that can be projected onto others. This paper proposes two kernels that measure similarity between execution paths in an implicit feature space with reduced dimensionality. Our experiments demonstrate a significant reduction in the time overhead of the failure prediction classifier while preserving its accuracy.
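    A minimal sketch of the two ideas in the abstract, not the paper's exact kernels: (1) collapse consecutively repeated patterns in an execution path so common sub-paths stay short, and (2) compare two paths by the sub-paths they share (here, shared n-grams of block identifiers). The example paths and parameters are hypothetical.

```python
def collapse_repeats(path, max_pattern=3):
    """Replace a consecutively repeated pattern with a single iteration,
    e.g. [A, B, A, B, A, B] -> [A, B] and [C, C, C] -> [C]."""
    out = list(path)
    changed = True
    while changed:
        changed = False
        for k in range(1, max_pattern + 1):
            i = 0
            while i + 2 * k <= len(out):
                if out[i:i + k] == out[i + k:i + 2 * k]:
                    del out[i + k:i + 2 * k]      # drop the second copy
                    changed = True
                else:
                    i += 1
    return out

def subpath_kernel(p, q, n=2):
    """Similarity of two execution paths: the number of length-n sub-paths
    they share after collapsing repeats (a stand-in for the implicit
    feature space described in the abstract)."""
    grams = lambda s: {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    return len(grams(collapse_repeats(p)) & grams(collapse_repeats(q)))

passing = ["entry", "loop", "body", "loop", "body", "loop", "exit_ok"]
failing = ["entry", "loop", "body", "loop", "exit_err"]
print(subpath_kernel(passing, failing))   # 3 shared sub-paths of length 2
```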

    A Study of Equivalent and Stubborn Mutation Operators using Human Analysis of Equivalence

    Though mutation testing has been widely studied for more than thirty years, the prevalence and properties of equivalent mutants remain largely unknown. We report on the causes and prevalence of equivalent mutants and their relationship to stubborn mutants (those that remain undetected by a high-quality test suite, yet are non-equivalent). Our results, based on manual analysis of 1,230 mutants from 18 programs, reveal a highly uneven distribution of equivalence and stubbornness. For example, the ABS class and half of the UOI class generate many equivalent and almost no stubborn mutants, while the LCR class generates many stubborn and few equivalent mutants. We conclude that previous test effectiveness studies based on fault seeding could be skewed, and that developers of mutation testing tools should prioritise those operators that we found to generate disproportionately many stubborn (and few equivalent) mutants.
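    A hypothetical illustration (not drawn from the study's 18 subject programs) of why an operator class such as ABS tends to produce equivalent mutants: the operator wraps an expression in an absolute-value call, but when that expression is already non-negative in every reachable state, no test can distinguish the mutant from the original.

```python
def mean_square_error(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        diff = (x - y) ** 2          # original statement
        # ABS mutant: diff = abs((x - y) ** 2)
        # (x - y) ** 2 is always >= 0, so this mutant is equivalent:
        # it can never be killed by any test input.
        total += diff
    return total / len(xs)
```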

    Visualising the Global Structure of Search Landscapes: Genetic Improvement as a Case Study

    The search landscape is a common metaphor to describe the structure of computational search spaces. Different landscape metrics can be computed and used to predict search difficulty. Yet the metaphor falls short in visualisation terms, because complex landscapes are hard to represent both in size and in dimensionality. This paper combines Local Optima Networks, as a compact representation of the global structure of a search space, with dimensionality reduction, using the t-Distributed Stochastic Neighbour Embedding (t-SNE) algorithm, in order to both bring the metaphor to life and convey new insight into the search process. As a case study, two benchmark programs, under a Genetic Improvement bug-fixing scenario, are analysed and visualised using the proposed method. Local Optima Networks for both iterated local search and a hybrid genetic algorithm, across different neighbourhoods, are compared, highlighting the differences in how the landscape is explored.
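    A minimal sketch of the projection step (not the paper's pipeline): a Local Optima Network is a graph whose nodes are local optima and whose edges are escape transitions, and t-SNE needs a dissimilarity per node pair. One reasonable choice, assumed here, is the shortest-path distance within the network; the toy random graph stands in for a real LON.

```python
import networkx as nx
import numpy as np
from sklearn.manifold import TSNE

# Toy LON: nodes are local optima, edges are escape/perturbation transitions.
lon = nx.gnp_random_graph(60, 0.08, seed=1)

# Shortest-path distances between optima as the dissimilarity t-SNE preserves.
n = lon.number_of_nodes()
dist = np.full((n, n), float(n))                 # sentinel for unreachable pairs
for src, lengths in nx.all_pairs_shortest_path_length(lon):
    for dst, d in lengths.items():
        dist[src, dst] = d
np.fill_diagonal(dist, 0.0)

xy = TSNE(n_components=2, metric="precomputed", init="random",
          perplexity=10, random_state=0).fit_transform(dist)

# xy[i] is the 2-D position of local optimum i; plotting it, coloured by the
# optimum's fitness, visualises the global structure of the landscape.
```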

    Testbench qualification of SystemC TLM protocols through Mutation Analysis

    Transaction-level modeling (TLM) has become the de facto reference modeling style for system-level design and verification of embedded systems. It allows designers to implement high-level communication protocols with simulations up to 1000x faster than at register-transfer level (RTL). To guarantee interoperability between TLM IP suppliers and users, designers implement TLM communication protocols by relying on a reference standard, such as the OSCI standard for SystemC TLM. Functional correctness of such protocols, as well as their compliance with the reference TLM standard, is usually verified through user-defined testbenches, whose quality and completeness play a key role in an efficient TLM design and verification flow. This article presents a methodology that applies mutation analysis, a technique from the software testing literature, to measure testbench quality in verifying TLM protocols. In particular, the methodology aims at (i) qualifying the testbenches by considering both the correctness of the TLM protocol and its compliance with a defined standard (i.e., OSCI TLM), (ii) optimizing the simulation time during mutation analysis by avoiding mutation redundancies, and (iii) guiding designers in improving the testbench. Experimental results on benchmarks of different complexity and architectural characteristics are reported to analyze the applicability of the methodology.
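    A language-agnostic sketch (in Python rather than SystemC) of the core qualification step: the testbench's quality is the fraction of protocol mutants whose simulated behaviour it distinguishes from the reference model. run_simulation, reference_model and mutants are hypothetical stand-ins for the compiled TLM models; the paper's redundancy-avoidance and standard-compliance checks are not modelled here.

```python
def mutation_score(testbench, reference_model, mutants, run_simulation):
    """Fraction of mutated protocol models whose observable behaviour the
    testbench distinguishes from the reference model (higher is better)."""
    golden = run_simulation(reference_model, testbench)
    killed = sum(1 for m in mutants
                 if run_simulation(m, testbench) != golden)
    return killed / len(mutants)   # a low score flags a weak testbench
```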

    A path-aware approach to mutant reduction in mutation testing

    Context: Mutation testing, which systematically generates a set of mutants by seeding various faults into the base program under test, is a popular technique for evaluating the effectiveness of a testing method. However, it normally requires the execution of a large number of mutants and thus incurs a high cost. Objective: A common way to decrease the cost of mutation testing is mutant reduction, which selects a subset of representative mutants. In this paper, we propose a new mutant reduction approach based on program structure. Method: Our approach explores path information of the program under test and selects mutants that are as diverse as possible with respect to the paths they cover. We define two path-aware heuristic rules, namely the module-depth and loop-depth rules, and combine them with statement- and operator-based mutation selection to develop four mutant reduction strategies. Results: We evaluated the cost-effectiveness of our mutant reduction strategies against random mutant selection on 11 real-life C programs with varying sizes and sampling ratios. Our empirical studies show that the two strategies that rely primarily on the path-aware heuristic rules are more effective and systematic than pure random mutant selection in choosing representative mutants. In addition, among all four strategies, the one giving loop depth the highest priority is the most effective. Conclusion: In general, our path-aware approach can reduce the number of mutants without jeopardizing effectiveness, and thus significantly enhances the overall cost-effectiveness of mutation testing. Our approach is particularly useful for mutation testing on large-scale complex programs, which normally involve a huge number of mutants with diverse fault characteristics.
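    A minimal sketch, under assumed data structures rather than the paper's exact algorithm, of a loop-depth-first reduction strategy: mutants are grouped by the loop depth of the statement they mutate, and selection round-robins across depth groups (deepest first) so the reduced set stays diverse with respect to the paths it exercises.

```python
import random
from collections import defaultdict

def reduce_by_loop_depth(mutants, ratio, seed=0):
    """mutants: list of dicts with at least a 'loop_depth' key.
    ratio: fraction of mutants to keep (e.g. 0.1 for 10% sampling)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for m in mutants:
        groups[m["loop_depth"]].append(m)
    for g in groups.values():
        rng.shuffle(g)                      # random choice within a depth group

    budget = min(len(mutants), max(1, int(len(mutants) * ratio)))
    selected = []
    depths = sorted(groups, reverse=True)   # deepest loops get priority
    while len(selected) < budget:
        for d in depths:
            if groups[d] and len(selected) < budget:
                selected.append(groups[d].pop())
    return selected

# Hypothetical mutants annotated with the loop depth of the mutated statement.
mutants = [{"id": i, "loop_depth": i % 3} for i in range(30)]
print([m["id"] for m in reduce_by_loop_depth(mutants, 0.2)])   # keeps 6 mutants
```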