    Minimization of Test Cases and Fault Detection Effectiveness Improvement through Modified Reduction with Selective Redundancy Algorithm

    In any software development lifecycle, testing is necessary to guarantee the quality of the end product. As software grows, the size of its test suites grows too, and maintaining them becomes more difficult. Test suite minimization techniques are therefore required to control test suite size, for example by ensuring that the suite retains the important test cases while redundant test cases are eliminated. Most test suite minimization techniques remove test cases that are redundant with respect to a single coverage criterion at a time. A potential drawback of these techniques is that they may lose coverage with respect to other coverage criteria, weakening the reduced suite's ability to detect faults. To overcome this weakness, this research aims to minimize the test suite by selectively retaining coverage redundancy while improving fault detection effectiveness. To that end, it modifies and improves the Reduction with Selective Redundancy (RSR) algorithm. In the modified algorithm, test cases are first selected by branch coverage if they cover different branch combinations. The algorithm then gathers test cases based on definition occurrences and def-use pairs: a test case is retained if it covers the same definition occurrence of a variable but not the def-use pair of that variable. Among these selected test cases, the algorithm identifies redundant ones based on definition occurrence, namely those that cover a similar branch combination except in one branch and also cover a similar definition occurrence. The results show that the algorithm can reduce test suite size while significantly improving fault detection effectiveness; the loss in fault detection was significantly smaller than the reduction in suite size. Moreover, the results reveal that test suite minimization based on branch combinations is effective in terms of fault detection.
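
    The selection scheme can be pictured with a short sketch. Below is a minimal, hypothetical Python illustration (the TestCase fields and the reduce_with_selective_redundancy helper are assumptions for exposition, not the paper's implementation): tests are kept when they cover a new branch combination, and otherwise selectively retained when they reach a definition occurrence whose def-use pairs are not yet all covered.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    name: str
    branches: frozenset        # the combination of branches this test covers
    definitions: frozenset     # definition occurrences: (variable, def site)
    def_use_pairs: frozenset   # covered (variable, def site, use site) pairs

def reduce_with_selective_redundancy(tests):
    """Greedy sketch: keep tests with new branch combinations; among the
    rest, selectively keep tests that reach a definition occurrence whose
    def-use pairs are not yet all covered."""
    kept, seen_combos, covered_pairs = [], set(), set()
    for t in tests:
        if t.branches not in seen_combos:
            kept.append(t)                      # new branch combination
            seen_combos.add(t.branches)
            covered_pairs |= t.def_use_pairs
        elif t.definitions and not (t.def_use_pairs <= covered_pairs):
            kept.append(t)                      # selectively redundant test
            covered_pairs |= t.def_use_pairs
    return kept
```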

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs of using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on improvement actions. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
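
    The pair-wise diversity calculation behind such similarity maps can be sketched with a Jaccard distance over whichever artefact is chosen as the diversity source (test steps here; test scripts or requirements work the same way). The helper names and toy suite below are illustrative, not the paper's implementation.

```python
from itertools import combinations

def jaccard_distance(a: set, b: set) -> float:
    """1.0 = fully diverse, 0.0 = identical test artefacts."""
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def diversity_matrix(tests: dict) -> dict:
    """All pair-wise distances: the raw data a test similarity map plots."""
    return {(x, y): jaccard_distance(tests[x], tests[y])
            for x, y in combinations(sorted(tests), 2)}

suite = {
    "t1": {"open", "login", "search"},
    "t2": {"open", "login", "search"},   # duplicate of t1: distance 0.0
    "t3": {"open", "checkout", "pay"},
}
for pair, d in diversity_matrix(suite).items():
    print(pair, round(d, 2))             # near-zero pairs flag redundancy
```

    Plotting this matrix in two dimensions (for example with multidimensional scaling) yields the kind of map on which clusters of near-identical tests stand out.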

    Rightsizing LISA

    The LISA science requirements and conceptual design have been fairly stable for over a decade. In the interest of reducing costs, the LISA Project at NASA has looked for simplifications of the architecture, at downsizing of subsystems, and at descopes of the entire mission. This is a natural activity of the formulation phase, and one that is particularly timely in the current NASA budgetary context. There is, and will continue to be, enormous pressure for cost reduction from both ESA and NASA, reviewers and the broader research community. Here, the rationale for the baseline architecture is reviewed, and recent efforts to find simplifications and other reductions that might lead to savings are reported. A few possible simplifications have been found in the LISA baseline architecture. In the interest of exploring cost sensitivity, one moderate and one aggressive descope have been evaluated; the cost savings are modest and the loss of science is not. Comment: To be published in Classical and Quantum Gravity; Proceedings of the Seventh International LISA Symposium, Barcelona, Spain, 16-20 Jun. 2008; 10 pages, 1 figure, 3 tables.

    Assessment of C++ object-oriented mutation operators: A selective mutation approach

    Mutation testing is an effective but costly testing technique. Several studies have observed that some mutants are redundant and can therefore be removed without affecting the effectiveness of the analysis. Similarly, some mutants may be more effective than others in guiding the tester in the creation of high-quality test cases. On the basis of these findings, we present an assessment of C++ class mutation operators by classifying them into two rankings: the first sorts the operators by their degree of redundancy, and the second by the quality of the tests they help to design. Both rankings are used in a selective mutation study analysing the trade-off between the reduction achieved and the effectiveness when using a subset of mutants. Experimental results consistently show that leveraging the operators at the top of the two rankings, which are different, leads to a significant reduction in the number of mutants with a minimum loss of effectiveness.
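
    As a rough illustration of the selective-mutation trade-off, the sketch below ranks operators by how often their mutants contribute unique value and then drops the bottom of the ranking; the operator names and counts are invented for the example, and the scoring is an assumption rather than the paper's metric.

```python
def rank_operators(stats):
    """stats: operator -> (mutants generated, mutants contributing value).
    Operators whose mutants rarely contribute rank as more redundant."""
    return sorted(stats, key=lambda op: stats[op][1] / stats[op][0],
                  reverse=True)

stats = {"OP_A": (120, 90), "OP_B": (200, 60), "OP_C": (80, 12)}
ranking = rank_operators(stats)
selected = ranking[:2]                    # selective mutation: top operators
avoided = sum(stats[op][0] for op in stats if op not in selected)
print(f"ranking: {ranking}, mutants avoided: {avoided}")
```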

    Solving the riddle of codon usage preferences: a test for translational selection

    Translational selection is responsible for the unequal usage of synonymous codons in protein coding genes in a wide variety of organisms. It is one of the most subtle and pervasive forces of molecular evolution, yet establishing the underlying causes for its idiosyncratic behaviour across living kingdoms has proven elusive to researchers over the past 20 years. In this study, a statistical model for measuring translational selection in any given genome is developed, and the test is applied to 126 fully sequenced genomes, ranging from archaea to eukaryotes. It is shown that tRNA gene redundancy and genome size are interacting forces that ultimately determine the action of translational selection, and that an optimal genome size exists for which this kind of selection is maximal. Accordingly, genome size also presents upper and lower boundaries beyond which selection on codon usage is not possible. We propose a model where the coevolution of genome size and tRNA genes explains the observed patterns in translational selection in all living organisms. This model finally unifies our understanding of codon usage across prokaryotes and eukaryotes. Helicobacter pylori, Saccharomyces cerevisiae and Homo sapiens are codon usage paradigms that can be better understood under the proposed model.
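
    One standard statistic for quantifying unequal synonymous codon usage is RSCU (relative synonymous codon usage): the observed count of a codon divided by the count expected if all synonyms were used equally. A minimal, self-contained sketch follows; the two-amino-acid table and toy sequence are illustrative, and the paper's statistical model for translational selection is more elaborate than this.

```python
from collections import Counter

SYNONYMS = {"Phe": ["TTT", "TTC"], "Lys": ["AAA", "AAG"]}  # tiny subset

def rscu(seq: str) -> dict:
    """RSCU > 1 marks a preferred codon, < 1 an avoided one."""
    codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
    scores = {}
    for syn in SYNONYMS.values():
        total = sum(codons[c] for c in syn)
        for c in syn:
            # observed count / count expected under equal synonym usage
            scores[c] = (codons[c] * len(syn) / total) if total else 0.0
    return scores

print(rscu("TTTTTTTTCAAAAAGAAA"))  # TTT and AAA preferred in this toy gene
```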

    Faster Mutation Analysis via Equivalence Modulo States

    Mutation analysis has many applications, such as assessing the quality of test suites and localizing faults. One important bottleneck of mutation analysis is scalability. The latest work explores the possibility of reducing redundant execution via split-stream execution. However, split-stream execution can only remove redundant execution before the first mutated statement. In this paper we also try to reduce some of the redundant execution after the execution of the first mutated statement. We observe that, although many mutated statements are not equivalent, the execution results of those mutated statements may still be equivalent to the result of the original statement. In other words, the statements are equivalent modulo the current state. In this paper we propose a fast mutation analysis approach, AccMut. AccMut automatically detects the equivalence modulo states among a statement and its mutations, groups the statements into equivalence classes modulo states, and uses only one process to represent each class. In this way, we can significantly reduce the number of split processes. Our experiments show that our approach can further accelerate mutation analysis on top of split-stream execution, with a speedup of 2.56x on average. Comment: Submitted to conference.
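
    Conceptually, equivalence modulo states can be checked by evaluating each variant of the mutated statement against the current program state and grouping variants with identical results. The sketch below is a hypothetical illustration of that grouping, not AccMut's implementation (which operates on forked processes of the compiled program).

```python
from collections import defaultdict

def classify_modulo_state(variants, state):
    """variants: name -> callable for one statement's original/mutated forms.
    Returns result -> names; one execution per class is enough."""
    classes = defaultdict(list)
    for name, stmt in variants.items():
        classes[stmt(state)].append(name)
    return dict(classes)

state = {"a": 4, "b": 0}
variants = {
    "a + b": lambda s: s["a"] + s["b"],   # original statement
    "a - b": lambda s: s["a"] - s["b"],   # mutant 1
    "a * b": lambda s: s["a"] * s["b"],   # mutant 2
}
# With b == 0, 'a + b' and 'a - b' are equivalent modulo this state:
print(classify_modulo_state(variants, state))
# {4: ['a + b', 'a - b'], 0: ['a * b']}
```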

    LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems

    Mutation testing is a well-studied method for increasing the quality of a test suite. We designed LittleDarwin as a mutation testing framework able to cope with large and complex Java software systems, while still being easily extensible with new experimental components. LittleDarwin addresses two existing problems in the domain of mutation testing: having a tool that works within an industrial setting, yet remains open to extension with cutting-edge techniques from academia. LittleDarwin already offers higher-order mutation, null type mutants, mutant sampling, manual mutation, and mutant subsumption analysis. No tool available today offers all these features and works with typical industrial software systems. Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering.
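
    As a taste of one of those features, mutant sampling simply runs a random fraction of the generated mutants to cut cost; the sketch below is a generic illustration of the idea, not LittleDarwin's API.

```python
import random

def sample_mutants(mutants, rate=0.1, seed=42):
    """Pick a reproducible random subset of mutants to execute."""
    rng = random.Random(seed)              # fixed seed: repeatable runs
    k = max(1, round(len(mutants) * rate))
    return rng.sample(mutants, k)

mutants = [f"mutant_{i}" for i in range(200)]
print(sample_mutants(mutants, rate=0.05))  # 10 of the 200 mutants
```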