Semantic mutation testing
This is the pre-print version of the article. The official published version can be obtained from the link below. Copyright © 2011 Elsevier.
Mutation testing is a powerful and flexible test technique. Traditional mutation testing makes a small change to the syntax of a description (usually a program) in order to create a mutant. A test suite is considered to be good if it distinguishes between the original description and all of the (functionally non-equivalent) mutants. These mutants can be seen as representing potential small slips, and thus mutation testing aims to produce a test suite that is good at finding such slips. It has also been argued that a test suite that finds such small changes is likely to find larger changes. This paper describes a new approach to mutation testing, called semantic mutation testing. Rather than mutating the description, semantic mutation testing mutates the semantics of the language in which the description is written. The mutations of the semantics of the language represent possible misunderstandings of the description language and thus capture a different class of faults. Since the likely misunderstandings are highly context dependent, this context should be used to determine which semantic mutants should be produced. The approach is illustrated through examples with statecharts and C code. The paper also describes a semantic mutation testing tool for C and the results of experiments that investigated the nature of some semantic mutation operators for C.
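The traditional (syntactic) mutation idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's tooling: the function names and the single `+` → `-` mutation operator are invented for the example.

```python
# Minimal sketch of syntactic mutation testing (illustrative names/operator).

def make_mutants(source):
    """Generate one mutant per occurrence of '+' by replacing it with '-'."""
    mutants = []
    for i, ch in enumerate(source):
        if ch == "+":
            mutants.append(source[:i] + "-" + source[i + 1:])
    return mutants

def kills(test_inputs, original_src, mutant_src):
    """A test suite kills a mutant if outputs differ on some input."""
    env_o, env_m = {}, {}
    exec(original_src, env_o)
    exec(mutant_src, env_m)
    return any(env_o["f"](x) != env_m["f"](x) for x in test_inputs)

original = "def f(x):\n    return x + 1"
mutants = make_mutants(original)
# Mutation score: fraction of (non-equivalent) mutants the suite kills.
score = sum(kills([0, 5], original, m) for m in mutants) / len(mutants)
print(score)  # 1.0: the suite distinguishes every mutant
```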
LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems
Mutation testing is a well-studied method for increasing the quality of a
test suite. We designed LittleDarwin as a mutation testing framework able to
cope with large and complex Java software systems, while still being easily
extensible with new experimental components. LittleDarwin addresses two
existing problems in the domain of mutation testing: having a tool that can
work in an industrial setting while remaining open to extension with
cutting-edge techniques from academia. LittleDarwin already offers higher-order
mutation, null type mutants, mutant sampling, manual mutation, and mutant
subsumption analysis. No tool available today offers all of these features
and works with typical industrial software systems.
Comment: Pre-proceedings of the 7th IPM International Conference on Fundamentals of Software Engineering
Mutation testing from probabilistic finite state machines
Mutation testing traditionally involves mutating a program in order to produce a set of mutants and using these mutants either to estimate the effectiveness of a test suite or to drive test generation. Recently, however, this approach has been applied to specifications such as those written as finite state machines. This paper extends mutation testing to finite state machine models in which transitions have associated probabilities. The paper describes several ways of mutating a probabilistic finite state machine (PFSM) and shows how test sequences that distinguish between a PFSM and its mutants can be generated. Testing then involves applying each test sequence multiple times, observing the resultant output sequences and using results from statistical sampling theory in order to compare the observed frequency of each output sequence with that expected.
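The distinguishing idea can be illustrated concretely. In the minimal sketch below (the machines and the use of total variation distance are illustrative assumptions, not the paper's generation algorithm), we compute the exact output-sequence distribution of a single-input PFSM and of a mutant, then measure how far apart the two distributions are; the larger the gap, the fewer repetitions are needed to tell the machines apart from observed output frequencies.

```python
# Illustrative PFSM: {state: [(prob, output, next_state), ...]} for one input.
from collections import defaultdict

def output_distribution(pfsm, start, length):
    """Exact distribution over output sequences for a test of given length."""
    dist = {("", start): 1.0}  # (outputs so far, current state) -> probability
    for _ in range(length):
        nxt = defaultdict(float)
        for (out, state), p in dist.items():
            for q, o, s in pfsm[state]:
                nxt[(out + o, s)] += p * q
        dist = nxt
    seqs = defaultdict(float)
    for (out, _), p in dist.items():
        seqs[out] += p
    return dict(seqs)

original = {"A": [(0.5, "x", "A"), (0.5, "y", "B")], "B": [(1.0, "z", "A")]}
mutant   = {"A": [(0.9, "x", "A"), (0.1, "y", "B")], "B": [(1.0, "z", "A")]}

d0 = output_distribution(original, "A", 2)
d1 = output_distribution(mutant, "A", 2)
# Total variation distance between the two output-sequence distributions.
tv = 0.5 * sum(abs(d0.get(k, 0) - d1.get(k, 0)) for k in set(d0) | set(d1))
print(round(tv, 2))  # 0.56
```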
Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?
Mutation testing is a means to assess the effectiveness of a test suite and
its outcome is considered more meaningful than code coverage metrics. However,
despite several optimizations, mutation testing requires a significant
computational effort and has not been widely adopted in industry. Therefore, we
study in this paper whether test effectiveness can be approximated using a more
light-weight approach. We hypothesize that a test case is more likely to detect
faults in methods that are close to the test case on the call stack than in
methods that the test case accesses indirectly through many other methods.
Based on this hypothesis, we propose the minimal stack distance between test
case and method as a new test measure, which expresses how close any test case
comes to a given method, and study its correlation with test effectiveness. We
conducted an empirical study with 21 open-source projects, which comprise in
total 1.8 million LOC, and show that a correlation exists between stack
distance and test effectiveness. The correlation reaches a strength up to 0.58.
We further show that a classifier using the minimal stack distance along with
additional easily computable measures can predict the mutation testing result
of a method with 92.9% precision and 93.4% recall. Hence, such a classifier can
be taken into consideration as a light-weight alternative to mutation testing
or as a less costly preliminary step to it.
Comment: EASE 201
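The minimal stack distance notion can be illustrated with a toy example. The sketch below is a simplification under stated assumptions: it runs a breadth-first search over a static call graph with invented method names, whereas the study derives distances from the call stacks actually observed during test execution.

```python
# Minimal stack distance sketched as shortest path in a call graph
# (illustrative simplification; the paper measures observed call stacks).
from collections import deque

def minimal_stack_distance(call_graph, test_case, method):
    """BFS over the call graph; distance 1 = called directly by the test."""
    seen, queue = {test_case}, deque([(test_case, 0)])
    while queue:
        node, d = queue.popleft()
        if node == method:
            return d
        for callee in call_graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append((callee, d + 1))
    return None  # method unreachable from this test

calls = {
    "testCheckout": ["Cart.total"],
    "Cart.total": ["Price.net", "Tax.rate"],
    "Tax.rate": ["Config.get"],
}
print(minimal_stack_distance(calls, "testCheckout", "Config.get"))  # 3
```

The hypothesis above then says faults in `Cart.total` (distance 1) are more likely to be caught by this test than faults in `Config.get` (distance 3).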
Dynamic Mutant Subsumption Analysis using LittleDarwin
Many academic studies in the field of software testing rely on mutation
testing as their comparison criterion. However, recent studies have shown
that redundant mutants have a significant effect on the accuracy of their
results. One solution to this problem is to use mutant subsumption to detect
redundant mutants. Therefore, in order to facilitate research in this field, a
mutation testing tool capable of detecting redundant mutants is needed.
In this paper, we describe how we improved our tool, LittleDarwin, to fulfill
this requirement.
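Dynamic mutant subsumption itself is easy to state over a kill matrix: mutant A dynamically subsumes mutant B if A is killed by at least one test and every test that kills A also kills B, making B redundant for measuring test quality. A minimal sketch (the matrix and names are invented for illustration, not LittleDarwin's implementation):

```python
# Dynamic mutant subsumption over an illustrative kill matrix.

def subsumes(kill_matrix, a, b):
    """True if mutant a dynamically subsumes mutant b."""
    killers_a = {t for t, killed in kill_matrix[a].items() if killed}
    killers_b = {t for t, killed in kill_matrix[b].items() if killed}
    return bool(killers_a) and killers_a <= killers_b

kill_matrix = {
    "m1": {"t1": True,  "t2": True},
    "m2": {"t1": True,  "t2": False},
    "m3": {"t1": False, "t2": False},
}
# A mutant is redundant if some other mutant subsumes it.
redundant = {b for a in kill_matrix for b in kill_matrix
             if a != b and subsumes(kill_matrix, a, b)}
print(sorted(redundant))  # ['m1']: m1 is killed whenever m2 is
```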
Measuring Coverage of Prolog Programs Using Mutation Testing
Testing is an important aspect in professional software development, both to
avoid and identify bugs as well as to increase maintainability. However,
increasing the number of tests beyond a reasonable amount hinders development
progress. To decide on the completeness of a test suite, many approaches to
assess test coverage have been suggested. Yet, frameworks for logic programs
remain scarce.
In this paper, we introduce a framework for Prolog programs measuring test
coverage using mutations. We elaborate the main ideas of mutation testing and
transfer them to logic programs. To do so, we discuss the usefulness of
different mutations in the context of Prolog and empirically evaluate them in a
new mutation testing framework on different examples.
Comment: 16 pages, Accepted for presentation in WFLP 201
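For illustration, Prolog-flavoured mutation operators can be sketched as plain text rewrites over a clause. The two operators below and the naive comma splitting are simplifying assumptions for the example, not the framework's actual implementation:

```python
# Two illustrative Prolog mutation operators as text rewrites.

def flip_comparison(clause):
    """Replace the first '<' comparison with '>='."""
    return clause.replace("<", ">=", 1)

def drop_first_goal(clause):
    """Remove the first goal of the body 'Head :- G1, G2, ...'.
    Naive: assumes no commas inside goal arguments."""
    head, body = clause.split(":-")
    goals = [g.strip() for g in body.rstrip(". ").split(",")]
    return head.strip() + " :- " + ", ".join(goals[1:]) + "."

clause = "min(X, Y, X) :- X < Y, integer(X)."
print(flip_comparison(clause))   # min(X, Y, X) :- X >= Y, integer(X).
print(drop_first_goal(clause))   # min(X, Y, X) :- integer(X).
```

A test suite that still passes against such mutants has likely never exercised the comparison or the type guard, which is exactly the coverage signal mutation testing provides.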
Performance mutation testing
Performance bugs are known to be a major threat to the success of software products. Performance tests aim to detect performance bugs by executing the program through test cases and checking whether it exhibits a noticeable performance degradation. The principles of mutation testing, a well-established testing technique for the assessment of test suites through the injection of artificial faults, could be exploited to evaluate and improve the detection power of performance tests. However, the application of mutation testing to assess performance tests, henceforth called performance mutation testing (PMT), is a novel research topic with numerous open challenges. In previous papers, we identified some key challenges related to PMT. In this work, we go a step further and explore the feasibility of applying PMT at the source-code level in general-purpose languages. To do so, we revisit concepts associated with classical mutation testing, and design seven novel mutation operators to model known bug-inducing patterns. As a proof of concept, we applied traditional mutation operators as well as performance mutation operators to open-source C++ programs. The results reveal the potential of the new performance mutants to help assess and enhance performance tests when compared to traditional mutants. A review of live mutants in these programs suggests that they can induce the design of special test inputs. In addition to these promising results, our work brings a whole new set of challenges related to PMT, which will hopefully serve as a starting point for new contributions in the area.
Funding: Ministerio de Economía y Competitividad, grants TIN2015-65845-C3-3-R, RTI2018-093608-B-C33, BELI (TIN2015-70560-R), and HORATIO (RTI2018-101204-B-C2).
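The idea of a performance mutant — identical functional behaviour, degraded performance — can be sketched as follows. The memoization-removal operator is a hypothetical example (not one of the paper's seven operators), and an operation-count budget stands in for a wall-clock timing check:

```python
# A hypothetical performance mutant: removing memoization preserves results
# but degrades complexity. The "performance test" checks an operation budget.
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def fib(n):          # original: memoized, O(n) distinct calls
    calls["n"] += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_mutant(n):   # mutant: memoization removed, exponential calls
    calls["n"] += 1
    return n if n < 2 else fib_mutant(n - 1) + fib_mutant(n - 2)

calls["n"] = 0; fib(20); memoized_calls = calls["n"]
calls["n"] = 0; fib_mutant(20); mutant_calls = calls["n"]
# A performance test with a call budget kills the mutant but not the original,
# even though both compute the same value.
print(memoized_calls <= 50 < mutant_calls)  # True
```

A functional test suite cannot kill this mutant at all, which is why PMT needs oracles over performance measurements rather than outputs.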