Model-Based Security Testing
Security testing aims at validating software system requirements related to
security properties like confidentiality, integrity, authentication,
authorization, availability, and non-repudiation. Although security testing
techniques have been available for many years, few approaches allow test
cases to be specified at a higher level of abstraction, provide guidance on
test identification and specification, or support automated test generation.
Model-based security testing (MBST) is a relatively new field dedicated to
the systematic and efficient specification and documentation of security
test objectives, security test cases, and test suites, as well as to their
automated or semi-automated generation. In particular, the combination of
security modelling and test generation approaches is still a research
challenge and of high interest for industrial applications. MBST includes,
for example, security functional testing, model-based fuzzing, risk- and
threat-oriented testing, and the usage of security test patterns. This paper
provides a survey of MBST techniques and the related models, as well as
samples of new methods and tools under development in the European ITEA2
project DIAMONDS.
Comment: In Proceedings MBT 2012, arXiv:1202.582
Potential Errors and Test Assessment in Software Product Line Engineering
Software product lines (SPLs) are a method for the development of
variant-rich software systems. Compared to non-variable systems, testing
SPLs is extensive due to the increasing number of possible products.
Different approaches exist for testing SPLs, but there is little research on
assessing the quality of these tests in terms of their error detection
capability. Such test assessment is based on injecting errors into a correct
version of the system under test. However, to our knowledge, potential
errors in SPL engineering have never been systematically identified before.
This article presents an overview of existing paradigms for specifying
software product lines and the errors that can occur during the respective
specification processes. To assess test quality, we transfer mutation
testing techniques to SPL engineering and implement the identified errors as
mutation operators. This allows us to run existing tests against defective
products for the purpose of test assessment. From the results, we draw
conclusions about the error-proneness of the surveyed SPL design paradigms
and how the quality of SPL tests can be improved.
Comment: In Proceedings MBT 2015, arXiv:1504.0192
Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?
Mutation testing is a means to assess the effectiveness of a test suite and
its outcome is considered more meaningful than code coverage metrics. However,
despite several optimizations, mutation testing requires a significant
computational effort and has not been widely adopted in industry. Therefore, we
study in this paper whether test effectiveness can be approximated using a more
light-weight approach. We hypothesize that a test case is more likely to detect
faults in methods that are close to the test case on the call stack than in
methods that the test case accesses indirectly through many other methods.
Based on this hypothesis, we propose the minimal stack distance between test
case and method as a new test measure, which expresses how close any test case
comes to a given method, and study its correlation with test effectiveness. We
conducted an empirical study with 21 open-source projects, which comprise in
total 1.8 million LOC, and show that a correlation exists between stack
distance and test effectiveness, reaching a strength of up to 0.58. We
further show that a classifier using the minimal stack distance along with
additional easily computable measures can predict the mutation testing
result of a method with 92.9% precision and 93.4% recall. Hence, such a
classifier can be considered a light-weight alternative to mutation testing
or a preceding, less costly step before it.
Comment: EASE 201
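As a rough sketch of how the minimal stack distance could be measured, the
probe below records, for each instrumented method, the smallest number of
stack frames separating it from a test case. The paper instruments the code
under test; this simplified version merely inspects the current stack trace
and detects test frames by a naive naming convention, so all names here are
assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StackDistanceProbe {
    // Method id -> smallest observed distance to a test frame.
    static final Map<String, Integer> MIN_DISTANCE = new ConcurrentHashMap<>();

    // To be called at the entry of every instrumented production method.
    static void record(String methodId) {
        StackTraceElement[] frames = Thread.currentThread().getStackTrace();
        // frames[0] is getStackTrace, frames[1] is record,
        // frames[2] is the instrumented method itself.
        for (int i = 3; i < frames.length; i++) {
            if (frames[i].getClassName().endsWith("Test")) { // naive test detector
                int distance = i - 2; // frames between method and test case
                MIN_DISTANCE.merge(methodId, distance, Math::min);
                return;
            }
        }
    }
}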
LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems
Mutation testing is a well-studied method for increasing the quality of a
test suite. We designed LittleDarwin as a mutation testing framework able to
cope with large and complex Java software systems, while still being easily
extensible with new experimental components. LittleDarwin addresses two
existing problems in the domain of mutation testing: having a tool able to work
within an industrial setting, and yet, be open to extension for cutting edge
techniques provided by academia. LittleDarwin already offers higher-order
mutation, null type mutants, mutant sampling, manual mutation, and mutant
subsumption analysis. No tool available today offers all these features and
is able to work with typical industrial software systems.
Comment: Pre-proceedings of the 7th IPM International Conference on
Fundamentals of Software Engineering
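For readers unfamiliar with higher-order mutation, one of the features
listed above, the hypothetical Java example below contrasts a first-order
mutant (one syntactic change) with a second-order mutant (two combined
changes); the method is invented for illustration and does not come from
LittleDarwin's documentation.

public class HigherOrderMutantSketch {

    static int clamp(int value, int max) {             // original
        return value > max ? max : value;
    }

    static int firstOrderMutant(int value, int max) {  // one change: > becomes >=
        return value >= max ? max : value;
    }

    static int secondOrderMutant(int value, int max) { // two combined changes:
        return value >= max ? value : max;             // > becomes >= and the
    }                                                  // branch results are swapped

    public static void main(String[] args) {
        // A weak test suite may kill each first-order mutant yet miss a
        // higher-order one whose changes partially mask each other.
        System.out.println(clamp(5, 3) + " vs " + secondOrderMutant(5, 3));
    }
}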
Semantic mutation testing
Mutation testing is a powerful and flexible test technique. Traditional
mutation testing makes a small change to the syntax of a description
(usually a program) in order to create a mutant. A test suite is considered
to be good if it distinguishes between the original description and all of
the (functionally non-equivalent) mutants. These mutants can be seen as
representing potential small slips, and thus mutation testing aims to
produce a test suite that is good at finding such slips. It has also been
argued that a test suite that finds such small changes is likely to find
larger changes. This paper describes a new approach to mutation testing,
called semantic mutation testing. Rather than mutate the description,
semantic mutation testing mutates the semantics of the language in which the
description is written. The mutations of the semantics of the language
represent possible misunderstandings of the description language and thus
capture a different class of faults. Since the likely misunderstandings are
highly context dependent, this context should be used to determine which
semantic mutants should be produced. The approach is illustrated through
examples with statecharts and C code. The paper also describes a semantic
mutation testing tool for C and the results of experiments that investigated
the nature of some semantic mutation operators for C.
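The sketch below transposes the semantic mutation idea from the paper's C
examples into Java: rather than editing the program text, it models a
misunderstanding of the language's evaluation rules, here the short-circuit
semantics of &&. The example program is hypothetical.

public class SemanticMutantSketch {

    static boolean safeCheck(int[] xs, int i) {      // standard semantics:
        return i < xs.length && xs[i] > 0;           // xs[i] is never evaluated
    }                                                // when i is out of range

    static boolean semanticMutant(int[] xs, int i) { // mutated semantics: both
        return (i < xs.length) & (xs[i] > 0);        // operands are evaluated,
    }                                                // so xs[i] may throw

    public static void main(String[] args) {
        int[] xs = {1, 2};
        System.out.println(safeCheck(xs, 5));        // false, no exception
        try {
            System.out.println(semanticMutant(xs, 5));
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("mutant crashed: " + e); // strict & touches xs[5]
        }
    }
}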
Dynamic Analysis can be Improved with Automatic Test Suite Refactoring
Context: Developers design test suites to automatically verify that software
meets its expected behaviors. Many dynamic analysis techniques rely on the
execution traces obtained from test cases. However, in practice, each
manually written test case yields only a single execution trace.
Objective: In this paper, we propose a new technique of test suite
refactoring, called B-Refactoring. The idea behind B-Refactoring is to split a
test case into small test fragments, which cover a simpler part of the control
flow to provide better support for dynamic analysis.
Method: For a given dynamic analysis technique, our test suite refactoring
approach monitors the execution of test cases and identifies smaller test
cases without loss of testing ability. We apply B-Refactoring to assist two
existing analysis tasks: automatic repair of if-statement bugs and automatic
analysis of exception contracts.
Results: Experimental results show that test suite refactoring can
effectively simplify the execution traces of the test suite. Three real-world
bugs that could previously not be fixed with the original test suite are fixed
after applying B-Refactoring; meanwhile, exception contracts are better
verified by applying B-Refactoring to the original test suites.
Conclusions: We conclude that applying B-Refactoring can effectively improve
the purity of test cases. Existing dynamic analysis tasks can be enhanced by
test suite refactoring.
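A minimal, hypothetical JUnit example of the kind of split B-Refactoring
performs: one manually written test case covering two independent behaviors
is divided into two fragments, each yielding a simpler execution trace. The
subject class and the split are invented simplifications of the approach.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalcTest {

    // Before refactoring: a single trace exercises both behaviors.
    @Test
    public void testAddAndDivide() {
        assertEquals(5, Calc.add(2, 3));
        assertEquals(2, Calc.divide(6, 3));
    }

    // After refactoring: two fragments, each covering one behavior,
    // give dynamic analyses two focused traces instead of one mixed one.
    @Test
    public void testAddFragment()    { assertEquals(5, Calc.add(2, 3)); }

    @Test
    public void testDivideFragment() { assertEquals(2, Calc.divide(6, 3)); }

    // Trivial subject class kept inline so the sketch is self-contained.
    static class Calc {
        static int add(int a, int b)    { return a + b; }
        static int divide(int a, int b) { return a / b; }
    }
}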
Jumble Java Byte Code to Measure the Effectiveness of Unit Tests
Jumble is a byte code level mutation testing tool for Java which
inter-operates with JUnit. It has been designed to operate in an industrial
setting with large projects. Heuristics have been included to speed up the
checking of mutations; for example, noting which test fails for each
mutation and running that test first in subsequent mutation checks.
Significant effort has been put into ensuring that it can test code which
uses custom class loading and reflection. This requires careful attention to
class path handling and coexistence with foreign class loaders. Jumble is
currently used on a continuous basis within an agile programming environment
with approximately 370,000 lines of Java code under source control. This
environment checks out project code every fifteen minutes and runs an
incremental set of unit tests and mutation tests for modified classes.
Jumble is being made available as open source.
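As an illustration of the kind of mutation Jumble applies, the hypothetical
example below shows a negated conditional, written at source level for
readability although Jumble itself operates on byte code.

public class JumbleMutantSketch {

    static boolean isAdult(int age) {       // original
        return age >= 18;
    }

    static boolean negatedMutant(int age) { // mutant: condition negated
        return age < 18;
    }

    public static void main(String[] args) {
        // A unit test asserting isAdult(18) == true passes on the
        // original and fails on the mutant, i.e. the mutant is killed.
        System.out.println(isAdult(18) + " vs " + negatedMutant(18));
    }
}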