A Testability Analysis Framework for Non-Functional Properties
This paper presents the background, the basic steps, and an example of a
testability analysis framework for non-functional properties.
Potential Errors and Test Assessment in Software Product Line Engineering
Software product lines (SPLs) are a method for the development of variant-rich
software systems. Compared to non-variable systems, testing SPLs is extensive
due to the increasing number of possible products. Different approaches exist
for testing SPLs, but there is little research on assessing the quality of these
tests by means of their error detection capability. Such test assessment is based on
error injection into a correct version of the system under test. However, to our
knowledge, potential errors in SPL engineering have never been systematically
identified before. This article presents an overview of existing paradigms
for specifying software product lines and the errors that can occur during the
respective specification processes. To assess test quality, we apply
mutation testing techniques to SPL engineering and implement the identified
errors as mutation operators. This allows us to run existing tests against
defective products for the purpose of test assessment. From the results, we
draw conclusions about the error-proneness of the surveyed SPL design paradigms
and how the quality of SPL tests can be improved.
Comment: In Proceedings MBT 2015, arXiv:1504.0192
Automated metamorphic testing on the analyses of feature models
Copyright © 2010 Elsevier B.V. All rights reserved.
Context: A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.
Objective: In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.
Method: We present a set of relations (so-called metamorphic relations) between input FMs and the sets of products they represent. Based on these relations, and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products are automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.
Results: Our evaluation using mutation testing and real faults reveals that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. We also show how our generator outperforms a related manual suite for the automated analysis of feature models, and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.
Conclusion: Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective, detecting most faults in a few seconds without the need for a human oracle.
This work has been partially supported by the European Commission (FEDER) and the Spanish Government under CICYT project SETI (TIN2009-07366) and the Andalusian Government project ISABEL (TIC-2533).
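The core mechanism described above can be illustrated with one metamorphic relation, sketched here under invented names (the paper's actual relations and the analyzers FaMa/SPLOT are not reproduced): adding an optional feature under the root of an FM must yield exactly the old products plus each old product extended with the new feature, so the "number of products" analysis must double, and no oracle value is ever needed.

```python
# Hedged sketch of one metamorphic relation in the spirit of the paper:
# from a source FM with a known product set, derive a neighbouring FM by
# adding an optional feature f under the root; its product set is fully
# predicted by the relation. `count_products` is an assumed stand-in for
# the analysis operation under test.

def extend_with_optional(product_set, new_feature):
    """Product set of the follow-up ('neighbouring') feature model."""
    return product_set | {p | {new_feature} for p in product_set}

def count_products(product_set):
    return len(product_set)  # stand-in for the analyzer under test

source = {frozenset({"Car", "Engine"}),
          frozenset({"Car", "Engine", "GPS"})}
followup = extend_with_optional(source, "Radio")

# Metamorphic check on the '#products' analysis: only the relationship
# between the two outputs is verified, not any individually known value.
assert count_products(followup) == 2 * count_products(source)
```

Applying such a step iteratively is how, per the abstract, complex FMs representing millions of products can be generated from small seeds.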
Semantic mutation testing
This is the pre-print version of the article. The official published version can be obtained from the link below. Copyright © 2011 Elsevier.
Mutation testing is a powerful and flexible test technique. Traditional mutation testing makes a small change to the syntax of a description (usually a program) in order to create a mutant. A test suite is considered to be good if it distinguishes between the original description and all of the (functionally non-equivalent) mutants. These mutants can be seen as representing potential small slips, and thus mutation testing aims to produce a test suite that is good at finding such slips. It has also been argued that a test suite that finds such small changes is likely to find larger changes. This paper describes a new approach to mutation testing, called semantic mutation testing. Rather than mutate the description, semantic mutation testing mutates the semantics of the language in which the description is written. The mutations of the semantics of the language represent possible misunderstandings of the description language and thus capture a different class of faults. Since the likely misunderstandings are highly context dependent, this context should be used to determine which semantic mutants should be produced. The approach is illustrated through examples with statecharts and C code. The paper also describes a semantic mutation testing tool for C and the results of experiments that investigated the nature of some semantic mutation operators for C.
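The distinction drawn above, mutating the meaning of the language rather than the text of the program, can be sketched as follows. This is an illustrative toy, not the paper's tool: the fixed description `a / b` is evaluated under two semantics for integer division with negative operands, a plausible point of misunderstanding between languages.

```python
# Illustrative sketch of semantic mutation: the program text 'a / b' is
# never edited; only the semantics assigned to '/' changes. The two
# semantics chosen here (truncating vs. flooring integer division) are
# an assumed example of a language misunderstanding, not taken from the
# paper's operator catalogue.

def eval_div(a, b, semantics="truncate"):
    """Evaluate a / b under a chosen integer-division semantics."""
    if semantics == "truncate":   # original: round towards zero (C99-style)
        return int(a / b)
    elif semantics == "floor":    # semantic mutant: round down (Python //)
        return a // b
    raise ValueError(semantics)

# The description is unchanged; only its meaning moves.
original = eval_div(-7, 2, "truncate")   # -3
mutant   = eval_div(-7, 2, "floor")      # -4

# A test suite using only non-negative operands cannot kill this mutant;
# killing it requires covering inputs where the two semantics disagree.
assert original != mutant
```

Killing such mutants forces the test suite to exercise exactly the inputs where a misunderstanding of the language would show, which is the class of faults the abstract says syntactic mutation misses.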
Assessment of C++ object-oriented mutation operators: A selective mutation approach
Mutation testing is an effective but costly testing technique. Several studies have observed that some mutants can be redundant and therefore removed without affecting the effectiveness of the technique. Similarly, some mutants may be more effective than others in guiding the tester in the creation of high-quality test cases. On the basis of these findings, we present an assessment of C++ class mutation operators by classifying them into two rankings: the first sorts the operators on the basis of their degree of redundancy, and the second on the quality of the tests they help to design. Both rankings are used in a selective mutation study analysing the trade-off between the reduction achieved and the effectiveness when using a subset of mutants. Experimental results consistently show that leveraging the operators at the top of the two rankings, which are different, leads to a significant reduction in the number of mutants with a minimum loss of effectiveness.
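One common way to make the redundancy notion behind the first ranking concrete is subsumption over kill matrices; the sketch below uses that reading, with invented operator names and data, and is not the paper's exact metric: a mutant is redundant when another mutant is killed by a strict subset of its killing tests, since any test killing the subsuming mutant also kills it.

```python
# Hedged sketch of a redundancy ranking for mutation operators, based on
# mutant subsumption over a kill matrix (rows: mutants of an operator;
# columns: tests; True = the test kills the mutant). Operator names and
# the matrices are invented for illustration.

def kill_set(row):
    return frozenset(i for i, killed in enumerate(row) if killed)

def redundant_fraction(kill_matrix):
    """Fraction of an operator's mutants subsumed by another mutant."""
    sets = [kill_set(r) for r in kill_matrix]
    redundant = 0
    for i, s in enumerate(sets):
        # m is redundant if some other mutant's (non-empty) kill set is a
        # strict subset of m's: killing that mutant already kills m.
        if any(j != i and other and other < s for j, other in enumerate(sets)):
            redundant += 1
    return redundant / len(sets)

operators = {
    "OP_A": [[True, False, False], [True, True, False]],   # 2nd subsumed by 1st
    "OP_B": [[False, True, False], [True, False, True]],   # disjoint kill sets
}
ranking = sorted(operators, key=lambda op: redundant_fraction(operators[op]),
                 reverse=True)   # most redundant operator first
```

A selective mutation study in the spirit of the abstract would then drop operators from the top of this ranking and measure how much test effectiveness is lost.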
Mutation Testing as a Safety Net for Test Code Refactoring
Refactoring is an activity that improves the internal structure of the code
without altering its external behavior. When performed on the production code,
the tests can be used to verify that the external behavior of the production
code is preserved. However, when the refactoring is performed on test code,
there is no safety net that assures that the external behavior of the test code
is preserved. In this paper, we propose to adopt mutation testing as a means to
verify whether the behavior of the test code is preserved after refactoring.
Moreover, we also show how this approach can be used to identify the parts of
the test code that are improperly refactored.
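A minimal sketch of the safety net proposed above, with invented production code, mutants, and tests (not the paper's subjects): the same mutants are run against the test suite before and after the test refactoring, and a change in the set of killed mutants signals that the refactoring altered the test code's external behavior.

```python
# Illustrative sketch: mutation testing as a safety net for test-code
# refactoring. All names and the deliberately faulty refactoring are
# assumptions for the example.

def price(qty, unit):              # production code (unchanged throughout)
    return qty * unit

mutants = {                        # hand-written mutants of `price`
    "AOR_mul_to_add": lambda q, u: q + u,
    "AOR_mul_to_sub": lambda q, u: q - u,
}

def killed(test, mutant):
    try:
        test(mutant)
        return False               # mutant survived the test
    except AssertionError:
        return True                # mutant detected

def original_test(f):
    assert f(3, 10) == 30

def refactored_test(f):            # refactoring that (wrongly) weakens the test
    assert f(2, 2) == 4            # 2+2 == 2*2, so the add-mutant now survives

before = {name for name, m in mutants.items() if killed(original_test, m)}
after  = {name for name, m in mutants.items() if killed(refactored_test, m)}

# Differing kill sets flag the refactoring as behavior-changing, and the
# surviving mutants point at the improperly refactored check.
behavior_preserved = (before == after)
```

Here the weakened assertion no longer kills the `+` mutant, so the comparison of kill sets both detects the broken refactoring and localizes it.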
An empirical comparison between direct and indirect test result checking approaches
The SOQUA 2006 Workshop was held in conjunction with the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT 2006/FSE-14), ACM Press, New York, NY.
An oracle in software testing is a mechanism for checking whether the system under test has behaved correctly in an execution. In some situations, oracles are unavailable or too expensive to apply. This is known as the oracle problem. It is crucial to develop techniques to address it, and metamorphic testing (MT) was one such proposal. This paper conducts a controlled experiment to investigate the cost effectiveness of using MT by 38 testers on three open-source programs. The fault detection capability and time cost of MT are compared with the popular assertion checking method. Our results show that MT is cost-efficient and has the potential to detect more faults than the assertion checking method. Copyright 2006 ACM.
This research is supported in part by a grant of the Research Grants Council of Hong Kong (project no. HKU 7145/04E), a grant of City University of Hong Kong, and a grant of The University of Hong Kong.
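The two checking styles compared above can be contrasted in a few lines. The function and the metamorphic relation below are assumed for illustration (they are not the experiment's subject programs): direct assertion checking requires a known expected value, while the indirect metamorphic check only relates two outputs and so works even where no expected value is available.

```python
# Hedged illustration of direct vs. indirect test result checking.
import math

def program_under_test(x):
    return math.sin(x)   # stand-in subject program

# Direct (assertion) checking: the tester must know the expected output,
# so only inputs with well-known values can be checked.
assert abs(program_under_test(math.pi / 6) - 0.5) < 1e-9

# Indirect (metamorphic) checking: sin(x) == sin(pi - x) for any x, so an
# arbitrary input can be checked without knowing sin's value there.
x = 1.2345
assert abs(program_under_test(x) - program_under_test(math.pi - x)) < 1e-9
```

The practical trade-off studied in the paper is between the effort of writing such relations and the wider range of inputs they let testers check.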
Redesigning the jMetal Multi-Objective Optimization Framework
jMetal, an open-source, Java-based framework for multi-objective optimization with metaheuristics, has become a valuable tool for many researchers in the area, as well as for some industrial partners, over the last ten years. Our experience using and maintaining it during that time, together with the comments and suggestions we have received, has helped us improve the jMetal design and identify significant features to incorporate. This paper revisits the jMetal architecture, describing its refined new design, which relies on design patterns, principles of object-oriented design, and better use of Java language features to improve the quality of the code, without disregarding jMetal's long-standing goals of simplicity, ease of use, flexibility, extensibility and portability. Among the newly incorporated features, jMetal supports live interaction with running algorithms and parallel execution of algorithms.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
Evaluating Random Mutant Selection at Class-Level in Projects with Non-Adequate Test Suites
Mutation testing is a standard technique to evaluate the quality of a test
suite. Due to its computationally intensive nature, many approaches have been
proposed to make this technique feasible in real-case scenarios. Among these
approaches, uniform random mutant selection has been demonstrated to be simple
and promising. However, work in this area analyzes mutant samples at project
level, mainly on projects with adequate test suites. In this paper, we fill this
lack of empirical validation by analyzing random mutant selection at class
level on projects with non-adequate test suites. First, we show that uniform
random mutant selection underachieves the expected results. Then, we propose a
new approach named weighted random mutant selection, which generates more
representative mutant samples. Finally, we show that representative mutant
samples are larger for projects with high test adequacy.
Comment: EASE 2016, Article 11, 10 pages
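The motivation for class-level weighting can be sketched numerically; the quota scheme and figures below are our own illustration of the idea and may differ from the paper's actual weighting: uniform project-level sampling draws mutants in proportion to class size, so small classes are often absent from the sample, whereas a weighted scheme gives every class a share of the budget.

```python
# Illustrative sketch (not the paper's algorithm) of why class-level
# weighting yields more representative samples. Class names and mutant
# counts are invented.

mutants_per_class = {"Parser": 900, "Cache": 80, "Config": 20}

def uniform_expectation(per_class, sample_size):
    """Expected per-class draws under uniform project-level sampling."""
    total = sum(per_class.values())
    return {cls: sample_size * n / total for cls, n in per_class.items()}

def weighted_quota(per_class, sample_size):
    """Give each class an equal share of the sample budget, capped by
    the number of mutants the class actually has."""
    quota = max(1, sample_size // len(per_class))
    return {cls: min(quota, n) for cls, n in per_class.items()}

expected = uniform_expectation(mutants_per_class, 30)  # Parser dominates
sample = weighted_quota(mutants_per_class, 30)         # every class covered
```

Under uniform sampling, 30 draws land on `Config` only about 0.6 times in expectation, so class-level results there are meaningless; the weighted quota covers every class at the cost of a larger or redistributed sample, consistent with the abstract's observation about sample sizes.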