5 research outputs found

    Search-based Similarity-driven Behavioural SPL Testing

    Get PDF
    Dissimilar test cases have been proven effective at revealing faults in software systems. In the Software Product Line (SPL) context, this criterion has been applied successfully to mimic combinatorial interaction testing in an efficient and scalable manner by selecting and prioritising the most dissimilar configurations of feature models using evolutionary algorithms. In this paper, we extend dissimilarity to behavioural SPL models (FTS) in a search-based approach, and evaluate its effectiveness in terms of product and fault coverage. We investigate different distances as well as single-objective algorithms (dissimilarity on actions, random, all-actions). Our results on four case studies show the relevance of dissimilarity-based test generation for behavioural SPL models, especially on the largest case study, where no other approach can match it.
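
    To make the selection criterion concrete, here is a minimal sketch (in Python, not taken from the paper) of dissimilarity-driven prioritisation: configurations are treated as feature sets, compared with Jaccard distance, and picked greedily so that each new configuration is as far as possible from those already selected. The feature names and the greedy farthest-first loop are illustrative stand-ins for the evolutionary search described above.

```python
# Illustrative sketch only: dissimilarity-driven prioritisation of SPL
# configurations using Jaccard distance and a greedy "farthest-first" pick.
# The candidate pool and feature names are hypothetical.

def jaccard_distance(a: frozenset, b: frozenset) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0 means identical configurations."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def prioritise_dissimilar(configs: list[frozenset], budget: int) -> list[frozenset]:
    """Greedily pick configurations that maximise the minimum distance to the
    already-selected set (a stand-in for the paper's evolutionary search)."""
    selected = [configs[0]]
    while len(selected) < min(budget, len(configs)):
        best = max(
            (c for c in configs if c not in selected),
            key=lambda c: min(jaccard_distance(c, s) for s in selected),
        )
        selected.append(best)
    return selected

# Hypothetical configurations of a small feature model
pool = [
    frozenset({"Base", "Pay", "Card"}),
    frozenset({"Base", "Pay", "Cash"}),
    frozenset({"Base", "Stats"}),
    frozenset({"Base", "Pay", "Card", "Stats"}),
]
for cfg in prioritise_dissimilar(pool, budget=3):
    print(sorted(cfg))
```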

    Featured Model-based Mutation Analysis

    Get PDF
    Model-based mutation analysis is a powerful but expensive testing technique. We tackle its high computation cost by proposing an optimization technique that drastically speeds up the mutant execution process. Central to this approach is the Featured Mutant Model, a modelling framework for mutation analysis inspired by the software product line paradigm. It uses behavioural variability models, viz., Featured Transition Systems, which enable the optimized generation, configuration and execution of mutants. We provide results, based on models with thousands of transitions, suggesting that our technique is fast and scalable. We found that it outperforms previous approaches by several orders of magnitude and that it makes higher-order mutation practically applicable.
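
    The abstract does not give implementation details, but the core idea can be sketched roughly as follows: every mutant becomes a feature, transitions carry presence conditions over those features, and a single pass over a test then reveals which mutants diverge from the original. The toy transition system, states, and mutant names below are hypothetical.

```python
# Minimal sketch of the "featured mutant model" idea as read from the abstract,
# not the authors' implementation. Each mutant is a feature; a transition lists
# the variants (original / mutants) in which it exists, so one pass over a test
# trace checks all mutants at once.

ORIGINAL = "original"

# (source state, action) -> list of (target state, variants owning this transition)
transitions = {
    ("s0", "coin"): [("s1", {ORIGINAL, "m1"}), ("s2", {"m2"})],  # m2 redirects 'coin'
    ("s1", "brew"): [("s2", {ORIGINAL, "m2"})],                  # m1 lacks 'brew'
}

def surviving_variants(trace, variants, start="s0"):
    """Return the variants that can execute the whole trace from `start`."""
    alive = {v: start for v in variants}          # variant -> current state
    for action in trace:
        nxt = {}
        for variant, state in alive.items():
            for target, cond in transitions.get((state, action), []):
                if variant in cond:
                    nxt[variant] = target
                    break
        alive = nxt                               # variants with no move are dropped
    return set(alive)

variants = {ORIGINAL, "m1", "m2"}
alive = surviving_variants(["coin", "brew"], variants)
# A mutant is "killed" by this test if its behaviour differs from the original's.
killed = {v for v in variants - {ORIGINAL} if (v in alive) != (ORIGINAL in alive)}
print("killed:", killed)   # here both m1 and m2 fail to follow the trace
```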

    A test automation language framework for behavioral models

    No full text
    Abstract—Model-based testers design tests in terms of models, such as paths in graphs. This results in abstract tests, which use names and events that exist in the model but not in the implementation, so testers must map abstract tests to concrete tests by hand and end up writing the same redundant code many times. This is time-consuming, labor-intensive, and error-prone, which makes many existing model-based testing techniques complicated to use in practice, especially in agile software development. This paper presents a language to automate the creation of mappings from abstract tests to concrete tests. Three issues are addressed: (1) creating mappings and generating test values, (2) transforming graphs and using coverage criteria to generate test paths, and (3) solving constraints and generating concrete tests. Based on the language, we developed a test automation language framework. The paper also presents results from an empirical comparison of testers using the framework with manual mapping on 11 open source and 6 example programs. We found that the automated test generation method took 29.6% of the time the manual method took on average, and that the manual tests contained 48 errors in which concrete tests did not match their abstract tests, while the automatic tests had zero errors.
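
    As a rough illustration of what such a mapping layer buys (a hypothetical sketch in Python, not the paper's actual language or syntax): each abstract model element is mapped once to a concrete code fragment, and concrete tests are then generated from abstract test paths rather than being hand-written for every test.

```python
# Hypothetical illustration of abstract-to-concrete test mapping. The element
# names, driver/cart calls, and generated test shape are all made up.

mappings = {
    "login":    'driver.login("user", "secret")',
    "addItem":  'cart.add(Item("book", price=10))',
    "checkout": 'assert cart.checkout().status == "OK"',
}

def concretize(abstract_test: list[str]) -> str:
    """Turn an abstract test (a path of model-level names) into concrete code."""
    body = "\n".join(f"    {mappings[step]}" for step in abstract_test)
    return f"def test_generated():\n{body}\n"

# One mapping per element, reused by every abstract test path that touches it.
print(concretize(["login", "addItem", "checkout"]))
print(concretize(["login", "checkout"]))
```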