
    Many-objective test suite generation for software product lines

    A Software Product Line (SPL) is a set of products built from a number of features, the set of valid products being defined by a feature model. Typically, it does not make sense to test all products defined by an SPL; instead one chooses a set of products to test (test selection) and, ideally, derives a good order in which to test them (test prioritisation). Since one cannot know in advance which products will reveal faults, test selection and prioritisation are normally based on objective functions that are known to relate to likely effectiveness or cost. This paper introduces a new technique, the grid-based evolution strategy (GrES), which considers several objective functions that assess a selection or prioritisation and aims to optimise all of them. The problem is thus a many-objective optimisation problem. We use a new approach, in which all of the objective functions are considered but one (pairwise coverage) is seen as the most important. We also derive a novel evolution strategy based on domain knowledge. The results of the evaluation, on randomly generated and realistic feature models, were promising, with GrES outperforming previously proposed techniques and a range of many-objective optimisation algorithms.
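    The abstract treats pairwise coverage as the dominant objective among several. As a rough, hedged illustration of that idea (not the paper's GrES algorithm), the sketch below greedily selects products by the number of new feature pairs they cover, breaking ties on an invented secondary cost objective; all product names, features, and costs are made up.

    ```python
    # Illustrative sketch only (not the paper's GrES): greedy product selection
    # that treats pairwise feature coverage as the primary objective and a
    # generic cost score as a secondary tie-breaker.
    from itertools import combinations

    def pairs(product_features):
        """All unordered feature pairs present in one product configuration."""
        return set(combinations(sorted(product_features), 2))

    def select_products(products, costs, budget):
        """Pick up to `budget` products, preferring those that cover the most
        new feature pairs; break ties by lower cost."""
        covered, selected = set(), []
        for _ in range(budget):
            best = max(
                (p for p in products if p not in selected),
                key=lambda p: (len(pairs(products[p]) - covered), -costs[p]),
                default=None,
            )
            if best is None or not (pairs(products[best]) - covered):
                break  # nothing left that adds coverage
            covered |= pairs(products[best])
            selected.append(best)
        return selected, covered

    # Toy example: three candidate products over four features.
    products = {"p1": {"A", "B", "C"}, "p2": {"A", "D"}, "p3": {"B", "C", "D"}}
    costs = {"p1": 3.0, "p2": 1.0, "p3": 2.0}
    print(select_products(products, costs, budget=2))
    ```

    A real many-objective approach would keep all objectives alive in the search rather than collapsing them into a greedy rule; the sketch only shows why pairwise coverage is a natural primary criterion.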

    Evaluating Testing Techniques in Highly-Configurable Systems: The Drupal Dataset

    Context: Software applications exposing a high ability to be extended, changed or configured are usually referred to as Highly-Configurable Systems (HCSs). Testing techniques for HCSs aim at finding effective but manageable test suites that lead to the early detection of faults. Evaluating the effectiveness of these techniques in realistic environments is a must, but also a challenge due to the lack of HCSs with available code, configuration models and fault reports. Aim: In this chapter, we present the Drupal dataset, a collection of real-world data collected from the popular open-source Drupal framework. This dataset allows the assessment of variability testing techniques with real data of an HCS. Method: We collected extensive non-functional data from the Drupal Git repository and the Drupal website, including code changes in different versions of Drupal modules (e.g., 557 commits in Drupal v7.22) and the number of tests and assertions in the modules (e.g., 352 and 24,152, respectively). The faults found in different versions of Drupal modules were also gathered from the Drupal bug tracking system (e.g., 3,392 faults in Drupal v7.23). Additionally, we provided the Drupal feature model as a representation of the framework's configurability, with more than 2,000 million different Drupal configurations; one of the largest attributed feature models published so far. With 150 citations since its publication, the Drupal dataset has become a helpful tool for researchers and practitioners to conduct more realistic experiments and evaluations of HCSs. Funding: Ministerio de Ciencia, Innovación y Universidades Horatio (RTI2018-101204-B-C21); Junta de Andalucía APOLO (US-1264651); Junta de Andalucía EKIPMENTPLUS (P18-FR-2895).
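    As a rough, hypothetical illustration of the scale mentioned above (invented feature names and constraint, not the real Drupal feature model), enumerating even a handful of optional features with one cross-tree constraint shows how the valid configuration space grows exponentially:

    ```python
    # Hedged illustration (not the actual Drupal feature model): with n optional,
    # independent features there are 2**n configurations, and a few cross-tree
    # constraints still leave an enormous valid space.
    from itertools import product

    optional_features = ["forum", "blog", "search", "captcha", "webform"]

    def valid(config):
        # Example cross-tree constraint (invented): "captcha" requires "webform".
        return not (config["captcha"] and not config["webform"])

    configs = [
        dict(zip(optional_features, bits))
        for bits in product([False, True], repeat=len(optional_features))
    ]
    print(sum(valid(c) for c in configs), "valid of", len(configs))  # 24 of 32
    ```

    With hundreds of optional modules the same combinatorics yields thousands of millions of configurations, which is why a dataset pairing the feature model with real fault and test data is more practical than exhaustive testing.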

    How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance

    With modern requirements, there is an increasing tendency of considering multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue -- how to evaluate the outcome of optimization algorithms, which typically is a set of incomparable solutions (i.e., being Pareto non-dominated to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On one hand, multi-objective optimization could still be relatively new to SE/SBSE researchers, who may not be able to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SE problems, especially when the problem nature or decision maker's preferences are explicitly/implicitly available. This has been well echoed in the literature by various inappropriate/inadequate selection and inaccurate/misleading use of evaluation methods. In this paper, we first carry out a systematic and critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in seven repositories, and select 95 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators/methods and general situations in SBSE, which, together with the identified issues, enables us to codify a methodological guidance for selecting and using evaluation methods in different SBSE scenarios.
    Comment: This paper has been accepted by IEEE Transactions on Software Engineering, available as full OA: https://ieeexplore.ieee.org/document/925218
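    The central object here is the Pareto non-dominated set that an optimization algorithm returns. As a minimal hedged sketch (toy objective vectors, not any indicator discussed in the paper), the following shows how non-dominated solutions are extracted for a minimization problem:

    ```python
    # Minimal sketch of Pareto non-dominance for a minimization problem, to make
    # concrete what "a set of incomparable solutions" means in the abstract above.
    # Objective vectors are invented for illustration.

    def dominates(a, b):
        """a dominates b if a is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        """Keep only solutions not dominated by any other solution."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]

    # Toy outcomes of one run, both objectives to be minimized (e.g. cost, time).
    run = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
    print(pareto_front(run))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
    ```

    Because the surviving solutions are mutually incomparable, comparing two algorithms requires quality indicators or preference information, which is exactly the methodological gap the paper addresses.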