
    Coverage and adequacy in software product line testing


    Towards Statistical Prioritization for Software Product Lines Testing

    Software Product Lines (SPL) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g. pairwise feature interaction or dissimilarity) to generate tractable, fault-finding lists of configurations to be tested. Prioritization can also be used to sort or generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling/prioritization techniques barely take product behavior into account. We explore how ideas from statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behavior. We discuss possible scenarios and give a prioritization procedure illustrated on an example. (Extended version published at VaMoS '14: http://dx.doi.org/10.1145/2556624.2556635)
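    To make the core idea concrete, the following is a minimal sketch of likelihood-based prioritization over a usage model. The Markov chain, the configurations, and the paths attached to them are hypothetical toy data, not the paper's featured transition systems; the sketch only illustrates ranking configurations by the probability of the executions they enable.

```python
# A minimal sketch of likelihood-based prioritization over a usage model,
# assuming a Markov chain given as {state: [(next_state, probability), ...]}.
# The model, configurations, and paths below are hypothetical illustrations.

usage_model = {
    "start":  [("browse", 0.7), ("search", 0.3)],
    "browse": [("buy", 0.4), ("start", 0.6)],
    "search": [("buy", 0.8), ("start", 0.2)],
    "buy":    [],  # terminal state
}

def path_probability(path):
    """Probability of executing a path: the product of its transition probs."""
    prob = 1.0
    for src, dst in zip(path, path[1:]):
        step = dict(usage_model[src]).get(dst)
        if step is None:
            return 0.0  # transition not allowed by the usage model
        prob *= step
    return prob

# Each configuration enables a subset of behaviors (paths); here the
# candidate paths are attached to configurations by hand for illustration.
config_paths = {
    "config_A": [("start", "browse", "buy")],
    "config_B": [("start", "search", "buy")],
    "config_C": [("start", "browse", "start", "search", "buy")],
}

# Prioritize configurations by the likelihood of their most probable execution.
ranking = sorted(
    config_paths,
    key=lambda c: max(path_probability(p) for p in config_paths[c]),
    reverse=True,
)
print(ranking)  # configurations ordered by execution likelihood
```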

    Software product line testing - a systematic mapping study

    Context: Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped. Objective: This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research. Method: A systematic mapping study is conducted to find as much literature as possible, and the 64 papers found are classified with respect to focus, research type and contribution type. Results: The majority of the papers are of the proposal research type (64%). System testing is the largest group with respect to research focus (40%), followed by management (23%). Method contributions are in the majority. Conclusions: More validation and evaluation research is needed to provide a better foundation for SPL testing.

    Artificial table testing dynamically adaptive systems

    Dynamically Adaptive Systems (DAS) are systems that modify their behavior and structure in response to changes in their surrounding environment. Critical mission systems increasingly incorporate adaptation and response to the environment; examples include disaster relief and space exploration systems. These systems can be decomposed into two parts: the adaptation policy, which specifies how the system must react to environmental changes, and the set of possible variants to which the system can be reconfigured. A major challenge in testing these systems is the combinatorial explosion of variants and environment conditions to which the system must react. In this paper we focus on testing the adaptation policy and propose a strategy for the selection of environmental variations that can reveal faults in the policy. Artificial Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing (STT), a technique widely used in civil engineering to evaluate a building's structural resistance to seismic events. ASTT makes use of artificial earthquakes that simulate violent changes in the environmental conditions and stress the system's adaptation capability. We model the generation of artificial earthquakes as a search problem in which the goal is to optimize different types of environmental variations.
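    As an illustration of the search formulation, here is a minimal random-search sketch. The environment dimensions, the stub adaptation policy, and the fitness function (counting forced reconfigurations) are all hypothetical placeholders, not ASTT's actual encoding.

```python
# A minimal sketch of searching for stressful environmental variation
# sequences, in the spirit of ASTT. Everything below is a made-up example.
import random

ENV_VALUES = {"load": [0, 50, 100], "network": ["up", "slow", "down"]}

def random_sequence(length=5):
    """A random 'earthquake': a sequence of abrupt environment snapshots."""
    return [
        {dim: random.choice(vals) for dim, vals in ENV_VALUES.items()}
        for _ in range(length)
    ]

def adaptation_policy(env):
    """Stub policy under test: picks a variant for the current environment."""
    if env["network"] == "down":
        return "offline_variant"
    return "full_variant" if env["load"] < 100 else "degraded_variant"

def fitness(sequence):
    """Count how often the policy is forced to reconfigure: more variant
    switches means a more 'violent' (stressful) sequence."""
    variants = [adaptation_policy(env) for env in sequence]
    return sum(a != b for a, b in zip(variants, variants[1:]))

# Plain random search: keep the most stressful sequence found.
best = max((random_sequence() for _ in range(1000)), key=fitness)
print(fitness(best), best)
```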

    Testing software product lines using incremental test generation (ISSRE)

    We present a novel specification-based approach for generating tests for products in a software product line.

    Testing variability-intensive systems using automated analysis: an application to Android

    Software product lines are used to develop a set of software products that, while being different, share a common set of features. Feature models are used as a compact representation of all the products (i.e., possible configurations) of the product line. The number of products that a feature model encodes may grow exponentially with the number of features, which increases the cost of testing the products within a product line. Some proposals deal with this problem by reducing the testing space using different techniques. However, a daunting challenge is to explore how the cost and value of test cases can be modeled and optimized in order to obtain lower-cost testing processes. In this paper, we present TESting vAriAbiLity Intensive Systems (TESALIA), an approach that uses automated analysis of feature models to optimize the testing of variability-intensive systems. We model test value and cost as feature attributes, and then use a constraint satisfaction solver to prune, prioritize and package product line tests, complementing prior work in the software product line testing literature. A prototype implementation of TESALIA is used for validation on an Android example, showing the benefits of maximizing mobile market share (the value function) while meeting a budgetary constraint. (Funding: Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía TIC-5906; Junta de Andalucía TIC-186.)
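    A brute-force stand-in for the optimization that TESALIA delegates to a constraint solver: choose the set of products that maximizes total value under a cost budget. The products, their cost/value attributes, and the budget below are invented for illustration.

```python
# A minimal knapsack-style sketch of value/cost-aware test selection.
# TESALIA itself works over feature-model attributes with a constraint
# solver; this brute force over subsets only illustrates the objective.
from itertools import combinations

products = {  # product -> (test cost, market value); hypothetical numbers
    "p1": (4, 10),
    "p2": (3, 7),
    "p3": (5, 12),
    "p4": (2, 3),
}
BUDGET = 9

best_value, best_set = 0, ()
for r in range(len(products) + 1):
    for subset in combinations(products, r):
        cost = sum(products[p][0] for p in subset)
        value = sum(products[p][1] for p in subset)
        if cost <= BUDGET and value > best_value:
            best_value, best_set = value, subset

print(best_set, best_value)  # ('p1', 'p3') with value 22 at cost 9
```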

    Exploring regression testing and software product line testing - research and state of practice

    In large software organizations with a product line development approach, selective testing of product variants is necessary in order to keep pace with the decreased development time for new products enabled by systematic reuse. The close relationship between products in a product line suggests an opportunity to reduce the testing effort due to redundancy. In many cases test selection is performed manually, based on test leaders' expertise. This makes the cost and quality of the testing highly dependent on the skills and experience of the test leaders, and there is a need in industry for systematic approaches to test selection. The goal of our research is to improve the control of the testing and reduce the amount of redundant testing in the product line context by applying regression test selection strategies. In this thesis, the state of the art of regression testing and software product line testing is explored. Two extensive systematic reviews are conducted, as well as an industrial survey of the regression testing state of practice and an industrial evaluation of a pragmatic regression test selection strategy. Regression testing is not an isolated one-off activity, but rather an activity of varying scope and preconditions, strongly dependent on the context in which it is applied. Several techniques for regression test selection have been proposed and evaluated empirically, but in many cases the context is too specific for a technique to be easily applied directly by software developers. In order to improve the possibility of generalizing empirical results on regression test selection, guidelines for reporting the testing context are discussed in this thesis. Software product line testing is a relatively new research area. The understanding of its challenges is well established, but when looking for solutions to these challenges we mostly find proposals, and empirical evaluations are sparse. Regression test selection strategies proposed in the literature are not easily applicable in the product line context. Instead, control may be increased through increased visibility of the effects of testing and proper measurements of software quality. The focus of our future work will be on how to guide the planning and assessment of regression testing activities in large, complex reuse-based systems, by visualizing the quality achieved in different parts of the system and evaluating the effects of different selection strategies when applied in various regression testing situations.
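    For a concrete picture of the kind of pragmatic selection strategy discussed here, a minimal sketch: re-run only the tests whose covered components intersect the change set. The coverage map and the change set are hypothetical, not the strategy evaluated in the thesis.

```python
# A minimal sketch of change-based regression test selection.
# Mapping and change set below are made-up illustrations.

test_coverage = {  # test -> components it exercises
    "test_login":    {"auth", "ui"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"search", "ui"},
}

changed_components = {"payment"}  # e.g. derived from the latest diff

selected = [
    test for test, components in test_coverage.items()
    if components & changed_components  # any overlap with the change set
]
print(selected)  # ['test_checkout'] -- the rest can be safely skipped
```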

    Effective and scalable software compatibility testing

    Today's software systems are typically composed of multiple components, each with different versions. Software compatibility testing is a quality assurance task aimed at ensuring that multi-component systems build and/or execute correctly across all their versions' combinations, or configurations. Because there are complex and changing interdependencies between components and their versions, and because there are such a large number of configurations, it is generally infeasible to test all potential configurations. Consequently, in practice, compatibility testing examines only a handful of default or popular configurations to detect problems; as a result, costly errors can and do escape to the field. This paper presents a new approach to compatibility testing, called Rachet. We formally model the entire configuration space for software systems and use the model to generate test plans that sample a portion of the space. In this paper, we test all direct dependencies between components and execute the test plan efficiently in parallel. We present empirical results obtained by applying our approach to two large-scale scientific middleware systems. The results show that for these systems Rachet scaled well and discovered incompatibilities between components, and that testing only direct dependencies did not compromise test quality.
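    A minimal sketch of what covering all direct dependencies means: for each direct dependency edge, every pair of depender/dependee versions must be exercised at least once, which is far smaller than the full configuration space. The components and versions are hypothetical; Rachet's actual test planning and parallel execution are not modeled.

```python
# A minimal sketch of direct-dependency (DD) coverage: enumerate the
# version pairs required for each direct dependency edge. Components
# and versions below are hypothetical.
from itertools import product

versions = {"compilerA": ["1.0", "2.0"], "libB": ["0.9", "1.1"], "appC": ["3.0"]}
direct_deps = [("appC", "libB"), ("libB", "compilerA")]  # depender -> dependee

# Every (depender version, dependee version) pair that must be built/tested.
dd_requirements = {
    (up, dn): list(product(versions[up], versions[dn]))
    for up, dn in direct_deps
}
for edge, pairs in dd_requirements.items():
    print(edge, pairs)
# Note: the number of required pairs grows with each edge's version counts,
# not with the product of all components' version counts (the full space).
```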