3,358 research outputs found

    Feedback driven adaptive combinatorial testing

    Get PDF
    The configuration spaces of modern software systems are too large to test exhaustively. Combinatorial interaction testing (CIT) approaches, such as covering arrays, systematically sample the configuration space and test only the selected configurations. The basic justification for CIT approaches is that they can cost-effectively exercise all system behaviors caused by the settings of t or fewer options. We conjecture, however, that in practice many such behaviors are not actually tested because of masking effects – failures that perturb execution so as to prevent some behaviors from being exercised. In this work we present a feedback-driven, adaptive combinatorial testing approach aimed at detecting and working around masking effects. At each iteration we detect potential masking effects, heuristically isolate their likely causes, and then generate new covering arrays that allow previously masked combinations to be tested in the subsequent iteration. We empirically assess the effectiveness of the proposed approach on two large, widely used open-source software systems. Our results suggest that masking effects do exist and that our approach provides a promising and efficient way to work around them.
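    The iterative scheme below is only a rough illustration of the feedback loop described in the abstract, not the authors' implementation; the covering-array builder, test oracle, masking detector, and cause-isolation heuristic are hypothetical callables supplied by the caller.

# Rough sketch of a feedback-driven adaptive CIT loop (illustrative only).
def adaptive_cit(build_array, run_tests, find_masked, isolate_causes, max_iters=5):
    forbidden = set()    # option-value combinations suspected of causing masking
    history = []         # (configuration, result) pairs accumulated across iterations
    for _ in range(max_iters):
        configs = build_array(forbidden)            # t-way array avoiding suspects
        history += [(cfg, run_tests(cfg)) for cfg in configs]
        masked = find_masked(history)               # combinations still not exercised
        if not masked:
            break
        forbidden |= isolate_causes(masked, history)  # heuristically blame likely causes
    return history

# Trivial stand-ins just to show the call shape; real generators and oracles differ.
adaptive_cit(
    build_array=lambda forbidden: [{"optA": 0, "optB": 1}],
    run_tests=lambda cfg: "pass",
    find_masked=lambda history: set(),
    isolate_causes=lambda masked, history: set(),
)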

    Combinatorial Interaction Testing for Automated Constraint Repair

    Get PDF
    Highly-configurable software systems can be easily adapted to address users' needs. Modelling parameter configurations and their relationships can facilitate software reuse. Combinatorial Interaction Testing (CIT) methods are already often used to drive systematic testing of software system configurations. However, when a model of the system's configurations does not conform to the software implementation, it must be repaired in order to restore conformance. In this paper we extend CIT by devising a new search-based technique able to repair a model composed of a set of constraints among the various parameters of a software system. Our technique can be used to detect and fix faults both in the model and in the real software system. Experiments on five real-world systems show that our approach can repair on average 37% of conformance faults. Moreover, we also show that it can infer parameter constraints in a large real-world software system, so it can be used for the automated creation of CIT models.
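    As a rough, self-contained illustration of search-based constraint repair (assuming a toy constraint language, a hill-climbing search, and a made-up implementation oracle, none of which are the paper's actual operators or fitness function), the repair loop might look like this:

import random

FEATURES = ["a", "b", "c"]               # hypothetical boolean parameters

def satisfied(model, cfg):
    """model: set of ('excludes'|'requires', f1, f2) constraints over cfg."""
    for kind, f1, f2 in model:
        if kind == "excludes" and cfg[f1] and cfg[f2]:
            return False
        if kind == "requires" and cfg[f1] and not cfg[f2]:
            return False
    return True

def mutate(model):
    """Flip the presence of one randomly chosen constraint."""
    new = set(model)
    new.symmetric_difference_update({(random.choice(["excludes", "requires"]),
                                      random.choice(FEATURES),
                                      random.choice(FEATURES))})
    return new

def repair(model, configs, oracle, iterations=500):
    """Hill-climb toward a constraint set that agrees with the implementation oracle."""
    fitness = lambda m: sum(satisfied(m, c) == oracle(c) for c in configs)
    best, best_fit = model, fitness(model)
    for _ in range(iterations):
        cand = mutate(best)
        cand_fit = fitness(cand)
        if cand_fit >= best_fit:
            best, best_fit = cand, cand_fit
    return best

# Demo with a made-up "real system" oracle: features b and c cannot both be enabled.
oracle = lambda cfg: not (cfg["b"] and cfg["c"])
sample = [dict(zip(FEATURES, bits)) for bits in
          [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
           (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]]
print(repair(set(), sample, oracle))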

    Swarm testing

    Get PDF
    Swarm testing is a novel and inexpensive way to improve the diversity of test cases generated during random testing. Increased diversity leads to improved coverage and fault detection. In swarm testing, the usual practice of potentially including all features in every test case is abandoned. Rather, a large "swarm" of randomly generated configurations, each of which omits some features, is used, with configurations receiving equal resources. We have identified two mechanisms by which feature omission leads to better exploration of a system's state space. First, some features actively prevent the system from executing interesting behaviors; e.g., "pop" calls may prevent a stack data structure from executing a bug in its overflow detection logic. Second, even when there is no active suppression of behaviors, test features compete for space in each test, limiting the depth to which logic driven by features can be explored. Experimental results show that swarm testing increases coverage and can improve fault detection dramatically; for example, in a week of testing it found 42% more distinct ways to crash a collection of C compilers than did the heavily hand-tuned default configuration of a random tester.
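    A minimal sketch of the configuration-generation step (the feature names and inclusion probability below are invented; the paper's tooling differs):

import random

FEATURES = ["push", "pop", "peek", "clear", "resize"]   # hypothetical API features

def swarm_configurations(n_configs, p_include=0.5):
    """Generate a 'swarm' of random configurations, each omitting some features."""
    swarm = []
    for _ in range(n_configs):
        # Each feature is independently kept or omitted; omitting e.g. "pop" lets
        # a stack grow long enough to reach its overflow-handling code.
        swarm.append({f for f in FEATURES if random.random() < p_include})
    return swarm

# Each configuration then receives an equal share of the random-testing budget.
print(swarm_configurations(5))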

    Testing variability-intensive systems using automated analysis: an application to Android

    Get PDF
    Software product lines are used to develop a set of software products that, while being different, share a common set of features. Feature models are used as a compact representation of all the products (i.e., possible configurations) of the product line. The number of products that a feature model encodes may grow exponentially with the number of features, which increases the cost of testing the products within a product line. Some proposals deal with this problem by reducing the testing space using different techniques. However, a daunting challenge is to explore how the cost and value of test cases can be modeled and optimized in order to obtain lower-cost testing processes. In this paper, we present TESting vAriAbiLity Intensive Systems (TESALIA), an approach that uses automated analysis of feature models to optimize the testing of variability-intensive systems. We model test value and cost as feature attributes, and then we use a constraint satisfaction solver to prune, prioritize and package product-line tests, complementing prior work in the software product line testing literature. A prototype implementation of TESALIA is used for validation on an Android example, showing the benefits of maximizing the mobile market share (the value function) while meeting a budgetary constraint. Funding: Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía TIC-5906; Junta de Andalucía TIC-186.
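    As a simplified stand-in for the constraint-based selection (TESALIA itself works on attributed feature models with a CSP solver; the product names, costs, and market-share values below are invented), the value-versus-cost trade-off can be viewed as a small knapsack problem:

from itertools import combinations

def select_products(products, budget):
    """products: list of (name, cost, value) tuples; budget: maximum total cost.
    Exhaustive search is fine for tiny examples; a CSP/SAT solver scales better."""
    best_set, best_value = (), 0.0
    for r in range(1, len(products) + 1):
        for subset in combinations(products, r):
            cost = sum(p[1] for p in subset)
            value = sum(p[2] for p in subset)
            if cost <= budget and value > best_value:
                best_set, best_value = subset, value
    return best_set, best_value

# Example: maximise covered market share (value) within a fixed testing budget.
candidates = [("android-4.4", 3.0, 0.20), ("android-5.1", 2.0, 0.35), ("android-6.0", 4.0, 0.40)]
print(select_products(candidates, budget=6.0))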

    Surrogate time series

    Full text link
    Before we apply nonlinear techniques, for example those inspired by chaos theory, to dynamical phenomena occurring in nature, it is necessary to first ask whether the use of such advanced techniques is justified "by the data". While many processes in nature seem very unlikely a priori to be linear, the possible nonlinear nature might not be evident in specific aspects of their dynamics. The method of surrogate data has become a very popular tool to address such a question. However, while it was meant to provide a statistically rigorous, foolproof framework, some limitations and caveats have shown up in its practical use. In this paper, recent efforts to understand the caveats, avoid the pitfalls, and overcome some of the limitations are reviewed and augmented by new material. In particular, we will discuss specific as well as more general approaches to constrained randomisation, providing a full range of examples. New algorithms will be introduced for unevenly sampled and multivariate data and for surrogate spike trains. The main limitation, which lies in the interpretability of the test results, will be illustrated through instructive case studies. We will also discuss some implementational aspects of the realisation of these methods in the TISEAN software package (http://www.mpipks-dresden.mpg.de/~tisean). Comment: 28 pages, 23 figures; software at http://www.mpipks-dresden.mpg.de/~tisean.
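    For concreteness, a minimal phase-randomised (Fourier) surrogate, one of the simplest schemes in this family, could be generated as follows; this is an illustrative NumPy sketch, not TISEAN code:

import numpy as np

def fourier_surrogate(x, rng=None):
    """Return a surrogate with the same power spectrum (hence linear correlations)
    as x but with randomised Fourier phases, destroying nonlinear structure."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(np.asarray(x, dtype=float))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    phases[0] = 0.0                      # keep the zero-frequency (mean) term real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # Nyquist term must stay real for even length
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

# A discriminating statistic computed on the original series is then compared
# against its distribution over many such surrogates.
print(fourier_surrogate(np.sin(np.linspace(0.0, 20.0, 128))))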

    A systematic review on cloud testing

    Get PDF
    A systematic literature review is presented that surveys the topic of cloud testing over the period 2012-2017. Cloud testing can refer either to testing cloud-based systems (testing of the cloud) or to leveraging the cloud for testing purposes (testing in the cloud): both approaches (and their combination into testing of the cloud in the cloud) have drawn research interest. An extensive paper search was conducted by both automated query of popular digital libraries and snowballing, which resulted in the final selection of 147 primary studies. Alongside the survey, a framework has been incrementally derived that classifies cloud testing research along six main areas and their topics. The paper includes a detailed analysis of the selected primary studies to identify trends and gaps, as well as an extensive report of the state of the art as it emerges from answering the identified Research Questions. We find that cloud testing is an active research field, although not all topics have so far received enough attention, and conclude by presenting the most relevant open research challenges for each area of the classification framework. Funding: This paper describes research work mostly undertaken in the context of the European Project H2020 731535: ElasTest. This work has also been partially supported by: the Italian MIUR PRIN 2015 Project: GAUSS; the Regional Government of Madrid (CM) under project Cloud4BigData (S2013/ICE-2894) co-funded by FSE & FEDER; and the Spanish Government under project LERNIM (RTC-2016-4674-7) co-funded by the Ministry of Economy and Competitiveness, FEDER & AEI.

    Optimal Minimisation of Pairwise-covering Test Configurations Using Constraint Programming

    Get PDF
    Context: Testing highly-configurable software systems is challenging due to the large number of test configurations that have to be carefully selected in order to reduce the testing effort as much as possible while maintaining high software quality. Finding the smallest set of valid test configurations that ensures sufficient coverage of the system's feature interactions is thus the objective of validation engineers, especially when the execution of test configurations is costly or time-consuming. However, this problem is NP-hard in general, and approximation algorithms have often been used to address it in practice. Objective: In this paper, we explore an alternative approach based on constraint programming that allows engineers to increase the effectiveness of configuration testing while keeping the number of configurations as low as possible. Method: Our approach uses a (time-aware) minimisation algorithm based on constraint programming. Given an allotted amount of time, our solution generates a minimised set of valid test configurations that ensure coverage of all pairs of feature values (a.k.a. pairwise coverage). The approach has been implemented in a tool called PACOGEN. Results: PACOGEN was evaluated on 224 feature models in comparison with two existing tools based on a greedy algorithm. For 79% of the 224 feature models, PACOGEN generated up to 60% fewer test configurations than the competitor tools. We further evaluated PACOGEN in a case study of a large industrial highly-configurable video-conferencing software system with a feature model of 169 features, and found 60% fewer configurations compared with the manual approach followed by test engineers. The set of test configurations generated by PACOGEN decreased the time required by test engineers for manual test configuration by 85%, while increasing feature-pair coverage at the same time. Conclusion: An extensive evaluation concluded that optimal minimisation of pairwise-covering test configurations is efficiently addressed using constraint programming techniques.
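    The coverage criterion being minimised can be made concrete with a small checker (a brute-force illustration with made-up features that ignores feature-model constraints; PACOGEN's actual constraint-programming model is not reproduced here):

from itertools import combinations

def pairwise_covered(configs, features):
    """configs: list of dicts mapping feature name -> value;
    features: dict mapping feature name -> list of possible values."""
    required = set()
    for (f1, vals1), (f2, vals2) in combinations(sorted(features.items()), 2):
        required |= {((f1, v1), (f2, v2)) for v1 in vals1 for v2 in vals2}
    covered = set()
    for cfg in configs:
        for f1, f2 in combinations(sorted(cfg), 2):
            covered.add(((f1, cfg[f1]), (f2, cfg[f2])))
    return required <= covered           # True when every value pair is covered

# Hypothetical example: 6 configurations cover all 12 value pairs of 3 features.
features = {"os": ["linux", "win"], "db": ["mysql", "sqlite"], "tls": ["on", "off"]}
configs = [
    {"os": "linux", "db": "mysql",  "tls": "on"},
    {"os": "win",   "db": "sqlite", "tls": "off"},
    {"os": "linux", "db": "sqlite", "tls": "on"},
    {"os": "win",   "db": "mysql",  "tls": "off"},
    {"os": "linux", "db": "mysql",  "tls": "off"},
    {"os": "win",   "db": "sqlite", "tls": "on"},
]
print(pairwise_covered(configs, features))   # True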

    Compressed Genotyping

    Full text link
    Significant volumes of knowledge have been accumulated in recent years linking subtle genetic variations to a wide variety of medical disorders, from cystic fibrosis to mental retardation. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, largely due to the relatively tedious and expensive process of DNA sequencing. Since the genetic polymorphisms that underlie these disorders are relatively rare in the human population, the presence or absence of a disease-linked polymorphism can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol. In particular, we have adapted our scheme to a recently developed class of high-throughput DNA sequencing technologies, and assembled a mathematical framework that has some important distinctions from 'traditional' compressed sensing ideas in order to address different biological and technical constraints. Comment: Submitted to IEEE Transactions on Information Theory - Special Issue on Molecular Biology and Neuroscience.
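    To make the group-testing intuition concrete, here is a toy non-adaptive pooling decoder (purely illustrative; it does not reflect the paper's compressed-sensing design or the sequencing workflow, and all numbers are invented):

import random

def decode_pools(pools, pool_results, n_samples):
    """pools: list of sets of sample indices; pool_results: parallel list of booleans
    (True = pool tested positive). Returns the surviving candidate positives."""
    candidates = set(range(n_samples))
    for pool, positive in zip(pools, pool_results):
        if not positive:
            candidates -= pool           # every member of a negative pool is cleared
    return candidates

# Example: 100 samples, 2 true carriers, 20 random pools of 10 samples each.
n_samples, carriers = 100, {7, 42}
pools = [set(random.sample(range(n_samples), 10)) for _ in range(20)]
results = [bool(pool & carriers) for pool in pools]
print(decode_pools(pools, results, n_samples))   # always contains {7, 42}; a few
                                                 # false candidates may remain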