
    A Survey of Constrained Combinatorial Testing

    Combinatorial Testing (CT) is a potentially powerful testing technique, but its failure-revealing ability may be dramatically reduced if it fails to handle constraints in an adequate and efficient manner. To ensure the wider applicability of CT in constrained problem domains, substantial and diverse effort has been invested in techniques and applications of constrained combinatorial testing. In this paper, we provide a comprehensive survey of the representations, influences, and techniques that pertain to constraints in CT, covering 129 papers published between 1987 and 2018. This survey not only categorises the various constraint handling techniques, but also reviews comparatively less well-studied, yet potentially important, constraint identification and maintenance techniques. Since real-world programs are usually constrained, this survey should be of interest to researchers and practitioners looking to use and study constrained combinatorial testing techniques.
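
    To make the constraint-handling problem concrete, here is a minimal sketch of greedy pairwise (2-way) test generation that respects an inter-option constraint; the parameter model and the constraint predicate are illustrative assumptions, not taken from any surveyed tool.

        # Greedy pairwise test generation under a constraint (illustrative sketch).
        from itertools import combinations, product

        parameters = {
            "os": ["linux", "windows"],
            "db": ["mysql", "sqlite"],
            "cache": ["on", "off"],
        }

        def valid(config):
            # Hypothetical constraint: sqlite is only supported with the cache off.
            return not (config["db"] == "sqlite" and config["cache"] == "on")

        names = list(parameters)
        all_configs = [dict(zip(names, vs)) for vs in product(*parameters.values())]
        valid_configs = [c for c in all_configs if valid(c)]

        # Target only the 2-way interactions that some valid configuration can
        # cover; naively targeting all value pairs would leave some unsatisfiable.
        targets = {((p1, c[p1]), (p2, c[p2]))
                   for c in valid_configs for p1, p2 in combinations(names, 2)}

        # Greedy construction: repeatedly pick the valid configuration that
        # covers the most remaining interactions.
        suite = []
        while targets:
            best = max(valid_configs, key=lambda c: sum(
                ((p1, c[p1]), (p2, c[p2])) in targets
                for p1, p2 in combinations(names, 2)))
            suite.append(best)
            for p1, p2 in combinations(names, 2):
                targets.discard(((p1, best[p1]), (p2, best[p2])))

        for row in suite:
            print(row)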

    Flexible combinatorial interaction testing


    GPU-based parallel computing methods for constructing covering arrays

    As software systems become more complex, the demand for efficient, low-cost approaches to testing them grows as well. Highly configurable software systems, such as web servers (e.g., Apache) and databases (e.g., MySQL), are one example: they have many configurable options that interact with each other, and these interactions lead to exponential growth in the number of option configurations. Hence, such systems become more prone to bugs caused by the interaction of options. Combinatorial interaction testing addresses this problem by systematically sampling the configuration space and testing each of these samples individually. It computes a small set of option configurations to be used as test suites, called covering arrays. A t-way covering array aims to cover all t-length option interactions of the system under test with a minimum number of configurations, where t is a small number in practical cases. The use of covering arrays is further motivated by many studies that have empirically shown that a substantial number of faults are caused by interactions among only a few options. Nevertheless, computing covering arrays with a minimal number of configurations in reasonable time is not an easy task, especially when the configuration space is large and the system has inter-option constraints that invalidate some configurations; the field has therefore attracted many researchers. Although there have been many successful attempts to construct covering arrays, most approaches begin to suffer from scalability issues as the configuration space grows. Combinatorial problems, such as constructing covering arrays in our case, are mainly solved using efficient counting techniques. Based on this observation, we conjecture that covering arrays can be computed efficiently with parallel algorithms, since counting is a simple task that can be carried out with parallel programming strategies. Although different architectures have proved effective in different studies, we choose GPU-based parallel computing, since GPUs have hundreds or even thousands of cores, albeit with small arithmetic logic units. Even though these cores are exceptionally constrained and limited, they serve our purpose very well, since all we need to do is basic counting, repeatedly. We apply this idea to decrease the computation time of simulated annealing, a meta-heuristic search method that is well studied for the construction of covering arrays and, in previous studies, generally yields the smallest arrays. Moreover, we present a technique to generate multiple neighbour states in parallel at each step of simulated annealing. Finally, we propose a novel hybrid approach that combines a SAT solver with parallel computing techniques to reduce the negative effect of pure random search and decrease the covering array size further. Our results support our conjecture that parallel computing is an effective and efficient way to compute combinatorial objects.
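
    As a concrete illustration of the search method described above, the following is a toy, CPU-only sketch of simulated annealing for covering array construction (the thesis parallelises the underlying counting on a GPU); the model sizes, cooling schedule, and seed are illustrative assumptions.

        # Simulated annealing for a small 2-way covering array (illustrative sketch).
        import math
        import random
        from itertools import combinations

        random.seed(0)
        k, v, t, rows = 5, 3, 2, 11  # 5 options, 3 values each, strength t = 2

        def uncovered(array):
            # Cost function: count (column pair, value pair) tuples not yet covered.
            missing = 0
            for c1, c2 in combinations(range(k), t):
                seen = {(row[c1], row[c2]) for row in array}
                missing += v * v - len(seen)
            return missing

        array = [[random.randrange(v) for _ in range(k)] for _ in range(rows)]
        cost, temp = uncovered(array), 1.0
        while cost > 0 and temp > 1e-4:
            r, c = random.randrange(rows), random.randrange(k)  # random neighbour move
            old = array[r][c]
            array[r][c] = random.randrange(v)
            new_cost = uncovered(array)
            # Always accept improving moves; accept worsening moves with a
            # probability that shrinks as the temperature cools.
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost
            else:
                array[r][c] = old
            temp *= 0.9999

        print("uncovered t-way tuples remaining:", cost)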

    Design, implementation, and validation of a benchmark generator for combinatorial interaction testing tools

    Combinatorial testing is a widely adopted technique for efficiently detecting faults in software. The quality of combinatorial test generators plays a crucial role in achieving effective test coverage, yet evaluating them remains a challenging task that requires diverse and representative benchmarks. Such benchmarks help developers test their tools and improve their performance. For this reason, in this paper we present BenCIGen, a highly configurable generator of benchmarks for combinatorial test generators, empowering users to customize the type of benchmarks generated, including their constraints, parameters, and complexity. An initial version of the tool has been used during the CT-Competition, held yearly at the International Workshop on Combinatorial Testing. This paper describes the requirements, design, implementation, and validation of BenCIGen. Tests for its validation are derived from its requirements using a combinatorial interaction approach. Moreover, we demonstrate the tool's ability to generate benchmarks that reflect the characteristics of real software systems. BenCIGen not only facilitates the evaluation of existing generators but also serves as a valuable resource for researchers and practitioners seeking to enhance the quality and effectiveness of combinatorial testing methodologies.
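
    As a rough illustration of what such a benchmark generator produces, here is a minimal sketch that emits a random configuration model with Boolean parameters and simple implication constraints; the output syntax, parameter names, and constraint shape are assumptions made for illustration, not BenCIGen's actual format.

        # Random benchmark-model generation (illustrative sketch).
        import random

        def generate_benchmark(n_params=5, n_constraints=3, seed=42):
            rng = random.Random(seed)
            params = [f"p{i}" for i in range(n_params)]
            lines = [f"Parameter {p} : Boolean" for p in params]
            for _ in range(n_constraints):
                a, b = rng.sample(params, 2)
                # A simple implication constraint between two distinct parameters.
                lines.append(f"Constraint: {a} => !{b}")
            return "\n".join(lines)

        print(generate_benchmark())

    Exposing knobs such as the number of parameters, domain sizes, and the number and shape of constraints is what lets a generator of this kind control benchmark complexity.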

    Artificial table testing dynamically adaptive systems

    Dynamically Adaptive Systems (DAS) are systems that modify their behavior and structure in response to changes in their surrounding environment. Critical mission systems increasingly incorporate adaptation and response to the environment; examples include disaster relief and space exploration systems. These systems can be decomposed into two parts: the adaptation policy, which specifies how the system must react to environmental changes, and the set of possible variants into which the system can be reconfigured. A major challenge for testing these systems is the combinatorial explosion of variants and environment conditions to which the system must react. In this paper we focus on testing the adaptation policy and propose a strategy for selecting the environmental variations that can reveal faults in the policy. Artificial Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing (STT), a technique widely used in civil engineering to evaluate a building's structural resistance to seismic events. ASTT makes use of artificial earthquakes that simulate violent changes in environmental conditions and stress the system's adaptation capability. We model the generation of artificial earthquakes as a search problem in which the goal is to optimize different types of environmental variations.
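
    To make the search formulation concrete, here is a minimal sketch that randomly searches for the environment-change sequence (the artificial earthquake) triggering the most reconfigurations in a toy adaptation policy; the policy, the environment variables, and the use of pure random search are illustrative assumptions, not the paper's actual setup.

        # Random search for stressful environment sequences (illustrative sketch).
        import random

        rng = random.Random(1)

        def policy(load, battery):
            # Toy adaptation policy: choose a system variant from the environment state.
            if battery < 0.2:
                return "low-power"
            return "high-throughput" if load > 0.7 else "normal"

        def stress(sequence):
            # Fitness: how many variant switches the environment sequence provokes.
            variants = [policy(load, batt) for load, batt in sequence]
            return sum(a != b for a, b in zip(variants, variants[1:]))

        best_seq, best_score = None, -1
        for _ in range(1000):
            seq = [(rng.random(), rng.random()) for _ in range(10)]  # 10 env changes
            if stress(seq) > best_score:
                best_seq, best_score = seq, stress(seq)

        print("max reconfigurations triggered:", best_score)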

    An approach for choosing the best covering array constructor to use

    Covering arrays have been extensively used for software testing, and many covering array constructors have consequently been developed. However, each constructor comes with its own pros and cons; the best constructor to use typically depends on the specific application scenario at hand. To improve both the efficiency and effectiveness of covering arrays, we present a classification-based approach to predict the "best" covering array constructor to use for a given configuration space model, coverage strength, and optimization criterion, i.e., minimizing either the construction time or the covering array size. We also empirically evaluate the proposed approach on a relatively small, yet quite realistic, space of application scenarios. The approach predicted the best constructors for reducing construction times with an accuracy of 86% and the best constructors for reducing covering array sizes with an accuracy of 90%. When two predictions were made, rather than one, the accuracy of correctly predicting the best constructors increased to 94% and 98%, respectively.
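
    As a rough sketch of the classification idea, the following trains a decision tree to predict the best constructor from features of a configuration space model; the chosen features, the constructor labels, and the training data are synthetic illustrations (the paper's actual feature set and classifier may differ).

        # Predicting the "best" covering array constructor (illustrative sketch).
        from sklearn.tree import DecisionTreeClassifier

        # Features: [number of options, average domain size, coverage strength t]
        X = [
            [10, 2, 2], [50, 3, 2], [200, 2, 3],
            [20, 4, 2], [100, 2, 3], [15, 3, 2],
        ]
        # Label: which (hypothetical) constructor performed best on each model.
        y = ["ipog", "annealing", "ipog", "annealing", "ipog", "annealing"]

        model = DecisionTreeClassifier(random_state=0).fit(X, y)
        print(model.predict([[60, 3, 2]]))  # predicted best constructor for a new model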