
    A Survey of Constrained Combinatorial Testing

    Combinatorial Testing (CT) is a potentially powerful testing technique, but its failure-revealing ability can be dramatically reduced if it fails to handle constraints adequately and efficiently. To ensure the wider applicability of CT in constrained problem domains, large and diverse efforts have been invested in the techniques and applications of constrained combinatorial testing. In this paper, we provide a comprehensive survey of representations, influences, and techniques that pertain to constraints in CT, covering 129 papers published between 1987 and 2018. This survey not only categorises the various constraint handling techniques, but also reviews the comparatively less well-studied, yet potentially important, constraint identification and maintenance techniques. Since real-world programs are usually constrained, this survey should be of interest to researchers and practitioners looking to use and study constrained combinatorial testing techniques.
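
    To make the notion of constraints concrete, here is a minimal sketch (not taken from the survey; the parameters, values, and constraints are invented for illustration) of the common practice of modelling constraints as forbidden tuples, i.e. parameter-value combinations that must never appear together in a test case:

        from itertools import product

        # Hypothetical system under test with three parameters.
        parameters = {
            "os": ["linux", "windows", "macos"],
            "browser": ["firefox", "safari", "edge"],
            "protocol": ["http", "https"],
        }

        # Forbidden tuples: safari runs only on macos, edge only on windows.
        forbidden = [
            {"browser": "safari", "os": "linux"},
            {"browser": "safari", "os": "windows"},
            {"browser": "edge", "os": "linux"},
            {"browser": "edge", "os": "macos"},
        ]

        def is_valid(test):
            """A test case is valid iff it contains no forbidden tuple."""
            return not any(
                all(test[p] == v for p, v in ft.items()) for ft in forbidden
            )

        names = list(parameters)
        all_tests = [dict(zip(names, vals)) for vals in product(*parameters.values())]
        valid_tests = [t for t in all_tests if is_valid(t)]
        print(f"{len(valid_tests)} of {len(all_tests)} exhaustive tests satisfy the constraints")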

    Comparative Analysis of Constraint Handling Techniques for Constrained Combinatorial Testing

    Constraints depict the dependency relationships between parameters in a software system under test. Because almost all systems are constrained in some way, techniques that adequately cater for constraints have become a crucial factor in the adoption, deployment and exploitation of Combinatorial Testing (CT). Currently, despite the variety of constraint handling techniques available, the relationship between these techniques and the generation algorithms that use them remains unknown, an important gap and pressing concern in the literature of constrained combinatorial testing. In this paper, we present a comparative empirical study investigating the impact of four common constraint handling techniques on the performance of six representative (greedy and search-based) test suite generation algorithms. The results reveal that the Verify technique implemented with the Minimal Forbidden Tuple (MFT) approach is the fastest, while the Replace technique is promising for producing the smallest constrained covering arrays, especially for algorithms that construct test cases one at a time. The results also show that there is an interplay between the effectiveness of the constraint handler and the test suite generation algorithm into which it is integrated.
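
    As a rough illustration of the Verify technique described above, the following hedged sketch shows a one-test-at-a-time greedy generator that verifies each partial assignment against a set of forbidden tuples. The forbidden tuple set is assumed to be given; deriving a minimal such set (the MFT approach proper) is beyond this sketch, and all function names are invented for illustration:

        def violates(test, forbidden_tuples):
            """Verify step: reject any (possibly partial) test containing a forbidden tuple."""
            return any(ft.items() <= test.items() for ft in forbidden_tuples)

        def greedy_next_test(uncovered_pairs, parameters, forbidden_tuples):
            """Greedily extend a partial test, verifying constraints at each assignment."""
            test = {}
            for param, values in parameters.items():
                best = None
                for v in values:
                    candidate = {**test, param: v}
                    if violates(candidate, forbidden_tuples):
                        continue  # Verify rejects this extension outright
                    # Prefer the value covering the most still-uncovered pairs.
                    gain = sum(1 for pair in uncovered_pairs
                               if pair.items() <= candidate.items())
                    if best is None or gain > best[0]:
                        best = (gain, v)
                if best is None:
                    return None  # no constraint-satisfying extension exists
                test[param] = best[1]
            return test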

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose guiding this thesis is to follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, additional themes are discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.

    Strategies for optimal design of biomagnetic sensor systems

    Magnetic field imaging (MFI) is a technique for contact-free recording of the magnetic field distribution and estimation of the underlying source distribution in the heart. Currently, cardiomagnetic fields are recorded with superconducting quantum interference devices (SQUIDs), which are restricted to the inside of a cryostat filled with liquid helium or nitrogen. New room-temperature optical magnetometers allow less restrictive sensor positioning, which raises the question of how to optimally place the sensors for robust field reconstruction. The objective of this study is to develop a generic object-oriented framework for optimizing sensor arrangements (sensor positions and orientations) which supports the necessary constraints of a limited search volume (only outside the body) and a technical minimum distance between sensors (e.g. 1 cm). To test the framework, a new quasi-continuous particle swarm optimizer (PSO) component is developed, along with an exemplary goal function component using the condition number (CN) of the leadfield matrix. Generic constraint handling algorithms are designed and implemented that decompose complex constraints into basic ones. The constraint components interface with an operational exemplary optimization strategy, which is validated on the magnetocardiographic sensor arrangement problem. The simulation setup includes a three-compartment boundary element model of a torso with a fitted multi-dipole heart model. The results show that the CN, representing the reconstruction robustness of the inverse problem, can be reduced by our optimization by one order of magnitude within a sensor plane (the cryostat bottom) in front of the torso, compared to a regular sensor grid. A reduction of another order of magnitude is achieved by optimizing sensor positions on the entire torso surface. Results also indicate that the number of sensors may be reduced to 20-30 without loss of robustness in terms of CN. The original contributions are the generic reusable framework and exemplary components, the quasi-continuous PSO algorithm with constraint support, and the composite constraint handling algorithms.
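
    As an illustrative sketch of two ingredients mentioned above (all names, array shapes, and numbers are assumptions, not taken from the thesis): a condition-number goal function over a leadfield matrix, and a basic minimum-distance constraint check that a PSO could use to reject or repair candidate sensor arrangements:

        import numpy as np

        def goal_condition_number(leadfield):
            """Lower condition number => more robust inverse reconstruction."""
            return np.linalg.cond(leadfield)

        def satisfies_min_distance(positions, d_min=0.01):
            """Technical constraint: pairwise sensor spacing >= d_min (e.g. 1 cm)."""
            n = len(positions)
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(positions[i] - positions[j]) < d_min:
                        return False
            return True

        # Toy usage: score a random arrangement of 25 sensors; in practice each
        # sensor contributes one row of a (sensors x sources) leadfield matrix
        # computed by a forward model, which is mocked here with random data.
        rng = np.random.default_rng(0)
        positions = rng.uniform(-0.2, 0.2, size=(25, 3))   # metres, hypothetical
        leadfield = rng.standard_normal((25, 60))          # placeholder forward model
        if satisfies_min_distance(positions):
            print("condition number:", goal_condition_number(leadfield))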

    Minimal input support problem and algorithms to solve it


    Parameter-less Late Acceptance Hill-climbing: Foundations & Applications.

    Stochastic Local Search (SLS) methods have been used to solve complex hard combinatorial problems in a number of fields. Their judicious use of randomization arguably simplifies their design to achieve robust algorithm behaviour in domains where little is known. This feature makes them a general-purpose approach for tackling complex problems. However, their performance usually depends on a number of parameters that must be specified by the user. Most of these parameters are search-algorithm related and have little to do with the user's problem. This thesis presents search techniques for combinatorial problems that have fewer parameters while delivering good anytime performance. Their parameters are set automatically by the algorithm itself in an intelligent way, while making sure that they use the entire given time budget to explore the search space with a high probability of avoiding stagnation in a single basin of attraction. These algorithms are suitable for general practitioners in industry who do not have deep insight into search methodologies and their parameter tuning. Note that, to all intents and purposes, in real-world search problems the aim is to find a good enough solution in a pre-defined time. In order to achieve this, we use a technique that was originally introduced for automating population sizing in evolutionary algorithms. In an intelligent way, we adapted it to a particular one-point stochastic local search algorithm, namely Late Acceptance Hill-Climbing (LAHC), to eliminate the need to manually specify the value of the sole parameter of this algorithm. We then develop a mathematically sound dynamic cutoff time strategy that is able to reliably detect the stagnation point for these search algorithms. We evaluated the suitability and scalability of the proposed methods on a range of classical combinatorial optimization problems as well as a real-world software engineering problem.
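
    For reference, a minimal sketch of standard Late Acceptance Hill-Climbing (following Burke and Bykov's formulation, not the thesis's parameter-less variant) is given below. The history length is the sole parameter the thesis proposes to set automatically; here it is passed in explicitly, and the cost and neighbour functions are problem-specific placeholders:

        import random

        def lahc(initial, cost, neighbour, history_length, max_iters):
            """Late Acceptance Hill-Climbing (minimization): accept a candidate
            if it is no worse than the current solution OR the cost recorded
            history_length iterations earlier."""
            current = initial
            f_current = cost(current)
            best, f_best = current, f_current
            history = [f_current] * history_length     # circular fitness history
            for i in range(max_iters):
                candidate = neighbour(current)
                f_candidate = cost(candidate)
                v = i % history_length                 # position in the history
                if f_candidate <= f_current or f_candidate <= history[v]:
                    current, f_current = candidate, f_candidate
                    if f_current < f_best:
                        best, f_best = current, f_current
                history[v] = f_current                 # record cost for L steps ahead
            return best, f_best

        # Toy usage: minimise a 1-D quadratic with random-step neighbours.
        best, value = lahc(
            initial=10.0,
            cost=lambda x: (x - 3) ** 2,
            neighbour=lambda x: x + random.uniform(-1, 1),
            history_length=50,
            max_iters=5000,
        )
        print(best, value)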