
    An integrated search-based approach for automatic testing from extended finite state machine (EFSM) models

    This is the post-print version of the article. Copyright © 2011 Elsevier.
    The extended finite state machine (EFSM) is a modelling approach that has been used to represent a wide range of systems. When testing from an EFSM, it is normal to use a test criterion such as transition coverage. Such test criteria are often expressed in terms of transition paths (TPs) through an EFSM. Despite the popularity of EFSMs, testing from an EFSM is difficult for two main reasons: path feasibility and path input sequence generation. The path feasibility problem concerns generating paths that are feasible, whereas the path input sequence generation problem is to find an input sequence that can traverse a feasible path. While search-based approaches have been used in test automation, there has been relatively little work that uses them when testing from an EFSM. In this paper, we propose an integrated search-based approach to automate testing from an EFSM. The approach has two phases: the first phase aims to produce a feasible TP (FTP), while the second phase searches for an input sequence to trigger this TP. The first phase uses a Genetic Algorithm whose fitness function is a TP feasibility metric based on dataflow dependence. The second phase uses a Genetic Algorithm whose fitness function is based on a combination of a branch distance function and approach level. Experimental results using five EFSMs found the first phase to be effective in generating FTPs, with a success rate of approximately 96.6%. Furthermore, the proposed input sequence generator could trigger all the generated feasible TPs (success rate = 100%). The results derived from the experiment demonstrate that the proposed approach is effective in automating testing from an EFSM.
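    The second-phase fitness described above follows the standard pattern from search-based test data generation: an approach level counting how many transitions on the target path remain unreached, plus a normalised branch distance measuring how close the candidate input came to satisfying the first violated guard. The sketch below is a minimal, hedged illustration of that pattern; the guard encoding, constants, and helper names are assumptions for illustration, not the paper's implementation.

```python
from collections import namedtuple

# A transition is identified by its name and a simple relational guard over one
# variable; this encoding is an illustrative assumption, not the paper's EFSM model.
Transition = namedtuple("Transition", ["name", "guard"])  # guard = (var, op, const)

def branch_distance(guard, values):
    """Tracey-style distance: 0 when the guard holds, larger the further it is from holding."""
    var, op, const = guard
    v = values[var]
    if op == "==":
        return abs(v - const)
    if op == "<=":
        return max(0, v - const)
    if op == ">=":
        return max(0, const - v)
    raise ValueError(f"unsupported operator: {op}")

def normalise(d):
    """Map a raw distance into [0, 1) so it never outweighs a whole approach level."""
    return d / (d + 1.0)

def path_fitness(target_path, executed_prefix, values):
    """Lower is better: transitions on the target path not yet reached, plus how
    close the candidate input came to satisfying the first missed guard."""
    approach_level = len(target_path) - len(executed_prefix)
    if approach_level == 0:
        return 0.0                                   # the whole path was triggered
    first_missed = target_path[len(executed_prefix)]
    return approach_level + normalise(branch_distance(first_missed.guard, values))

# Example: a three-transition target path of which only the first two were executed.
path = [Transition("t1", ("x", ">=", 0)),
        Transition("t2", ("x", "<=", 10)),
        Transition("t3", ("x", "==", 7))]
print(path_fitness(path, path[:2], {"x": 5}))        # 1 + 2/3 ≈ 1.67
```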

    A Product Line Systems Engineering Process for Variability Identification and Reduction

    Software Product Line Engineering has attracted attention in the last two decades due to its promising capabilities to reduce costs and time to market through reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. Manually reusing legacy system models during domain engineering to build reusable system libraries, and manually configuring variants to derive target products, can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of the process in reducing variation points, it is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.
    Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201

    SAT Solving for Argument Filterings

    This paper introduces a propositional encoding for lexicographic path orders in connection with dependency pairs. This facilitates the application of SAT solvers for termination analysis of term rewrite systems based on the dependency pair method. We address two main inter-related issues and encode them as satisfiability problems of propositional formulas that can be efficiently handled by SAT solving: (1) the combined search for a lexicographic path order together with an argument filtering to orient a set of inequalities; and (2) how the choice of the argument filtering influences the set of inequalities that have to be oriented. We have implemented our contributions in the termination prover AProVE. Extensive experiments show that by our encoding and the application of SAT solvers one obtains speedups in orders of magnitude as well as increased termination proving power.
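    The core trick is to turn the search for a suitable order into a propositional satisfiability problem. The toy sketch below encodes only one small ingredient, a strict precedence over function symbols, as boolean variables with asymmetry and transitivity clauses, and hands the result to a SAT solver. It assumes the python-sat package and deliberately omits the paper's actual LPO and argument-filtering constraints, which are far richer.

```python
from itertools import permutations, combinations
from pysat.solvers import Glucose3   # assumes the python-sat package is installed

# Boolean variable gt[(f, g)] stands for "f is greater than g in the precedence".
symbols = ["add", "mul", "s", "zero"]
gt, nxt = {}, 1
for f, g in permutations(symbols, 2):
    gt[(f, g)] = nxt
    nxt += 1

solver = Glucose3()

# Asymmetry: not (f > g and g > f).
for f, g in combinations(symbols, 2):
    solver.add_clause([-gt[(f, g)], -gt[(g, f)]])

# Transitivity: f > g and g > h implies f > h.
for f, g in permutations(symbols, 2):
    for h in symbols:
        if h not in (f, g):
            solver.add_clause([-gt[(f, g)], -gt[(g, h)], gt[(f, h)]])

# Orientation constraints such an encoding might demand (illustrative only):
solver.add_clause([gt[("mul", "add")]])   # require mul > add
solver.add_clause([gt[("add", "s")]])     # require add > s

if solver.solve():
    model = set(solver.get_model())       # positive literals are the true variables
    order = sorted((f, g) for (f, g), v in gt.items() if v in model)
    print("precedence found:", order)
else:
    print("no suitable precedence exists")
```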

    Estimating the feasibility of transition paths in extended finite state machines

    There has been significant interest in automating testing on the basis of an extended finite state machine (EFSM) model of the required behaviour of the implementation under test (IUT). Many test criteria require that certain parts of the EFSM are executed. For example, we may want to execute every transition of the EFSM. In order to find a test suite (set of input sequences) that achieves this, we might first derive a set of paths through the EFSM that satisfy the criterion using, for example, algorithms from graph theory. We then attempt to produce input sequences that trigger these paths. Unfortunately, however, the EFSM might have infeasible paths, and the problem of determining whether a path is feasible is generally undecidable. This paper describes an approach in which a fitness function is used to estimate how easy it is to find an input sequence to trigger a given path through an EFSM. Such a fitness function could be used in a search-based approach in which we search for a path with good fitness that achieves a test objective, such as executing a particular transition, and then search for an input sequence that triggers the path. If this second search fails, we search for another path with good fitness and repeat the process. We give a computationally inexpensive approach (fitness function) that estimates the feasibility of a path. To evaluate this fitness function, we compared the fitness of a path with the ease with which an input sequence that triggers the path can be produced using search, and we used random sampling to estimate this ease. The empirical evidence suggests that a reasonably good correlation (0.72 and 0.62) exists between the fitness of a path, produced using the proposed fitness function, and an estimate of the ease with which we can randomly generate an input sequence to trigger the path.
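    The overall workflow described above is a two-stage search: rank candidate paths by estimated feasibility, try to find a triggering input sequence for the best one, and fall back to the next promising path when the input search fails. A minimal sketch of that loop follows; every helper (candidate_paths, feasibility_fitness, search_for_inputs) is a hypothetical placeholder standing in for a path generator, the feasibility estimate, and a search-based input generator, not code from the paper.

```python
# Hedged sketch of the two-stage search loop: pick promising paths, then try to
# trigger each one with a concrete input sequence.

def generate_test(candidate_paths, feasibility_fitness, search_for_inputs,
                  max_attempts=10):
    # Rank the candidate paths by estimated feasibility (here: higher is better).
    ranked = sorted(candidate_paths, key=feasibility_fitness, reverse=True)
    for path in ranked[:max_attempts]:
        inputs = search_for_inputs(path)      # second search: trigger the path
        if inputs is not None:
            return path, inputs               # success: a concrete test case
    return None                               # no candidate path could be triggered
```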

    Polytool: polynomial interpretations as a basis for termination analysis of Logic programs

    Our goal is to study the feasibility of porting termination analysis techniques developed for one programming paradigm to another paradigm. In this paper, we show how to adapt termination analysis techniques based on polynomial interpretations - very well known in the context of term rewrite systems (TRSs) - to obtain new (non-transformational) termination analysis techniques for definite logic programs (LPs). This leads to an approach that can be seen as a direct generalization of the traditional techniques in termination analysis of LPs, where linear norms and level mappings are used. Our extension generalizes these to arbitrary polynomials. We extend a number of standard concepts and results on termination analysis to the context of polynomial interpretations. We also propose a constraint-based approach for automatically generating polynomial interpretations that satisfy the termination conditions. Based on this approach, we implemented a new tool, called Polytool, for automatic termination analysis of LPs.
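    For readers unfamiliar with polynomial interpretations, the following textbook-style example (in the TRS setting rather than the logic-program setting of the paper) shows how a strict decrease under a polynomial interpretation proves termination. The rules and the interpretation are standard illustrations, not taken from the Polytool paper.

```latex
% Textbook example of a polynomial interpretation (illustration only).
% Rules for addition over Peano numerals:
%   add(0, y)    -> y
%   add(s(x), y) -> s(add(x, y))
% Interpret the symbols over the natural numbers:
\[
  [0] = 0, \qquad [s](x) = x + 1, \qquad [\mathrm{add}](x, y) = 2x + y + 1 .
\]
% Both rules are strictly decreasing for all x, y >= 0:
\[
  [\mathrm{add}(0, y)] = y + 1 > y ,
  \qquad
  [\mathrm{add}(s(x), y)] = 2(x + 1) + y + 1 = 2x + y + 3
    > 2x + y + 2 = [s(\mathrm{add}(x, y))] ,
\]
% hence every rewrite step strictly decreases the interpreted value,
% so the system terminates.
```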

    The Impact of Systematic Edits in History Slicing

    While extracting a subset of a commit history, specifying the necessary portion is a time-consuming task for developers. Several commit-based history slicing techniques have been proposed to identify dependencies between commits and to extract a related set of commits using a specific commit as a slicing criterion. However, the resulting subset of commits becomes large if it contains commits for systematic edits whose changes do not depend on each other. We empirically investigated the impact of systematic edits on history slicing. In this study, commits in which systematic edits were detected are split per file so that unnecessary dependencies between commits are eliminated. In several histories of open source systems, the size of history slices was reduced by 13.3-57.2% on average after splitting the commits for systematic edits.
    Comment: 5 pages, MSR 201
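    The splitting step above can be pictured very simply: a systematic-edit commit that touches several files with mutually independent changes is replaced by one synthetic commit per file, so a slice whose criterion lies in one file no longer drags in the others. The toy sketch below illustrates only this bookkeeping; the commit representation and derived commit ids are assumptions for illustration, not the paper's tooling.

```python
# Toy illustration of splitting a systematic-edit commit per file.

def split_systematic_edit(commit):
    """Return one synthetic single-file commit per file touched by `commit`."""
    return [{"id": f"{commit['id']}#{i}", "files": [path]}
            for i, path in enumerate(commit["files"])]

# A systematic edit applied to three files whose changes are independent:
commit = {"id": "c42", "files": ["A.java", "B.java", "C.java"]}
for c in split_systematic_edit(commit):
    print(c)   # a slice whose criterion is in A.java now only needs c42#0
```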

    Termination Proofs in the Dependency Pair Framework May Induce Multiple Recursive Derivational Complexity

    We study the derivational complexity of rewrite systems whose termination is provable in the dependency pair framework using the processors for reduction pairs, dependency graphs, or the subterm criterion. We show that the derivational complexity of such systems is bounded by a multiple recursive function, provided the derivational complexity induced by the employed base techniques is at most multiple recursive. Moreover, we show that this upper bound is tight.
    Comment: 22 pages, extended conference version