    An empirical study of computation equivalence as determined by decomposition slice equivalence.

    In order to further understand and assess decomposition slicing, we characterize and evaluate the size of the reductions obtained by computing equivalent decomposition slices, from the perspective of the comprehender, maintainer, tester and researcher. The analysis was performed on 68 C-language systems of 100 to 50,000 lines. All decomposition slices were computed and compared for simple equality. From this data, we were able to determine with 95% confidence that the true mean percentage of equivalent decomposition slices is between 50.0% and 60.3%, with a p-value < 0.005. This has a clear and significant impact for software testing, as any coverage method applied to one variable in an equivalence class will apply to all variables in the class; for software comprehension, as the number of items (variables) the understander must consider is substantially reduced; for software maintenance, as the number of computational relationships is reduced; and for the researcher, in attempting to ascertain the underlying cause of this phenomenon.
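
    As a rough illustration of the comparison performed in the study, the sketch below groups variables whose decomposition slices are identical (simple set equality) into equivalence classes and reports the percentage of variables that share a slice with at least one other variable. This is a minimal Python sketch assuming slices are already available as sets of statement numbers; the slice computation itself and the statistical analysis are not shown, and the example data is hypothetical.

```python
from collections import defaultdict

def equivalence_classes(slices):
    """Group variables whose decomposition slices are identical.

    `slices` maps a variable name to the set of statement numbers in its
    decomposition slice (how those slices are computed is out of scope here).
    """
    classes = defaultdict(list)
    for var, stmts in slices.items():
        classes[frozenset(stmts)].append(var)
    return list(classes.values())

def percent_equivalent(slices):
    """Percentage of variables whose slice equals that of at least one other variable."""
    classes = equivalence_classes(slices)
    shared = sum(len(c) for c in classes if len(c) > 1)
    return 100.0 * shared / len(slices) if slices else 0.0

# Hypothetical example: variables `total` and `sum` share the same slice.
example = {
    "total": {1, 2, 5, 7},
    "sum":   {1, 2, 5, 7},
    "i":     {1, 3},
}
print(equivalence_classes(example))   # [['total', 'sum'], ['i']]
print(percent_equivalent(example))    # 66.66...
```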

    Stop-list slicing.

    Traditional program slicing requires two parameters: a program location and a variable, or perhaps a set of variables, of interest. Stop-list slicing adds a third parameter to the slicing criterion: the variables that are not of interest. This third parameter is called the stop-list. When a variable in the stop-list is encountered, the data-flow dependence analysis of slicing is terminated for that variable. Stop-list slicing thus further focuses on the computation of interest, while ignoring computations known or determined to be uninteresting, and has the potential to reduce slice size when compared to traditional forms of slicing. In order to assess the size of the reduction obtained via stop-list slicing, the paper reports the results of three empirical evaluations: a large-scale empirical study of the maximum slice-size reduction that can be achieved when all program variables are on the stop-list; a study of a real program, to determine the reductions that could be obtained in a typical application; and qualitative case-based studies to illustrate stop-list slicing in the small. The large-scale study concerned a suite of 42 programs of approximately 800 KLoC in total, over which more than 600K slices were computed. Using the maximal stop-list reduced the size of the computed slices by about one third on average; the typical program showed a slice-size reduction of about one quarter. The case-based studies indicate that the comprehension effects are worth further consideration.
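
    The sketch below illustrates the stop-list as a third slicing parameter on a toy data-dependence graph: the backward traversal simply does not follow dependences carried by a stop-list variable. This is a minimal Python sketch under the assumption that dependences are given explicitly; it is not the slicer used in the paper, and the example program is hypothetical.

```python
def stop_list_slice(deps, criterion, stop_list):
    """Backward slice over a data-dependence graph with a stop-list.

    `deps` maps a statement id to a list of (predecessor_stmt, variable)
    pairs, meaning the statement depends on `variable` defined at
    `predecessor_stmt`.  Dependences carried by a stop-list variable are
    not followed, mirroring the idea of terminating the data-flow
    analysis for those variables.
    """
    stop = set(stop_list)
    slice_, work = {criterion}, [criterion]
    while work:
        stmt = work.pop()
        for pred, var in deps.get(stmt, []):
            if var in stop:
                continue                 # uninteresting variable: stop here
            if pred not in slice_:
                slice_.add(pred)
                work.append(pred)
    return slice_

# Hypothetical 5-statement program: statement 5 uses x (from 3) and y (from 4);
# putting y on the stop-list removes statements 2 and 4 from the slice.
deps = {5: [(3, "x"), (4, "y")], 3: [(1, "a")], 4: [(2, "b")]}
print(sorted(stop_list_slice(deps, 5, stop_list=["y"])))   # [1, 3, 5]
print(sorted(stop_list_slice(deps, 5, stop_list=[])))      # [1, 2, 3, 4, 5]
```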

    Reducing regression test size by exclusion.

    Operational software is constantly evolving. Regression testing is used to identify the unintended consequences of evolutionary changes. As most changes affect only a small proportion of the system, the challenge is to ensure that the regression test set is both safe (all relevant tests are used) and inclusive (only relevant tests are used). Previous approaches to reducing test sets struggle to find safe and inclusive tests by looking only at the changed code. We use decomposition program slicing to safely reduce the size of regression test sets by identifying those parts of a system that could not have been affected by a change; this information then directs the selection of regression tests by eliminating tests that are not relevant to the change. The technique properly accounts for additions and deletions of code. We extend and use Rothermel and Harrold's framework for measuring the safety of regression test sets and introduce new safety and precision measures that do not require a priori knowledge of the exact number of modification-revealing tests. We then analytically evaluate and compare our techniques for producing reduced regression test sets.
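
    A hedged sketch of selection by exclusion: given the set of components that could have been affected by a change (in the paper this comes from decomposition slicing, which is not reproduced here), tests that exercise none of those components are excluded. The coverage map and component names below are hypothetical.

```python
def select_regression_tests(coverage, affected):
    """Keep only tests that exercise at least one potentially affected component.

    `coverage` maps a test name to the set of components (e.g., slices or
    procedures) it exercises; `affected` is the set of components that could
    have been affected by the change.  Tests touching none of them are excluded.
    """
    affected = set(affected)
    return [t for t, covered in coverage.items() if covered & affected]

# Hypothetical coverage data: only t2 exercises the changed computation.
coverage = {
    "t1": {"parse", "report"},
    "t2": {"compute"},
    "t3": {"report"},
}
print(select_regression_tests(coverage, affected={"compute"}))  # ['t2']
```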

    Pattern backtracking algorithm for the workflow satisfiability problem with user-independent constraints

    The workflow satisfiability problem (WSP) asks whether there exists an assignment of authorised users to the steps in a workflow specification, subject to certain constraints on the assignment. (Such an assignment is called valid.) The problem is NP-hard even when restricted to the large class of user-independent constraints. Since the number of steps k is relatively small in practice, it is natural to consider a parametrisation of the WSP by k. We propose a new fixed-parameter algorithm to solve the WSP with user-independent constraints. The assignments in our method are partitioned into equivalence classes such that the number of classes is exponential in k only. We show that one can decide, in polynomial time, whether there is a valid assignment in a given equivalence class. By exploiting this property, our algorithm reduces the search space to the space of equivalence classes, which it browses within a backtracking framework, hence emerging as an efficient solution method that is relatively simple to implement and generalise. We empirically evaluate our algorithm against the state-of-the-art methods and show that it clearly wins the competition across the whole range of our test problems and significantly extends the domain of practically solvable instances of the WSP.
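
    A rough Python sketch of the pattern idea: assignments are grouped by the partition of the steps they induce (so the search ranges over partitions, whose number depends on k only), user-independent constraints are checked on the partition, and a user assignment is then sought for the surviving pattern. For simplicity the per-pattern check here is a small backtracking search rather than the polynomial-time procedure described in the abstract, and the instance data is hypothetical.

```python
def partitions(items):
    """Generate all set partitions of `items` (their number is exponential in k only)."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into an existing block ...
        for i, block in enumerate(part):
            yield part[:i] + [[first] + block] + part[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + part

def assign_users(blocks, authorised, used=frozenset()):
    """Try to give each block a distinct user authorised for all of its steps."""
    if not blocks:
        return {}
    block, rest = blocks[0], blocks[1:]
    candidates = set.intersection(*(authorised[s] for s in block)) - used
    for user in candidates:
        sub = assign_users(rest, authorised, used | {user})
        if sub is not None:
            sub[tuple(block)] = user
            return sub
    return None

def wsp(steps, authorised, constraints):
    """Pattern-based search: browse equivalence classes (partitions) of assignments.

    `constraints` are user-independent predicates over a partition, e.g. a
    separation-of-duty constraint "s1 and s2 must get different users".
    """
    for part in partitions(steps):
        if all(c(part) for c in constraints):
            assignment = assign_users(part, authorised)
            if assignment is not None:
                return assignment
    return None

# Hypothetical instance: 3 steps, 2 users, one separation-of-duty constraint.
authorised = {"s1": {"alice", "bob"}, "s2": {"alice"}, "s3": {"bob"}}
sod = lambda part: not any({"s1", "s2"} <= set(b) for b in part)  # s1 != s2
print(wsp(["s1", "s2", "s3"], authorised, [sod]))
```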

    Answer Set Programming Modulo `Space-Time'

    We present ASP Modulo `Space-Time', a declarative representational and computational framework to perform commonsense reasoning about regions with both spatial and temporal components. Supported are capabilities for mixed qualitative-quantitative reasoning, consistency checking, and inferring compositions of space-time relations; these capabilities combine and synergise for applications in a range of AI application areas where the processing and interpretation of spatio-temporal data is crucial. The framework and resulting system are the only general KR-based method for declaratively reasoning about the dynamics of `space-time' regions as first-class objects. We present an empirical evaluation (with scalability and robustness results), and include diverse application examples involving interpretation and control tasks.
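
    The paper's encoding is in ASP; purely as an illustration of composing qualitative relations and checking consistency, the plain-Python sketch below refines a tiny network of temporal precedence relations against a hand-written composition-table fragment. The relation set and table are illustrative assumptions, not the system's space-time calculus.

```python
# Toy fragment of qualitative precedence relations (before/after/equal) and
# their composition table; unknown pairs default to "anything is possible".
ALL = {"before", "after", "equal"}
COMPOSE = {
    ("before", "before"): {"before"}, ("before", "equal"): {"before"},
    ("equal", "before"):  {"before"}, ("equal", "equal"):  {"equal"},
    ("after", "after"):   {"after"},  ("after", "equal"):  {"after"},
    ("equal", "after"):   {"after"},
}

def compose(rs, ss):
    """Compose two disjunctive relations via the table."""
    return set().union(*(COMPOSE.get((r, s), ALL) for r in rs for s in ss))

def consistent(net, triples):
    """Refine each (a, c) relation by composing (a, b) and (b, c); an empty set means inconsistency."""
    for a, b, c in triples:
        net[(a, c)] &= compose(net[(a, b)], net[(b, c)])
        if not net[(a, c)]:
            return False
    return True

# x before y, y before z, yet x after z: the composition rules detect the conflict.
net = {("x", "y"): {"before"}, ("y", "z"): {"before"}, ("x", "z"): {"after"}}
print(consistent(net, [("x", "y", "z")]))   # False
```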

    Introducing pattern graph rewriting in novel spatial aggregation procedures for a class of traffic assignment models

    In this study, two novel spatial aggregation methods compatible with a class of traffic assignment models are presented. Both methods are formalised using a category-theoretical approach. While this type of formalisation is new to the field of transport, it is well known in other fields that require tools for reasoning about complex structures; the method presented stems from a method originally developed to deal with quantum physical processes. The first benefit of adopting this formalisation technique is that it provides an intuitive graphical representation while having a rigorous mathematical underpinning. Secondly, it bears close resemblance to regular expressions and functional programming techniques, giving insight into how to potentially construct solvers (i.e. algorithms). The aggregation methods proposed in this paper are compatible with traffic assignment procedures utilising a path travel time function consisting of two components, namely (i) a flow-invariant component representing free-flow travel time, and (ii) a flow-dependent component representing queuing delays. By exploiting the fact that, in practice, most large-scale networks have only a small portion of the network exhibiting queuing delays, this method aims to decompose the network into a constant free-flow part, computed once, and a much smaller demand-varying delay part that requires recomputation across demand scenarios. It is demonstrated that under certain conditions this procedure is lossless. On top of the decomposition method, a path-set reduction method is proposed, which reduces the path set to the minimal path set and further decreases computational cost. A large-scale case study is presented to demonstrate that the proposed methods can reduce computation times to less than 5% of the original without loss of accuracy.
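
    A minimal sketch of the decomposition the method exploits: the flow-invariant free-flow component of each path's travel time is computed once, and only the queuing-delay component (non-zero on the small congested part of the network) is recomputed per demand scenario. Link and path names below are hypothetical.

```python
def precompute_free_flow(paths, free_flow):
    """Flow-invariant part: computed once and reused across all demand scenarios."""
    return {p: sum(free_flow[l] for l in links) for p, links in paths.items()}

def scenario_travel_times(paths, base, delays):
    """Add the scenario-specific queuing delays (non-zero only on congested links)."""
    return {p: base[p] + sum(delays.get(l, 0.0) for l in links)
            for p, links in paths.items()}

# Hypothetical two-path network in which only link "c" ever queues.
paths = {"P1": ["a", "b"], "P2": ["a", "c"]}
free_flow = {"a": 5.0, "b": 7.0, "c": 4.0}
base = precompute_free_flow(paths, free_flow)           # done once
print(scenario_travel_times(paths, base, delays={"c": 3.0}))  # {'P1': 12.0, 'P2': 12.0}
```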