
    Unions of slices are not slices

    Many approaches to slicing rely upon the 'fact' that the union of two static slices is a valid slice. It is known that static slices constructed using program dependence graph algorithms are valid slices (Reps and Yang, 1988). However, this is not true for other forms of slicing. For example, it has been established that the union of two dynamic slices is not necessarily a valid dynamic slice (Hall, 1995). In this paper this result is extended to show that the union of two static slices is not necessarily a valid slice, based on Weiser's definition of a (static) slice. We also analyse the properties that make the union of different forms of slices a valid slice.
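    To make the objects being discussed concrete, the sketch below computes two naive backward slices over a toy statement-level representation (data dependences only; Weiser's construction also follows control dependences) and then takes their union. This is an assumed illustration of the operations involved, not the paper's counterexample; in this straight-line example the union happens to be a valid slice.

```python
# A minimal sketch, assuming a toy representation of a straight-line program:
# each statement records the variables it defines and uses.
statements = [
    {"line": 1, "defs": {"a"}, "uses": set()},   # a = input()
    {"line": 2, "defs": {"b"}, "uses": set()},   # b = input()
    {"line": 3, "defs": {"x"}, "uses": {"a"}},   # x = a + 1
    {"line": 4, "defs": {"y"}, "uses": {"b"}},   # y = b * 2
    {"line": 5, "defs": set(), "uses": {"x"}},   # print(x)
    {"line": 6, "defs": set(), "uses": {"y"}},   # print(y)
]

def backward_slice(criterion_vars):
    """Naive backward slice on the final values of criterion_vars,
    following data dependences only."""
    relevant = set(criterion_vars)
    in_slice = set()
    for stmt in reversed(statements):
        if stmt["defs"] & relevant:
            in_slice.add(stmt["line"])
            relevant = (relevant - stmt["defs"]) | stmt["uses"]
    return in_slice

slice_x = backward_slice({"x"})   # {1, 3}
slice_y = backward_slice({"y"})   # {2, 4}
print(slice_x | slice_y)          # {1, 2, 3, 4} -- the union the paper studies
```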

    An Introduction to Slice-Based Cohesion and Coupling Metrics

    This report provides an overview of slice-based software metrics. It brings together information about the development of the metrics from Weiser’s original idea that program slices may be used in the measurement of program complexity, with alternative slice-based measures proposed by other researchers. In particular, it details two aspects of slice-based metric calculation not covered elsewhere in the literature: output variables and worked examples of the calculations. First, output variables are explained, their use explored and standard reference terms and usage proposed. Calculating slice-based metrics requires a clear understanding of ‘output variables’ because they form the basis for extracting the program slices on which the calculations depend. This report includes a survey of the variation in the definition of output variables used by different research groups and suggests standard terms of reference for these variables. Our study identifies four elements which are combined in the definition of output variables. These are the function return value, modified global variables, modified reference parameters and variables printed or otherwise output by the module. Second, slice-based metric calculations are explained with the aid of worked examples, to assist newcomers to the field. Step-by-step calculations of slice-based cohesion and coupling metrics based on the vertices output by the static analysis tool CodeSurfer (R) are presented and compared with line-based calculations.
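    As a rough indication of the kind of calculation the report walks through, the sketch below computes three commonly used slice-based cohesion measures (tightness, coverage and overlap, in the style usually attributed to Ott and Thuss) from the slices taken on each output variable. The slices, module length and metric selection here are illustrative assumptions, not values from the report.

```python
# A minimal sketch, assuming each slice is given as a set of statement (or
# vertex) identifiers, one slice per output variable of the module.
def slice_metrics(slices, module_length):
    intersection = set.intersection(*slices)     # statements common to every slice
    n = len(slices)
    return {
        "Tightness": len(intersection) / module_length,
        "Coverage": sum(len(s) for s in slices) / (n * module_length),
        "Overlap": sum(len(intersection) / len(s) for s in slices) / n,
    }

# Hypothetical module of 10 statements with two output variables.
slice_a = {1, 2, 3, 4, 5, 8}
slice_b = {1, 2, 6, 7, 8, 9, 10}
print(slice_metrics([slice_a, slice_b], module_length=10))
# {'Tightness': 0.3, 'Coverage': 0.65, 'Overlap': ~0.46}
```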

    ConSIT: A conditioned program slicer

    Conditioned slicing is a powerful generalisation of static and dynamic slicing which has applications to many problems in software maintenance and evolution, including reuse, reengineering and program comprehension. However, there has been relatively little work on the implementation of conditioned slicing. Algorithms for implementing conditioned slicing necessarily involve reasoning about the values of program predicates in certain sets of states derived from the conditioned slicing criterion, making implementation particularly demanding. The paper introduces ConSIT, a conditioned slicing system which is based upon conventional static slicing, symbolic execution and theorem proving. ConSIT is the first fully automated implementation of conditioned slicing. An implementation of ConSIT is available for experimentation at http://www.mcs.gold.ac.uk/~mas01sd/consit.htm.
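    The sketch below illustrates the underlying idea, not ConSIT's implementation: under a condition on the initial state, branches whose guards cannot be satisfied are pruned before a conventional static slice is taken. Brute-force enumeration over a small domain stands in here for the symbolic execution and theorem proving the paper describes; the guards and condition are hypothetical.

```python
# A minimal sketch of conditioned-slicing-style feasibility pruning.
def satisfiable(pred, condition, domain=range(-100, 101)):
    """Is there an input x in the domain with condition(x) and pred(x)?"""
    return any(condition(x) and pred(x) for x in domain)

# Hypothetical branch guards in the program being sliced.
guards = {
    "then_branch": lambda x: x > 0,
    "else_branch": lambda x: x <= 0,
}

# Conditioned slicing criterion: slice under the assumption x > 10.
condition = lambda x: x > 10

feasible = {name for name, g in guards.items() if satisfiable(g, condition)}
print(feasible)   # only 'then_branch' survives; the else branch can be
                  # discarded before conventional static slicing
```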

    Methods and metrics for selective regression testing

    In corrective software maintenance, selective regression testing includes test selection from previously-run test suites and test coverage identification. We propose three reduction-based regression test selection methods and two McCabe-based coverage identification metrics (T. McCabe, 1976). We empirically compare these methods with three other reduction- and precision-oriented methods, using 60 test problems. The comparison shows that our proposed methods yield favourable results.
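    To give a flavour of reduction-based selection in general (this is not one of the paper's proposed methods or its McCabe-based metrics), the sketch below greedily picks a small set of previously-run tests that together cover the program segments affected by a change. The coverage data and changed-segment set are hypothetical.

```python
# A minimal sketch of greedy, coverage-driven regression test selection.
coverage = {                       # test -> program segments it exercises
    "t1": {"s1", "s2", "s3"},
    "t2": {"s2", "s4"},
    "t3": {"s4", "s5"},
    "t4": {"s1", "s5"},
}
changed = {"s2", "s4", "s5"}       # segments affected by the maintenance change

selected, uncovered = [], set(changed)
while uncovered:
    best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
    if not coverage[best] & uncovered:
        break                      # some changed segment is not covered by any test
    selected.append(best)
    uncovered -= coverage[best]

print(selected)                    # ['t2', 't3'] covers all changed segments
```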

    VADA: A transformation-based system for variable dependence analysis

    Variable dependence is an analysis problem in which the aim is to determine the set of input variables that can affect the values stored in a chosen set of intermediate program variables. This paper shows the relationship between the variable dependence analysis problem and slicing and describes VADA, a system that implements variable dependence analysis. In order to cover the full range of C constructs and features, a transformation to a core language is employed. Thus, the full analysis is required only for the core language, which is relatively simple. This reduces the overall effort required for dependency analysis. The transformations used need preserve only the variable dependence relation, and therefore need not be meaning preserving in the traditional sense. The paper describes how this relaxed meaning further simplifies the transformation phase of the approach. Finally, the results of an empirical study into the performance of the system are presented.
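    The question VADA answers can be stated in miniature as follows: given which variables each program variable is computed from, which inputs can reach a chosen variable? The sketch below takes a transitive closure over a toy def/use relation; VADA's transformation-based analysis of full C is of course far more involved, and the relation here is a made-up example.

```python
# A minimal sketch of the variable dependence question itself.
depends_on = {            # variable -> variables used to compute it
    "a": set(),           # a is an input
    "b": set(),           # b is an input
    "t": {"a"},           # t = f(a)
    "u": {"t", "b"},      # u = g(t, b)
    "v": {"b"},           # v = h(b)
}

def affecting_inputs(var):
    """Inputs (variables with no dependences of their own) that can reach var."""
    seen, stack = set(), [var]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(depends_on.get(v, set()))
    return {v for v in seen if not depends_on.get(v)}

print(affecting_inputs("u"))   # {'a', 'b'}
```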

    Code extraction algorithms which unify slicing and concept assignment

    One approach to reverse engineering is to partially automate subcomponent extraction, improvement and subsequent recombination. Two previously proposed automated techniques for supporting this activity are slicing and concept assignment. However, neither is directly applicable in isolation; slicing criteria (sets of program variables) are simply too low level in many cases, while concept assignment typically fails to produce executable subcomponents. This paper introduces a unification of slicing and concept assignment which exploits their combined advantages, while overcoming their individual weaknesses. Our 'concept slices' are extracted using high level criteria, while producing executable subprograms. The paper introduces three ways of combining slicing and concept assignment, and algorithms for each. The application of the concept slicing algorithms is illustrated with a case study from a large financial organisation.
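    One plausible way the two techniques could be combined (this is an assumed illustration, not one of the paper's three algorithms) is to let the statements assigned to a concept supply the variables that become a conventional slicing criterion, so that an ordinary slicer can then produce an executable subprogram for that concept. The labelled statements below are hypothetical.

```python
# A minimal sketch of lifting a concept assignment to a slicing criterion.
statements = [
    {"line": 10, "concept": "interest-calc", "vars": {"rate", "balance"}},
    {"line": 11, "concept": "interest-calc", "vars": {"interest"}},
    {"line": 20, "concept": "logging",       "vars": {"msg"}},
]

def criterion_for(concept):
    """High-level criterion: the last line of the concept and every
    variable mentioned by its statements."""
    lines = [s for s in statements if s["concept"] == concept]
    variables = set().union(*(s["vars"] for s in lines))
    return max(s["line"] for s in lines), variables

print(criterion_for("interest-calc"))   # (11, {'rate', 'balance', 'interest'})
# A standard static slicer applied to this criterion would then yield an
# executable subprogram for the concept.
```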

    Node coarsening calculi for program slicing

    Several approaches to reverse and re-engineering are based upon program slicing. Unfortunately, for large systems, such as those which typically form the subject of reverse engineering activities, the space and time requirements of slicing can be a barrier to successful application. Faced with this problem, several authors have found it helpful to merge control flow graph (CFG) nodes, thereby improving the space and time requirements of standard slicing algorithms. The node-merging process essentially creates a 'coarser' version of the original CFG. The paper introduces a theory for defining control flow graph node coarsening calculi. The theory formalizes properties of interest, when coarsening is used as a precursor to program slicing. The theory is illustrated with a case study of a coarsening calculus, which is proved to have the desired properties of sharpness and consistency.
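    The sketch below shows one very simple coarsening rule (merge a node with its sole successor when that successor has no other predecessor, i.e. basic-block-style merging), purely to illustrate what "creating a coarser CFG" means; it is not the calculus the paper formalizes, and the CFG is hypothetical. It also records which original nodes each coarse node represents, since a real calculus must track this to reason about sharpness and consistency.

```python
# A minimal sketch of CFG node coarsening by merging linear chains.
succ = {1: [2], 2: [3], 3: [4, 5], 4: [6], 5: [6], 6: []}

def coarsen(succ):
    pred = {n: set() for n in succ}
    for n, ss in succ.items():
        for s in ss:
            pred[s].add(n)
    merged = {n: list(ss) for n, ss in succ.items()}
    members = {n: {n} for n in succ}    # original nodes each coarse node stands for
    changed = True
    while changed:
        changed = False
        for n in list(merged):
            ss = merged.get(n)
            if ss and len(ss) == 1 and ss[0] != n and len(pred[ss[0]]) == 1:
                s = ss[0]
                merged[n] = merged.pop(s)       # n absorbs s
                members[n] |= members.pop(s)
                for t in merged[n]:
                    pred[t] = {n if p == s else p for p in pred[t]}
                changed = True
    return merged, members

print(coarsen(succ))
# ({1: [4, 5], 4: [6], 5: [6], 6: []}, {1: {1, 2, 3}, 4: {4}, 5: {5}, 6: {6}})
```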

    The Impact of Systematic Edits in History Slicing

    When extracting a subset of a commit history, specifying the necessary portion is a time-consuming task for developers. Several commit-based history slicing techniques have been proposed to identify dependencies between commits and to extract a related set of commits using a specific commit as a slicing criterion. However, the resulting subset of commits becomes large if the history contains commits for systematic edits whose changes do not depend on each other. We empirically investigated the impact of systematic edits on history slicing. In this study, commits in which systematic edits were detected are split by file so that unnecessary dependencies between commits are eliminated. In several histories of open source systems, the size of history slices was reduced by 13.3-57.2% on average after splitting the commits for systematic edits.
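    The splitting step described above can be pictured as follows (this sketch assumes the systematic edits have already been detected somehow and does not reflect the study's tooling): a commit whose per-file changes are independent is split into one pseudo-commit per file, so later commits depend only on the files they actually touch. The commit data here is made up for illustration.

```python
# A minimal sketch of splitting systematic-edit commits by file.
commits = [
    {"id": "c1", "files": {"A.java", "B.java", "C.java"}, "systematic": True},
    {"id": "c2", "files": {"A.java"}, "systematic": False},
]

def split_systematic(commits):
    out = []
    for c in commits:
        if c["systematic"] and len(c["files"]) > 1:
            out += [{"id": f"{c['id']}/{f}", "files": {f}, "systematic": False}
                    for f in sorted(c["files"])]
        else:
            out.append(c)
    return out

for c in split_systematic(commits):
    print(c["id"], sorted(c["files"]))
# A history slice on c2 now needs only c1/A.java rather than all of c1.
```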