
    A combined representation for the maintenance of C programs

    A programmer wishing to make a change to a piece of code must first gain a full understanding of the behaviours and functionality involved. This process of program comprehension is difficult and time consuming, and often hindered by the absence of useful program documentation. Where documentation is absent, static analysis techniques are often employed to gather programming level information in the form of data and control flow relationships, directly from the source code itself. Software maintenance environments are created by grouping together a number of different static analysis tools such as program slicers, call graph builders and data flow analysis tools, providing a maintainer with a selection of 'views' of the subject code. However, each analysis tool often requires its own intermediate program representation (IPR). For example, an environment comprising five tools may require five different IPRs, giving repetition of information and inefficient use of storage space. A solution to this problem is to develop a single combined representation which contains all the program relationships required to present a maintainer with each required code view. The research presented in this thesis describes the Combined C Graph (CCG), a dependence-based representation for C programs from which a maintainer is able to construct data and control dependence views, interprocedural control flow views, program slices and ripple analyses. The CCG extends earlier dependence-based program representations to handle language features such as expressions with embedded side effects and control flows, value-returning functions, pointer variables, pointer parameters, array variables and structure variables. Algorithms for the construction of the CCG are described, and the feasibility of the CCG is demonstrated by means of a C/Prolog based prototype implementation.
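
    The following is a minimal, hypothetical sketch (in Python, not the thesis's C/Prolog implementation) of the kind of dependence-based structure the CCG generalises: statements linked by data and control dependence edges, from which a backward slice can be read off by graph traversal. Node names and the example dependences are invented for illustration.

        # Hypothetical sketch of a dependence graph with a backward-slice traversal.
        # The CCG itself records far more (interprocedural flow, pointers, arrays, ...).
        from collections import defaultdict

        class DependenceGraph:
            def __init__(self):
                # deps[node] holds (predecessor, kind) pairs the node depends on
                self.deps = defaultdict(set)

            def add_dependence(self, node, depends_on, kind):
                # kind is "data" or "control"; both live in one combined graph
                self.deps[node].add((depends_on, kind))

            def backward_slice(self, criterion):
                """All statements the criterion node transitively depends on."""
                in_slice, worklist = set(), [criterion]
                while worklist:
                    n = worklist.pop()
                    if n in in_slice:
                        continue
                    in_slice.add(n)
                    worklist.extend(dep for dep, _ in self.deps[n])
                return in_slice

        # Tiny example: s3 uses a value defined at s1 and is controlled by the branch at s2.
        g = DependenceGraph()
        g.add_dependence("s3", "s1", "data")
        g.add_dependence("s3", "s2", "control")
        print(sorted(g.backward_slice("s3")))   # ['s1', 's2', 's3']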

    Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria

    This paper reports an experimental study investigating the effectiveness of two code-based test adequacy criteria for identifying sets of test cases that detect faults. The all-edges and all-DUs (modified all-uses) coverage criteria were applied to 130 faulty program versions derived from seven moderate size base programs by seeding realistic faults. We generated several thousand test sets for each faulty program and examined the relationship between fault detection and coverage. Within the limited domain of our experiments, test sets achieving coverage levels over 90% usually showed significantly better fault detection than randomly chosen test sets of the same size. In addition, significant improvements in the effectiveness of coverage-based tests usually occurred as coverage increased from 90% to 100%. However, the results also indicate that 100% code coverage alone is not a reliable indicator of the effectiveness of a test set. We also found that tests based respectively on control-flow and data-flow criteria are frequently complementary in their effectiveness.
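
    As a rough illustration of how a coverage level such as all-edges is computed for a test set (not the paper's tooling), the sketch below takes the control-flow edges each test exercised and reports the fraction of all edges covered; the edge labels and tests are made up.

        # Illustrative all-edges (branch) coverage measurement for a test set.
        def edge_coverage(test_runs, all_edges):
            covered = set()
            for executed in test_runs.values():
                covered |= executed & all_edges
            return len(covered) / len(all_edges)

        all_edges = {("entry", "s1"), ("s1", "s2"), ("s1", "s3"),
                     ("s2", "exit"), ("s3", "exit")}
        test_runs = {
            "t1": {("entry", "s1"), ("s1", "s2"), ("s2", "exit")},
            "t2": {("entry", "s1"), ("s1", "s3"), ("s3", "exit")},
        }
        print(f"{edge_coverage(test_runs, all_edges):.0%}")   # 100%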

    Compile-Time Analysis on Programs with Dynamic Pointer-Linked Data Structures

    This paper studies static analysis on programs that create and traverse dynamic pointer-linked data structures. It introduces a new type of auxiliary structure, called link graphs, to depict the alias information of pointers and the connection relationships of dynamic pointer-linked data structures. Link graphs can be used by compilers to detect side effects, to identify patterns of traversal, and to gather the DEF-USE information of dynamic pointer-linked data structures. The results of this compile-time analysis are essential for parallelization and for optimizations on communication and synchronization overheads. Algorithms that perform compile-time analysis of side effects and DEF-USE information using link graphs are proposed.
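
    A very small sketch of a link-graph-like structure (not the paper's algorithm) is given below: pointer variables map to abstract heap nodes, and labelled edges record which field of a node may point to which other node, from which simple may-alias queries can be answered. Statement forms and names are simplified and hypothetical.

        # Hypothetical link-graph-style bookkeeping for pointer-linked structures.
        from collections import defaultdict

        class LinkGraph:
            def __init__(self):
                self.points_to = defaultdict(set)   # pointer variable -> abstract heap nodes
                self.links = defaultdict(set)       # (heap node, field) -> abstract heap nodes

            def assign(self, p, node):              # models  p = malloc(...)  or  p = q
                self.points_to[p].add(node)

            def store_field(self, p, field, q):     # models  p->field = q
                for src in self.points_to[p]:
                    self.links[(src, field)] |= self.points_to[q]

            def may_alias(self, p, q):              # p, q may alias if they share a target node
                return bool(self.points_to[p] & self.points_to[q])

        # Example: head->next = second; a traversal pointer reaching the same node aliases head.
        g = LinkGraph()
        g.assign("head", "n1"); g.assign("second", "n2")
        g.store_field("head", "next", "second")
        g.assign("walk", "n1")
        print(g.may_alias("head", "walk"))   # True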

    OMEN: A strategy for testing object-oriented software

    Structural testing techniques for the selective revalidation of software

    The research in this thesis addresses the subject of regression testing. Emphasis is placed on developing a technique for selective revalidation which can be used during software maintenance to analyse and retest only those parts of the program affected by changes. In response to proposed program modifications, the technique assists the maintenance programmer in assessing the extent of the program alterations, in selecting a representative set of test cases to rerun, and in identifying any test cases in the test suite which are no longer required because of the program changes. The proposed technique involves the application of code analysis techniques and operations research. Code analysis techniques are described which derive information about the structure of a program and are used to determine the impact of any modifications on the existing program code. Methods adopted from operations research are then used to select an optimal set of regression tests and to identify any redundant test cases. These methods enable software, which has been validated using a variety of structural testing techniques, to be retested. The development of a prototype tool suite, which can be used to realise the technique for selective revalidation, is described. In particular, the interface between the prototype and existing regression testing tools is discussed. Moreover, the effectiveness of the technique is demonstrated by means of a case study and the results are compared with traditional regression testing strategies and other selective revalidation techniques described in this thesis.
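
    The selection step can be pictured as a covering problem. The sketch below (a greedy heuristic, not the operations research formulation used in the thesis) picks tests until every program element affected by the change is covered, and flags tests covering nothing affected as candidates for removal; the test names and coverage data are hypothetical.

        # Greedy sketch of regression test selection over hypothetical coverage data.
        def select_regression_tests(coverage, affected):
            selected, remaining = [], set(affected)
            while remaining:
                best = max(coverage, key=lambda t: len(coverage[t] & remaining))
                if not coverage[best] & remaining:
                    break                           # no test covers the remaining affected elements
                selected.append(best)
                remaining -= coverage[best]
            redundant = [t for t in coverage if not coverage[t] & set(affected)]
            return selected, redundant

        coverage = {"t1": {"f", "g"}, "t2": {"g"}, "t3": {"h"}, "t4": {"k"}}
        affected = {"f", "g", "h"}                  # elements touched by the proposed change
        print(select_regression_tests(coverage, affected))   # (['t1', 't3'], ['t4'])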

    Chopping: A generalization of slicing

    A new method for extracting partial representations of a program is described. Given two sets of variable instances, source and sink, a graph is constructed showing the statements that cause definitions of source to affect uses of sink. This criterion can express a wider range of queries than the various forms of slice criteria, which it subsumes as special cases. On the standard slice criterion (backward slicing from a use or definition) it produces better results than existing algorithms. The method is modular. By treating all statements abstractly as def-use relations, it can present a procedure call as a simple statement, so that it appears in the graph as a single node whose role may be understood without looking beyond the context of the call.
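
    In essence, a chop for a source/sink pair keeps the statements lying on some def-use path from the source to the sink. The sketch below shows that reachability view (a simplification of the paper's method, which additionally treats procedure calls modularly); the graph and node names are invented.

        # Chop as the intersection of forward reachability from `source`
        # and backward reachability from `sink` over def-use edges.
        from collections import defaultdict

        def reachable(succ, start):
            seen, work = set(), list(start)
            while work:
                n = work.pop()
                if n not in seen:
                    seen.add(n)
                    work.extend(succ[n])
            return seen

        def chop(def_use_edges, source, sink):
            forward, backward = defaultdict(set), defaultdict(set)
            for a, b in def_use_edges:
                forward[a].add(b)
                backward[b].add(a)
            return reachable(forward, source) & reachable(backward, sink)

        edges = [("s1", "s2"), ("s2", "s4"), ("s3", "s4"), ("s4", "s5")]
        print(sorted(chop(edges, {"s1"}, {"s4"})))   # ['s1', 's2', 's4']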

    Parameterized Object Sensitivity for Points-to Analysis for Java

    The goal of points-to analysis for Java is to determine the set of objects pointed to by a reference variable or a reference object field. We present object sensitivity, a new form of context sensitivity for flow-insensitive points-to analysis for Java. The key idea of our approach is to analyze a method separately for each of the object names that represent runtime objects on which this method may be invoked. To ensure flexibility and practicality, we propose a parameterization framework that allows analysis designers to control the tradeoffs between cost and precision in the object-sensitive analysis.
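
    The sketch below caricatures the key idea under heavy simplification: points-to facts are kept per (method, receiver object name) pair rather than per method, so calls made on different receiver objects do not pollute one another. The program, object names, and data structures are invented and bear no relation to the authors' implementation.

        # Toy object-sensitive bookkeeping: facts are indexed by the receiver object name.
        from collections import defaultdict

        points_to = defaultdict(set)     # (method, receiver object, variable) -> object names

        def analyze_call(method, receiver_obj, arg_objects):
            ctx = (method, receiver_obj)
            points_to[ctx + ("this",)].add(receiver_obj)      # "this" names the receiver
            for param, objs in arg_objects.items():
                points_to[ctx + (param,)] |= objs

        # Box.set(v) analysed separately for receiver object names o1 and o2:
        analyze_call("Box.set", "o1", {"v": {"o3"}})
        analyze_call("Box.set", "o2", {"v": {"o4"}})
        print(points_to[("Box.set", "o1", "v")])   # {'o3'} -- kept apart from the o2 context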

    Impact analysis of database schema changes

    When database schemas require change, it is typical to predict the effects of the change, first to gauge if the change is worth the expense, and second, to determine what must be reconciled once the change has taken place. Current techniques to predict the effects of schema changes upon applications that use the database can be expensive and error-prone, making the change process expensive and difficult. Our thesis is that an automated approach for predicting these effects, known as an impact analysis, can create a more informed schema change process, allowing stakeholders to obtain beneficial information at lower cost than current industrial practice. This is an interesting research problem because modern data-access practices make it difficult to create an automated analysis that can identify the dependencies between applications and the database schema. In this dissertation we describe a novel analysis that overcomes these difficulties. We present a novel analysis for extracting potential database queries from a program, called query analysis. This query analysis builds upon related work, satisfying the additional requirements that we identify for impact analysis. The impacts of a schema change can be predicted by analysing the results of query analysis, using a process we call impact calculation. We describe impact calculation in detail, and show how it can be practically and efficiently implemented. Due to the level of accuracy required by our query analysis, the analysis can become expensive, so we describe existing and novel approaches for maintaining an efficient and computationally tractable analysis. We describe a practical and efficient prototype implementation of our schema change impact analysis, called SUITE. We describe how SUITE was used to evaluate our thesis, using a historical case study of a large commercial software project. The results of this case study show that our impact analysis is feasible for large commercial software applications, and likely to be useful in real-world software development.
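
    At its simplest, impact calculation can be pictured as matching the schema elements used by each extracted query against the elements a proposed change would touch. The sketch below illustrates only that final matching step; the file locations, element names, and data shapes are invented, and SUITE's query analysis is far more involved.

        # Toy impact calculation: intersect the elements each query uses with the change set.
        def impacted_queries(extracted_queries, changed_elements):
            impacts = {}
            for location, used_elements in extracted_queries.items():
                hits = used_elements & changed_elements
                if hits:
                    impacts[location] = hits
            return impacts

        extracted_queries = {
            "OrderDao.java:42": {"orders.id", "orders.total"},
            "ReportJob.java:17": {"customers.name"},
        }
        changed_elements = {"orders.total"}     # e.g. a column about to be renamed
        print(impacted_queries(extracted_queries, changed_elements))
        # {'OrderDao.java:42': {'orders.total'}}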

    Extracting Reusable Functions by Program Slicing

    An alternative to developing reusable components from scratch is to recover them from existing systems. In this paper, we apply program slicing, introduced by Weiser, to the problem of extracting reusable functions from ill-structured programs. We extend the definition of a program slice to a transform slice, one that includes the statements which contribute directly or indirectly to transforming a set of input variables into a set of output variables. Unlike conventional program slicing, these statements include neither the statements necessary to get input data nor the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code and its partial specification, in terms of input and output data only. Using domain knowledge, we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation, we demonstrate how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of the candidate functions makes this task easier than looking for the last function instruction in the original source code. (Also cross-referenced as UMIACS-TR-96-13.)
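
    As a rough picture of the transform-slice idea (a drastic simplification of the paper's criterion), the sketch below walks backwards from the chosen output variables, keeping the statements that contribute to them while leaving out statements that merely acquire the input values; the statement table and variable names are hypothetical.

        # Toy transform slice over a flat statement list: (label, defined vars, used vars).
        def transform_slice(statements, inputs, outputs):
            relevant = set(outputs)
            kept = []
            for label, defs, uses in reversed(statements):
                if defs & relevant:
                    relevant |= uses
                    # drop pure input-acquisition statements (no uses, defines only inputs)
                    if uses or not defs <= set(inputs):
                        kept.append(label)
            return list(reversed(kept))

        stmts = [
            ("s1", {"x"}, set()),    # x = read();   input acquisition, excluded
            ("s2", {"y"}, {"x"}),    # y = x * 2;
            ("s3", {"z"}, {"y"}),    # z = y + 1;    z is the chosen output
            ("s4", {"w"}, set()),    # w = read();   unrelated
        ]
        print(transform_slice(stmts, inputs={"x"}, outputs={"z"}))   # ['s2', 's3']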