A combined representation for the maintenance of C programs
A programmer wishing to make a change to a piece of code must first gain a full understanding of the behaviours and functionality involved. This process of program comprehension is difficult and time consuming, and often hindered by the absence of useful program documentation. Where documentation is absent, static analysis techniques are often employed to gather programming-level information, in the form of data and control flow relationships, directly from the source code itself. Software maintenance environments are created by grouping together a number of different static analysis tools, such as program slicers, call graph builders and data flow analysis tools, providing a maintainer with a selection of 'views' of the subject code. However, each analysis tool often requires its own intermediate program representation (IPR). For example, an environment comprising five tools may require five different IPRs, giving repetition of information and inefficient use of storage space. A solution to this problem is to develop a single combined representation which contains all the program relationships required to present a maintainer with each required code view. The research presented in this thesis describes the Combined C Graph (CCG), a dependence-based representation for C programs from which a maintainer is able to construct data and control dependence views, interprocedural control flow views, program slices and ripple analyses. The CCG extends earlier dependence-based program representations, introducing language features such as expressions with embedded side effects and control flows, value-returning functions, pointer variables, pointer parameters, array variables and structure variables. Algorithms for the construction of the CCG are described, and the feasibility of the CCG is demonstrated by means of a C/Prolog-based prototype implementation.
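The central operation a dependence-based representation like the CCG supports is slicing: walking dependence edges backwards from a statement of interest. A minimal sketch of that traversal follows; the statement names and dependence edges are invented for illustration and are not drawn from the CCG itself.

```python
# Backward slicing over a toy dependence graph (hypothetical example).
from collections import deque

def backward_slice(deps, criterion):
    """deps maps each statement to the statements it depends on
    (data or control dependences); return every statement that may
    affect the slicing criterion."""
    seen = {criterion}
    work = deque([criterion])
    while work:
        node = work.popleft()
        for pred in deps.get(node, ()):
            if pred not in seen:
                seen.add(pred)
                work.append(pred)
    return seen

# s1: x = 1;  s2: y = 2;  s3: z = x + 1;  s4: print(z)
deps = {"s3": {"s1"}, "s4": {"s3"}}
print(sorted(backward_slice(deps, "s4")))  # ['s1', 's3', 's4']
```

Note how s2 falls outside the slice: nothing the criterion depends on, directly or transitively, involves it.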
Compile-Time Analysis on Programs with Dynamic Pointer-Linked Data Structures
This paper studies static analysis on programs that create and traverse dynamic pointer-linked data structures. It introduces a new type of auxiliary structure, called 'link graphs', to depict the alias information of pointers and the connection relationships of dynamic pointer-linked data structures. Link graphs can be used by compilers to detect side effects, to identify patterns of traversal, and to gather DEF-USE information for dynamic pointer-linked data structures. The results of this compile-time analysis are essential for parallelization and for optimizations of communication and synchronization overheads. Algorithms that perform compile-time analysis of side effects and DEF-USE information using link graphs are proposed.
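The kind of information such a structure records can be sketched very simply: a map from (node, field) pairs to possible heap targets, from which may-alias questions are answered by intersecting target sets. All node and field names below are invented; this is an illustrative toy, not the paper's link graph algorithm.

```python
# Toy "link graph": which heap nodes each pointer field may reach
# (hypothetical example).
link = {}  # (node, field) -> set of possible target nodes

def add_link(node, field, target):
    link.setdefault((node, field), set()).add(target)

def may_alias(p_targets, q_targets):
    # Two pointers may alias if their possible-target sets intersect.
    return bool(p_targets & q_targets)

add_link("list", "next", "n1")
add_link("n1", "next", "n2")
p = {"n1"}          # p = list.next
q = {"n1", "n2"}    # q traverses the list
print(may_alias(p, q))  # True
```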
Parameterized Object Sensitivity for Points-to Analysis for Java
The goal of points-to analysis for Java is to determine the set of objects pointed to by a reference variable or a reference object field. We present object sensitivity, a new form of context sensitivity for flow-insensitive points-to analysis for Java. The key idea of our approach is to analyze a method separately for each of the object names that represent runtime objects on which this method may be invoked. To ensure flexibility and practicality, we propose a parameterization framework that allows analysis designers to control the tradeoffs between cost and precision in the object-sensitive analysis.
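The naming scheme behind this idea can be sketched in a few lines: each method is analyzed once per abstract receiver object (here named by its allocation site), so the same local variable keeps separate points-to sets in different receiver contexts. All class names and allocation-site labels below are invented for illustration.

```python
# Hypothetical sketch of object-sensitive context naming.
points_to = {}  # (method, receiver allocation site) -> var -> target sites

def record(method, receiver_site, var, target_site):
    # One analysis copy of `method` per abstract receiver object.
    ctx = (method, receiver_site)
    points_to.setdefault(ctx, {}).setdefault(var, set()).add(target_site)

# Container.get() invoked on two distinct containers keeps its
# result variable's points-to sets separate, instead of merging them
# as a context-insensitive analysis would:
record("Container.get", "new@L10", "result", "new@L11")
record("Container.get", "new@L20", "result", "new@L21")
print(points_to[("Container.get", "new@L10")]["result"])  # {'new@L11'}
```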
Extracting Reusable Functions by Program Slicing
An alternative approach to developing reusable components from scratch is to recover them from existing systems. In this paper, we apply program slicing, introduced by Weiser, to the problem of extracting reusable functions from ill-structured programs. We extend the definition of program slice to a transform slice, one that includes statements which contribute directly or indirectly to transform a set of input variables into a set of output variables. Unlike conventional program slicing, these statements include neither the statements necessary to get input data nor the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code and its partial specification, only in terms of input and output data. Using domain knowledge, we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation, we demonstrate how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of candidate functions makes this task easier than looking for the last function instruction in the original source code. (Also cross-referenced as UMIACS-TR-96-13.)
Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria
This paper reports an experimental study investigating the effectiveness of two code-based test adequacy criteria for identifying sets of test cases that detect faults. The all-edges and all-DUs (modified all-uses) coverage criteria were applied to 130 faulty program versions derived from seven moderate-size base programs by seeding realistic faults. We generated several thousand test sets for each faulty program and examined the relationship between fault detection and coverage. Within the limited domain of our experiments, test sets achieving coverage levels over 90% usually showed significantly better fault detection than randomly chosen test sets of the same size. In addition, significant improvements in the effectiveness of coverage-based tests usually occurred as coverage increased from 90% to 100%. However, the results also indicate that 100% code coverage alone is not a reliable indicator of the effectiveness of a test set. We also found that tests based respectively on control-flow and data-flow criteria are frequently complementary in their effectiveness.
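An all-edges style adequacy measurement amounts to checking what fraction of control-flow-graph branches a test set exercises. A minimal sketch follows; the CFG edges and execution traces are invented for illustration and do not reproduce the paper's experimental setup.

```python
# Hypothetical sketch: fraction of required CFG edges covered by a
# set of execution traces (an all-edges style measurement).
def edge_coverage(required_edges, executed_traces):
    covered = set()
    for trace in executed_traces:
        # Consecutive basic blocks in a trace form the edges taken.
        covered |= set(zip(trace, trace[1:]))
    covered &= required_edges
    return len(covered) / len(required_edges)

edges = {("entry", "b1"), ("b1", "b2"), ("b1", "b3"),
         ("b2", "exit"), ("b3", "exit")}
traces = [["entry", "b1", "b2", "exit"]]  # one test, true branch only
print(edge_coverage(edges, traces))  # 0.6
```

Adding a second test that drives execution through b3 would raise the measure to 1.0, i.e. full all-edges coverage.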
Chopping: A generalization of slicing
A new method for extracting partial representations of a program is described. Given two sets of variable instances, source and sink, a graph is constructed showing the statements that cause definitions of source to affect uses of sink. This criterion can express a wider range of queries than the various forms of slice criteria, which it subsumes as special cases. On the standard slice criterion (backward slicing from a use or definition) it produces better results than existing algorithms. The method is modular. By treating all statements abstractly as def-use relations, it can present a procedure call as a simple statement, so that it appears in the graph as a single node whose role may be understood without looking beyond the context of the call.
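In reachability terms, a chop of this kind is the intersection of the statements forward-reachable from the source with those backward-reachable from the sink, i.e. everything lying on a path between them. A minimal sketch, with an invented def-use graph, assuming a plain reachability formulation rather than the paper's modular algorithm:

```python
# Hypothetical sketch: a chop as forward-reach(source) intersected
# with backward-reach(sink) in a def-use graph.
def reachable(edges, start, forward=True):
    succ = {}
    for a, b in edges:
        if forward:
            succ.setdefault(a, set()).add(b)
        else:
            succ.setdefault(b, set()).add(a)
    seen, work = {start}, [start]
    while work:
        n = work.pop()
        for m in succ.get(n, ()):
            if m not in seen:
                seen.add(m)
                work.append(m)
    return seen

def chop(edges, source, sink):
    return reachable(edges, source, True) & reachable(edges, sink, False)

edges = {("s1", "s2"), ("s2", "s4"), ("s3", "s4"),
         ("s4", "s5"), ("s5", "s6")}
print(sorted(chop(edges, "s1", "s5")))  # ['s1', 's2', 's4', 's5']
```

s3 affects the sink but is not reachable from the source, and s6 is downstream of the sink, so neither appears in the chop.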
Identifying reusable functions in code using specification driven techniques
The work described in this thesis addresses the field of software reuse. Software reuse is widely considered a way to increase productivity and improve the quality and reliability of new software systems. Identifying, extracting and reengineering software components which implement abstractions within existing systems is a promising, cost-effective way to create reusable assets. Such a process is referred to as reuse reengineering. A reference paradigm has been defined within the RE(^2) project which decomposes a reuse reengineering process into five sequential phases. In particular, the first phase of the reference paradigm, called the Candidature phase, is concerned with the analysis of source code for the identification of software components implementing abstractions, which are therefore candidates for reuse. Different candidature criteria exist for the identification of reuse-candidate software components. They can be classified into structural methods (based on structural properties of the software) and specification driven methods (that search for software components implementing a given specification). In this thesis a new specification driven candidature criterion for the identification and extraction of code fragments implementing functional abstractions is presented. The method is driven by a formal specification of the function to be isolated (given in terms of a precondition and a postcondition) and is based on the theoretical frameworks of program slicing and symbolic execution. Symbolic execution and theorem proving techniques are used to map the specification of the functional abstractions onto a slicing criterion. Once the slicing criterion has been identified, the slice is isolated using algorithms based on dependence graphs. The method has been specialised for programs written in the C language. Both symbolic execution and program slicing are performed by exploiting the Combined C Graph (CCG), a fine-grained dependence-based program representation that can be used for several software maintenance tasks.