
    An experimental examination of program maintainability as a function of structuredness

    The general ethos of producing structured programs has been, at least in theory, adopted throughout the software engineering community. By studying and measuring the structure of existing software we can estimate the benefits to be gained from changes in the structure, in terms of the external attributes (perceived behaviour) of the restructured software [13, 2, 3, 10, 6, 7]. In this paper we report the results of two controlled experiments measuring the improvement in the maintainability of differently structured code. These experiments build on the experience and insights gained through an earlier experiment [5]. We discuss a strategy for re-structuring based on an improved re-engineering factor [9] and present static measures of morphology (depth and width of module calls), coupling and cohesion, and module complexity for a range of programs. By plotting these measures and adopting target values (e.g. width of call < 5) we estimate the expected improvement in maintainability after re-engineering. We subsequently carry out the re-engineering, measure the re-structured code statically, and measure the actual maintainability experimentally. The results reveal that unstructured programs take longer to 'reveal their secrets'. An integral part of this work is the design and execution of controlled experiments, as well as the use of automated tools for the static analysis of code and the recording of the experimental data.
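    The morphology measures are described only abstractly above; as a rough illustration, the following minimal sketch computes the depth and width of a module call graph and checks the result against an illustrative target such as width of call < 5. The module names, graph, and threshold are all invented.

```python
# Hypothetical module call graph: caller -> callees. All names invented.
CALL_GRAPH = {
    "main": ["parse", "report"],
    "parse": ["read_line", "tokenize"],
    "report": ["format_page", "write_out"],
    "read_line": [], "tokenize": [], "format_page": [], "write_out": [],
}

def morphology(graph, root):
    """Return (depth, width): depth is the longest call chain from the
    root, width is the largest number of modules at any call level."""
    depth, width = 0, 0
    level, seen = [root], {root}
    while level:
        width = max(width, len(level))
        nxt = []
        for module in level:
            for callee in graph.get(module, []):
                if callee not in seen:      # guard against recursion
                    seen.add(callee)
                    nxt.append(callee)
        if nxt:
            depth += 1
        level = nxt
    return depth, width

depth, width = morphology(CALL_GRAPH, "main")
print(f"depth={depth}, width={width}")      # depth=2, width=4
if width >= 5:                              # illustrative target value
    print("width of call exceeds target: candidate for re-structuring")
```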

    Memoization Attacks and Copy Protection in Partitioned Applications

    Application source code protection is a major concern for software architects today. Secure platforms have been proposed that protect the secrecy of application algorithms and enforce copy protection assurances. Unfortunately, these capabilities incur a sizeable performance overhead. Partitioning an application into secure and insecure regions can help diminish these overheads but invalidates guarantees of code secrecy and copy protection. This work examines one of the problems of partitioning an application into public and private regions: the ability of an adversary to recreate those private regions. To our knowledge, it is the first to analyze this problem when considering application operation as a whole. Looking at the fundamentals of the issue, we analyze one of the simplest attacks possible, a "Memoization Attack." We implement an efficient Memoization Attack and discuss the techniques necessary to limit its storage and computation consumption. Experimentation reveals that certain classes of real-world applications are vulnerable to Memoization Attacks. To protect against such an attack, we propose a set of indicator tests that enable an application designer to identify susceptible application code regions.
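    A minimal sketch of the core idea behind a memoization attack, assuming the private region can be driven as a black box whose inputs and outputs are observable: record input/output pairs in a table, then replay the table in place of the private code. The private function and coverage figures are invented for illustration.

```python
# The private region stands in for code running on a secure platform;
# its implementation is opaque to the attacker, but its I/O is not.
def private_region(x):
    return (x * 31 + 7) % 1024

class MemoizationAttack:
    def __init__(self):
        self.table = {}                 # observed input -> output pairs

    def observe(self, x):
        """Record one invocation of the private region."""
        self.table[x] = private_region(x)

    def replay(self, x):
        """Recreated region: succeeds only on previously seen inputs."""
        return self.table.get(x)        # None = not yet memoized

attack = MemoizationAttack()
for x in range(256):                    # profile a run of the application
    attack.observe(x)

hits = sum(attack.replay(x) == private_region(x) for x in range(512))
print(f"recreated {hits}/512 inputs")   # input coverage drives vulnerability
```

    The practical difficulty, and the focus of the efficiency techniques the paper mentions, is keeping such a table tractable for regions with large input spaces.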

    Clustering large software systems at multiple layers

    Software clustering algorithms presented in the literature rarely incorporate dynamic information, such as the number of function invocations during runtime, in the clustering process. Moreover, the structure of a software system is often multi-layered, while existing clustering algorithms typically create flat system decompositions. This paper presents a software clustering algorithm called MULICsoft that incorporates both static and dynamic information in the clustering process. MULICsoft produces layered clusters, with the core elements of each cluster assigned to the top layer. We present experimental results of applying MULICsoft to a large open-source system. Comparison with existing software clustering algorithms indicates that MULICsoft is able to produce decompositions that are close to those created by system experts.
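    One simple way to picture the combination of static and dynamic inputs that a MULICsoft-style algorithm consumes is to blend static dependencies with runtime invocation counts into a single weighted graph. The sketch below is illustrative only; the entities, counts, and blend factor are invented, and the actual algorithm is considerably more involved.

```python
# Blend static dependency edges with runtime invocation counts into
# edge weights for a clustering algorithm. All data is invented.
STATIC_DEPS = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
RUNTIME_CALLS = {("A", "B"): 950, ("B", "C"): 12, ("C", "D"): 400}

ALPHA = 0.5  # assumed blend between static and dynamic evidence

def edge_weight(edge):
    static = 1.0 if edge in STATIC_DEPS else 0.0
    total = sum(RUNTIME_CALLS.values()) or 1
    dynamic = RUNTIME_CALLS.get(edge, 0) / total
    return ALPHA * static + (1 - ALPHA) * dynamic

for edge in sorted(STATIC_DEPS):
    print(edge, round(edge_weight(edge), 3))
```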

    Extracting Reusable Functions by Program Slicing

    An alternative to developing reusable components from scratch is to recover them from existing systems. In this paper, we apply program slicing, introduced by Weiser, to the problem of extracting reusable functions from ill-structured programs. We extend the definition of a program slice to a transform slice, one that includes the statements which contribute directly or indirectly to transforming a set of input variables into a set of output variables. Unlike conventional program slicing, these statements include neither the statements necessary to get input data nor the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code, along with its partial specification in terms of input and output data only. Using domain knowledge, we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation, we demonstrate how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of the candidate functions makes this task easier than looking for the last function instruction in the original source code. (Also cross-referenced as UMIACS-TR-96-13.)
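    A minimal sketch of backward slicing over a toy statement-level def/use representation. The `kind` tag is an invented stand-in for the paper's distinction: transform slicing keeps statements that compute outputs from inputs while excluding input-gathering and binding-condition statements. Real slicing works on a dependence graph rather than this simplified walk.

```python
# Toy backward slice over def/use information. All data is illustrative.
STATEMENTS = [
    {"id": 1, "kind": "input",     "defs": {"n"}, "uses": set()},
    {"id": 2, "kind": "transform", "defs": {"s"}, "uses": set()},
    {"id": 3, "kind": "condition", "defs": set(), "uses": {"n"}},
    {"id": 4, "kind": "transform", "defs": {"s"}, "uses": {"s", "n"}},
    {"id": 5, "kind": "output",    "defs": set(), "uses": {"s"}},
]

def transform_slice(statements, outputs):
    """Walk backwards, keeping transform statements that contribute,
    directly or indirectly, to the output variables."""
    needed, kept = set(outputs), []
    for stmt in reversed(statements):
        if stmt["defs"] & needed:
            if stmt["kind"] == "transform":
                kept.append(stmt["id"])
            needed |= stmt["uses"]      # their operands are now needed too
    return sorted(kept)

print(transform_slice(STATEMENTS, {"s"}))   # -> [2, 4]
```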

    Clustering Classes in Packages for Program Comprehension


    Studying the evolution of software through software clustering and concept analysis

    Get PDF
    This thesis describes an investigation into the use of software clustering and concept analysis techniques for studying the evolution of software. These techniques produce representations of software systems by clustering similar entities in the system together. The software engineering community has used these techniques for a number of different reasons, but this is the first study to investigate their use for studying evolution. The representations produced by software clustering and concept analysis techniques can be used to trace changes to a software system over a number of different versions of the system. This information can be used by system maintainers to identify worrying evolutionary trends or to assess a proposed change by comparing it to the effects of an earlier, similar change. The work described here attempts to establish whether the use of software clustering and concept analysis techniques for studying the evolution of software is worth pursuing. Four techniques, chosen on the basis of an extensive literature survey of the field, have been used to create representations of versions of a test software system. These representations have been examined to assess whether any observations about the evolution of the system can be drawn from them. The results are positive, and it is thought that the evolution of software systems could be studied using these techniques.
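    A minimal sketch of one way such tracing could work: match each cluster in one version to its best counterpart in the next version by Jaccard overlap, so that drift shows up as low overlap scores. The entities and decompositions are invented, and the thesis's four techniques are not reproduced here.

```python
# Trace clusters across two versions by best Jaccard overlap.
V1 = {"core": {"a", "b", "c"}, "ui": {"d", "e"}}
V2 = {"core": {"a", "b"}, "ui": {"d", "e", "f"}, "net": {"c", "g"}}

def jaccard(s, t):
    return len(s & t) / len(s | t)

for name, members in V1.items():
    match, score = max(
        ((n, jaccard(members, m)) for n, m in V2.items()),
        key=lambda pair: pair[1],
    )
    print(f"{name} -> {match} (overlap {score:.2f})")  # low score = drift
```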

    A Multi-Faceted Software Reusability Model: The Quintet Web.

    Over the past decade, problems in software development and maintenance have increased rapidly. The size of software has grown explosively, the complexity of software applications has increased, the nature of software applications has become more critical, and large quantities of software to be maintained have accumulated. During the same time, the productivity of individual software engineers has not improved proportionally. Software reusability is widely believed to be a key to overcoming the ongoing software crisis by improving software productivity and quality. However, the promise of software reusability has not yet been fulfilled. We present a multi-faceted software reusability model, the Quintet Web, that enables designers to reuse software artifacts from all phases. The Quintet Web consists of links of five types of reuse-support information among existing document blocks: semantic, horizontal, hierarchical, syntactic, and alternative relationships. The five types of reuse-support information are named the Semantic Web, the Horizontal Web, the Vertical Web, the Syntactic Web, and the Alternative Web. The Semantic Web defines the operational functionality of each software block. The Horizontal Web links functionally identical blocks across all phases. The Vertical Web identifies hierarchical relationships among software blocks. The Syntactic Web forms a chain from the declaration of each variable to its uses, for all variables. The Alternative Web enables software developers to select an alternative algorithm for identical functionality. The Quintet Web supports both software development and maintenance. When applied to the development process, it provides software developers with a means to check the consistency of the software being developed. The Quintet Web model is independent of any specific software development method.
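    A minimal sketch of the five link types as a typed graph over document blocks. The block names and links are invented, and the model in the dissertation carries richer information on each link than this structure does.

```python
# Typed links among document blocks, one type per Web in the model.
LINK_TYPES = {"semantic", "horizontal", "vertical", "syntactic", "alternative"}

class QuintetWeb:
    def __init__(self):
        self.links = []                      # (source, type, target)

    def add(self, source, link_type, target):
        assert link_type in LINK_TYPES
        self.links.append((source, link_type, target))

    def related(self, block, link_type):
        """All blocks reachable from `block` over one link type."""
        return [t for s, lt, t in self.links
                if s == block and lt == link_type]

web = QuintetWeb()
web.add("design:sort", "horizontal", "code:sort")    # same function, later phase
web.add("code:sort", "vertical", "code:partition")   # hierarchical refinement
web.add("code:sort", "alternative", "code:heapsort") # swappable algorithm
print(web.related("code:sort", "alternative"))       # -> ['code:heapsort']
```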

    A Reverse Engineering Methodology for Extracting Parallelism From Design Abstractions.

    Migration of code from sequential environments to parallel processing environments is often done in an ad hoc manner. The purpose of this research is to develop a reverse engineering methodology to facilitate the systematic migration of code from sequential to parallel processing environments. The research results include the development of a three-phase methodology and the design and development of a reverse engineering toolkit (abbreviated RETK), which serves as a working model for the methodology. The methodology consists of three phases: Analysis, Synthesis, and Transformation. The Analysis phase uses concepts from reverse engineering research to recover the sequential design description from programs using a new design recovery technique. The Synthesis phase comprises processes that compute the data and control dependences, using the design abstractions produced by the Analysis phase to construct the program dependence graph. The Transformation phase consists of processes that perform knowledge-based analysis of the program and dependence information produced by the Analysis and Synthesis phases, respectively. Design recommendations for parallel environments are the key output of the Transformation phase. The main components of RETK are an Information Extractor, a Dependence Analyzer, and a Design Assistant, which implement the processes of the Analysis, Synthesis, and Transformation phases, respectively. The object-oriented design and implementation of the Information Extractor and Dependence Analyzer are described, as are the design and implementation of the Design Assistant using the C Language Integrated Production System (CLIPS). In addition, experimental results of applying the methodology to test programs with RETK are presented, including an analysis of a Numerical Aerodynamic Simulation (NAS) benchmark program. By uniquely combining research in reverse engineering, dependence analysis, and knowledge-based analysis, the methodology provides a systematic approach to code migration. The benefits of using the methodology are increased comprehensibility and improved efficiency in migrating sequential systems to parallel environments.
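    A minimal sketch of the Synthesis phase's central product: a program dependence graph built from def/use pairs, where statements with no edge between them are candidates for parallel execution. The statements and representation are invented and do not reflect RETK's actual format.

```python
# Toy program dependence graph: data-dependence edges from def/use
# pairs, plus control-dependence edges (none in this straight-line toy).
STMTS = {
    1: {"defs": {"i"}, "uses": set()},        # i = 0
    2: {"defs": {"a"}, "uses": {"i"}},        # a = f(i)
    3: {"defs": {"b"}, "uses": {"i"}},        # b = g(i)
    4: {"defs": {"c"}, "uses": {"a", "b"}},   # c = a + b
}
CONTROL_EDGES = []                            # no branches here

def data_dependences(stmts):
    edges = []
    for s, info in stmts.items():
        for t, tinfo in stmts.items():
            if s < t and info["defs"] & tinfo["uses"]:
                edges.append((s, t))
    return edges

deps = data_dependences(STMTS) + CONTROL_EDGES
print(deps)   # statements 2 and 3 share no edge: a parallelism candidate
```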

    Cooperative Based Software Clustering on Dependency Graphs

    The organization of software systems into subsystems is usually based on the constructs of packages or modules and has a major impact on the maintainability of the software. However, during software evolution, the organization of the system is subject to continual modification, which can cause it to drift away from the original design, often with the effect of reducing its quality. A number of techniques for evaluating a system's maintainability and for controlling the effort required to conduct maintenance activities involve software clustering. Software clustering refers to the partitioning of software system components into clusters on the basis of both the exterior and interior connectivity of these components. It helps maintainers enhance the quality of software modularization and improve its maintainability. Research in this area has produced numerous algorithms with a variety of methodologies and parameters. This thesis presents a novel ensemble approach that synthesizes a new solution from the outcomes of multiple constituent clustering algorithms. The main principle behind this approach is derived from machine learning, as applied to document clustering, but it has been modified, both conceptually and empirically, for use in software clustering. The conceptual modifications include working with a variable number of clusters produced by the input algorithms and employing graph structures rather than feature vectors. The empirical modifications include experiments directed at the selection of the optimal cluster-merging criteria. Case studies based on open-source software systems show that establishing cooperation between leading state-of-the-art algorithms produces better clustering results than those achieved using any one of the algorithms alone.
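    A minimal sketch of the consensus step at the heart of such ensembles: count how often each pair of entities lands in the same cluster across the constituent algorithms, then group pairs above an agreement threshold. The constituent clusterings and the threshold are invented, and the thesis's actual merging criteria are the subject of its empirical study rather than this fixed rule.

```python
from itertools import combinations

CLUSTERINGS = [                      # outputs of three hypothetical algorithms
    [{"a", "b"}, {"c", "d", "e"}],
    [{"a", "b", "c"}, {"d", "e"}],
    [{"a", "b"}, {"c"}, {"d", "e"}],
]
THRESHOLD = 2 / 3                    # fraction of algorithms that must agree

def consensus(clusterings, threshold):
    entities = sorted(set().union(*[set().union(*c) for c in clusterings]))
    together = {p: 0 for p in combinations(entities, 2)}
    for clustering in clusterings:   # co-occurrence counts per pair
        for cluster in clustering:
            for p in combinations(sorted(cluster), 2):
                together[p] += 1

    parent = {e: e for e in entities}    # union-find over agreeing pairs
    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e
    for (x, y), votes in together.items():
        if votes / len(clusterings) >= threshold:
            parent[find(x)] = find(y)

    groups = {}
    for e in entities:
        groups.setdefault(find(e), set()).add(e)
    return list(groups.values())

print(consensus(CLUSTERINGS, THRESHOLD))   # -> [{'a','b'}, {'c'}, {'d','e'}]
```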