
    Inter-module code analysis techniques for software maintenance

    The research described in this thesis addresses the problem of maintaining large, undocumented systems written in languages that contain a module construct. Emphasis is placed on developing techniques for analysing the code of these systems, thereby helping a maintenance programmer to understand a system. Techniques for improving the structure of a system are also presented; these techniques make the code of a system easier to understand. All the code analysis techniques described in this thesis involve reasoning with, and manipulating, graphical representations of a system. To help with these graph manipulations, a set of graph operations is developed that allows a maintenance programmer to combine graphs into a bigger graph, and to extract from a given graph subgraphs that satisfy specified constraints. A relational database schema is developed to represent the information needed for inter-module code analysis, and pointers are given as to how this database can be used for such analysis.
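
    As a concrete illustration of the kind of graph operations the abstract describes, the minimal sketch below combines two module-level dependency graphs and extracts a constrained subgraph. The adjacency-dict representation, the example modules, and the function names are illustrative assumptions, not the thesis's actual notation.

```python
def union(g1, g2):
    """Combine two dependency graphs (adjacency-set dicts) into one bigger graph."""
    g = {n: set(adj) for n, adj in g1.items()}
    for n, adj in g2.items():
        g.setdefault(n, set()).update(adj)
    return g

def subgraph(g, keep_node):
    """Extract the subgraph whose nodes satisfy a given constraint."""
    keep = {n for n in g if keep_node(n)}
    return {n: {m for m in g[n] if m in keep} for n in keep}

# Example: a call graph and an import graph over the same modules.
calls   = {"ui": {"core"}, "core": {"db"}, "db": set()}
imports = {"ui": {"util"}, "core": {"util"}, "util": set()}

combined = union(calls, imports)
# Restrict attention to the modules a maintainer currently cares about.
print(subgraph(combined, lambda n: n in {"core", "db", "util"}))
```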

    A Repository for a CARE Environment

    Repositories in CASE environments hold information about the development process and the structure of the software under development. Migrating or reusing CASE repositories for CARE (Computer Aided Re-Engineering) is not adequate for the reengineering process, mainly because such repositories are initially empty and because of the nature of the process itself. In the following report we define a CARE architecture from the reengineering point of view and derive a repository structure appropriate to the reengineering process.

    ReSpecTX: Programming Interaction Made Easy

    In this paper we present the ReSpecTX language, toolchain, and standard library as a first step towards closing the gap between coordination languages, until now mostly a prerogative of the academic realm, and their industrial counterparts. Since the limited adoption of coordination languages within the industrial realm is also due to the lack of suitable toolchains and libraries of reusable mechanisms, ReSpecTX equips a core coordination language (ReSpecT) with tools and features commonly found in mainstream programming languages. In particular, ReSpecTX provides a reference library of reusable and composable interaction patterns.
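
    Real ReSpecT reactions are written in a logic-based syntax over tuple centres, so the Python sketch below only imitates the shape of one reusable interaction pattern of the kind such a library might package: a request/response mediation over a tuple space. The TupleSpace class, the pattern, and all names in it are illustrative assumptions, not the ReSpecTX API.

```python
import queue

class TupleSpace:
    """A toy tuple space: agents communicate only via out/take operations."""
    def __init__(self):
        self._q = queue.Queue()
    def out(self, tup):          # emit a tuple into the space
        self._q.put(tup)
    def take(self):              # withdraw a tuple (blocking)
        return self._q.get()

def request_response(space, handler):
    """Reusable pattern: consume a request tuple, emit a response tuple."""
    kind, payload = space.take()
    if kind == "request":
        space.out(("response", handler(payload)))

ts = TupleSpace()
ts.out(("request", 21))
request_response(ts, lambda x: x * 2)
print(ts.take())   # ('response', 42)
```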

    Algorithm to layout (ATL) systems for VLSI design

    PhD Thesis. The complexities involved in custom VLSI design, together with the failure of CAD techniques to keep pace with advances in fabrication technology, have resulted in a design bottleneck. Powerful tools are required to exploit the processing potential offered by the densities now available. Describing a system in a high-level algorithmic notation makes writing, understanding, modifying, and verifying a design description easier. It also removes some of the emphasis on the physical issues of VLSI design and focuses attention on formulating a correct and well-structured design. This thesis examines how current trends in CAD techniques might influence the evolution of advanced Algorithm To Layout (ATL) systems. The envisaged features of an example system are specified. Particular attention is given to the implementation of one of its features, COPTS (Compilation Of Occam Programs To Schematics). COPTS is capable of generating schematic diagrams from which an actual layout can be derived. It takes a description written in a subset of Occam and generates a high-level schematic diagram depicting its realisation as a VLSI system. This diagram provides the designer with feedback on the relative placement and interconnection of the operators used in the source code. It also gives a visual representation of the parallelism defined in the Occam description. Such diagrams are a valuable aid in documenting the implementation of a design. Occam has also been selected as the input to the design system of which COPTS is a feature. The choice of Occam was made on the assumption that the most appropriate algorithmic notation for such a design system is a suitable high-level programming language. This is in contrast to current automated VLSI design systems, which typically use a hardware description language for input. These special-purpose languages currently concentrate on handling structural/behavioural information and have a limited ability to express algorithms. Using a language such as Occam allows a designer to write a behavioural description which can be compiled and executed as a simulator, or prototype, of the system. The programmability introduced into the design process enables designers to concentrate on a design's underlying algorithm. The choice of this algorithm is the most crucial decision, since it determines the performance and area of the silicon implementation. The thesis is divided into four sections, each of several chapters. The first section considers VLSI design complexity, compares the expert systems and silicon compilation approaches to tackling it, and examines its parallels with software complexity. The second section reviews the advantages of using a conventional programming language for VLSI system descriptions; a number of alternative high-level programming languages are considered for application in VLSI design. The third section defines the overall ATL system COPTS is envisaged to be part of, and considers the schematic representation of Occam programs. The final section presents a summary of the overall project and suggestions for future work on realising the full ATL system.
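
    The core idea behind COPTS, deriving relative placement from the SEQ/PAR structure of the source, can be sketched as follows. The tiny process tree and the layout rules (SEQ stacks operators vertically, PAR places them side by side) are illustrative assumptions for exposition, not the algorithm actually used in the thesis.

```python
def layout(node, x=0, y=0):
    """Assign (x, y) grid positions: SEQ stacks children vertically,
    PAR places them side by side, mirroring sequential vs. parallel flow."""
    kind = node[0]
    if kind == "proc":
        return [(node[1], x, y)], 1, 1          # placed cells, width, height
    cells, w, h = [], 0, 0
    for child in node[1:]:
        if kind == "SEQ":
            c, cw, ch = layout(child, x, y + h)
            h += ch; w = max(w, cw)
        else:                                    # "PAR"
            c, cw, ch = layout(child, x + w, y)
            w += cw; h = max(h, ch)
        cells += c
    return cells, w, h

# PAR of two pipelines, each pipeline a SEQ of operators.
tree = ("PAR",
        ("SEQ", ("proc", "read"), ("proc", "filter")),
        ("SEQ", ("proc", "read2"), ("proc", "scale")))
cells, _, _ = layout(tree)
for name, x, y in cells:
    print(f"{name} at column {x}, row {y}")
```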

    Early detection of ripple propagation in evolving software systems

    Ripple effect analysis is the analysis of the consequential knock-on effects of a change to a software system. In the first part of this study, ripple effect analysis methods are classified into several categories based on the types of information the methods analyse and produce. A comparative and analytical study of methods from these categories was performed to assist maintainers in selecting ripple effect analysis methods for use in different phases of the software maintenance process. It was observed that existing methods are most usable in the later stages of the software maintenance process, and not at the early stage when strategic decisions concerning project scheduling are made. The second part of the work addresses the problem of tracing the ripple effect of a change at a stage earlier in the maintenance process than existing ripple effect analysis methods allow. Particular emphasis is placed upon the development of ripple effect analysis methods for analysing system documentation. The ripple effect analysis methods described in this thesis involve manipulating a novel graph theory model called a Ripple Propagation Graph. The model is based on the thematic structure of documentation, previous release information, and expert judgement concerning potential ripple effects. In the third part of the study, the Ripple Propagation Graph model and the analysis methods are applied and evaluated using examples of documentation structure and a major case study.
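
    To make the graph model concrete, the sketch below traces a ripple over a small documentation graph whose edges carry expert-judged propagation likelihoods. The weights, the threshold, and the traversal strategy are illustrative assumptions; the thesis's Ripple Propagation Graph is built from documentation structure and release history, not from these toy values.

```python
def ripple(graph, changed, threshold=0.1):
    """Estimate which documents a change may ripple to.
    graph: {doc: {neighbour: propagation_probability}}."""
    impact = {changed: 1.0}
    frontier = [changed]
    while frontier:
        doc = frontier.pop()
        for nxt, p in graph.get(doc, {}).items():
            score = impact[doc] * p
            # Keep the strongest propagation path found so far.
            if score > threshold and score > impact.get(nxt, 0.0):
                impact[nxt] = score
                frontier.append(nxt)
    return impact

docs = {"requirements": {"design": 0.8},
        "design": {"module_spec": 0.6, "test_plan": 0.5},
        "module_spec": {"test_plan": 0.3}}
print(ripple(docs, "requirements"))
# {'requirements': 1.0, 'design': 0.8, 'module_spec': 0.48, 'test_plan': 0.4}
```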

    Conceptual information processing: A robust approach to KBS-DBMS integration

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest, and several aspects of this topic and its associated issues are addressed. The significance of integration and the problems associated with accomplishing it are discussed. The shortcomings of current approaches to integration, and the need to fuse the capabilities of both knowledge base and data base management systems, motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.
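
    A minimal sketch of what concept-based integration can look like: stored facts live in a database table while conceptual (is-a) relations live in a knowledge layer, and a query is answered against both. The schema, relation names, and example data are illustrative assumptions, not the experimental model described in the paper.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instance_of(entity TEXT, concept TEXT)")
db.executemany("INSERT INTO instance_of VALUES (?, ?)",
               [("sparrow", "bird"), ("trout", "fish")])

# Conceptual relations held in the knowledge base layer.
is_a = {"bird": "animal", "fish": "animal", "animal": "thing"}

def concepts_of(entity):
    """Combine a stored fact with conceptual (is-a) inference."""
    row = db.execute("SELECT concept FROM instance_of WHERE entity=?",
                     (entity,)).fetchone()
    found, c = [], row[0] if row else None
    while c:
        found.append(c)
        c = is_a.get(c)
    return found

print(concepts_of("sparrow"))   # ['bird', 'animal', 'thing']
```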

    SAGA: A project to automate the management of software production systems

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

    Routing information and presentation


    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. State-of-the-art research in progress on the parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and the mapping of expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are that (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
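
    The match-phase parallelism the survey discusses can be sketched as follows: rule conditions are tested against working memory concurrently, after which conflict resolution and firing remain sequential. The rules, facts, and use of Python threads are illustrative assumptions, not any surveyed system's architecture.

```python
from concurrent.futures import ThreadPoolExecutor

facts = {"temp_high", "pump_on"}

rules = [
    ("open_valve", {"temp_high"}),
    ("alarm",      {"temp_high", "pump_off"}),
    ("log_status", {"pump_on"}),
]

def matches(rule):
    """Condition test for one rule; the tests are independent of one
    another, so they can run in parallel across rules."""
    name, conditions = rule
    return name if conditions <= facts else None

with ThreadPoolExecutor() as pool:
    eligible = [name for name in pool.map(matches, rules) if name]

# Conflict resolution and rule firing stay sequential, which is one
# reason the survey reports only small overall speedups.
print("eligible rules:", eligible)
```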