5 research outputs found

    Reverse Engineering of Computer-Based Navy Systems

    The financial pressure to meet the need for change in computer-based systems through evolution rather than revolution has spawned the discipline of reengineering. One driving factor of reengineering is that enhanced requirements placed on computer-based systems increasingly overstress the processing resources of those systems. Thus, the distribution of processing load over highly parallel and distributed hardware architectures has become part of the reengineering process for computer-based Navy systems. This paper presents an intermediate representation (IR) for capturing features of computer-based systems to enable reengineering for concurrency. A novel feature of the IR is that it incorporates the mission critical software architecture, a view that enables information to be captured at five levels of granularity: the element/program level, the task level, the module/class/package level, the method/procedure level, and the statement/instruction level. An approach to reverse engineering is presented in which the IR is captured and analyzed to identify potential concurrency. The paper defines concurrency metrics to guide the reengineering tasks of identifying, enhancing, and assessing concurrency, and of performing partitioning and assignment. Concurrency metrics are defined at several tiers of the mission critical software architecture. In addition to contributing an approach to reverse engineering for computer-based systems, the paper also discusses a reverse engineering analysis toolset that constructs and displays the IR and the concurrency metrics for Ada programs. Finally, the paper places our reengineering efforts in context within the United States Navy by describing two reengineering projects focused on subsystems of the AEGIS Weapon System.
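
    The five-tier view suggests a simple tree-shaped IR. The following is a minimal sketch in Python of such a structure together with a crude tier-level concurrency metric; the IRNode class, the node names, and the metric itself are illustrative assumptions, not the paper's actual definitions.

        # Hypothetical sketch of a multi-tier IR node and a simple
        # concurrency metric; the five granularity levels follow the
        # paper, everything else is assumed.
        from dataclasses import dataclass, field

        @dataclass
        class IRNode:
            name: str
            level: str    # "element", "task", "module", "method", or "statement"
            children: list = field(default_factory=list)
            calls: list = field(default_factory=list)  # names of nodes this node invokes

        def potential_concurrency(node: IRNode) -> int:
            """Count children with no calls into their siblings -- a crude
            stand-in for a tier-level concurrency metric."""
            siblings = {c.name for c in node.children}
            return sum(1 for c in node.children
                       if not any(callee in siblings for callee in c.calls))

        # Example: a task whose three modules include one dependent module.
        m1 = IRNode("track_filter", "module")
        m2 = IRNode("display_update", "module")
        m3 = IRNode("threat_eval", "module", calls=["track_filter"])
        print(potential_concurrency(IRNode("radar_task", "task",
                                           children=[m1, m2, m3])))  # -> 2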

    Partial VLSI implementation of the architecture for reusable components (ARC)

    This work describes a novel VLSI implementation of the Architecture for Reusable Components (ARC) processor using a Hardware Description Language (HDL). The main goal is to achieve efficient execution of reusable software through proper hardware support. This involves the hard-wired implementation of each instruction designed for the ARC processor. Instructions are broken down into their logical functions, then modeled and simulated through the hierarchical design methods that HDL offers. The structural model of the processor has been developed and simulated. The purpose has been to begin work on the design and implementation of the ARC processor. The instructions were built using HDL modules and then simulated using a logic simulator. The effect of internal propagation delays on the execution of the logic modules has been investigated. Changes in delay parameters have been applied to obtain correct logic transfer operations. The redundancy in the logic transfer operations has also been investigated to reveal parallelism at the instruction execution level.
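
    The timing investigation can be illustrated outside an HDL. The following Python sketch models chained logic stages with propagation delays and checks whether a logic transfer completes within a clock period; the stage names, delay values, and clock period are invented for illustration, not taken from the ARC design.

        # Illustrative model of propagation delay through chained logic
        # stages; a transfer is correct only if the accumulated delay
        # fits within the clock period. All numbers are assumptions.
        def simulate(stages, t_clock):
            elapsed = 0
            for name, delay in stages:
                elapsed += delay
                print(f"{name}: output stable at t = {elapsed} ns")
            ok = elapsed <= t_clock
            print("transfer", "OK" if ok else "violates timing",
                  f"({elapsed} ns vs {t_clock} ns clock)")
            return ok

        # Hypothetical instruction datapath: decode -> ALU -> writeback.
        simulate([("decode", 3), ("alu", 7), ("writeback", 2)], t_clock=10)  # too slow
        simulate([("decode", 3), ("alu", 7), ("writeback", 2)], t_clock=15)  # fits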

    Extracting parallelism at compile-time through dependence analysis & cloning techniques in an object-based paradigm

    The constructs of Abstract Data Type (ADT) modules and Abstract Data Object (ADO) modules supported by most object-based languages are a great source of reusable code. To improve the run-time performance of such object-based programs, we consider the asynchronous remote procedure call (ARPC) model of parallel execution, in which concurrency is achieved by having the caller and the callee (which are module instances) run on different processors. Frequently, an ADT module is needed simultaneously by several other modules, causing contention. To resolve this, we clone the module instance in demand and distribute the copies across different processors so that multiple clients can access the code concurrently. To identify the facilities that cause bottlenecks in the ARPC model, the dependence relations of the code are analyzed at compile time. Instance dependences are analyzed in addition to conventional dependences to reveal the potential concurrency, and an upper bound on the number of clones of each facility that could be used in an application is determined. This parallelism information could be used by the assignment and scheduling algorithms in the run-time environment of the application to construct a feasible real-time schedule statically.
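
    The clone-bound idea can be sketched in a few lines: assuming dependence analysis has already grouped call sites that may execute concurrently, the maximum number of simultaneous callers of a facility bounds how many clones of it are useful. The function name and the input encoding below are assumptions for illustration.

        # Hedged sketch: derive per-facility clone upper bounds from
        # groups of calls that dependence analysis found to be
        # concurrently executable. The input encoding is an assumption.
        from collections import Counter

        def clone_upper_bounds(parallel_call_groups):
            bounds = Counter()
            for group in parallel_call_groups:
                for facility, n in Counter(group).items():
                    bounds[facility] = max(bounds[facility], n)
            return dict(bounds)

        # Two concurrent calls to the 'queue' ADT in one region, so a
        # second clone of 'queue' could remove that contention.
        print(clone_upper_bounds([["queue", "queue", "stack"], ["queue"]]))
        # -> {'queue': 2, 'stack': 1}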

    Identifying and exploiting concurrency in object-based real-time systems

    The use of object-based mechanisms, i.e., abstract data types (ADTs), for constructing software systems can help to decrease development costs, increase understandability, and increase maintainability. However, execution efficiency may be sacrificed due to the large number of procedure calls and due to contention for shared ADTs in concurrent systems. Such inefficiencies are a concern in real-time applications that have stringent timing requirements. To address these issues, the potentially inefficient procedure calls are turned into a source of concurrency via asynchronous remote procedure calls (ARPCs), and contention for shared ADTs is reduced via ADT cloning. A framework for concurrency analysis in object-based systems is developed, and compiler techniques for identifying potential concurrency via ARPCs and cloning are introduced. Exploitation of the parallelizing compiler techniques is illustrated in the context of an incremental schedule construction algorithm that enhances concurrency incrementally so that feasible real-time schedules can be constructed. Experimental results show large speedup gains with these techniques. Additionally, experiments show that the concurrency enhancement techniques are often useful in constructing feasible schedules for hard real-time systems.
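
    The incremental flavor of the schedule construction can be caricatured as follows: keep enhancing concurrency (here crudely modeled as adding processors) until a greedy schedule meets the deadline. The task times, deadline, and greedy heuristic are illustrative assumptions, not the paper's algorithm.

        # Illustrative sketch: increase available concurrency until a
        # longest-processing-time (LPT) greedy schedule becomes feasible.
        import heapq

        def makespan(tasks, n_procs):
            loads = [0] * n_procs                      # per-processor load (a valid heap)
            for t in sorted(tasks, reverse=True):      # LPT: longest tasks first
                heapq.heappush(loads, heapq.heappop(loads) + t)
            return max(loads)

        def enhance_until_feasible(tasks, deadline, max_procs=8):
            for n in range(1, max_procs + 1):
                span = makespan(tasks, n)
                if span <= deadline:
                    return n, span                     # feasible schedule found
            return None

        print(enhance_until_feasible([4, 4, 3, 3, 2], deadline=9))  # -> (2, 9)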

    Pre-run-time scheduling of distributed real-time systems: models and algorithms
