    All Source Analysis System (ASAS): Migration from VAX to Alpha AXP computer systems

    The Jet Propulsion Laboratory's (JPL's) experience migrating existing VAX applications to Digital Equipment Corporation's new Alpha AXP processor is covered, along with the rapid development approach used during the 10-month period required to migrate the All Source Analysis System (ASAS), 1.5 million lines of FORTRAN, C, and Ada code. ASAS, an automated tactical intelligence system, was developed by JPL for the U.S. Army. Other benefits achieved as a result of the significant performance improvements provided by the Alpha AXP platform are also described.

    RE-ENGINEERING ATLAS SYSTEMS WITH WATLAS

    The ATLAS language is a legacy language that currently runs on OpenVMS systems. Here, we describe a Windows-based system for re-engineering existing ATLAS applications, transforming them into equivalent C# source code that can be compiled and executed on Windows. ATLAS is used for developing test programs that interact with avionics systems connected to a test station; it drives automated test equipment to issue commands and interrogate results in response to direct stimulus and signals from the unit under test. Windows-based ATLAS, or WATLAS, is composed of a Rascal-based transpiler, a "pre" and "post" processor, the target environment framework, and a Windows-based CASS Station simulator to execute the transpiled target source code. The thesis also provides an overview of the legacy CASS station and the ATLAS language, the motivation for developing WATLAS, and a review of some of the competing technologies in this space. Finally, a working prototype with minimal functionality demonstrates the viability of this approach.
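
    As a rough illustration of the stage ordering described above (pre-processor, transpiler, post-processor feeding a target framework), the Python sketch below shows a generic source-to-source flow. It is not WATLAS code: the "$" statement terminator, the APPLY/MEASURE keywords, and the Station framework calls are assumptions made for the example, and the actual transpiler is Rascal-based and emits C#.

```python
# Generic source-to-source pipeline skeleton (pre-process -> transpile ->
# post-process), loosely mirroring the stage ordering described above.
# None of this is WATLAS code; statement syntax and target names are invented.

def preprocess(source: str) -> list[str]:
    """Drop blank and comment lines and join continuations so the transpiler
    sees one logical statement per entry ('$' terminator is an assumption)."""
    statements, pending = [], ""
    for line in source.splitlines():
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        pending = f"{pending} {line}".strip()
        if pending.endswith("$"):
            statements.append(pending.rstrip("$").strip())
            pending = ""
    return statements

def transpile(statements: list[str]) -> list[str]:
    """Map each logical statement onto a call into a hypothetical C# runtime
    framework; unrecognized statements are preserved as comments for review."""
    out = []
    for stmt in statements:
        keyword = stmt.split(",")[0].strip().upper()
        if keyword == "APPLY":
            out.append(f'Station.Apply("{stmt}");')
        elif keyword == "MEASURE":
            out.append(f'Station.Measure("{stmt}");')
        else:
            out.append(f"// TODO: untranslated statement: {stmt}")
    return out

def postprocess(lines: list[str], program_name: str = "TestProgram") -> str:
    """Wrap the generated statements in a compilable C# class skeleton."""
    body = "\n".join("        " + l for l in lines)
    return (
        f"public static class {program_name}\n"
        "{\n"
        "    public static void Run(IStation Station)\n"
        "    {\n"
        f"{body}\n"
        "    }\n"
        "}\n"
    )

if __name__ == "__main__":
    demo = "APPLY, DC SIGNAL, VOLTAGE 5 V $\nMEASURE, VOLTAGE $"
    print(postprocess(transpile(preprocess(demo))))
```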

    Re-Architecting Mass Storage Input/Output for Performance and Efficiency

    The semantics and fundamental structure of modern operating system IO systems date from the mid-1960s to the mid-1970s, a period when computing power and memory capacity were a mere fraction of today's. Engineering tradeoffs made in the past enshrine the resource-availability context of computing at that time. Deconstructing the semantics of the IO infrastructure allows a re-examination of long-standing design decisions in the context of today's greater processing and memory resources. The re-examination allows changes to several widespread paradigms to improve efficiency and performance.

    Next generation satellite orbital control system

    Selection of the correct software architecture is vital for building successful software-intensive systems. Its realization requires important decisions about the organization of the system and largely determines whether the system achieves acceptance and quality attributes such as performance and reliability. The correct architecture is essential for program success, while the wrong one is a formula for disaster. In this investigation, potential software architectures for the Next Generation Satellite Orbital Control System (NG-SOCS) are developed from compiled system specifications and a review of existing technologies. From the developed architectures, the recommended architecture is selected based on real-world considerations that face corporations today, including maximizing code reuse, mitigating project risks, and aligning the solution with business objectives.

    High speed simulation of microprocessor systems using LTU dynamic binary translation

    This thesis presents new simulation techniques designed to speed up the simulation of microprocessor systems. The techniques apply to the class of simulators that employs dynamic binary translation as its underlying technology. This research supports the hypothesis that faster simulation speeds can be realized by translating larger sections of the target program at runtime. The primary motivation for this research was to help facilitate comprehensive design-space exploration and hardware/software co-design of novel processor architectures by reducing the time required to run simulations. Instruction set simulators are used to design and verify new system architectures and to develop software in parallel with hardware. However, compromises must often be made when performing these tasks due to time constraints, particularly in the embedded systems domain where time-to-market is short. The processing demands placed on simulation platforms are exacerbated further by the need to simulate the increasingly complex, multi-core processors of tomorrow. High speed simulators are therefore essential to reducing the time required to design and test advanced microprocessors, enabling new systems to be released ahead of the competition. Dynamic binary translation based simulators typically translate small sections of the target program at runtime. This research considers the translation of larger units of code in order to increase simulation speed. The new simulation techniques identify large sections of program code suitable for translation after analyzing a profile of the target program's execution path built up during simulation. The average instruction-level simulation speed for the EEMBC benchmark suite is shown to be at least 63% faster with the new techniques than with basic-block dynamic binary translation based simulation, and 14.8 times faster than interpretive simulation. The average cycle-approximate simulation speed is shown to be at least 32% faster with the new techniques than with basic-block dynamic binary translation based simulation, and 8.37 times faster than cycle-accurate interpretive simulation.
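
    The Python sketch below illustrates, under stated assumptions, the large-translation-unit idea described above: basic blocks are interpreted and profiled, and once a block has executed often enough, the region starting at it is fused into a single translated unit so later visits skip per-block dispatch. The class and threshold names are invented for the example; a real DBT simulator would emit host machine code rather than Python closures and would handle branching regions rather than only straight-line chains.

```python
# Minimal sketch of large-translation-unit (LTU) dynamic binary translation.
# BasicBlock stands in for a decoded target basic block; "translation" here
# just fuses blocks into one callable instead of emitting host machine code.

HOT_THRESHOLD = 50  # executions before a region is translated (illustrative value)

class BasicBlock:
    def __init__(self, name, ops, next_block=None):
        self.name = name            # label of the block in the target program
        self.ops = ops              # list of callables: state -> state
        self.next_block = next_block

    def interpret(self, state):
        for op in self.ops:
            state = op(state)
        return state

def translate_region(entry_block, max_blocks=8):
    """Fuse a straight-line chain of blocks into one unit, removing the
    per-block dispatch overhead that basic-block DBT pays on every visit."""
    chain, b = [], entry_block
    while b is not None and len(chain) < max_blocks:
        chain.append(b)
        b = b.next_block
    exit_block = b
    def fused(state):
        for blk in chain:
            state = blk.interpret(state)
        return state, exit_block
    return fused

def run(entry_block, state, max_steps=10_000):
    exec_counts = {}   # execution-path profile built up during simulation
    translated = {}    # region entry name -> fused callable (the "LTU")
    block = entry_block
    for _ in range(max_steps):
        if block is None:
            break
        if block.name in translated:             # fast path: run the LTU
            state, block = translated[block.name](state)
            continue
        exec_counts[block.name] = exec_counts.get(block.name, 0) + 1
        state = block.interpret(state)
        if exec_counts[block.name] >= HOT_THRESHOLD:
            translated[block.name] = translate_region(block)
        block = block.next_block
    return state
```

    The trade-off sketched here is the one the abstract quantifies: the larger the unit handed to the translator, the less dispatch and bookkeeping overhead is paid per simulated instruction.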

    Wide-address operating system elements


    Characterizing Shared Memory Multiprocessor Benchmarks for Future Chip Multiprocessor Architectures Using Instruction Flow Analysis

    For forty years, transistor counts on integrated circuits have doubled roughly every two years, enabling computer architects to keep doubling the clock speed of processors. Recently, heat dissipation and power consumption trends have forced chip designers to spend the extra transistors on larger caches and more cores per chip instead of higher clock speeds. This poses challenges for programmers who wish to continue increasing application performance as though the speed of a uniprocessor had continued doubling. In this characterization study, we examine the effect of the operating system on a set of parallel benchmarks run on a simulated many-core processor. Past research has shown that the performance of OS code has a large impact on application performance; however, most studies ignore the OS and focus on the application code. This work characterizes performance bottlenecks and shows possible areas for improvement. We found that resource contention in the kernel was limiting the efficiency of the benchmarks.
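
    As a hypothetical sketch of the kind of trace-based characterization the abstract describes, the code below splits a simulated instruction trace into user and kernel work per core and tallies the most frequently executed kernel routines. The trace record format and field names are assumptions for illustration, not the thesis's actual instrumentation.

```python
# Illustrative only: attribute simulated instructions to user vs. kernel mode
# per core, and surface kernel hot spots that may indicate resource contention.

from collections import Counter, defaultdict
from typing import NamedTuple

class TraceRecord(NamedTuple):
    core: int        # core that retired the instruction
    kernel: bool     # True if executed in supervisor/kernel mode
    symbol: str      # routine the PC falls in, e.g. "spin_lock" (assumed field)

def characterize(trace: list[TraceRecord]):
    per_core = defaultdict(lambda: {"user": 0, "kernel": 0})
    kernel_hotspots = Counter()
    for rec in trace:
        per_core[rec.core]["kernel" if rec.kernel else "user"] += 1
        if rec.kernel:
            kernel_hotspots[rec.symbol] += 1
    return dict(per_core), kernel_hotspots.most_common(5)
```

    A kernel share that grows with core count and concentrates in locking or scheduling routines is the kind of signature consistent with the contention finding reported above.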

    Factory-installation of software on workstations and servers

    Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 69). By H. Earl Hones, III. S.M.

    The Goddard High Resolution Spectrograph Scientific Support Contract

    In 1988, Computer Sciences Corporation (CSC) was selected as the Goddard High Resolution Spectrograph (GHRS) Scientific Support Contractor (SSC). This was to have been a few months before the launch of NASA's first Great Observatory, the Hubble Space Telescope (HST). As one of five scientific instruments on HST, the GHRS was designed to obtain spectra in the 1050-3300 Å ultraviolet wavelength region with a resolving power, λ/Δλ, of up to 100,000 and relative photometric accuracy to 1%. It was built by Ball Aerospace Systems Group under the guidance of the GHRS Investigation Definition Team (IDT), composed of 16 scientists from the US and Canada. After launch, the IDT was to perform the initial instrument calibration and execute a broad scientific program during a five-year Guaranteed Time Observation (GTO) period. After a year's delay, the launch of HST occurred in April 1990, and CSC participated in the in-orbit calibration and first four years of GTO observations with the IDT. The HST primary mirror suffered from spherical aberration, which reduced the spatial and spectral resolution of Large Science Aperture (LSA) observations and decreased the throughput of the Small Science Aperture (SSA) by a factor of two. Periodic problems with the Side 1 carrousel electronics and anomalies with the low-voltage power supply finally resulted in a suspension of the use of Side 1 less than two years after launch. At the outset, the GHRS SSC task involved work in four areas: 1) to manage and operate the GHRS Data Analysis Facility (DAF); 2) to support the second Servicing Mission Observatory Verification (SMOV) program, as well as perform system engineering analysis of the GHRS as necessary; 3) to assist the GHRS IDT with their scientific research programs, particularly the GSFC members of the team; and 4) to provide administrative and logistic support for GHRS public information and educational activities.
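
    For scale, the quoted resolving power translates directly into a wavelength resolution; the short worked example below assumes an illustrative wavelength of 1200 Å within the instrument's 1050-3300 Å band.

```latex
% Resolving power: R = \lambda / \Delta\lambda.
% At the quoted R = 100{,}000 and an illustrative \lambda = 1200~\text{\AA}:
\Delta\lambda = \frac{\lambda}{R} = \frac{1200~\text{\AA}}{10^{5}} = 0.012~\text{\AA}
```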