    A file server for the DistriX prototype : a multitransputer UNIX system

    Bibliography: pages 90-94. The DistriX operating system is a multiprocessor distributed operating system based on UNIX. It consists of a number of satellite processors connected to central servers. The system is derived from the MINIX operating system and is compatible with UNIX Version 7. A remote procedure call interface is used in conjunction with a system-wide, end-to-end communication protocol that connects the satellite processors to the central servers. A cached file server provides access to all files and devices at the UNIX system call level. The design of the file server is discussed in depth and its performance evaluated. Additional information is given about the software and hardware used during the development of the project. The MINIX operating system proved to be a good choice as the software base, though certain of its features proved limiting. The Inmos transputer emerges as a processor with many useful features that eased the implementation.
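    The structure the abstract describes, a cached file server answering UNIX-level calls through a single request/reply message protocol, can be sketched as follows. This is a minimal illustration with hypothetical names, not DistriX's actual code:

```python
# Sketch of a MINIX-style message-passing file server: each UNIX
# system call arrives as one request message, and a block cache
# sits in front of the simulated disk.

class FileServer:
    def __init__(self, disk):
        self.disk = disk          # maps block number -> bytes
        self.cache = {}           # block cache: block number -> bytes

    def read_block(self, block_no):
        # Serve from the cache when possible; otherwise fetch and cache.
        if block_no not in self.cache:
            self.cache[block_no] = self.disk[block_no]
        return self.cache[block_no]

    def handle(self, msg):
        # Remote-procedure-call dispatch: one message type per syscall.
        if msg["op"] == "READ":
            return {"status": 0, "data": self.read_block(msg["block"])}
        return {"status": -1}

disk = {0: b"hello", 1: b"world"}
server = FileServer(disk)
reply = server.handle({"op": "READ", "block": 1})
```

    A repeated read of block 1 would then be served from the cache rather than the disk, which is the performance effect the thesis evaluates.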

    Status and projections of the NAS program

    NASA's Numerical Aerodynamic Simulation (NAS) Program has completed development of the initial operating configuration of the NAS Processing System Network (NPSN). This is the first milestone in the continuing and pathfinding effort to provide state-of-the-art supercomputing for aeronautics research and development. The NPSN, available to a nation-wide community of remote users, provides a uniform UNIX environment over a network of host computers ranging from the Cray-2 supercomputer to advanced scientific workstations. This system, coupled with a vendor-independent base of common user-interface and network software, presents a new paradigm for supercomputing environments. Background leading to the NAS program, its programmatic goals and strategies, technical goals and objectives, and the development activities leading to the current NPSN configuration are presented. Program status, near-term plans, and plans for the next major milestone, the extended operating configuration, are also discussed.

    HLA high performance and real-time simulation studies with CERTI

    Our work concerns the HLA standard and its application in real-time systems. The current HLA standard is inadequate for taking the different constraints of real-time computer systems into consideration, and much work has been invested in providing real-time capabilities to Run Time Infrastructures (RTIs). This paper describes our approach to achieving hard real-time properties for HLA federations, together with a complete state of the art of the related domain. The paper also proposes a global bottom-up approach, from basic hardware and software requirements to experimental tests, for validating distributed real-time simulation with CERTI.

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Autonomous sensor-based dual-arm satellite grappling

    Dual-arm satellite grappling involves the integration of technologies developed in the Sensing and Perception (S&P) Subsystem for object acquisition and tracking, and the Manipulator Control and Mechanization (MCM) Subsystem for dual-arm control. S&P acquires and tracks the position, orientation, velocity, and angular velocity of a slowly spinning satellite, and sends tracking data to the MCM Subsystem. MCM grapples the satellite and brings it to rest, controlling the arms so that no excessive forces or torques are exerted on the satellite or arms. A 350-pound satellite mockup, which can spin freely on a gimbal for several minutes and closely simulates the dynamics of a real satellite, is demonstrated. The satellite mockup is fitted with a panel under which may be mounted various elements, such as line replacement modules and electrical connectors, that will be used to demonstrate servicing tasks once the satellite is docked. The subsystems are housed in three MicroVAX II microcomputers. The hardware of the S&P Subsystem includes CCD cameras, video digitizers, frame buffers, IMFEX (a custom pipelined video processor), a time-code generator with millisecond precision, and a MicroVAX II computer. Its software is written in Pascal and is based on a locally written vision software library. The hardware of the MCM Subsystem includes PUMA 560 robot arms, Lord force/torque sensors, two MicroVAX II computers, and Unimation pneumatic parallel grippers. Its software is written in C and is based on a robot language called RCCL. The two subsystems are described, and test results on the grappling of the satellite mockup at rotational rates of up to 2 rpm are provided.
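    The S&P-to-MCM data flow described above can be sketched as a simple loop: a tracker propagates the satellite's estimated spin state, and the arm controller commands a despin torque clamped so that no excessive forces act on the satellite or arms. All names and numbers below are hypothetical illustrations, not the actual JPL implementation:

```python
import math

MAX_TORQUE = 5.0   # assumed safety limit, N*m

def track(angle, rate, dt):
    """S&P step: propagate the estimated spin state forward in time."""
    return angle + rate * dt, rate

def despin_command(rate, gain=2.0):
    """MCM step: torque opposing the spin, clamped to the safe limit."""
    torque = -gain * rate
    return max(-MAX_TORQUE, min(MAX_TORQUE, torque))

# Simulate bringing a satellite spinning at 2 rpm to rest.
angle, rate = 0.0, 2 * 2 * math.pi / 60   # 2 rpm in rad/s
inertia, dt = 50.0, 0.1                   # assumed values
for _ in range(2000):
    angle, rate = track(angle, rate, dt)
    rate += despin_command(rate) / inertia * dt
```

    With these assumed gains the commanded torque decays with the spin rate, so the satellite is brought to rest without the torque limit ever being exceeded.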

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

    DistriX : an implementation of UNIX on transputers

    Bibliography: pages 104-110. Two technologies, distributed operating systems and UNIX, are very relevant in computing today. Many distributed systems have been produced and many are under development. To a large extent, distributed systems are considered to be the only way to solve the computing needs of the future. UNIX, on the other hand, is becoming widely recognized as the industry standard for operating systems. The transputer, unlike UNIX and distributed systems, is a relatively new innovation. The transputer is a concurrent processing machine based on mathematical principles. Increasingly, the transputer is being used to solve a wide range of problems of a parallel nature. This thesis combines these three aspects in creating a distributed implementation of UNIX on a network of transputers. The design is based on the satellite model. In this model a central controlling processor is surrounded by worker processors, called satellites, in a master/slave relationship.
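    The satellite model can be sketched as a message channel between workers and a central master: each satellite forwards every system call it cannot handle locally as a message, and the master dispatches it to a handler. A minimal illustration with hypothetical names, under the abstract's master/slave assumption:

```python
import queue

def satellite(requests, channel):
    # A satellite (worker) forwards each system call to the master.
    for req in requests:
        channel.put(req)

def master(channel, handlers):
    # The central controlling processor services queued requests.
    replies = []
    while not channel.empty():
        req = channel.get()
        replies.append(handlers[req["call"]](req))
    return replies

channel = queue.Queue()
satellite([{"call": "getpid"}, {"call": "time"}], channel)
replies = master(channel, {"getpid": lambda r: 42, "time": lambda r: 1000})
```

    On the actual hardware the channel would be a transputer link rather than an in-process queue, but the request/reply shape is the same.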

    Upper Atmosphere Research Satellite (UARS) trade analysis

    The Upper Atmosphere Research Satellite (UARS), which will collect data pertinent to the Earth's upper atmosphere, is described. The collected data will be sent via the UARS ground system to the central data handling facility (CDHF), where the data will be processed and distributed to the remote analysis computer systems (RACS). An overview of the UARS ground system is presented. Three configurations were developed for the CDHF-RACS system. The CDHF configurations are discussed: the IBM CDHF configuration, the UNIVAC CDHF configuration, and the VAX cluster CDHF configuration are presented. The RACS configurations, the IBM RACS configurations, UNIVAC RACS, and VAX RACS, are detailed. Because the on-line data volume is estimated at approximately 100 GB, a mass storage system is considered essential to the UARS CDHF. Mass storage systems were analyzed, and the Braegan ATL, the RCA optical disk, the IBM 3850, and the MASSTOR M860 are discussed. It is determined that the type of mass storage system most suitable to UARS is the automated tape/cartridge device. Two devices of this type, the IBM 3850 and the MASSTOR MSS, are analyzed, and the applicable tape/cartridge device is incorporated into the three CDHF-RACS configurations.

    The Generic Spacecraft Analyst Assistant (GenSAA): a Tool for Developing Graphical Expert Systems

    During numerous contacts with a satellite each day, spacecraft analysts must closely monitor real-time data. The analysts must watch for combinations of telemetry parameter values, trends, and other indications that may signify a problem or failure. As the satellites become more complex and the number of data items increases, this task is becoming increasingly difficult for humans to perform at acceptable performance levels. At NASA GSFC, fault-isolation expert systems are in operation supporting this data monitoring task. Based on the lessons learned during these initial efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analyst Assistant (GenSAA) is being developed to facilitate the rapid development and reuse of real-time expert systems that serve as fault-isolation assistants for spacecraft analysts. Although initially domain-specific in nature, this powerful tool will readily support the development of highly graphical expert systems for data monitoring purposes throughout the space and commercial industries.
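    The monitoring task the abstract describes, watching for combinations of telemetry parameter values that signify a fault, is the kind of rule evaluation a GenSAA-style assistant automates. The rules and parameter names below are hypothetical examples, not actual GenSAA rules:

```python
def check(telemetry, rules):
    """Return the names of all fault-isolation rules whose
    conditions are satisfied by the current telemetry frame."""
    return [name for name, cond in rules if cond(telemetry)]

# Each rule pairs a fault name with a condition over telemetry values.
rules = [
    ("battery-discharge-fault",
     lambda t: t["bus_voltage"] < 24.0 and t["charge_current"] <= 0.0),
    ("transmitter-overtemp",
     lambda t: t["tx_temp"] > 85.0),
]

alerts = check(
    {"bus_voltage": 23.1, "charge_current": -0.4, "tx_temp": 40.0},
    rules,
)
```

    In a real-time assistant this check would run on every incoming telemetry frame, with fired rules driving the graphical alert display rather than a returned list.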