
    MSFC Skylab attitude and pointing control system mission evaluation

    The results of detailed performance analyses of the attitude and pointing control system in-orbit hardware and software on Skylab are reported. Performance is compared with requirements, test results, and prelaunch predictions. A brief history of the attitude and pointing control system evolution leading to the launch configuration is presented. The report states that the attitude and pointing system satisfied all requirements.

    MSFC Skylab program engineering and integration

    A technical history and managerial critique of the MSFC role in the Skylab program is presented. The George C. Marshall Space Flight Center had primary hardware development responsibility for the Saturn Workshop Modules and many of the designated experiments, in addition to the system integration responsibility for the entire Skylab Orbital Cluster. The report also includes recommendations and conclusions applicable to hardware design, test program philosophy and performance, and program management techniques with potential application to future programs.

    System design of the Pioneer Venus spacecraft. Volume 8: Command/data handling subsystems studies

    Study tasks for the command and data handling subsystems have been directed to: (1) determining ground data system (GDS) interfaces and deep space network (DSN) changes, if required; (2) defining subsystem requirements; (3) surveying existing hardware that could be used or modified to meet subsystem requirements; and (4) establishing a baseline design. Study of the existing GDS led to the conclusion that the Viking-configuration GDS can be used with only minor changes for the Pioneer Venus baseline. The required changes are associated with providing a predetection recording capability used during probe entry and descent. Subsystem requirements were first formulated with sufficient latitude that surveys of existing hardware could identify low-cost hardware, which, in turn, could lead to more narrowly defined subsystem requirements.

    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SoC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIP (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
    • The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
    • Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics with themselves and with early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realizations provide with respect to state-of-the-art communication infrastructures, and the challenges that lie ahead in order to make this new interconnect technology come true. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. The achievement of the above objectives was enabled by a NoC simulation environment for cycle-accurate modelling and simulation, and by a back-end facility for the study of NoC physical implementation effects. Overall, all the results provided by this work have been validated on actual silicon layout.
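
    The topology and routing trade-offs explored in such a design space are easiest to see in a concrete routing function. The sketch below (an illustration of a standard technique, not code from the dissertation) implements deterministic XY routing on a 2D mesh, one of the simplest NoC topology choices; the coordinates and mesh size are hypothetical.

```python
# Illustrative sketch (not from the dissertation): deterministic XY routing
# on a 2D mesh NoC, a common baseline in NoC design-space exploration.

def xy_route(src, dst):
    """Return the hop-by-hop path from src to dst on a 2D mesh.

    Packets travel along the X dimension first, then along Y; forbidding
    Y-to-X turns makes this routing deadlock-free on a mesh.
    """
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # Route across a hypothetical 4x4 mesh, bottom-left to top-right router.
    print(xy_route((0, 0), (3, 3)))
    # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
```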

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
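
    As a hedged illustration of the kind of analytical reliability model mentioned above, the following sketch computes the survival probability of a redundant voting configuration from a per-channel failure rate. This is the textbook k-of-n redundancy formula, not AFTA's actual model, and all numeric values are hypothetical.

```python
# Textbook k-of-n redundancy reliability model (illustrative, not AFTA's own).
from math import comb, exp

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical channels survive,
    given per-channel reliability r and independent failures."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def channel_reliability(failure_rate, hours):
    """Exponential (constant failure rate) channel reliability."""
    return exp(-failure_rate * hours)

if __name__ == "__main__":
    # Hypothetical failure rate and mission duration.
    r = channel_reliability(failure_rate=1e-4, hours=10.0)
    print(f"simplex  : {r:.6f}")
    print(f"TMR 2/3  : {k_of_n_reliability(2, 3, r):.6f}")  # 3r^2 - 2r^3
    print(f"quad 3/4 : {k_of_n_reliability(3, 4, r):.6f}")
```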

    Nascom System Development Plan: System Description, Capabilities and Plans

    The NASA Communications (Nascom) System Development Plan (NSDP), reissued annually, describes the organization of Nascom, how it obtains communication services, its current systems, its relationship with other NASA centers and International Partner Agencies, some major spaceflight projects which generate significant operational communication support requirements, and major Nascom projects in various stages of development or implementation.

    The Scalability of Multicast Communication

    Multicast is a communication method which operates on groups of applications. Having multiple instances of an application which are addressed collectively using a unique multicast address allows elegant solutions to some of the more intractable problems in distributed programming, such as providing fault tolerance. However, as multicast techniques are applied in areas such as distributed operating systems, where the operating system may span a large number of hosts, or on faster network architectures, where the problems of congestion reduce the effectiveness of the technique, the scalability of multicast must be addressed if it is to gain wider application. The main scalability issue was considered to be packet loss due to buffer overrun; the most common cause of this overrun is the mismatch between the packet arrival rate and the packet consumption rate at the multicast originator, the so-called implosion problem. This issue affects positively acknowledged and transactional protocols. As these two techniques are the most common protocol designs, an investigation into the problems of these types of protocol was felt to be most effective. A model for implosion was developed and simulated in order to investigate the parameters of implosion. A measure of implosion was derived from the data; this index of implosion allows both the severity of the implosion and its location in the model to be described. The implosion index was derived by dividing the rate at which buffers were occupied by the rate at which packets were generated by the model. The value may then be used to predict the number of buffers required given the number of packets expected. A number of techniques were developed which may be used to offset implosion, either by artificially increasing the inter-packet gap, or by distributing replies so that no one host receives enough packets to cause an implosion. Of these alternatives, the latter offers the most promise, although it requires a large effort to maintain the resulting hierarchical structure in the presence of multiple failures.
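
    A minimal sketch of the implosion measurement, under my own assumptions rather than the thesis's exact model: replies from a multicast group arrive within a window, the originator drains one packet per tick from a finite buffer pool, and an implosion index is formed by dividing the buffer-occupation rate by the packet-generation rate, following the definition in the abstract.

```python
# Toy discrete-time model of ACK implosion at a multicast originator.
# The window length, buffer count, and arrival distribution are assumptions.
import random

def simulate_implosion(n_responders, window_ticks, buffers, seed=0):
    rng = random.Random(seed)
    # Each responder's reply lands at a uniformly random tick in the window.
    arrivals = [rng.randrange(window_ticks) for _ in range(n_responders)]
    queue = dropped = peak = 0
    for tick in range(window_ticks):
        for _ in range(arrivals.count(tick)):   # replies arriving this tick
            if queue < buffers:
                queue += 1
            else:
                dropped += 1                    # buffer overrun: implosion loss
        peak = max(peak, queue)
        if queue:
            queue -= 1                          # originator consumes one packet/tick
    # Index of implosion: buffer-occupation rate over packet-generation rate.
    index = (peak / window_ticks) / (n_responders / window_ticks)
    return peak, dropped, round(index, 3)

if __name__ == "__main__":
    for n in (8, 64, 512):                      # growing multicast group sizes
        print(n, simulate_implosion(n, window_ticks=128, buffers=32))
```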

    The effect of diverse development goals on computer-based system dependability

    PhD Thesis. Society's increasing dependence upon software control and information processing provision has demanded comparable increases in software dependability. While the existing software dependability approach has resulted in significant improvements, its focus is heavily aimed towards achieving software dependability via redundant fault-tolerant mechanisms built into the software artifact to provide error control in the presence of activated faults. Less emphasis appears to have been placed upon how software dependability can also be promoted through a fault-avoidance approach in the software creation process by incorporating human redundancy and diversity. In this thesis, a process intervention which can potentially improve fault avoidance is considered. This involves the setting of diverse development goals within important generic computer-based system contexts in order to increase detection of potentially harmful assumptions which can result in subtle systemic conflicts that can undermine the dependability of the resultant artifact during the early development phases of requirements, specification and design. A search-theoretic simulation model is developed to capture some of the important dynamics involved. The eventual outputs of the simulation model indicate that increased fault coverage and sensitivity can be obtained through the setting of diverse development goals during the early phases of software development.
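
    A toy model (my illustration, not the thesis's search-theoretic simulation) of why diverse goals can raise fault coverage: if each goal-driven review detects a fault class independently with probability p_i, the combined coverage of that class is 1 - prod(1 - p_i), so spreading detection strength across classes lifts the worst-covered class.

```python
# Hypothetical fault classes and detection probabilities, for illustration only.
from math import prod

def coverage(probs):
    """Probability that at least one goal-driven review detects the fault."""
    return 1 - prod(1 - p for p in probs)

fault_classes = ["timing", "interface", "environment"]

# Identical goals: every review is tuned to the same fault class.
identical = {"timing": [0.6, 0.6, 0.6], "interface": [0.1, 0.1, 0.1],
             "environment": [0.1, 0.1, 0.1]}
# Diverse goals: each review is tuned to a different fault class.
diverse = {"timing": [0.6, 0.1, 0.1], "interface": [0.1, 0.6, 0.1],
           "environment": [0.1, 0.1, 0.6]}

for name, setup in (("identical", identical), ("diverse", diverse)):
    per_class = {f: round(coverage(setup[f]), 3) for f in fault_classes}
    worst = min(per_class.values())
    print(name, per_class, "worst-class coverage:", worst)
# Diverse goals trade peak coverage of one class for far better worst-case
# coverage across all classes (0.676 vs 0.271 in this toy setup).
```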

    JUNO Conceptual Design Report

    The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine the neutrino mass hierarchy using an underground liquid scintillator detector. It is located 53 km away from both the Yangjiang and Taishan Nuclear Power Plants in Guangdong, China. The experimental hall, spanning more than 50 meters, is under a granite mountain of over 700 m overburden. Within six years of running, the detection of reactor antineutrinos can resolve the neutrino mass hierarchy at a confidence level of 3-4σ, and determine the neutrino oscillation parameters sin²θ₁₂, Δm²₂₁, and |Δm²ₑₑ| to an accuracy of better than 1%. The JUNO detector can also be used to study terrestrial and extraterrestrial neutrinos and new physics beyond the Standard Model. The central detector contains 20,000 tons of liquid scintillator in an acrylic sphere 35 m in diameter. About 17,000 PMTs of 508 mm diameter with high quantum efficiency provide ~75% optical coverage. The current choice of liquid scintillator is linear alkyl benzene (LAB) as the solvent, plus PPO as the scintillation fluor and Bis-MSB as a wavelength shifter. The number of detected photoelectrons per MeV is larger than 1,100, and the energy resolution is expected to be 3% at 1 MeV. The calibration system is designed to deploy multiple sources to cover the entire energy range of reactor antineutrinos, and to achieve full-volume position coverage inside the detector. The veto system is used for muon detection and for the study and reduction of muon-induced backgrounds. It consists of a Water Cherenkov detector and a Top Tracker system. The readout system, the detector control system, and the offline system ensure efficient and stable data acquisition and processing. Comment: 328 pages, 211 figures.
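
    The quoted light yield and energy resolution are consistent under simple Poisson photostatistics, which this back-of-the-envelope sketch (standard physics, not taken from the report) checks: with more than 1,100 photoelectrons per MeV, the stochastic resolution at 1 MeV is about 1/√1100 ≈ 3%.

```python
# Poisson-limited energy resolution from photoelectron statistics.
# pe_per_mev = 1100 is the report's quoted light yield; the scaling with
# energy assumes the stochastic term dominates, which is a simplification.
from math import sqrt

def resolution(energy_mev, pe_per_mev=1100):
    """Fractional energy resolution sigma/E = 1/sqrt(N_pe * E)."""
    return 1 / sqrt(pe_per_mev * energy_mev)

for e in (1.0, 2.0, 4.0, 8.0):   # typical reactor antineutrino energies in MeV
    print(f"{e:.0f} MeV: sigma/E = {resolution(e):.1%}")
# 1 MeV gives ~3.0%, consistent with the 3% at 1 MeV quoted in the abstract.
```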