54 research outputs found

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
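
    The abstract mentions analytical models for AFTA reliability and availability. As a rough illustration of what such a model computes, the sketch below evaluates an N-modular-redundant channel group under the usual assumption of independent, exponentially distributed channel failures; the redundancy level, failure rate, and mission time are illustrative values, not figures from the AFTA study.

    # Minimal reliability-model sketch for an N-modular-redundant channel group.
    # Assumes independent channels with exponentially distributed failures;
    # the parameters below are illustrative, not taken from the AFTA report.
    from math import comb, exp

    def channel_reliability(failure_rate: float, hours: float) -> float:
        """R(t) = exp(-lambda * t) for a single channel."""
        return exp(-failure_rate * hours)

    def nmr_reliability(n: int, required: int, r: float) -> float:
        """Probability that at least `required` of `n` independent channels survive."""
        return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(required, n + 1))

    if __name__ == "__main__":
        r = channel_reliability(failure_rate=1e-4, hours=10.0)  # 10-hour mission
        # A triplex must keep a majority (2 of 3) to mask a single faulty channel.
        print(f"single channel: {r:.6f}, 2-of-3 triplex: {nmr_reliability(3, 2, r):.6f}")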

    Affordable techniques for dependable microprocessor design

    As high computing power is available at an affordable cost, we rely on microprocessor-based systems for a much greater variety of applications. This dependence indicates that a processor failure could have more diverse impacts on our daily lives. Therefore, dependability is becoming an increasingly important quality measure of microprocessors. Temporary hardware malfunctions caused by unstable environmental conditions can lead the processor to an incorrect state; this is referred to as a transient error or soft error. Studies have shown that soft errors are the major source of system failures. This dissertation characterizes the soft error behavior of microprocessors and presents new microarchitectural approaches that can realize high dependability with low overhead. Our fault injection studies using RISC processors have demonstrated that different functional blocks of the processor have distinct susceptibilities to soft errors. This error susceptibility information must be reflected in devising fault tolerance schemes for cost-sensitive applications. Considering the common use of on-chip caches in modern processors, we investigated area-efficient protection schemes for memory arrays. The idea of caching redundant information was exploited to optimize resource utilization for increased dependability. We also developed a mechanism to verify the integrity of data transfers from lower-level memories to the primary caches. The results of this study show that by exploiting bus idle cycles and information redundancy, an almost complete check of the initial memory data transfer is possible without incurring a performance penalty. For protecting the processor's control logic, which usually remains unprotected, we propose a low-cost reliability enhancement strategy. We classified control logic signals into static and dynamic control depending on their changeability, and applied various techniques including commit-time checking, signature caching, component-level duplication, and control flow monitoring. Our schemes can achieve more than 99% coverage with a very small hardware addition. Finally, a virtual duplex architecture for superscalar processors is presented. In this system-level approach, the processor pipeline is backed up by a partially replicated pipeline. The replication-based checker minimizes the design and verification overheads. For a large-scale superscalar processor, the proposed architecture can bring a 61.4% reduction in die area while sustaining the maximum performance.
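
    One of the mechanisms summarized above checks the integrity of data transferred from lower-level memory into the primary caches by exploiting information redundancy. The sketch below shows the general idea in software form, using a simple XOR parity word over a cache-line fill; the word width, the choice of parity rather than a stronger code, and the retry-on-mismatch policy are assumptions made for illustration, not the dissertation's actual hardware mechanism.

    # Sketch of a redundancy-checked cache-line fill (illustrative only; the real
    # scheme is hardware logic exploiting bus idle cycles, not software).
    from typing import List

    def parity_word(words: List[int], width: int = 32) -> int:
        """XOR of all words: a compact redundant summary of the transferred block."""
        p = 0
        for w in words:
            p ^= w & ((1 << width) - 1)
        return p

    def verified_fill(block: List[int], expected_parity: int) -> List[int]:
        """Accept a cache-line fill only if its recomputed parity matches."""
        if parity_word(block) != expected_parity:
            raise ValueError("possible soft error during fill; retry the transfer")
        return block

    # Example: an 8-word line whose parity word was produced at the memory side.
    line = [0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x0, 0xFFFFFFFF, 0x1, 0x2, 0x3]
    cache_line = verified_fill(line, parity_word(line))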

    A development and assurance process for Medical Application Platform apps

    Doctor of Philosophy, Department of Computing and Information Sciences, John M. Hatcliff. Medical devices have traditionally been designed, built, and certified for use as monolithic units. A new vision of "Medical Application Platforms" (MAPs) is emerging that would enable compositional medical systems to be instantiated at the point of care from a collection of trusted components. This work details efforts to create a development environment for applications that run on these MAPs. The first contribution of this effort is a language and code generator that can be used to model and implement MAP applications. The language is a subset of the Architecture Analysis and Design Language (AADL) that has been tailored to the platform-based environment of MAPs. Accompanying the language is software tooling that provides automated code generation targeting an existing MAP implementation. The second contribution is a new hazard analysis process called the Systematic Analysis of Faults and Errors (SAFE). SAFE is a modified version of the previously existing System Theoretic Process Analysis (STPA) that has been made more rigorous, partially compositional, and easier to apply. SAFE is not a replacement for STPA; rather, it more effectively analyzes the hardware- and software-based elements of a full safety-critical system. SAFE has both manual and tool-assisted formats; the latter consists of AADL annotations designed to be used with the language subset from the first contribution. An automated report generator has also been implemented to accelerate the hazard analysis process. Third, this work examines how, independent of its place in the system hierarchy or the precise configuration of its environment, a component may contribute to the safety (or lack thereof) of an entire system. Based on this, we propose a reference model which generalizes notions of harm and the role of components in their environment so that they can be applied to components either in isolation or as part of a complete system. Connections between these formalisms and existing approaches for system composition and fault propagation are also established. This dissertation presents these contributions along with a review of relevant literature and an evaluation of the SAFE process, and concludes with a discussion of potential future work.
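
    To make the flavor of a SAFE-style analysis concrete, the sketch below models one hazard-analysis entry as a small data structure; the field names and the PCA-pump example values are hypothetical illustrations, not the dissertation's actual schema or report format.

    # Hypothetical record for one entry of a SAFE-style hazard analysis.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class HazardEntry:
        component: str            # element of the MAP app being analyzed
        fault: str                # internal fault or erroneous input considered
        local_effect: str         # how the component's own behavior deviates
        propagated_error: str     # what the component emits to its environment
        potential_harm: str       # system-level harm the deviation can contribute to
        mitigations: List[str] = field(default_factory=list)

    entry = HazardEntry(
        component="PCA pump supervisor app",
        fault="stale pulse-oximeter reading treated as fresh",
        local_effect="infusion ticket issued from outdated physiological data",
        propagated_error="pump permitted to continue infusing",
        potential_harm="opioid overdose",
        mitigations=["data freshness check", "fail-safe pump lockout"],
    )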

    Advancing automation and robotics technology for the Space Station and for the US economy, volume 2

    In response to Public Law 98-371, dated July 18, 1984, the NASA Advanced Technology Advisory Committee has studied automation and robotics for use in the Space Station. The Technical Report, Volume 2, provides background information on automation and robotics technologies and their potential and documents: the relevant aspects of Space Station design; representative examples of automation and robotics applications; the state of the technology and advances needed; and considerations for technology transfer to U.S. industry and for space commercialization.

    A Comprehensive Fault Model for Concurrent Error Detection in MOS Circuits

    Naval Electronics Systems Command and Office of Naval Research / N00039-80-C-0556

    A holistic approach to network security in OGSA-based grid systems

    Grid computing technologies facilitate complex scientific collaborations between globally dispersed parties that make use of heterogeneous technologies and computing systems. In recent years, however, the commercial sector has developed a growing interest in Grid technologies. Prominent Grid researchers have predicted that Grids will grow into the commercial mainstream even though the technology's origins were in scientific research, in much the same way that the Internet started as a vehicle for research collaboration between universities and government institutions and grew into a technology with large commercial applications. Grids facilitate complex trust relationships between globally dispersed business partners, research groups, and non-profit organizations; almost any dispersed "virtual organization" willing to share computing resources can make use of Grid technologies. Grid computing facilitates the networking of shared services: the inter-connection of a potentially unlimited number of computing resources within a "Grid" is possible. Grid technologies leverage a range of open standards and technologies to provide interoperability between heterogeneous computing systems, and newer Grids build on key capabilities of Web Service technologies to provide easy and dynamic publishing and discovery of Grid resources. Due to the inter-organisational nature of Grid systems, there is a need to provide adequate security to Grid users and to Grid resources. This research proposes a framework, using a specific brokered pattern, that addresses several common Grid security challenges: providing secure and consistent cross-site authentication and authorization; single sign-on capabilities for Grid users; underlying platform and runtime security; and Grid network communications and messaging security. These challenges can be viewed as belonging to two proposed logical layers of a Grid: a Common Grid Layer (higher-level Grid interactions) and a Local Resource Layer (lower-level technology security concerns). This research is concerned with providing a generic and holistic security framework to secure both layers. It makes extensive use of STRIDE, an acronym for Microsoft's approach to classifying security threats, as part of a holistic Grid security framework. STRIDE and key Grid-related standards, such as the Open Grid Services Architecture (OGSA), the Web Services Resource Framework (WS-RF), and the Globus Toolkit, are used to formulate the proposed framework.
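
    As a concrete illustration of how STRIDE categories might be mapped onto the two proposed layers, the sketch below enumerates the six STRIDE threat categories and tags example concerns with the layer at which they would be addressed; the example threats and layer assignments are illustrative assumptions, not the framework's actual catalogue.

    # STRIDE-style threat enumeration split across the thesis's two logical layers.
    # The example descriptions and layer assignments are illustrative only.
    STRIDE = {
        "Spoofing":               "forged user or service identity",
        "Tampering":              "modification of messages or job data in transit",
        "Repudiation":            "denial of having submitted or executed a job",
        "Information disclosure": "exposure of credentials or result data",
        "Denial of service":      "exhaustion of shared Grid resources",
        "Elevation of privilege": "gaining rights beyond the delegated role",
    }

    def classify(category: str, layer: str) -> str:
        """Pair a STRIDE category with the layer at which it would be mitigated."""
        assert layer in ("Common Grid Layer", "Local Resource Layer")
        return f"{category} ({STRIDE[category]}) -> addressed at the {layer}"

    # Cross-site identity concerns sit in the higher-level layer,
    # host and runtime hardening in the lower-level one.
    print(classify("Spoofing", "Common Grid Layer"))
    print(classify("Elevation of privilege", "Local Resource Layer"))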