
    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
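
    Importance sampling, mentioned in this abstract as a way to accelerate Monte Carlo evaluation, can be illustrated with a minimal sketch for a rare failure event; the distributions and the failure threshold below are hypothetical, chosen only so that the rare event is measurable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 5.0  # hypothetical failure threshold on a standard-normal "stress" variable

# Naive Monte Carlo: P(X > 5) for X ~ N(0,1) is about 2.9e-7, so 1e5 samples
# almost never observe the event and the estimate is useless.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling: draw from a proposal N(threshold, 1) centered on the
# rare region, then reweight each sample by the likelihood ratio
# w(y) = phi(y) / phi(y - threshold), where phi is the standard normal pdf.
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_w = -0.5 * y**2 + 0.5 * (y - threshold) ** 2  # log of the likelihood ratio
p_is = np.mean((y > threshold) * np.exp(log_w))

print(f"naive MC estimate:   {p_naive:.3e}")
print(f"importance sampling: {p_is:.3e}")  # close to the true 2.87e-7
```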

    Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Current and advanced ACT control system definition study. Volume 2: Appendices

    The current status of Active Controls Technology (ACT) for the advanced subsonic transport project is investigated through analysis of the systems' technical data. Control system technologies under examination include computerized reliability analysis, a pitch-axis fly-by-wire actuator, a flaperon actuation system design trade study, control law synthesis and analysis, flutter mode control and gust load alleviation analysis, and implementation of alternative ACT systems. Extensive analysis of the computer techniques involved in each system is included.
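
    As a hedged illustration of the kind of computerized reliability analysis this abstract mentions (not the study's actual method), a 2-out-of-3 voting arrangement common in fly-by-wire actuation can be evaluated analytically; the channel failure rate and mission time below are hypothetical.

```python
import math

lam = 1e-4  # hypothetical per-channel failure rate (failures per flight hour)
t = 10.0    # hypothetical mission time in hours

r = math.exp(-lam * t)  # reliability of one channel under an exponential failure model
# A triplex voter survives if at least 2 of 3 channels survive:
# R_sys = 3 R^2 (1 - R) + R^3
r_sys = 3 * r**2 * (1 - r) + r**3

print(f"single-channel reliability:    {r:.8f}")
print(f"2-out-of-3 system reliability: {r_sys:.12f}")
```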

    Internal Consistency of Fault-Tolerant Quantum Error Correction in Light of Rigorous Derivations of the Quantum Markovian Limit

    We critically examine the internal consistency of a set of minimal assumptions entering the theory of fault-tolerant quantum error correction for Markovian noise. These assumptions are: fast gates, a constant supply of fresh and cold ancillas, and a Markovian bath. We point out that these assumptions may not be mutually consistent in light of rigorous formulations of the Markovian approximation. Namely, Markovian dynamics requires either the singular coupling limit (high temperature) or the weak coupling limit (weak system-bath interaction). The former is incompatible with the assumption of a constant and fresh supply of cold ancillas, while the latter is inconsistent with fast gates. We discuss ways to resolve these inconsistencies. As part of our discussion we derive, in the weak coupling limit, a new master equation for a system subject to periodic driving. Comment: 19 pages. v2: Significantly expanded version. New title. Includes a debate section in response to comments on the previous version, many of which appeared here http://dabacon.org/pontiff/?p=959 and here http://dabacon.org/pontiff/?p=1028. Contains a new derivation of the Markovian master equation with periodic driving.
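
    For context on the weak-coupling master equations at issue here, the standard Lindblad (GKLS) form that rigorous Markovian derivations reduce to can be written as below, where ρ is the system density matrix, H the (renormalized) system Hamiltonian, L_k the jump operators, and γ_k the decay rates; this is the textbook form, not the paper's new periodically driven derivation.

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^\dagger
  - \frac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)
```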

    Benchmarking high fidelity single-shot readout of semiconductor qubits

    Determination of qubit initialisation and measurement fidelity is important for the overall performance of a quantum computer. However, the method by which it is calculated in semiconductor qubits varies between experiments. In this paper we present a full theoretical analysis of electronic single-shot readout and describe the critical parameters for achieving high-fidelity readout. In particular, we derive a model for energy-selective state readout based on a charge detector response and examine how to optimise the fidelity by choosing the correct experimental parameters. Although we focus on single electron spin readout, the theory presented can be applied to other electronic readout techniques in semiconductors that use a reservoir. Comment: 19 pages, 8 figures.
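
    A minimal sketch of the fidelity-versus-threshold trade-off in single-shot charge-detector readout, assuming two Gaussian signal distributions for the two spin states; the means, noise width, and figure of merit below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical charge-detector signal distributions for the two spin states.
mu_down, mu_up = 0.0, 1.0  # mean detector signals (arbitrary units)
sigma = 0.2                # readout noise, assumed equal for both states

thresholds = np.linspace(-0.5, 1.5, 401)
# Error probabilities at each threshold (declare "up" when signal > threshold):
err_up = norm.cdf(thresholds, loc=mu_up, scale=sigma)     # "up" misread as "down"
err_down = norm.sf(thresholds, loc=mu_down, scale=sigma)  # "down" misread as "up"

fidelity = 1.0 - 0.5 * (err_up + err_down)  # average single-shot fidelity
best = np.argmax(fidelity)
print(f"optimal threshold:        {thresholds[best]:.3f}")
print(f"maximum average fidelity: {fidelity[best]:.5f}")
```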

    Neural network agent playing spin Hamiltonian games on a quantum computer

    Quantum computing is expected to provide promising new approaches for solving the most challenging problems in materials science, communication, search, machine learning, and other domains. However, due to decoherence and gate-imperfection errors, modern quantum computer systems are characterized by a very complex, dynamical, uncertain, and fluctuating computational environment. We develop an autonomous agent that interacts effectively with such an environment to solve magnetism problems. Using reinforcement learning, the agent is trained to find the best possible approximation of a spin Hamiltonian ground state from self-play on quantum devices. We show that the agent can learn entanglement to imitate the ground state of a quantum spin dimer. The experiments were conducted on quantum computers provided by IBM. To compensate for decoherence, we use a local spin correction procedure derived from a general sum rule for spin-spin correlation functions of a quantum system with an even number of antiferromagnetically coupled spins in the ground state. Our study paves the way to a new family of neural-network eigensolvers for quantum computers. Comment: A local spin correction procedure was used to compensate for real-device errors; a comparison with a variational approach was added.
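
    The spin-dimer ground state the agent learns to imitate can be checked exactly with a few lines of linear algebra; this sketch (not the paper's code) diagonalizes the antiferromagnetic Heisenberg dimer and evaluates the spin-spin correlation that enters such sum rules.

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices divided by 2)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def two_site(op1, op2):
    """Operator acting on a two-spin Hilbert space."""
    return np.kron(op1, op2)

# Antiferromagnetic Heisenberg dimer: H = J * S1 . S2 with J > 0
J = 1.0
s1_dot_s2 = sum(two_site(s, s) for s in (sx, sy, sz))
H = J * s1_dot_s2

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]  # singlet ground state with energy -3J/4
corr = np.real(gs.conj() @ s1_dot_s2 @ gs)

print(f"ground-state energy: {evals[0]:.4f}  (expected -0.75 J)")
print(f"<S1.S2> in ground state: {corr:.4f}  (expected -0.75)")
```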

    Measurement of fault latency in a digital avionic miniprocessor

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are presented. The failure detection coverage of comparison-monitoring and a typical avionics CPU self-test program was determined. The specific tasks and experiments included: (1) inject randomly selected gate-level and pin-level faults and emulate six software programs using comparison-monitoring to detect the faults; (2) based upon the derived empirical data, develop and validate a model of fault latency that will forecast a software program's detecting ability; (3) given a typical avionics self-test program, inject randomly selected faults at both the gate level and pin level and determine the proportion of faults detected; (4) determine why faults were undetected; (5) recommend how the emulation can be extended to multiprocessor systems such as SIFT; and (6) determine the proportion of faults detected by a uniprocessor BIT (built-in test) irrespective of self-test.
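
    Coverage figures from injection campaigns like this carry statistical uncertainty; a minimal sketch, assuming hypothetical injection counts, of estimating detection coverage with a Wilson score confidence interval.

```python
import math

def wilson_interval(detected, injected, z=1.96):
    """95% Wilson score interval for a detection-coverage proportion."""
    p = detected / injected
    denom = 1 + z**2 / injected
    center = (p + z**2 / (2 * injected)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / injected
                                   + z**2 / (4 * injected**2))
    return center - half, center + half

detected, injected = 912, 1000  # hypothetical gate-level injection results
lo, hi = wilson_interval(detected, injected)
print(f"coverage estimate: {detected / injected:.3f}")
print(f"95% confidence interval: [{lo:.3f}, {hi:.3f}]")
```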

    Semiconductor technology program. Progress briefs

    The current status of NBS work on measurement technology for semiconductor materials, process control, and devices is reported. Results of both in-house and contract research are covered. Highlighted activities include modeling of diffusion processes, analysis of model spreading resistance data, and studies of resonance ionization spectroscopy, resistivity-dopant density relationships in p-type silicon, deep level measurements, photoresist sensitometry, random fault measurements, power MOSFET thermal characteristics, power transistor switching characteristics, and gross leak testing. New and selected ongoing projects are described. Compilations of recent publications and publications in press are included.
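
    The resistivity-dopant density relationship for p-type silicon mentioned above can be sketched with a standard empirical mobility fit; the Caughey-Thomas parameter values below are common textbook numbers used only for illustration, not NBS's calibrated data.

```python
Q = 1.602e-19  # elementary charge (C)

def hole_mobility(na):
    """Caughey-Thomas-style fit for hole mobility in silicon at 300 K (cm^2/V/s).
    Parameter values are common textbook numbers, assumed for illustration."""
    mu_min, mu_max, n_ref, alpha = 54.3, 470.5, 2.35e17, 0.88
    return mu_min + (mu_max - mu_min) / (1 + (na / n_ref) ** alpha)

def resistivity(na):
    """Resistivity of p-type silicon (ohm-cm), assuming full ionization (p = Na)."""
    return 1.0 / (Q * na * hole_mobility(na))

for na in (1e15, 1e16, 1e17, 1e18):
    print(f"Na = {na:.0e} cm^-3 -> rho = {resistivity(na):.3f} ohm-cm")
```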

    Evaluating the Impact of Transition Delay Faults in GPUs

    This work proposes a method to evaluate the effects of transition delay faults (TDFs) in GPUs. The method takes advantage of low-level (i.e., RT- and gate-level) descriptions of a GPU to evaluate the effects of transition delay faults, thus paving the way to modeling them as errors at the instruction level, which can contribute to the resilience evaluation of large and complex applications. For this purpose, the paper describes a setup that efficiently simulates transition delay faults. The results allow us to compare their effects with stuck-at faults (SAFs) and perform an error classification correlating these faults with instruction-level errors. We resort to an open-source model of a GPU (FlexGripPlus) and a set of workloads for the evaluation. The experimental results show that, depending on the application's code style, TDFs can compromise the operation of an application from 1.3 to 11.63 times less than SAFs. Moreover, for all the analyzed applications, a considerable percentage of fault sites in the Integer (5.4% to 51.7%), Floating-Point (0.9% to 2.4%), and Special Function units (17.0% to 35.6%) can become critical if affected by a SAF or TDF. Finally, a correlation between the impact of both fault models and the instructions executed by the applications reveals that, for all units, SAFs in the functional units are more prone (from 45.6% to 60.4%) to propagate errors to the software level than TDFs (from 17.9% to 58.8%).
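
    A minimal sketch of the kind of outcome classification used to correlate fault-model effects with instruction-level errors; the categories and comparison logic are generic fault-injection practice, assumed here rather than taken from the paper's setup.

```python
from collections import Counter

def classify_outcome(golden_output, faulty_output, crashed, timed_out):
    """Classify one fault-injection run against a fault-free (golden) run."""
    if crashed:
        return "DUE"     # detected unrecoverable error (crash/exception)
    if timed_out:
        return "HANG"
    if faulty_output != golden_output:
        return "SDC"     # silent data corruption
    return "MASKED"      # fault had no observable effect

# Hypothetical campaign: tally outcomes per fault model.
runs = [
    ("SAF", classify_outcome([1, 2, 3], [1, 2, 7], False, False)),
    ("SAF", classify_outcome([1, 2, 3], [1, 2, 3], False, False)),
    ("TDF", classify_outcome([1, 2, 3], None, True, False)),
]
for (model, outcome), count in sorted(Counter(runs).items()):
    print(f"{model}: {outcome} x{count}")
```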