
    Fault-tolerant quantum computation

    Recently, it was realized that the properties of quantum mechanics could be used to speed up certain computations dramatically. Interest in quantum computation has since been growing. One of the main difficulties in realizing quantum computation is that decoherence tends to destroy the information in a superposition of states in a quantum computer, making long computations impossible. A further difficulty is that inaccuracies in quantum state transformations accumulate throughout the computation, rendering the output of long computations unreliable. It was previously known that a quantum circuit with t gates could tolerate O(1/t) amounts of inaccuracy and decoherence per gate. We show, for any quantum computation with t gates, how to build a polynomial-size quantum circuit that can tolerate O(1/(log t)^c) amounts of inaccuracy and decoherence per gate, for some constant c. We do this by showing how to compute using quantum error-correcting codes, which were previously known to provide resistance to errors while storing and transmitting quantum data.
    Comment: LaTeX, 11 pages, no figures, in 37th Symposium on Foundations of Computer Science, IEEE Computer Society Press, 1996, pp. 56-6
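
    The construction computes on data encoded in quantum error-correcting codes. As a hedged illustration of why encoding helps at all, here is a minimal classical sketch of the 3-bit repetition code, the simplest ancestor of the codes the abstract mentions (the quantum case additionally needs syndrome measurements; the names and parameters below are ours, not the paper's):

```python
import random

def encode(bit):
    # Encode one logical bit as three physical bits.
    return [bit, bit, bit]

def apply_noise(codeword, p):
    # Each physical bit flips independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote corrects any single bit flip.
    return int(sum(codeword) >= 2)

p, trials = 0.05, 100_000
raw = sum(random.random() < p for _ in range(trials))
coded = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
print(f"unencoded error rate ~ {raw / trials:.4f}")    # ~ p = 0.05
print(f"encoded error rate   ~ {coded / trials:.4f}")  # ~ 3p^2 - 2p^3 ~ 0.007
```

    Computing directly on encoded states in this spirit is what lets the tolerable per-gate error shrink only polylogarithmically in t rather than linearly.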

    Resilience in Numerical Methods: A Position on Fault Models and Methodologies

    Future extreme-scale computer systems may expose silent data corruption (SDC) to applications in order to save energy or increase performance. However, resilience research struggles to come up with useful abstract programming models for reasoning about SDC. Existing work randomly flips bits in running applications, but this only shows average-case behavior for a low-level, artificial hardware model. Algorithm developers need to understand worst-case behavior with the higher-level data types they actually use in order to make their algorithms more resilient. Moreover, we know so little about how SDC may manifest in future hardware that it seems premature to draw conclusions about the average case. We argue instead that numerical algorithms can benefit from a numerical-unreliability fault model, in which faults manifest as unbounded perturbations to floating-point data. Algorithms can use inexpensive "sanity" checks that bound or exclude error in the results of computations. Given a selective-reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. Sanity checks, and in general a healthy skepticism about the correctness of subroutines, are wise even if hardware is perfectly reliable.
    Comment: Position Paper
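
    A hedged sketch of what such a sanity check can look like for a linear solve, under the paper's fault model: the solve may return arbitrarily perturbed floating-point data, so a cheap residual check bounds the error, and an assumed-reliable path is used only when the check fails (function names and the fault-injection rate are illustrative, not from the paper):

```python
import numpy as np

def unreliable_solve(A, b):
    # Solve Ax = b, occasionally injecting an unbounded fault for demonstration.
    x = np.linalg.solve(A, b)
    if np.random.random() < 0.3:
        x[np.random.randint(len(x))] *= 1e6  # silent, unbounded perturbation
    return x

def checked_solve(A, b, tol=1e-8):
    # Selective reliability: cheap sanity check, reliable recompute on failure.
    x = unreliable_solve(A, b)
    if np.linalg.norm(A @ x - b) > tol * np.linalg.norm(b):  # sanity check
        x = np.linalg.solve(A, b)  # assumed to run in the "reliable" regime
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = checked_solve(A, b)
print(x, np.linalg.norm(A @ x - b))  # residual stays small despite faults
```

    The check costs one matrix-vector product, which is cheap relative to the solve; that asymmetry is what makes the skepticism inexpensive.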

    An occam Style Communications System for UNIX Networks

    This document describes the design of a communications system that provides occam-style communication primitives under a Unix environment, using TCP/IP protocols and any number of other protocols deemed suitable as underlying transport layers. The system will integrate with a low-overhead scheduler/kernel without incurring significant costs to the execution of processes within the run-time environment. A survey of relevant occam and occam3 features and related research is followed by a look at the Unix and TCP/IP facilities that determine our working constraints, and by a description of the T9000 transputer's Virtual Channel Processor, which was instrumental in our formulation. Drawing on the information presented here, a design for the communications system is then proposed. Finally, we make a preliminary investigation of methods for lightweight access control to shared resources in an environment that provides no support for critical sections, semaphores, or busy waiting; this is presented with reference to the mutual-exclusion problems that arise within the proposed design. Future directions for the evolution of this project are discussed in conclusion.
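
    As a rough illustration of the primitive being emulated, here is a minimal in-process sketch of an occam-style blocking point-to-point channel (the system described above works across Unix processes over TCP/IP; threads and a capacity-1 queue here only approximate the rendezvous semantics, since a true occam send also waits for the matching receive):

```python
import queue
import threading

class Channel:
    # occam-style point-to-point channel, approximated in-process.
    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def send(self, value):
        self._q.put(value)    # blocks while a previous value is unconsumed

    def receive(self):
        return self._q.get()  # blocks until a sender provides a value

chan = Channel()

def producer():
    for i in range(3):
        chan.send(i)

threading.Thread(target=producer).start()
print([chan.receive() for _ in range(3)])  # [0, 1, 2]
```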

    The Raincore API for clusters of networking elements

    Clustering technology offers a way to increase the overall reliability and performance of Internet information flow by strengthening one link in the chain without adding others. We have implemented this technology in a distributed computing architecture for network elements. The architecture, called Raincore, originated in the Reliable Array of Independent Nodes (RAIN) research collaboration between the California Institute of Technology and the US National Aeronautics and Space Administration's Jet Propulsion Laboratory. The RAIN project focused on developing high-performance, fault-tolerant, portable clustering technology for spaceborne computing. The technology that emerged from this project became the basis for a spinoff company, Rainfinity, which holds the exclusive intellectual property rights to the RAIN technology. The authors describe the Raincore conceptual architecture and distributed services, which are designed to make it easy for developers to port their applications to run on top of a cluster of networking elements. We include two applications: a Web server prototype that was part of the original RAIN research project, and a commercial firewall cluster product from Rainfinity.
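
    The article names the distributed services only at a high level, so the following is a purely hypothetical sketch (not the Raincore API): one core service such clustering layers typically provide is heartbeat-based group membership, which lets surviving nodes absorb a failed peer's share of the traffic.

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds; illustrative value, not from Raincore

class MembershipView:
    # Hypothetical heartbeat-based membership table for a cluster of
    # networking elements; node ids and timeout are invented for illustration.
    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, node_id):
        self.last_seen[node_id] = time.monotonic()

    def live_nodes(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t < HEARTBEAT_TIMEOUT]

view = MembershipView()
view.heartbeat("fw-1")
view.heartbeat("fw-2")
print(view.live_nodes())  # ['fw-1', 'fw-2'] while heartbeats stay fresh
```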

    Noise threshold for universality of 2-input gates

    Evans and Pippenger showed in 1998 that noisy gates with 2 inputs are universal for arbitrary computation (i.e., they can compute any function with bounded error) if all gates fail independently with probability epsilon and epsilon < theta, where theta is roughly 8.856%. We show that formulas built from 2-input gates, in which each gate fails with probability at least theta, cannot be universal. Hence there is a threshold on the tolerable noise for formulas of 2-input gates, and it is theta. We conjecture that the same threshold also holds for circuits.
    Comment: International Symposium on Information Theory, 2007, minor corrections in v
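
    For reference, the threshold quoted above as roughly 8.856% has a closed form in Evans and Pippenger's analysis; we state it here as a note (from the 1998 result, not from this abstract):

```latex
\[
  \theta \;=\; \frac{3-\sqrt{7}}{4} \;\approx\; 0.08856
\]
```

    Formulas of 2-input gates are universal when each gate fails independently with probability epsilon < theta, and, per the result above, cannot be universal when every gate fails with probability at least theta.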