
    Fault Tolerant Adaptive Parallel and Distributed Simulation through Functional Replication

    This paper presents FT-GAIA, a software-based fault-tolerant parallel and distributed simulation middleware. FT-GAIA has been designed to reliably handle Parallel and Distributed Simulation (PADS) models, which are needed to properly simulate and analyze the complex systems that arise in many scientific and engineering fields. PADS takes advantage of multiple execution units running on multicore processors, clusters of workstations, or HPC systems. However, large computing systems, such as HPC systems comprising hundreds of thousands of computing nodes, must cope with frequent failures of some of their components. To address this issue, FT-GAIA transparently replicates simulation entities and distributes them over multiple execution nodes, allowing the simulation to tolerate crash failures of computing nodes. Moreover, FT-GAIA offers some protection against Byzantine failures: interaction messages among the simulated entities are replicated as well, so that the receiving entity can identify and discard corrupted messages. Results from an analytical model and from an experimental evaluation show that FT-GAIA provides a high degree of fault tolerance at the cost of a moderate increase in the computational load of the execution units.
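
    The message-replication scheme lends itself to a compact illustration. Below is a minimal Python sketch, with hypothetical names throughout (this is not FT-GAIA's actual API), of how a receiving entity can majority-vote over the replicated copies of an interaction message and discard corrupted ones:

        from collections import Counter

        REPLICATION_DEGREE = 3  # each logical simulation entity runs as 3 replicas

        def vote_on_copies(copies):
            """Majority-vote over the copies of a message produced by the
            sender's replicas; minority (corrupted) copies are discarded."""
            if not copies:
                return None
            payload, count = Counter(copies).most_common(1)[0]
            # Accept only on a strict majority of the replicas.
            return payload if count > REPLICATION_DEGREE // 2 else None

        # One copy was corrupted in transit; the receiver still recovers the message.
        copies = ["state_update:42", "state_update:42", "st@te_upd@te"]
        assert vote_on_copies(copies) == "state_update:42"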

    What is a quantum computer, and how do we build one?

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models (paradigms) for quantum computing have been proposed that do not seem to fit the criteria well. This raises the question of what the general criteria for implementing a quantum computer are. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following four criteria: any quantum computer must (1) have a quantum memory; (2) facilitate a controlled quantum evolution of the quantum memory; (3) include a method for cooling the quantum memory; and (4) provide a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault-tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we lay out a roadmap for selecting an avenue towards building a quantum computer, summarized in a decision tree intended to help experimentalists determine the most natural paradigm for a given physical implementation.
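
    Purely as an illustrative reading aid (the paper itself gives a formal operational definition), the four criteria can be treated as a checklist; the Python sketch below uses hypothetical names:

        from dataclasses import dataclass

        @dataclass
        class CandidateDevice:
            quantum_memory: bool           # (1) has a quantum memory
            controlled_evolution: bool     # (2) controlled quantum evolution of the memory
            cooling_method: bool           # (3) a method for cooling the quantum memory
            subset_readout: bool           # (4) readout of subsets of the quantum memory
            scalable_fault_tolerant: bool  # the regime in which the criteria are met

        def is_quantum_computer(d: CandidateDevice) -> bool:
            return all((d.quantum_memory, d.controlled_evolution,
                        d.cooling_method, d.subset_readout,
                        d.scalable_fault_tolerant))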

    Resource Requirements for Fault-Tolerant Quantum Simulation: The Transverse Ising Model Ground State

    We estimate the resource requirements, the total number of physical qubits and the computational time, needed to compute the ground-state energy of a 1-D quantum Transverse Ising Model (TIM) of N spin-1/2 particles, as a function of the system size and the numerical precision. This estimate is based on analyzing the impact of fault-tolerant quantum error correction in the context of the Quantum Logic Array (QLA) architecture. Our results show that, due to the exponential scaling of the computational time with the desired precision of the energy, a significant amount of error correction is required to implement the TIM problem. A comparison of our results with the resource requirements for a fault-tolerant implementation of Shor's quantum factoring algorithm reveals that the required logical qubit reliability is similar for both the TIM problem and the factoring problem.
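
    For orientation, the target quantity, the TIM ground-state energy, can be cross-checked classically for very small N by exact diagonalization. The NumPy sketch below is illustrative only (open boundary conditions, J and h as free parameters); the paper's resource estimates concern system sizes far beyond the reach of such methods:

        import numpy as np

        I2 = np.eye(2)
        X = np.array([[0., 1.], [1., 0.]])
        Z = np.array([[1., 0.], [0., -1.]])

        def site_op(op, site, n):
            """Embed a single-spin operator at position `site` in an n-spin chain."""
            out = np.array([[1.0]])
            for i in range(n):
                out = np.kron(out, op if i == site else I2)
            return out

        def tim_ground_energy(n, J=1.0, h=1.0):
            """Ground-state energy of H = -J sum_i Z_i Z_{i+1} - h sum_i X_i."""
            H = np.zeros((2**n, 2**n))
            for i in range(n - 1):
                H -= J * site_op(Z, i, n) @ site_op(Z, i + 1, n)
            for i in range(n):
                H -= h * site_op(X, i, n)
            return np.linalg.eigvalsh(H)[0]

        print(tim_ground_energy(8))  # tractable classically only for small n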

    Gates for the Kane Quantum Computer in the Presence of Dephasing

    In this paper we investigate the effect of dephasing on proposed quantum gates for the solid-state Kane quantum computing architecture. Using a simple model of the decoherence, we find that the typical error in a CNOT gate is 8.3 × 10⁻⁵. We also compute the fidelities of Z, X, Swap, and Controlled-Z operations under a variety of dephasing rates. We show that these numerical results are comparable with the error threshold required for fault-tolerant quantum computation.
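
    For intuition about how a dephasing probability translates into a gate error of this order, here is a minimal single-qubit Python sketch using a generic phase-damping channel (an illustrative assumption, not the paper's Kane-specific decoherence model):

        import numpy as np

        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def average_gate_fidelity(kraus_ops, U):
            """Nielsen's formula: F_avg = (d*F_pro + 1)/(d + 1), with
            F_pro = (1/d^2) * sum_i |Tr(U^dag K_i)|^2."""
            d = U.shape[0]
            f_pro = sum(abs(np.trace(U.conj().T @ K))**2 for K in kraus_ops) / d**2
            return (d * f_pro + 1) / (d + 1)

        def dephased(U, p):
            """Kraus operators for 'apply U, then dephase with probability p'."""
            return [np.sqrt(1 - p) * U, np.sqrt(p) * (Z @ U)]

        U = np.eye(2, dtype=complex)  # an idle ('memory') step
        for p in (1e-5, 1e-4, 1e-3):
            print(p, 1 - average_gate_fidelity(dephased(U, p), U))  # error ~ 2p/3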

    Holonomic quantum computing in symmetry-protected ground states of spin chains

    While solid-state devices offer naturally reliable hardware for modern classical computers, quantum information processors thus far resemble vacuum-tube computers in being neither reliable nor scalable. Strongly correlated many-body states stabilized in topologically ordered matter offer the possibility of naturally fault-tolerant computing, but they are challenging to engineer and to control coherently, and cannot easily be adapted to different physical platforms. We propose an architecture that achieves some of the robustness properties of topological models but with a drastically simpler construction. Quantum information is stored in the symmetry-protected degenerate ground states of spin-1 chains, while quantum gates are performed by adiabatic non-Abelian holonomies using only single-site fields and nearest-neighbor couplings. Gate operations respect the symmetry, and so inherit some protection from noise and disorder from the symmetry-protected ground states.
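
    The qubit here lives in the nearly degenerate edge-state manifold of a symmetry-protected (Haldane-phase) spin-1 chain. That degeneracy can be made visible with a small exact diagonalization; the NumPy sketch below uses the standard open spin-1 Heisenberg chain as a stand-in (an assumption for illustration, not necessarily the paper's exact Hamiltonian):

        import numpy as np

        Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)  # spin-1 raising operator S+
        Sx = (Sp + Sp.T) / 2
        Sy = (Sp - Sp.T) / 2j
        Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

        def site_op(op, site, n):
            """Embed a single-site operator at position `site` in an n-site chain."""
            out = np.array([[1.0 + 0j]])
            for i in range(n):
                out = np.kron(out, op if i == site else np.eye(3))
            return out

        def haldane_chain(n):
            """Open spin-1 Heisenberg chain: H = sum_i S_i . S_{i+1}."""
            H = np.zeros((3**n, 3**n), dtype=complex)
            for i in range(n - 1):
                for S in (Sx, Sy, Sz):
                    H += site_op(S, i, n) @ site_op(S, i + 1, n)
            return H

        levels = np.linalg.eigvalsh(haldane_chain(6))
        print(levels[:6] - levels[0])  # four low-lying levels, then the Haldane gap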

    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to NASA Langley Research Center's (NASA/LaRC) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep-space probes due to its high cost and complexity. The MAX concept appears to be a promising candidate, although more detailed information about it is required. The feasibility of adding neural computation capability to this architecture also needs to be studied. Key impact issues for the architectural design of computing systems intended for planetary missions were also identified.