
    On Protected Realizations of Quantum Information

    There are two complementary approaches to realizing quantum information so that it is protected from a given set of error operators. Both involve encoding information by means of subsystems. One is initialization-based error protection, which involves a quantum operation that is applied before error events occur. The other is operator quantum error correction, which uses a recovery operation applied after the errors. Together, the two approaches make it clear how quantum information can be stored at all stages of a process involving alternating error and quantum operations. In particular, there is always a subsystem that faithfully represents the desired quantum information. We give a definition of faithful realization of quantum information and show that it always involves subsystems. This justifies the "subsystems principle" for realizing quantum information. In the presence of errors, one can make use of noiseless, (initialization) protectable, or error-correcting subsystems. We give an explicit algorithm for finding optimal noiseless subsystems. Finding optimal protectable or error-correcting subsystems is in general difficult. Verifying that a subsystem is error-correcting involves only linear algebra. We discuss the verification problem for protectable subsystems and reduce it to a simpler version of the problem of finding error-detecting codes.
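    The remark that verifying an error-correcting subsystem "involves only linear algebra" can be made concrete. For an ordinary (subspace) code with projector $P$, correctability of an error set $\{E_i\}$ is equivalent to the Knill-Laflamme conditions $P E_i^\dagger E_j P = \lambda_{ij} P$. Below is a minimal numerical sketch of that check, using the three-qubit bit-flip code against single bit flips as a hypothetical example; it is not the paper's algorithm, just the linear-algebra test the abstract alludes to.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Codewords of the 3-qubit bit-flip code: |000> and |111>
e0 = np.zeros(8, dtype=complex); e0[0] = 1.0
e1 = np.zeros(8, dtype=complex); e1[7] = 1.0
P = np.outer(e0, e0.conj()) + np.outer(e1, e1.conj())  # code projector

# Error set: identity plus a bit flip on each qubit
errors = [kron_all([I2, I2, I2])]
for k in range(3):
    ops = [I2, I2, I2]
    ops[k] = X
    errors.append(kron_all(ops))

def is_correctable(P, errors, tol=1e-10):
    """Knill-Laflamme check: P Ei^dag Ej P must be proportional to P."""
    for Ei, Ej in product(errors, repeat=2):
        M = P @ Ei.conj().T @ Ej @ P
        lam = np.trace(M) / np.trace(P)  # candidate constant lambda_ij
        if not np.allclose(M, lam * P, atol=tol):
            return False
    return True

print(is_correctable(P, errors))  # True: single bit flips are correctable
```

    For subsystem codes the right-hand side generalizes from $\lambda_{ij} P$ to an operator acting only on the co-subsystem (gauge) factor, but the test remains a finite linear-algebra computation.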

    Zeno effect for quantum computation and control

    It is well known that the quantum Zeno effect can protect specific quantum states from decoherence by using projective measurements. Here we combine the theory of weak measurements with stabilizer quantum error correction and detection codes. We derive rigorous performance bounds which demonstrate that the Zeno effect can be used to protect appropriately encoded arbitrary states to arbitrary accuracy, while at the same time allowing for universal quantum computation or quantum control.
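    As a much simplified illustration of the underlying mechanism, the sketch below models the textbook Zeno effect: a qubit prepared in $|+\rangle$ accumulates a coherent phase error, and frequent projective measurements onto $|+\rangle$ pin it there. This uses projective rather than weak measurements and protects a single known state rather than an encoded arbitrary one, so it is a toy version of the paper's stabilizer-based scheme; all parameter values are arbitrary.

```python
import numpy as np

def zeno_survival(n_meas, omega=3.0, total_time=1.0):
    """Toy Zeno effect: a qubit starts in |+> and undergoes a coherent
    Z rotation at rate omega (a systematic phase error), interrupted by
    n_meas equally spaced projective measurements onto |+>. Returns the
    probability that every measurement, including the final one, finds
    the qubit still in |+>."""
    n = max(1, n_meas)
    phase_per_interval = omega * total_time / n
    p_interval = np.cos(phase_per_interval / 2.0) ** 2  # per-interval success
    return p_interval ** n

for n in (1, 4, 16, 64, 256):
    print(n, round(zeno_survival(n), 4))
# The failure probability scales as 1/n: measuring more often protects better.
```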

    Scaling the neutral atom Rydberg gate quantum computer by collective encoding in Holmium atoms

    We discuss a method for scaling a neutral atom Rydberg gate quantum processor to a large number of qubits. Limits are derived showing that the number of qubits that can be directly connected by entangling gates with errors at the $10^{-3}$ level, using long-range Rydberg interactions between sites in an optical lattice without mechanical motion or swap chains, is about 500 in two dimensions and 7500 in three dimensions. A scaling factor of 60 at a smaller number of sites can be obtained using collective register encoding in the hyperfine ground states of the rare-earth atom holmium. We present a detailed analysis of the operation of the 60-qubit register in holmium. Combining a lattice of multi-qubit ensembles with collective encoding results in a feasible design for a 1000-qubit fully connected quantum processor.
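    The gap between the 2D and 3D limits is, at heart, geometric: the number of lattice sites reachable within a fixed maximum gate distance grows like $r^2$ in two dimensions and $r^3$ in three. The toy count below makes this concrete; the range value of 12 lattice constants is chosen purely for illustration (it happens to reproduce numbers of the right magnitude) and is not derived from the paper's error model.

```python
import itertools

def sites_within_range(r_max, dims):
    """Count cubic-lattice sites (excluding the origin) within Euclidean
    distance r_max (in lattice constants) of the origin."""
    r = int(r_max)
    axes = [range(-r, r + 1)] * dims
    return sum(
        1
        for p in itertools.product(*axes)
        if any(p) and sum(x * x for x in p) <= r_max ** 2
    )

# Hypothetical maximum gate range of 12 lattice constants:
for d in (2, 3):
    print(f"{d}D sites in range:", sites_within_range(12, d))
# ~450 sites in 2D vs ~7200 in 3D, the same order as the 500 / 7500 limits.
```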

    Loss tolerant linear optical quantum memory by measurement-based quantum computing

    We give a loss-tolerant scheme for building a linear optical quantum memory that is itself tolerant to qubit loss. We use the encoding recently introduced in Varnava et al 2006 Phys. Rev. Lett. 97 120501 and give a method for achieving it efficiently. The entire approach resides within the 'one-way' model for quantum computing (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 5188–91; Raussendorf et al 2003 Phys. Rev. A 68 022312). Our results suggest that it is possible to build a loss-tolerant quantum memory such that, if the requirement is to keep the data stored over arbitrarily long times, this can be achieved with only polynomially increasing resources and logarithmically increasing individual photon lifetimes.

    The Stability of Quantum Concatenated Code Hamiltonians

    Protecting quantum information from the detrimental effects of decoherence and lack of precise quantum control is a central challenge that must be overcome if a large robust quantum computer is to be constructed. The traditional approach to achieving this is via active quantum error correction using fault-tolerant techniques. An alternative to this approach is to engineer strongly interacting many-body quantum systems that enact the quantum error correction via the natural dynamics of these systems. Here we present a method for achieving this based on the concept of concatenated quantum error correcting codes. We define a class of Hamiltonians whose ground states are concatenated quantum codes and whose energy landscape naturally causes quantum error correction. We analyze these Hamiltonians for robustness and suggest methods for implementing these highly unnatural Hamiltonians.
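    The basic construction behind "Hamiltonians whose ground states are quantum codes" can be illustrated at a single level of concatenation: take minus the sum of a code's stabilizer generators, so the ground space is exactly the simultaneous +1 eigenspace, i.e. the codespace, and errors cost energy. The sketch below does this for the three-qubit repetition code; the paper's concatenated, gap-stabilized construction is considerably more involved, so treat this only as the zeroth step.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    return reduce(np.kron, ops)

# Stabilizer generators of the 3-qubit repetition code
S1 = kron_all([Z, Z, I2])  # Z1 Z2
S2 = kron_all([I2, Z, Z])  # Z2 Z3

# Code Hamiltonian: ground space = simultaneous +1 eigenspace = codespace
H = -(S1 + S2)

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, np.isclose(vals, vals.min())]
print("ground-space dimension:", ground.shape[1])  # 2, spanned by |000>, |111>
print("energy levels:", np.unique(np.round(vals, 6)))  # [-2, 0, 2]: gapped
```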

    Using gene expression profiles from peripheral blood to identify asymptomatic responses to acute respiratory viral infections

    Background: A recent study reported that gene expression profiles from peripheral blood samples of healthy subjects prior to viral inoculation were indistinguishable from profiles of subjects who received viral challenge but remained asymptomatic and uninfected. If true, this implies that the host immune response does not have a molecular signature. Given the high sensitivity of microarray technology, we were intrigued by this result and hypothesized that it was an artifact of data analysis. Findings: Using acute respiratory viral challenge microarray data, we developed a molecular signature that for the first time allowed for an accurate differentiation between uninfected subjects prior to viral inoculation and subjects who remained asymptomatic after the viral challenge. Conclusions: Our findings suggest that molecular signatures can be used to characterize immune responses to viruses and may improve our understanding of susceptibility to viral infection, with possible implications for vaccine development.
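    For readers unfamiliar with this kind of analysis, the sketch below shows the generic shape of the experiment: a regularized classifier trained on expression profiles and scored by cross-validation to test whether two groups are molecularly distinguishable. The data here are synthetic and the pipeline is a generic stand-in, not the authors' method or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for expression data: 40 subjects x 1000 genes.
# Group 0 mimics pre-inoculation baselines, group 1 mimics asymptomatic
# post-challenge subjects with a weak shift in 20 "response" genes.
n_per_group, n_genes = 20, 1000
X = rng.standard_normal((2 * n_per_group, n_genes))
y = np.array([0] * n_per_group + [1] * n_per_group)
X[y == 1, :20] += 0.8  # subtle molecular signature

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
# Well above 0.5 here: even a weak signature is detectable when present.
```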

    Analysis and Computational Dissection of Molecular Signature Multiplicity

    Molecular signatures are computational or mathematical models created to diagnose disease and other phenotypes and to predict clinical outcomes and response to treatment. It is widely recognized that molecular signatures constitute one of the most important translational and basic science developments enabled by recent high-throughput molecular assays. A perplexing phenomenon that characterizes high-throughput data analysis is the ubiquitous multiplicity of molecular signatures. Multiplicity is a special form of data analysis instability in which different analysis methods used on the same data, or different samples from the same population, lead to different but apparently maximally predictive signatures. This phenomenon has far-reaching implications for biological discovery and the development of next-generation patient diagnostics and personalized treatments. Currently the causes and interpretation of signature multiplicity are unknown, and several, often contradictory, conjectures have been made to explain it. We present a formal characterization of signature multiplicity and a new efficient algorithm that offers theoretical guarantees for extracting the set of maximally predictive and non-redundant signatures independent of the data distribution. The new algorithm identifies exactly the set of optimal signatures in controlled experiments and yields signatures with significantly better predictivity and reproducibility than previous algorithms in human microarray gene expression datasets. Our results shed light on the causes of signature multiplicity, provide computational tools for studying it empirically, and introduce a framework for in silico bioequivalence of this important new class of diagnostic and personalized medicine modalities.
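    Signature multiplicity is easy to reproduce in miniature: when two features carry essentially the same signal, single-feature signatures built from either one are near-equally predictive, so different analysis runs can legitimately return different "optimal" signatures. The sketch below demonstrates this with synthetic data; it illustrates the phenomenon only and is unrelated to the paper's algorithm, which comes with formal guarantees.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
signal = rng.standard_normal(n)

# Two nearly interchangeable "genes" carrying the same signal, plus noise genes.
g1 = signal + 0.1 * rng.standard_normal(n)
g2 = signal + 0.1 * rng.standard_normal(n)
X = np.column_stack([g1, g2, rng.standard_normal((n, 8))])
y = (signal + 0.3 * rng.standard_normal(n) > 0).astype(int)

clf = LogisticRegression()
for name, cols in [("gene 1 alone", [0]), ("gene 2 alone", [1])]:
    acc = cross_val_score(clf, X[:, cols], y, cv=5).mean()
    print(name, "accuracy:", acc.round(3))
# Both one-gene signatures score nearly identically: multiplicity in action.
```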

    Scalability of quantum computation with addressable optical lattices

    We make a detailed analysis of error mechanisms, gate fidelity, and scalability of proposals for quantum computation with neutral atoms in addressable (large lattice constant) optical lattices. We have identified possible limits to the size of quantum computations, arising in 3D optical lattices from current limitations on the ability to perform single qubit gates in parallel and in 2D lattices from constraints on laser power. Our results suggest that 3D arrays as large as 100 x 100 x 100 sites (i.e., $\sim 10^6$ qubits) may be achievable, provided two-qubit gates can be performed with sufficiently high precision and degree of parallelizability. Parallelizability of long range interaction-based two-qubit gates is qualitatively compared to that of collisional gates. Different methods of performing single qubit gates are compared, and a lower bound of $1 \times 10^{-5}$ is determined on the error rate for the error mechanisms affecting $^{133}$Cs in a blue-detuned lattice with Raman transition-based single qubit gates, given reasonable limits on experimental parameters.

    Polynomial-time algorithm for simulation of weakly interacting quantum spin systems

    We describe an algorithm that computes the ground state energy and correlation functions for 2-local Hamiltonians in which interactions between qubits are weak compared to single-qubit terms. The running time of the algorithm is polynomial in the number of qubits and the required precision. Specifically, we consider Hamiltonians of the form $H = H_0 + \epsilon V$, where $H_0$ describes non-interacting qubits, $V$ is a perturbation that involves arbitrary two-qubit interactions on a graph of bounded degree, and $\epsilon$ is a small parameter. The algorithm works if $|\epsilon|$ is below a certain threshold value that depends only upon the spectral gap of $H_0$, the maximal degree of the graph, and the maximal norm of the two-qubit interactions. The main technical ingredient of the algorithm is a generalized Kirkwood-Thomas ansatz for the ground state. The parameters of the ansatz are computed using perturbative expansions in powers of $\epsilon$. Our algorithm is closely related to the coupled cluster method used in quantum chemistry.
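    The regime the algorithm targets is one where low-order perturbation theory is already accurate, which can be checked directly on small instances. The sketch below compares exact diagonalization of $H = H_0 + \epsilon V$ for a short qubit chain against second-order Rayleigh-Schrodinger perturbation theory for the ground energy; it is a brute-force sanity check on a toy instance, not the paper's polynomial-time Kirkwood-Thomas construction.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(site_ops, n):
    """Tensor product acting with the given single-qubit ops at given sites."""
    ops = [I2] * n
    for s, o in site_ops:
        ops[s] = o
    return reduce(np.kron, ops)

n, eps = 6, 0.05
H0 = -sum(op_on([(i, Z)], n) for i in range(n))  # non-interacting, gapped
V = sum(op_on([(i, X), (i + 1, X)], n) for i in range(n - 1))  # 2-local chain

exact = np.linalg.eigvalsh(H0 + eps * V).min()

# Second-order perturbation theory around the unique ground state |0...0>
vals0, vecs0 = np.linalg.eigh(H0)
psi0, e0 = vecs0[:, 0], vals0[0]
v0 = vecs0.conj().T @ V @ psi0  # matrix elements <k|V|0>
mask = ~np.isclose(vals0, e0)
e2 = np.sum(np.abs(v0[mask]) ** 2 / (e0 - vals0[mask]))
pt = e0 + eps * (psi0 @ V @ psi0) + eps**2 * e2

print("exact:", exact, " 2nd-order PT:", pt)  # agree to O(eps^3)
```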

    Fidelity of a Rydberg blockade quantum gate from simulated quantum process tomography

    We present a detailed error analysis of a Rydberg blockade mediated controlled-NOT quantum gate between two neutral atoms as demonstrated recently in Phys. Rev. Lett. 104, 010503 (2010) and Phys. Rev. A 82, 030306 (2010). Numerical solutions of a master equation for the gate dynamics, including all known sources of technical error, are shown to be in good agreement with experiments. The primary sources of gate error are identified and suggestions given for future improvements. We also present numerical simulations of quantum process tomography to find the intrinsic fidelity, neglecting technical errors, of a Rydberg blockade controlled phase gate. The gate fidelity is characterized using trace overlap and trace distance measures. We show that the trace distance is linearly sensitive to errors arising from the finite Rydberg blockade shift and introduce a modified pulse sequence which corrects the linear errors. Our analysis shows that the intrinsic gate error extracted from simulated quantum process tomography can be under 0.002 for specific states of $^{87}$Rb or Cs atoms. The relation between the process fidelity and the gate error probability used in calculations of fault tolerance thresholds is discussed.
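    A unitary toy model shows how fidelity measures respond to a residual blockade phase. With a finite blockade shift the $|11\rangle$ component picks up a spurious phase $\delta$ on top of the ideal $\pi$; the average gate fidelity (computed from the trace overlap via Nielsen's formula for unitaries) degrades only quadratically in $\delta$, consistent with the abstract's point that trace-distance measures, which are linearly sensitive, resolve such errors better. The model and numbers are illustrative, not the paper's master-equation simulation.

```python
import numpy as np

def avg_gate_fidelity(U_ideal, U_actual):
    """Average gate fidelity between two unitaries via the trace overlap,
    F = (|Tr(U^dag V)|^2 + d) / (d (d + 1))."""
    d = U_ideal.shape[0]
    tr = np.trace(U_ideal.conj().T @ U_actual)
    return (abs(tr) ** 2 + d) / (d * (d + 1))

CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Toy blockade error: the |11> phase is pi + delta instead of exactly pi.
for delta in (0.0, 0.05, 0.1, 0.2):
    U = np.diag([1, 1, 1, np.exp(1j * (np.pi + delta))])
    print(f"delta={delta}: gate error {1 - avg_gate_fidelity(CZ, U):.2e}")
# Error grows ~ 0.15 * delta^2: quadratic, hence easy to miss with fidelity.
```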