
    Quantum Divide and Compute: Hardware Demonstrations and Noisy Simulations

    Noisy intermediate-scale quantum computers have intrinsic limits on the number of qubits (circuit "width") and the decoherence time (circuit "depth") available. Here, for the first time, we demonstrate a recently introduced method that breaks a circuit into smaller subcircuits, or fragments, making it possible to run circuits that are either too wide or too deep for a given quantum processor. We investigate the behavior of the method on one of IBM's 20-qubit superconducting quantum processors with various numbers of qubits and fragments. We build noise models that capture decoherence, readout error, and gate imperfections for this particular processor, and we then carry out noisy simulations of the method to account for the observed experimental results. We find agreement within 20% between the experimental and simulated success probabilities, and we observe that recombining noisy fragments can yield overall results that outperform those obtained without fragmentation. Comment: Accepted in ISVLSI 202
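
    A minimal sketch of the fragmentation idea (an illustrative toy calculation, not the authors' code or the hardware experiment): a single wire cut replaces the identity channel on the cut wire by a sum over Pauli operators, so expectation values of the full circuit can be recombined linearly from fragment-level results. The assumed example is a two-qubit Bell-state circuit cut between the Hadamard and the CNOT.

```python
import numpy as np

# Single-qubit operators and gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([[1], [0]], dtype=complex)
rho0 = ket0 @ ket0.conj().T  # |0><0|

# Full (uncut) circuit: H on qubit 0, then CNOT(0 -> 1); evaluate <Z Z>.
rho_full = CNOT @ np.kron(H @ rho0 @ H.conj().T, rho0) @ CNOT.conj().T
zz_full = np.trace(np.kron(Z, Z) @ rho_full).real

# Cut the wire of qubit 0 between H and the CNOT.
# Identity on the cut wire: rho = 1/2 * sum_O Tr(O rho) O, with O in {I, X, Y, Z}.
paulis = [I2, X, Y, Z]

# Fragment A: qubit 0 after H; record Tr(O rho_A) for each Pauli O.
rho_A = H @ rho0 @ H.conj().T
coeffs = [np.trace(O @ rho_A).real for O in paulis]

# Fragment B: feed each Pauli O into the CNOT control (qubit 1 starts in |0>)
# and accumulate its weighted <Z Z> contribution.
zz_cut = 0.0
for c, O in zip(coeffs, paulis):
    rho_B = CNOT @ np.kron(O, rho0) @ CNOT.conj().T
    zz_cut += 0.5 * c * np.trace(np.kron(Z, Z) @ rho_B).real

print(zz_full, zz_cut)  # both evaluate to 1.0: recombined fragments match the uncut circuit
```

    On hardware, the Pauli terms are typically realized as basis measurements on one fragment and eigenstate preparations on the other, which is where the fragment-level noise enters.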

    Blueprint for fault-tolerant quantum computation with Rydberg atoms

    We present a blueprint for building a fault-tolerant universal quantum computer with Rydberg atoms. Our scheme, which is based on the surface code, uses individually addressable, optically trapped atoms as qubits and exploits electromagnetically induced transparency to perform the multiqubit gates required for error correction and computation. We discuss the advantages and challenges of using Rydberg atoms to build such a quantum computer, and we perform error correction simulations to obtain an error threshold for our scheme. Our findings suggest that Rydberg atoms are a promising candidate for quantum computation, but gate fidelities need to improve before fault-tolerant universal quantum computation can be achieved.
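
    The threshold quoted in the paper comes from simulating the surface code under a Rydberg-specific error model. As a much simpler illustration of how a threshold is read off from such simulations, the sketch below runs a Monte Carlo for a bit-flip repetition code with majority-vote decoding (a toy stand-in, not the scheme simulated in the paper): below the toy code's threshold, increasing the code distance suppresses the logical error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(distance, p_phys, trials=20000):
    """Monte Carlo estimate of the logical error rate of a distance-d
    repetition code under i.i.d. bit flips with majority-vote decoding."""
    flips = rng.random((trials, distance)) < p_phys
    # A logical error occurs when more than half of the physical bits flip.
    return np.mean(flips.sum(axis=1) > distance // 2)

# Sweep physical error rates for a few distances: below threshold, larger
# distances give lower logical error rates; above it, the ordering reverses.
for d in (3, 5, 7):
    rates = [logical_error_rate(d, p) for p in (0.05, 0.1, 0.2, 0.4)]
    print(d, [f"{r:.3f}" for r in rates])
```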

    Comparing the Overhead of Topological and Concatenated Quantum Error Correction

    This work compares the overhead of quantum error correction with concatenated and topological quantum error-correcting codes. To perform a numerical analysis, we use the Quantum Resource Estimator Toolbox (QuRE) that we recently developed. We use QuRE to estimate the number of qubits, number of quantum gates, and amount of time needed to factor a 1024-bit number on several candidate quantum technologies that differ in their clock speed and reliability. We make several interesting observations. First, topological quantum error correction requires fewer resources when physical gate error rates are high, while concatenated codes have smaller overhead for physical gate error rates below approximately 10^-7. Consequently, we show that different error-correcting codes should be chosen for two of the studied physical quantum technologies: ion traps and superconducting qubits. Second, we observe that the composition of the elementary gate types occurring in a typical logical circuit, a fault-tolerant circuit protected by the surface code, and a fault-tolerant circuit protected by a concatenated code all differ. This also suggests that choosing the most appropriate error correction technique depends on the ability of the future technology to perform specific gates efficiently.
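
    QuRE's calibrated models are not reproduced here, but the kind of surface-code overhead estimate it automates can be sketched with the standard heuristic p_L ≈ A (p/p_th)^((d+1)/2). The constants A = 0.1 and p_th = 10^-2, the count of roughly 2d^2 physical qubits per logical qubit, and the 2048-logical-qubit workload below are illustrative assumptions, not values from the paper.

```python
def surface_code_overhead(p_phys, p_target, n_logical, p_th=1e-2, A=0.1):
    """Rough surface-code overhead estimate (illustrative constants, not QuRE's):
    find the smallest odd distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target,
    then count roughly 2*d**2 physical qubits per logical qubit."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d, n_logical * 2 * d * d

# Assumed workload: ~2 * 1024 logical qubits for the registers of a 1024-bit
# factoring circuit, each kept below a 1e-15 failure probability.
for p in (1e-3, 1e-4, 1e-5):
    d, n_phys = surface_code_overhead(p, 1e-15, 2048)
    print(f"p_phys={p:.0e}: distance {d}, ~{n_phys:,} physical qubits")
```

    The same back-of-the-envelope structure, with a different logical error formula and gate counts, applies to concatenated codes, which is why the crossover point depends so strongly on the physical error rate.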

    Implementation of Provably Stable MaxNet

    MaxNet TCP is a congestion control protocol that uses explicit multi-bit signalling from routers to achieve desirable properties such as high throughput and low latency. In this paper we present an implementation of an extended version of MaxNet. Our contributions are threefold. First, we extend the original algorithm to give both provable stability and rate fairness. Second, we introduce the MaxStart algorithm, which allows new MaxNet connections to reach their fair rates quickly. Third, we provide a Linux kernel implementation of the protocol. With no overhead other than 24-bit price signals, our implementation scales from 32 bit/s to 1 petabit/s with 0.001% rate accuracy. We confirm the theoretically predicted properties by performing a range of experiments at speeds up to 1 Gbit/s and delays up to 180 ms on the WAN-in-Lab facility.
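
    MaxNet's defining feature is that a packet carries only the maximum congestion price seen along its path, and each source sets its rate with a decreasing demand function of that single number. The toy fixed-point iteration below (link capacities, the exponential demand function, and the gains are assumed for illustration; this is not the kernel implementation described in the paper) shows the signalling pattern.

```python
import numpy as np

# Toy MaxNet-style network: two links, three sources (topology and parameters assumed).
# Source 0 uses link 0 only, source 1 uses link 1 only, source 2 uses both links.
routes = np.array([[1, 0],
                   [0, 1],
                   [1, 1]])
capacity = np.array([10.0, 6.0])   # link capacities (arbitrary units)
prices = np.zeros(2)               # per-link congestion prices
step = 0.1                         # price update gain

def demand(price, x_max=20.0, alpha=1.0):
    """Decreasing demand function: the rate a source sends at a given price."""
    return x_max * np.exp(-price / alpha)

for _ in range(2000):
    # Each source reacts only to the *maximum* price on its path --
    # the single number the packet's price field carries.
    path_price = np.where(routes > 0, prices, -np.inf).max(axis=1)
    rates = demand(path_price)
    # Links raise their price when aggregate traffic exceeds capacity.
    load = routes.T @ rates
    prices = np.maximum(0.0, prices + step * (load - capacity) / capacity)

print("rates:", np.round(rates, 2), "link loads:", np.round(load, 2))
```

    At equilibrium the bottleneck link prices settle so that each link load matches its capacity, and sources sharing the tighter link end up at equal rates, which is the max-min-style fairness the price maximum is designed to produce.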

    Development of analytical methodologies for the determination of pharmaceuticals in biological fluids and environmental samples by liquid and gas chromatography

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro de Ciências Físicas e Matemáticas, Programa de Pós-Graduação em Química. This work presents the analysis of a biological fluid and an environmental matrix for the determination of pharmaceuticals, evaluating distinct stages of the analytical methodologies. For the determination of paracetamol and hydrochlorothiazide in urine, the emphasis was on the quantification method. The determination was carried out using the conventional techniques of external-standard calibration and analyte addition, as well as the proposed calibration-by-addition method; the proposed technique showed no statistically significant difference when evaluated. For the determination of pharmaceuticals in the environmental matrix, emphasis was placed on the development and optimization of methodologies using solid-phase extraction (SPE) and solid-phase microextraction (SPME). First, a methodology for extracting paracetamol, acetylsalicylic acid, diclofenac, and carbamazepine from water samples by SPE and liquid chromatography was optimized univariately and validated; the optimized methodology showed good linearity and acceptable recovery values. Finally, a methodology for the simultaneous determination of acidic (ibuprofen, fenoprofen, and diclofenac) and basic (diazepam and loratadine) pharmaceuticals in water by SPME and gas chromatography was developed, with multivariate tools applied for the optimization. The proposed methodology was compared with methodologies in which each class of pharmaceutical was extracted separately and proved to be very efficient.