
    Relative multiplexing for minimizing switching in linear-optical quantum computing

    Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed from heralded, probabilistic primitives. MUX accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript, we introduce relative multiplexing (RMUX), a general-purpose optimization which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relax performance demands on various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state in [PRL 115, 020502 (2015)], and we find that RMUX allows a 2.4x increase in loss tolerance for this architecture.
    Comment: Published version, New Journal of Physics, Volume 19, June 2017.
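
    For context on why multiplexing dominates the switching budget: if one heralded source succeeds with probability p, then n multiplexed copies deliver at least one heralded photon with probability 1 - (1 - p)^n, and the active switching is what routes the successful copy to the output. The sketch below tabulates that standard MUX success probability; it illustrates the motivating arithmetic only, not the RMUX scheme itself, and the values of p and n are assumptions chosen for the example.

        // Illustrative multiplexing arithmetic: probability that at least one of
        // n heralded single-photon sources fires, p_mux = 1 - (1 - p)^n.
        // The single-shot success probability p and the range of n are assumed
        // values for illustration, not parameters taken from the paper.
        public class MuxSketch {
            static double muxSuccess(double p, int n) {
                return 1.0 - Math.pow(1.0 - p, n);
            }

            public static void main(String[] args) {
                double p = 0.01; // heralding probability of one probabilistic source (assumed)
                for (int n : new int[] {1, 10, 100, 500}) {
                    System.out.printf("n = %3d sources -> P(success) = %.3f%n", n, muxSuccess(p, n));
                }
            }
        }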

    Simulating chemistry efficiently on fault-tolerant quantum computers

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. Here we consider methods to make proposed chemical simulation algorithms computationally fast on fault-tolerant quantum computers in the circuit model. Fault tolerance constrains the choice of available gates, so that arbitrary gates required for a simulation algorithm must be constructed from sequences of fundamental operations. We examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay-Kitaev algorithm [C.M. Dawson and M.A. Nielsen, Quantum Inf. Comput., 6:81, 2006]. For a given approximation error ε, arbitrary single-qubit gates can be produced fault-tolerantly and using a limited set of gates in time which is O(log ε) or O(log log ε); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride.
    Comment: 33 pages, 18 figures.
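
    The practical payoff lies in how the cost of approximating an arbitrary single-qubit gate grows as the target error ε shrinks: the Dawson-Nielsen Solovay-Kitaev construction produces sequences of length roughly log^c(1/ε) with c ≈ 3.97, while the methods analyzed here scale as O(log ε) or O(log log ε), i.e. with log(1/ε) or log log(1/ε). The sketch below only tabulates those asymptotic forms for a few values of ε, with all prefactors set to 1 as an assumption; the numbers indicate relative scaling, not actual gate counts.

        // Compare how gate-synthesis cost scales with approximation error eps under
        // the three asymptotic forms mentioned in the abstract. Unit prefactors are
        // an assumption for illustration; real costs depend on the protocol details.
        public class SynthesisScaling {
            public static void main(String[] args) {
                double c = 3.97; // Solovay-Kitaev exponent reported by Dawson & Nielsen
                for (double eps : new double[] {1e-3, 1e-6, 1e-9, 1e-12}) {
                    double x = Math.log(1.0 / eps); // log(1/eps)
                    System.out.printf("eps = %.0e : SK ~ %9.1f   log ~ %5.1f   loglog ~ %4.2f%n",
                            eps, Math.pow(x, c), x, Math.log(x));
                }
            }
        }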

    Synthesis and Optimization of Reversible Circuits - A Survey

    Reversible logic circuits have historically been motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms (search-based, cycle-based, transformation-based, and BDD-based) as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions.
    Comment: 34 pages, 15 figures, 2 tables.
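
    A circuit is reversible exactly when its truth table is a bijection on the input bit-strings, which is the invariant every synthesis paradigm above must preserve. As a minimal sketch (the choice of gate and the bit ordering are assumptions for illustration, not taken from the survey), the code below tabulates a Toffoli gate on 3 bits and checks that the mapping is a permutation.

        // Minimal reversibility check: the Toffoli gate flips the target bit when both
        // controls are 1. Its truth table must be a bijection (a permutation of 0..7).
        import java.util.HashSet;
        import java.util.Set;

        public class ReversibilityCheck {
            // Bit layout (an assumed convention): bits 2 and 1 are controls, bit 0 is the target.
            static int toffoli(int x) {
                boolean c1 = ((x >> 2) & 1) == 1;
                boolean c2 = ((x >> 1) & 1) == 1;
                return (c1 && c2) ? (x ^ 1) : x; // flip the target iff both controls are set
            }

            public static void main(String[] args) {
                Set<Integer> outputs = new HashSet<>();
                for (int x = 0; x < 8; x++) {
                    outputs.add(toffoli(x));
                }
                // 8 distinct outputs on 8 inputs means the map is a bijection, i.e. reversible.
                System.out.println("Toffoli truth table is a bijection: " + (outputs.size() == 8));
            }
        }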

    Lock-free Concurrent Data Structures

    Concurrent data structures are the data-sharing side of parallel programming. Data structures give the program the means to store data, and also provide operations for accessing and manipulating these data. These operations are implemented through algorithms that have to be efficient. In the sequential setting, data structures are crucially important for the performance of the respective computation. In the parallel programming setting, their importance becomes even greater because of the increased use of data and resource sharing for utilizing parallelism. The first and main goal of this chapter is to provide sufficient background and intuition to help the interested reader navigate the complex research area of lock-free data structures. The second goal is to offer the programmer familiarity with the subject that will allow her to use truly concurrent methods.
    Comment: To appear in "Programming Multi-core and Many-core Computing Systems", eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing.
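
    To make "lock-free" concrete, the sketch below is the textbook Treiber stack rather than code from the chapter: push and pop retry a compare-and-swap on an atomic head reference until it succeeds, so some thread always completes its operation even if others stall, and garbage collection sidesteps the memory-reclamation issues a manual-memory implementation would face.

        // Textbook lock-free (Treiber) stack: push/pop retry a CAS on the head pointer
        // until it succeeds. Shown as a generic illustration of lock-free design.
        import java.util.concurrent.atomic.AtomicReference;

        public class LockFreeStack<T> {
            private static final class Node<T> {
                final T value;
                Node<T> next;
                Node(T value) { this.value = value; }
            }

            private final AtomicReference<Node<T>> head = new AtomicReference<>();

            public void push(T value) {
                Node<T> node = new Node<>(value);
                while (true) {
                    Node<T> current = head.get();
                    node.next = current;
                    if (head.compareAndSet(current, node)) {
                        return; // CAS succeeded: node is the new top of the stack
                    }
                    // CAS failed: another thread changed head; retry with the new head
                }
            }

            public T pop() {
                while (true) {
                    Node<T> current = head.get();
                    if (current == null) {
                        return null; // empty stack
                    }
                    if (head.compareAndSet(current, current.next)) {
                        return current.value;
                    }
                }
            }

            public static void main(String[] args) {
                LockFreeStack<Integer> s = new LockFreeStack<>();
                s.push(1);
                s.push(2);
                System.out.println(s.pop() + ", " + s.pop() + ", " + s.pop()); // 2, 1, null
            }
        }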