
    Efficient computation of middle levels Gray codes

    For any integer $n\geq 1$, a middle levels Gray code is a cyclic listing of all bitstrings of length $2n+1$ that have either $n$ or $n+1$ entries equal to 1, such that any two consecutive bitstrings in the list differ in exactly one bit. The question whether such a Gray code exists for every $n\geq 1$ has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. M\"utze. Proof of the middle levels conjecture. Proc. London Math. Soc., 112(4):677--713, 2016]. In this work we provide the first efficient algorithm to compute a middle levels Gray code. For a given bitstring, our algorithm computes the next $\ell$ bitstrings in the Gray code in time $\mathcal{O}(n\ell(1+\frac{n}{\ell}))$, which is $\mathcal{O}(n)$ on average per bitstring provided that $\ell=\Omega(n)$.
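
    As a concrete illustration of the definition (not the paper's efficient algorithm), the following Python sketch finds a middle levels Gray code for small $n$ by brute-force depth-first search over the middle levels graph; all names here are our own, and the search runs in exponential time.

        from itertools import combinations

        def middle_levels_cycle(n):
            """Brute-force search for a middle levels Gray code.
            Illustration only: exponential time, unlike the paper's
            O(n)-amortized algorithm."""
            length = 2 * n + 1
            # Vertices: bitstrings of weight n or n+1, stored as the
            # sets of positions holding a 1.
            verts = [frozenset(c) for k in (n, n + 1)
                     for c in combinations(range(length), k)]

            def adjacent(a, b):
                # Differ in exactly one bit <=> one set equals the other
                # plus a single extra element.
                small, big = (a, b) if len(a) < len(b) else (b, a)
                return len(big) == len(small) + 1 and small < big

            target = len(verts)

            def dfs(path, seen):
                if len(path) == target:
                    return path if adjacent(path[-1], path[0]) else None
                for v in verts:
                    if v not in seen and adjacent(path[-1], v):
                        seen.add(v)
                        result = dfs(path + [v], seen)
                        if result:
                            return result
                        seen.remove(v)
                return None

            return dfs([verts[0]], {verts[0]})

        def to_bits(s, length):
            return "".join("1" if i in s else "0" for i in range(length))

        cycle = middle_levels_cycle(2)  # all 20 bitstrings of length 5
        print([to_bits(s, 5) for s in cycle])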

    A constant-time algorithm for middle levels Gray codes

    For any integer $n\geq 1$, a middle levels Gray code is a cyclic listing of all $n$-element and $(n+1)$-element subsets of $\{1,2,\ldots,2n+1\}$ such that any two consecutive subsets differ in adding or removing a single element. The question whether such a Gray code exists for any $n\geq 1$ has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. M\"utze. Proof of the middle levels conjecture. Proc. London Math. Soc., 112(4):677--713, 2016]. In a follow-up paper [T. M\"utze and J. Nummenpalo. An efficient algorithm for computing a middle levels Gray code. To appear in ACM Transactions on Algorithms, 2018], this existence proof was turned into an algorithm that computes each new set in the Gray code in time $\mathcal{O}(n)$ on average. In this work we present an algorithm for computing a middle levels Gray code in optimal time and space: each new set is generated in time $\mathcal{O}(1)$ on average, and the required space is $\mathcal{O}(n)$.
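
    The bitstring view of the previous abstract and the subset view used here are equivalent; a short sketch of the correspondence (helper names are our own):

        def bits_to_set(bits):
            """Positions (1-based) holding a 1."""
            return {i + 1 for i, b in enumerate(bits) if b == "1"}

        def set_to_bits(s, length):
            return "".join("1" if i + 1 in s else "0" for i in range(length))

        # Flipping one bit is the same as adding or removing one element:
        assert bits_to_set("11000") == {1, 2}
        assert bits_to_set("11100") == {1, 2, 3}  # bit 3 flipped: element 3 added
        assert set_to_bits({1, 2}, 5) == "11000"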

    Quantum Computing with Very Noisy Devices

    In theory, quantum computers can efficiently simulate quantum physics, factor large numbers, and estimate integrals, thus solving otherwise intractable computational problems. In practice, quantum computers must operate with noisy devices called "gates" that tend to destroy the fragile quantum states needed for computation. The goal of fault-tolerant quantum computing is to compute accurately even when gates have a high probability of error each time they are used. Here we give evidence that accurate quantum computing is possible with error probabilities above 3% per gate, which is significantly higher than what was previously thought possible. However, the resources required for computing at such high error probabilities are excessive. Fortunately, they decrease rapidly with decreasing error probabilities. If we had quantum resources comparable to the considerable resources available in today's digital computers, we could implement non-trivial quantum computations at error probabilities as high as 1% per gate.
    Comment: 47 pages
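
    A back-of-the-envelope computation (our own, assuming independent and uncorrected gate errors) shows why such error rates are prohibitive without fault tolerance: the chance that a circuit of G gates runs with no error at all is roughly (1-p)^G.

        # Illustration only: assumes independent, uncorrected gate errors.
        for p in (0.03, 0.01, 0.001):
            for G in (100, 1000, 10000):
                print(f"p={p:<6} G={G:<6} P(no error) = {(1 - p) ** G:.2e}")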

    On connectivity-dependent resource requirements for digital quantum simulation of $d$-level particles

    A primary objective of quantum computation is to efficiently simulate quantum physics. Scientifically and technologically important quantum Hamiltonians include those with spin-$s$, vibrational, photonic, and other bosonic degrees of freedom, i.e. problems composed of or approximated by $d$-level particles (qudits). Recently, several methods for encoding these systems into a set of qubits have been introduced, where each encoding's efficiency was studied in terms of qubit and gate counts. Here, we build on previous results by including the effects of hardware connectivity. To study the number of SWAP gates required to Trotterize commonly used quantum operators, we use both analytical arguments and automatic tools that optimize the schedule in multiple stages. We study the unary (or one-hot), Gray, standard binary, and block unary encodings, with three connectivities: linear array, ladder array, and square grid. Among other trends, we find that while the ladder array leads to substantial efficiencies over the linear array, the advantage of the square grid over the ladder array is less pronounced. These results are applicable in hardware co-design and in choosing efficient qudit encodings for a given set of near-term quantum hardware. Additionally, this work may be relevant to the scheduling of other quantum algorithms for which matrix exponentiation is a subroutine.
    Comment: Accepted to QCE20 (IEEE Quantum Week). Corrected erroneous circuits in Figure
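
    For reference, a small Python sketch of three of the studied encodings, mapping a qudit level to a bitstring (block unary omitted; function names are our own):

        def unary(level, d):
            """One-hot: d qubits with a single 1 marking the level."""
            return "".join("1" if i == level else "0" for i in range(d))

        def binary(level, d):
            """Standard binary on ceil(log2(d)) qubits."""
            nbits = max(1, (d - 1).bit_length())
            return format(level, f"0{nbits}b")

        def gray(level, d):
            """Binary-reflected Gray code: adjacent levels differ in one bit."""
            nbits = max(1, (d - 1).bit_length())
            return format(level ^ (level >> 1), f"0{nbits}b")

        d = 5  # e.g. a bosonic mode truncated to five levels
        for lvl in range(d):
            print(lvl, unary(lvl, d), binary(lvl, d), gray(lvl, d))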

    Random access quantum information processors

    Qubit connectivity is an important property of a quantum processor, with an ideal processor having random access -- the ability of arbitrary qubit pairs to interact directly. Here, we implement a random access superconducting quantum information processor, demonstrating universal operations on a nine-bit quantum memory, with a single transmon serving as the central processor. The quantum memory uses the eigenmodes of a linear array of coupled superconducting resonators. The memory bits are superpositions of vacuum and single-photon states, controlled by a single superconducting transmon coupled to the edge of the array. We selectively stimulate single-photon vacuum Rabi oscillations between the transmon and individual eigenmodes through parametric flux modulation of the transmon frequency, producing sidebands resonant with the modes. Utilizing these oscillations for state transfer, we perform a universal set of single- and two-qubit gates between arbitrary pairs of modes, using only the charge and flux bias of the transmon. Further, we prepare multimode entangled Bell and GHZ states of arbitrary modes. The fast and flexible control, achieved with efficient use of cryogenic resources and control electronics, in a scalable architecture compatible with state-of-the-art quantum memories, is promising for quantum computation and simulation.
    Comment: 7 pages, 5 figures, supplementary information ancillary file, 21 pages

    Generalized Gray Codes for Local Rank Modulation

    We consider the local rank-modulation scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. Local rank-modulation is a generalization of the rank-modulation scheme, which has recently been suggested as a way of storing information in flash memory. We study Gray codes for the local rank-modulation scheme in order to simulate conventional multi-level flash cells while retaining the benefits of rank modulation. Unlike the limited scope of previous works, we consider code constructions for the entire range of parameters, including the code length, sliding window size, and overlap between adjacent windows. We show that our constructed codes have asymptotically optimal rate. We also provide efficient encoding, decoding, and next-state algorithms.
    Comment: 7 pages, 1 figure, shorter version was submitted to ISIT 201
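
    To illustrate the underlying map (a sketch with our own names, not the paper's code constructions): a window of size s slides over the sequence of cell levels, and each window position contributes the permutation given by the relative ranks of its entries.

        def local_rank_sequence(values, window, step):
            """Each window position induces the permutation of relative
            ranks of its entries (0 = smallest). `step` equals the window
            size minus the overlap between adjacent windows."""
            perms = []
            for start in range(0, len(values) - window + 1, step):
                w = values[start:start + window]
                order = sorted(range(window), key=lambda i: w[i])
                ranks = [0] * window
                for r, i in enumerate(order):
                    ranks[i] = r
                perms.append(tuple(ranks))
            return perms

        # Window size 3 with overlap 2 (step 1) over six cell charge levels:
        print(local_rank_sequence([0.3, 1.7, 0.9, 2.4, 0.1, 1.2],
                                  window=3, step=1))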

    Autoencoding the Retrieval Relevance of Medical Images

    Content-based image retrieval (CBIR) of medical images is a crucial task that can contribute to a more reliable diagnosis if applied to big data. Recent advances in feature extraction and classification have enormously improved CBIR results for digital images. However, considering the increasing accessibility of big data in medical imaging, we still need to reduce both the memory requirements and the computational expenses of image retrieval systems. This work proposes to exclude the features of image blocks that exhibit a low encoding error when learned by an $n/p/n$ autoencoder ($p<n$). We examine the histogram of autoencoding errors of the image blocks for each image class to facilitate the decision as to which image regions, or roughly what percentage of an image, should be declared relevant for the retrieval task. This reduces feature dimensionality and speeds up the retrieval process. To validate the proposed scheme, we employ local binary patterns (LBP) and support vector machines (SVM), both well-established approaches in the CBIR research community. We use the IRMA dataset with 14,410 x-ray images as test data. The results show that the dimensionality of annotated feature vectors can be reduced by up to 50%, resulting in speedups greater than 27% at the expense of less than a 1% decrease in retrieval accuracy when validating the precision and recall of the top 20 hits.
    Comment: To appear in proceedings of The 5th International Conference on Image Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015, Orleans, France
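
    A rough sketch of the block-filtering idea (our own stand-in, not the paper's pipeline: a p-dimensional PCA projection substitutes for a trained linear n/p/n autoencoder, whose optimum it recovers, and the median threshold on the errors is hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)
        blocks = rng.random((500, 64))  # stand-in: 500 blocks, n = 64 values each

        # A linear n/p/n autoencoder with p < n is optimized by projecting
        # onto the top-p principal components, so PCA stands in for training.
        p = 8
        mean = blocks.mean(axis=0)
        centered = blocks - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:p]                           # the p-dimensional bottleneck
        reconstructed = centered @ basis.T @ basis + mean

        # Blocks the autoencoder reconstructs easily (low encoding error)
        # are treated as less distinctive and excluded from the features.
        errors = np.linalg.norm(blocks - reconstructed, axis=1)
        threshold = np.percentile(errors, 50)    # hypothetical 50% cut
        kept = blocks[errors >= threshold]
        print(f"kept {len(kept)} of {len(blocks)} blocks")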