
    Quantum Query-To-Communication Simulation Needs a Logarithmic Overhead

    Buhrman, Cleve and Wigderson (STOC'98) observed that for every Boolean function $f : \{-1, 1\}^n \to \{-1, 1\}$ and $\bullet : \{-1, 1\}^2 \to \{-1, 1\}$, the two-party bounded-error quantum communication complexity of $(f \circ \bullet)$ is $O(Q(f) \log n)$, where $Q(f)$ is the bounded-error quantum query complexity of $f$. Note that the bounded-error randomized communication complexity of $(f \circ \bullet)$ is bounded by $O(R(f))$, where $R(f)$ denotes the bounded-error randomized query complexity of $f$. Thus, the BCW simulation incurs an extra $O(\log n)$ factor that is absent in the classical simulation. A natural question is whether this factor can be avoided. Høyer and de Wolf (STACS'02) showed that for the Set-Disjointness function, it can be reduced to $c^{\log^* n}$ for some constant $c$, and subsequently Aaronson and Ambainis (FOCS'03) showed that this factor can be made a constant. That is, the quantum communication complexity of the Set-Disjointness function (which is $\mathsf{NOR}_n \circ \wedge$) is $O(Q(\mathsf{NOR}_n))$. Perhaps somewhat surprisingly, we show that when $\bullet = \oplus$, the extra $\log n$ factor in the BCW simulation is unavoidable. In other words, we exhibit a total function $F : \{-1, 1\}^n \to \{-1, 1\}$ such that $Q^{cc}(F \circ \oplus) = \Theta(Q(F) \log n)$. To the best of our knowledge, it was not even known prior to this work whether there existed a total function $F$ and a 2-bit function $\bullet$ such that $Q^{cc}(F \circ \bullet) = \omega(Q(F))$.
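
    To make the composition in the statement concrete, the following minimal sketch (assuming a {0,1} encoding instead of the paper's {-1,1} convention, and with NOR as a purely illustrative choice of f rather than the hard function constructed in the paper) shows how a query function f and the 2-bit XOR gadget induce a two-party problem: Alice holds x, Bob holds y, and together they must evaluate f on the bitwise XOR of their inputs.

        # Illustrative sketch only: how a query function f and the XOR gadget
        # induce the composed two-party function F = f o XOR.  Uses {0,1} bits
        # rather than the {-1,1} convention of the abstract.
        from typing import Callable, Sequence

        def compose_with_xor(f: Callable[[Sequence[int]], int]):
            """Return F(x, y) = f(x_1 XOR y_1, ..., x_n XOR y_n),
            where Alice holds x and Bob holds y."""
            def F(x: Sequence[int], y: Sequence[int]) -> int:
                assert len(x) == len(y)
                return f([xi ^ yi for xi, yi in zip(x, y)])
            return F

        # Example with f = NOR_n (an illustrative choice, not the paper's F):
        # then F(x, y) = 1 exactly when x = y, i.e. the Equality function.
        def nor(bits: Sequence[int]) -> int:
            return int(not any(bits))

        equality = compose_with_xor(nor)
        print(equality([0, 1, 1], [0, 1, 1]))  # 1: the inputs agree
        print(equality([0, 1, 1], [1, 1, 1]))  # 0: the inputs differ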

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts, and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
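
    One of the practical questions flagged above, how to load classical data into quantum form, is often introduced through amplitude encoding; the sketch below is a generic textbook illustration of that single encoding (an assumption of this note, not a method singled out by the review).

        # Minimal sketch of amplitude encoding, one standard way to load a
        # classical vector into the amplitudes of a quantum state.  Illustrative
        # only; the review discusses data loading more broadly.
        import numpy as np

        def amplitude_encode(x) -> np.ndarray:
            """Normalize a length-2^k vector x so it can serve as the amplitude
            vector of a k-qubit state |psi> = (1/||x||) sum_i x_i |i>."""
            x = np.asarray(x, dtype=float)
            n = x.size
            assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
            norm = np.linalg.norm(x)
            assert norm > 0, "cannot encode the zero vector"
            return x / norm

        amps = amplitude_encode([3.0, 0.0, 4.0, 0.0])  # a 2-qubit state
        print(amps)                                    # [0.6 0.  0.8 0. ]
        print(np.sum(amps ** 2))                       # 1.0 (up to rounding), as required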

    Algebraic and Combinatorial Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful for developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-path algorithms. This emphasizes once again the central role of algebra in computer science. The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar can play an important role in educating a diverse community about the latest new techniques, spurring further progress.
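
    As a small illustration of one theme mentioned above, the link between polynomial identity testing and the power of randomness, the following sketch tests whether two arithmetic expressions define the same polynomial by comparing them at random points (the Schwartz-Zippel idea); it is a generic example, not material from the seminar report.

        # Generic illustration of randomized polynomial identity testing in the
        # Schwartz-Zippel style; not taken from the seminar report.
        import random

        def probably_identical(p, q, num_vars, trials=20, prime=2**61 - 1):
            """Return True if p and q agree at `trials` random points modulo a large
            prime.  Distinct low-degree polynomials are caught with high probability."""
            for _ in range(trials):
                point = [random.randrange(prime) for _ in range(num_vars)]
                if p(*point) % prime != q(*point) % prime:
                    return False
            return True

        # (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial ...
        print(probably_identical(lambda x, y: (x + y) ** 2,
                                 lambda x, y: x * x + 2 * x * y + y * y, num_vars=2))  # True
        # ... while (x + y)^2 and x^2 + y^2 are not.
        print(probably_identical(lambda x, y: (x + y) ** 2,
                                 lambda x, y: x * x + y * y, num_vars=2))              # False (w.h.p.)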

    Zero-Knowledge Proofs of Proximity

    Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language). In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input. Our main focus is the setting of statistical zero-knowledge, where we show that the following hold unconditionally (where N denotes the input length):
    - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): we show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
    - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: we show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
    - Statistical ZKPPs with polylog(N) time verifiers exist for some graph-based properties, such as promise versions of expansion and bipartiteness in the bounded-degree graph model.
    Lastly, we also consider the computational setting, where we show that:
    - Assuming the existence of one-way functions, every language computable either in (logspace-uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
    - Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
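
    For readers less familiar with the property-testing setting that IPPs and ZKPPs refine, here is a standard toy tester (not one of the constructions above, with parameters chosen only for illustration): using O(1/eps) random queries it always accepts the all-zeros string and rejects, with constant probability, any string that is eps-far from it.

        # Toy sub-linear property tester, included only to illustrate the
        # "reject inputs that are far from the language" requirement; this is
        # not a construction from the paper.
        import random

        def test_all_zeros(x, eps=0.1, reps=2):
            """Accept the all-zeros string; reject with probability >= 2/3 any string
            that differs from all-zeros on at least an eps fraction of positions.
            Makes O(1/eps) queries, independent of len(x)."""
            num_queries = reps * int(1 / eps) + 1
            for _ in range(num_queries):
                i = random.randrange(len(x))  # query one random position
                if x[i] != 0:
                    return False              # a witness: certainly not all-zeros
            return True

        x_close = [0] * 1000
        x_far = [0] * 800 + [1] * 200         # 0.2-far from all-zeros
        print(test_all_zeros(x_close))        # always True
        print(test_all_zeros(x_far))          # False with high probability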

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine, gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes, planar and double-defect surface codes, using a set of compilation, scheduling and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion.
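
    As rough context for why the choice of code matters so much (a back-of-the-envelope heuristic, not an analysis or any number from the paper), the commonly quoted scaling p_L ~ A * (p/p_th)^((d+1)/2) already shows how quickly the required code distance, and hence the qubit and communication overhead, grows with the target logical error rate.

        # Back-of-the-envelope surface-code scaling estimate using the commonly
        # quoted heuristic p_L ~ A * (p / p_th)^((d+1)/2).  The constants A and
        # p_threshold below are generic illustrative assumptions, not values
        # from the paper.

        def logical_error_rate(p_phys, distance, A=0.1, p_threshold=0.01):
            """Rough per-round logical error rate of a distance-`distance` surface
            code operating below threshold."""
            return A * (p_phys / p_threshold) ** ((distance + 1) / 2)

        def min_distance_for_target(p_phys, target, max_d=101):
            """Smallest odd code distance whose estimated logical rate meets `target`."""
            for d in range(3, max_d + 1, 2):
                if logical_error_rate(p_phys, d) <= target:
                    return d
            return None

        p = 1e-3                                          # assumed physical error rate
        print(logical_error_rate(p, 5))                   # ~1e-4
        print(min_distance_for_target(p, target=1e-12))   # 21 under these assumptions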

    Parallel Quantum Algorithm for Hamiltonian Simulation

    We study how parallelism can speed up quantum simulation. A parallel quantum algorithm is proposed for simulating the dynamics of a large class of Hamiltonians with good sparse structures, called uniform-structured Hamiltonians, including various Hamiltonians of practical interest such as local Hamiltonians and Pauli sums. Given oracle access to the target sparse Hamiltonian, in both query and gate complexity, the running time of our parallel quantum simulation algorithm, measured by the quantum circuit depth, has a doubly (poly-)logarithmic dependence $\operatorname{polylog}\log(1/\epsilon)$ on the simulation precision $\epsilon$. This is an exponential improvement over the $\operatorname{polylog}(1/\epsilon)$ dependence of the previous optimal sparse Hamiltonian simulation algorithm without parallelism. To obtain this result, we introduce a novel notion of parallel quantum walk, based on Childs' quantum walk. The target evolution unitary is approximated by a truncated Taylor series, which is obtained by combining these quantum walks in a parallel way. A lower bound $\Omega(\log\log(1/\epsilon))$ is established, showing that the $\epsilon$-dependence of the gate depth achieved in this work cannot be significantly improved. Our algorithm is applied to simulating three physical models: the Heisenberg model, the Sachdev-Ye-Kitaev model and a quantum chemistry model in second quantization. By explicitly calculating the gate complexity for implementing the oracles, we show that on all these models, the total gate depth of our algorithm has a $\operatorname{polylog}\log(1/\epsilon)$ dependence in the parallel setting.
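
    The truncation at the heart of the algorithm can be checked numerically on small matrices. The sketch below is a plain classical illustration of approximating e^{-iHt} by a degree-K Taylor polynomial; it does not implement the parallel quantum walks, and the single-qubit Hamiltonian is an arbitrary toy choice.

        # Classical numerical illustration of the truncated Taylor series
        # e^{-iHt} ~ sum_{k=0}^{K} (-iHt)^k / k!  (the quantity the algorithm
        # realizes with parallel quantum walks).  Not the quantum circuit itself.
        import numpy as np
        from scipy.linalg import expm

        def truncated_taylor_evolution(H, t, K):
            """Approximate e^{-iHt} by its degree-K Taylor polynomial."""
            n = H.shape[0]
            U = np.zeros((n, n), dtype=complex)
            term = np.eye(n, dtype=complex)             # k = 0 term: identity
            for k in range(K + 1):
                U += term
                term = term @ (-1j * t * H) / (k + 1)   # build the (k+1)-st term
            return U

        # Toy example: a single-qubit Pauli-X Hamiltonian evolved for time t = 1.
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        exact = expm(-1j * 1.0 * X)
        for K in (2, 4, 8):
            approx = truncated_taylor_evolution(X, 1.0, K)
            print(K, np.linalg.norm(approx - exact))    # error falls off factorially in K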