Quantum Query-To-Communication Simulation Needs a Logarithmic Overhead
Buhrman, Cleve and Wigderson (STOC'98) observed that for every Boolean
function f : {-1,1}^n -> {-1,1} and 2-bit function G : {-1,1}^2 -> {-1,1},
the two-party bounded-error quantum communication complexity of f ∘ G is
O(Q(f) log n), where Q(f) is the bounded-error quantum query complexity
of f. Note that the bounded-error randomized communication complexity of
f ∘ G is bounded by O(R(f)), where R(f) denotes the bounded-error
randomized query complexity of f. Thus, the BCW simulation has an extra
O(log n) factor appearing that is absent in the classical simulation. A
natural question is whether this factor can be avoided. Høyer and de Wolf
(STACS'02) showed that for the Set-Disjointness function, this factor can
be reduced to c^{log* n} for some constant c, and subsequently Aaronson
and Ambainis (FOCS'03) showed that it can be made a constant. That is,
the quantum communication complexity of the Set-Disjointness function
(which is NOR_n ∘ AND_2) is O(Q(NOR_n)) = O(sqrt(n)).
Perhaps somewhat surprisingly, we show that when G = XOR_2, the extra
O(log n) factor in the BCW simulation is unavoidable. In other words, we
exhibit a total function F : {-1,1}^n -> {-1,1} such that
Q^cc(F ∘ XOR_2) = Θ(Q(F) log n). To the best of our knowledge, it was not
even known prior to this work whether there existed a total function F
and 2-bit function G such that Q^cc(F ∘ G) = ω(Q(F)).
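For concreteness, the chain of bounds for Set-Disjointness cited above can
be restated as a short worked equation; the Θ(sqrt(n)) query bound for
NOR_n (Grover's upper bound with the matching BBBV lower bound) is
standard, though not stated explicitly in the abstract:

\[
\begin{aligned}
\mathrm{DISJ}_n &= \mathrm{NOR}_n \circ \wedge, \qquad Q(\mathrm{NOR}_n) = \Theta(\sqrt{n}),\\
\text{BCW simulation:}\quad & Q^{cc}(\mathrm{DISJ}_n) = O(Q(\mathrm{NOR}_n)\,\log n) = O(\sqrt{n}\,\log n),\\
\text{Aaronson--Ambainis:}\quad & Q^{cc}(\mathrm{DISJ}_n) = O(\sqrt{n}).
\end{aligned}
\]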
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3 33 pages; typos corrected and references added
Algebraic and Combinatorial Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples.
The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful in developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-paths algorithms. This emphasizes once again the central role of algebra in computer science.
The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar can play an important role in educating a diverse community about the latest techniques, spurring further progress.
Zero-Knowledge Proofs of Proximity
Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language).
In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input.
Our main focus is the setting of statistical zero-knowledge where we show that the following hold unconditionally (where N denotes the input length):
- Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): We show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
- Statistical ZKPPs can be sub-exponentially less efficient than IPPs: We show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
- Statistical ZKPPs with polylog(N) time verifiers exist for some graph properties, such as promise versions of expansion and bipartiteness, in the bounded-degree graph model.
Lastly, we also consider the computational setting where we show that:
- Assuming the existence of one-way functions, every language computable either in (logspace-uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
- Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
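To make the sub-linear verifier notion above concrete, here is a minimal property-testing sketch in Python (purely illustrative and not from the paper; the all-zeros property, the 2/eps query budget, and the function name are assumptions): a tester that reads O(1/eps) random bits accepts every all-zeros input and, with probability at least 2/3, rejects inputs that are eps-far from all-zeros.

    import random

    def zeros_tester(x, eps=0.1):
        """Sub-linear tester for the (illustrative) all-zeros property.

        Reads only O(1/eps) bits of x: accepts if x is all zeros, and
        rejects with probability >= 2/3 any x that differs from the
        all-zeros string in at least eps * len(x) positions.
        """
        n = len(x)
        for _ in range(min(n, int(2 / eps))):  # O(1/eps) queries suffice
            if x[random.randrange(n)] == "1":  # query a uniformly random bit
                return False                   # witness found: reject
        return True                            # no witness seen: accept

    print(zeros_tester("0" * 1000))                      # True
    print(zeros_tester("0" * 800 + "1" * 200, eps=0.1))  # False w.h.p.

If a string is eps-far from all-zeros, a single random query hits a 1 with probability at least eps, so 2/eps independent queries miss every 1 with probability at most (1 - eps)^(2/eps) ≈ e^(-2) < 1/3.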
Optimized Surface Code Communication in Superconducting Quantum Computers
Quantum computing (QC) is at the cusp of a revolution. Machines with 100
quantum bits (qubits) are anticipated to be operational by 2020, and
several-hundred-qubit machines are
around the corner. Machines of this scale have the capacity to demonstrate
quantum supremacy, the tipping point where QC is faster than the fastest
classical alternative for a particular problem. Because error correction
techniques will be central to QC and will be the most expensive component of
quantum computation, choosing the lowest-overhead error correction scheme is
critical to overall QC success. This paper evaluates two established quantum
error correction codes---planar and double-defect surface codes---using a set
of compilation, scheduling and network simulation tools. In considering
scalable methods for optimizing both codes, we do so in the context of a full
microarchitectural and compiler analysis. Contrary to previous predictions, we
find that the simpler planar codes are sometimes more favorable for
implementation on superconducting quantum computers, especially under
conditions of high communication congestion.
Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International
Symposium on Microarchitecture
Parallel Quantum Algorithm for Hamiltonian Simulation
We study how parallelism can speed up quantum simulation. A parallel quantum
algorithm is proposed for simulating the dynamics of a large class of
Hamiltonians with good sparse structures, called uniform-structured
Hamiltonians, including various Hamiltonians of practical interest like local
Hamiltonians and Pauli sums. Given the oracle access to the target sparse
Hamiltonian, in both query and gate complexity, the running time of our
parallel quantum simulation algorithm measured by the quantum circuit depth has
a doubly (poly-)logarithmic dependence polyloglog(1/ε) on the simulation
precision ε. This presents an exponential improvement over the
polylog(1/ε) dependence of the previous optimal sparse Hamiltonian
simulation algorithm without parallelism.
To obtain this result, we introduce a novel notion of parallel quantum walk,
based on Childs' quantum walk. The target evolution unitary is approximated by
a truncated Taylor series, which is obtained by combining these quantum walks
in a parallel way. A lower bound Ω(log log(1/ε)) is established, showing
that the ε-dependence of the gate depth achieved
in this work cannot be significantly improved.
Our algorithm is applied to simulating three physical models: the Heisenberg
model, the Sachdev-Ye-Kitaev model and a quantum chemistry model in second
quantization. By explicitly calculating the gate complexity for implementing
the oracles, we show that on all these models, the total gate depth of our
algorithm has a polyloglog(1/ε) dependence in the parallel setting.
Comment: Minor revision. 55 pages, 6 figures, 1 table
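The truncation step mentioned above (approximating the evolution unitary by a truncated Taylor series) can be illustrated classically. Below is a minimal NumPy sketch, not the paper's parallel quantum walk construction; the 4x4 test Hamiltonian, the truncation orders, and the function name are illustrative assumptions. It shows the truncated series converging rapidly to exp(-iHt):

    import numpy as np

    def truncated_taylor_evolution(H, t, order):
        """Approximate exp(-iHt) by its Taylor series truncated at `order`."""
        dim = H.shape[0]
        result = np.zeros((dim, dim), dtype=complex)
        term = np.eye(dim, dtype=complex)          # k = 0 term of the series
        for k in range(order + 1):
            result += term
            term = term @ (-1j * t * H) / (k + 1)  # build the (k+1)-th term
        return result

    # Illustrative 4x4 Hermitian test Hamiltonian.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = (A + A.conj().T) / 2

    # Exact evolution at t = 1 via eigendecomposition, for comparison.
    evals, evecs = np.linalg.eigh(H)
    exact = evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T

    for order in (2, 4, 8, 16):
        err = np.linalg.norm(truncated_taylor_evolution(H, 1.0, order) - exact, 2)
        print(order, err)  # spectral-norm error decays roughly factorially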