Quantum Certificate Complexity
Given a Boolean function f, we study two natural generalizations of the
certificate complexity C(f): the randomized certificate complexity RC(f) and
the quantum certificate complexity QC(f). Using Ambainis' adversary method, we
exactly characterize QC(f) as the square root of RC(f). We then use this result
to prove the new relation R0(f) = O(Q2(f)^2 Q0(f) log n) for total f, where R0,
Q2, and Q0 are zero-error randomized, bounded-error quantum, and zero-error
quantum query complexities respectively. Finally we give asymptotic gaps
between the measures, including a total f for which C(f) is superquadratic in
QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q2(f) = Omega(n/log
n).
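The certificate measure C(f) discussed above can be made concrete with a small brute-force computation (an illustrative sketch of the definition, not code from the paper): for each input x, find the smallest set S of coordinates such that fixing x on S already determines f, then take the maximum over all x. For OR on n bits the all-zeros input forces C(OR) = n, since leaving any bit free would allow the value to flip.

```python
from itertools import combinations

def all_inputs(n):
    """Enumerate all n-bit inputs as tuples of 0/1."""
    return [tuple((m >> i) & 1 for i in range(n)) for m in range(2 ** n)]

def certificate_complexity(f, n):
    """Brute-force C(f): for each input x, the smallest set S of coordinates
    such that every y agreeing with x on S satisfies f(y) = f(x); C(f) is the
    maximum of this over all x.  Exponential time -- for illustration only."""
    def cert_size(x):
        for k in range(n + 1):
            for S in combinations(range(n), k):
                if all(f(y) == f(x)
                       for y in all_inputs(n)
                       if all(y[i] == x[i] for i in S)):
                    return k
        return n
    return max(cert_size(x) for x in all_inputs(n))

OR3 = lambda x: int(any(x))
print(certificate_complexity(OR3, 3))  # 3: the all-zeros input needs every bit fixed
```

A "dictator" function such as f(x) = x[0] has C(f) = 1, since fixing the single relevant bit certifies the value on every input.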
Distributed Quantum Proofs for Replicated Data
This paper tackles the issue of checking that all copies of a large data set replicated at several nodes of a network are identical. The fact that the replicas may be located at distant nodes prevents the system from verifying their equality locally, i.e., by having each node consult only nodes in its vicinity. On the other hand, it remains possible to assign certificates to the nodes, so that verifying the consistency of the replicas can be achieved locally. However, we show that, when the replicated data is large, classical certification mechanisms, including distributed Merlin-Arthur protocols, cannot guarantee good completeness and soundness simultaneously unless they use very large certificates. The main result of this paper is a distributed quantum Merlin-Arthur protocol enabling the nodes to collectively check the consistency of the replicas, based on small certificates, in a single round of message exchange between neighbors, and with short messages. In particular, the certificate size is logarithmic in the size of the data set, which gives an exponential advantage over classical certification mechanisms. We propose yet another use of a fundamental quantum primitive, called the SWAP test, in order to establish our main result.
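The SWAP test mentioned above is a standard quantum primitive: given two states |psi> and |phi>, it outputs "equal" with probability 1/2 + |<psi|phi>|^2 / 2, so identical states always pass while dissimilar states are caught with constant probability. A minimal classical computation of that acceptance probability (an illustration of the primitive itself, not of the paper's distributed protocol):

```python
import math

def inner(u, v):
    """<u|v> for state vectors given as lists of complex amplitudes."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def swap_test_accept_prob(psi, phi):
    """Probability that the SWAP test reports 'equal' on |psi>, |phi>:
    1/2 + |<psi|phi>|^2 / 2 (the standard acceptance probability)."""
    return 0.5 + 0.5 * abs(inner(psi, phi)) ** 2

zero = [1.0, 0.0]                             # |0>
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # |+>
print(swap_test_accept_prob(zero, zero))  # identical states: accepts w.p. 1.0
print(swap_test_accept_prob(plus, zero))  # overlap 1/sqrt(2): accepts w.p. 0.75
```

Orthogonal states still pass with probability 1/2, which is why the test must be repeated (or amplified) to drive down the soundness error.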
An exponential separation between MA and AM proofs of proximity
Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of Proximity (MAP), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of Proximity (AMP), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes - namely, the sum of the proof length and query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property Pi that admits an AMP with complexity O(log n), whereas any MAP for Pi has complexity Omega~(n^{1/4}), where n denotes the length of the input in bits. Our MAP lower bound also yields an alternate proof, which is more general and arguably much simpler, for a recent result of Fischer et al. (ITCS, 2014). Lastly, we also consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public-coin and private-coin oblivious interactive proofs of proximity.
Quantum algorithms: an overview
Quantum computers are designed to outperform standard computers by running
quantum algorithms. Areas in which quantum algorithms can be applied include
cryptography, search and optimisation, simulation of quantum systems, and
solving large systems of linear equations. Here we briefly survey some known
quantum algorithms, with an emphasis on a broad overview of their applications
rather than their technical details. We include a discussion of recent
developments and near-term applications of quantum algorithms. (Short survey, to appear in npj Quantum Information.)
Computational Complexity and Graph Isomorphism
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic, that is, structurally the same. The complexity of graph isomorphism is an open problem: it is one of the few problems in NP that is neither known to be solvable in polynomial time nor known to be NP-complete, and it is one of the most researched open problems in theoretical computer science.
The foundations of computability theory lie in recursion theory and in recursive functions, an older model of computation than Turing machines. In this master's thesis we discuss the basics of recursion theory and its main theorems, starting from the axioms. The aim of the second chapter is to define the most important T- and m-reductions and the implication hierarchy between reductions.
Different variations of Turing machines include the nondeterministic and oracle Turing machines. They are discussed in the third chapter. A hierarchy of different complexity classes can be created by reducing the available computational resources of recursive functions. The members of this hierarchy include for instance P and NP. There are hundreds of known complexity classes and in this work the most important ones regarding graph isomorphism are introduced.
Boolean circuits are a different method for approaching computability. Some main results and complexity classes of circuit complexity are discussed in the fourth chapter. The aim is to show that graph isomorphism is hard for the class DET.
Graph isomorphism is known to belong to the classes coAM and SPP. These classes are introduced in the fifth chapter using the theory of probabilistic classes, the polynomial hierarchy, interactive proof systems and Arthur-Merlin games. The polynomial hierarchy collapses to its second level if GI is NP-complete.
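To make the problem above concrete, here is a brute-force isomorphism test that simply tries all n! vertex bijections (an illustrative sketch only; the point of the complexity question discussed in the thesis is precisely whether anything substantially faster than exhaustive search is always possible):

```python
from itertools import permutations

def is_isomorphic(edges_g, edges_h, n):
    """Brute-force graph isomorphism for simple undirected n-vertex graphs
    given as edge lists: search all n! bijections pi and check whether pi
    maps the edge set of G exactly onto the edge set of H."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):          # isomorphic graphs must have equal edge counts
        return False
    for pi in permutations(range(n)):
        if {frozenset((pi[u], pi[v])) for u, v in g} == h:
            return True
    return False

# A relabelled 4-cycle is still a 4-cycle, but a 4-vertex path is not:
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_relabelled = [(2, 0), (0, 3), (3, 1), (1, 2)]
p4 = [(0, 1), (1, 2), (2, 3)]
print(is_isomorphic(c4, c4_relabelled, 4))  # True
print(is_isomorphic(c4, p4, 4))             # False
```

The n! search illustrates why invariants (degree sequences, spectra) and group-theoretic methods are used in practice, and why the problem's exact complexity class remains unresolved.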
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible? (ICM 2018 plenary lecture, Rio de Janeiro.)
A Hierarchy Theorem for Interactive Proofs of Proximity
The number of rounds, or round complexity, used in an interactive
protocol is a fundamental resource. In this work we consider the
significance of round complexity in the context of Interactive
Proofs of Proximity (IPPs). Roughly speaking, IPPs are interactive proofs in which the verifier runs in sublinear time and is only required to reject inputs that are far from the language.
Our main result is a round hierarchy theorem for IPPs, showing
that the power of IPPs grows with the number of rounds. More
specifically, we show that there exists a gap function
g(r) = Theta(r^2) such that for every constant r >= 1 there exists a language that (1) has a g(r)-round IPP with verification time t=t(n,r) but (2) does not have an r-round IPP with verification time t (or even verification time t'=poly(t)).
In fact, we prove a stronger result by exhibiting a single language L such that, for every constant r >= 1, there is an
O(r^2)-round IPP for L with t=n^{O(1/r)} verification time, whereas the verifier in any r-round IPP for L must run in time at least t^{100}. Moreover, we show an IPP for L with a poly-logarithmic number of rounds and only poly-logarithmic verification time, yielding a sub-exponential separation between the power of constant-round IPPs versus general (unbounded-round) IPPs.
From our hierarchy theorem we also derive implications to standard
interactive proofs (in which the verifier can run in polynomial
time). Specifically, we show that the round reduction technique of
Babai and Moran (JCSS, 1988) is (almost) optimal among all blackbox transformations, and we show a connection to the algebrization framework of Aaronson and Wigderson (TOCT, 2009).
On Resilience to Computable Tampering
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS 2010), provide a means of encoding information such that if the encoding is tampered with, the result encodes something either identical or completely unrelated. Unlike error-correcting codes (for which the result of tampering must always be identical), non-malleable codes give guarantees even when tampering functions are allowed to change every symbol of a codeword.
In this thesis, we provide constructions of non-malleable codes secure against a variety of tampering classes with natural computational semantics:
• Bounded Communication: functions corresponding to 2-party protocols where each party receives half the input and may communicate fewer than n/4 bits before returning its half of the tampered output.
• Local Functions (Juntas): each tampered output bit is a function of only n^{1-δ} input bits, where δ > 0 is any constant (the efficiency of our code depends on δ). This class includes NC⁰.
• Decision Trees: each tampered output bit is a function of n^{1/4-o(1)} adaptively chosen input bits.
• Small-Depth Circuits: each tampered output bit is produced by a c log(n)/log log(n)-depth circuit of polynomial size, for some constant c. This class includes AC⁰.
• Low-Degree Polynomials: each tampered output field element is produced by a low-degree (relative to the field size) polynomial.
• Polynomial-Size Circuit Tampering: each tampered codeword is produced by a circuit of size n^c, where c is any constant (the efficiency of our code depends on c). This result assumes that E is hard for exponential-size nondeterministic circuits (all other results are unconditional).
We stress that our constructions are efficient (encoding and decoding can be performed in uniform polynomial time) and, with the exception of the last result, which assumes strong circuit lower bounds, enjoy unconditional, statistical security guarantees. We also illuminate some potential barriers to constructing codes for more complex computational classes from simpler assumptions.
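The non-malleability requirement stated in the first paragraph of this abstract can be illustrated with a toy example (a hypothetical sketch for intuition, not a construction from the thesis): a simple XOR-based split encoding hides the message from either half alone, yet a tampering function that XORs a constant into one half maps an encoding of m to an encoding of the *related* message m XOR 1, which is exactly the kind of outcome non-malleable codes rule out.

```python
import secrets

def encode(m):
    """Toy split encoding: pick a random 8-bit mask r, store (r, r XOR m).
    Each half alone reveals nothing about m, but the scheme is linear and
    therefore NOT non-malleable, as the tampering below demonstrates."""
    r = secrets.randbits(8)
    return (r, r ^ m)

def decode(cw):
    left, right = cw
    return left ^ right

def tamper(cw):
    """XOR a constant into the right half only: this flips bit 0 of the
    decoded message -- a predictable, message-related modification."""
    left, right = cw
    return (left, right ^ 1)

m = 0b1010
assert decode(encode(m)) == m          # correctness of the toy code
assert decode(tamper(encode(m))) == m ^ 1  # tampered result is related to m
```

A non-malleable code would instead guarantee that any allowed tampering decodes either to m itself or to something distributed independently of m, which is precisely what the linear structure here fails to provide.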