Quantum vs. Classical Read-once Branching Programs
The paper presents the first nontrivial upper and lower bounds for
(non-oblivious) quantum read-once branching programs. It is shown that the
computational power of quantum and classical read-once branching programs is
incomparable in the following sense: (i) A simple, explicit boolean function on
2n input bits is presented that is computable by error-free quantum read-once
branching programs of size O(n^3), while each classical randomized read-once
branching program and each quantum OBDD for this function with bounded
two-sided error requires size 2^{\Omega(n)}. (ii) Quantum branching programs
reading each input variable exactly once are shown to require size
2^{\Omega(n)} for computing the set-disjointness function DISJ_n from
communication complexity theory with two-sided error bounded by a constant
smaller than 1/2-2\sqrt{3}/7. This function is trivially computable even by
deterministic OBDDs of linear size. The technically most involved part is the
proof of the lower bound in (ii). For this, a new model of quantum
multi-partition communication protocols is introduced and a suitable extension
of the information cost technique of Jain, Radhakrishnan, and Sen (2003) to
this model is presented.
Comment: 35 pages. Lower bound for disjointness: error in the application of information theory corrected, and regularity of quantum read-once BPs (each variable read at least once) added as an additional assumption of the theorem. Some more informal explanations added.
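For concreteness, the set-disjointness function DISJ_n from (ii) takes two n-bit characteristic vectors and accepts iff the corresponding sets share no element. A minimal sketch of the function itself (illustrative only; the branching-program and communication models are not modeled here):

```python
# DISJ_n: inputs x, y in {0,1}^n viewed as characteristic vectors of
# subsets of {1, ..., n}; the output is 1 iff the sets are disjoint,
# i.e. no position i has x_i = y_i = 1.
def disj(x, y):
    assert len(x) == len(y)
    return int(all(not (xi and yi) for xi, yi in zip(x, y)))

# Example: {1,3} and {2} are disjoint; {1,3} and {3} are not.
print(disj([1, 0, 1], [0, 1, 0]))  # 1
print(disj([1, 0, 1], [0, 0, 1]))  # 0
```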
Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute k independent
instances of a function, using less than k times the resources needed for one
instance, then our overall success probability will be exponentially small in
k. We establish such theorems for the classical as well as quantum query
complexity of the OR function. This implies slightly weaker direct product
results for all total functions. We prove a similar result for quantum
communication protocols computing k instances of the Disjointness function.
Our direct product theorems imply a time-space tradeoff T^2*S=Omega(N^3) for
sorting N items on a quantum computer, which is optimal up to polylog factors.
They also give several tight time-space and communication-space tradeoffs for
the problems of Boolean matrix-vector multiplication and matrix multiplication.
Comment: 22 pages LaTeX. 2nd version: some parts rewritten, results are essentially the same. A shorter version will appear in IEEE FOCS 0
A Lower Bound for Sampling Disjoint Sets
Suppose Alice and Bob each start with private randomness and no other input, and they wish to engage in a protocol in which Alice ends up with a set x \subseteq [n] and Bob ends up with a set y \subseteq [n], such that (x,y) is uniformly distributed over all pairs of disjoint sets. We prove that for some constant \beta < 1, this requires \Omega(n) communication even to get within statistical distance 1-\beta^n of the target distribution. This improves on a previous \Omega(\sqrt{n}) bound for getting within some constant statistical distance \epsilon > 0 of the uniform distribution over all pairs of disjoint sets of size \sqrt{n}
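One way to see the target distribution: a pair of disjoint sets assigns each element of [n] to exactly one of three states (in x only, in y only, in neither), so there are 3^n such pairs, and a single party could sample one trivially. A hedged sketch of that observation (function name is ours; the paper's point is that two parties with only private randomness and limited communication cannot do this jointly):

```python
import random

def sample_disjoint_pair(n, rng=random):
    """Sample (x, y) uniformly over all pairs of disjoint subsets of
    {0, ..., n-1}: each element independently goes to x only, to y only,
    or to neither -- 3^n equally likely outcomes in total."""
    x, y = set(), set()
    for i in range(n):
        c = rng.randrange(3)
        if c == 0:
            x.add(i)
        elif c == 1:
            y.add(i)
    return x, y

x, y = sample_disjoint_pair(10)
assert x.isdisjoint(y)
```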
Non-locality and Communication Complexity
Quantum information processing is the emerging field that defines and
realizes computing devices that make use of quantum mechanical principles, like
the superposition principle, entanglement, and interference. In this review we
study the information counterpart of computing. The abstract form of the
distributed computing setting is called communication complexity. It studies
the amount of information, in terms of bits or in our case qubits, that two
spatially separated computing devices need to exchange in order to perform some
computational task. Surprisingly, quantum mechanics can be used to obtain
dramatic advantages for such tasks.
We review the area of quantum communication complexity, and show how it
connects the foundational physics questions regarding non-locality with those
of communication complexity studied in theoretical computer science. The first
examples exhibiting the advantage of the use of qubits in distributed
information-processing tasks were based on non-locality tests. However, by now
the field has produced strong and interesting quantum protocols and algorithms
of its own that demonstrate that entanglement, although it cannot be used to
replace communication, can be used to reduce the communication exponentially.
In turn, these new advances yield a new outlook on the foundations of physics,
and could even yield new proposals for experiments that test the foundations of
physics.
Comment: Survey paper, 63 pages LaTeX. A reformatted version will appear in Reviews of Modern Physics
The Power of One Clean Qubit in Communication Complexity
We study quantum communication protocols, in which the players' storage starts out in a state where one qubit is in a pure state, and all other qubits are totally mixed (i.e. in a random state), and no other storage is available (for messages or internal computations). This restriction on the available quantum memory has been studied extensively in the model of quantum circuits, and it is known that classically simulating quantum circuits operating on such memory is hard when the additive error of the simulation is exponentially small (in the input length), under the assumption that the polynomial hierarchy does not collapse.
We study this setting in communication complexity. The goal is to consider larger additive error for simulation-hardness results, and to not use unproven assumptions.
We define a complexity measure for this model that takes into account that standard error reduction techniques do not work here. We define a clocked and a semi-unclocked model, and describe efficient simulations between those.
We characterize a one-way communication version of the model in terms of weakly unbounded error communication complexity.
Our main result is that there is a quantum protocol using only one clean qubit and O(log n) qubits of communication, such that any classical protocol simulating the acceptance behaviour of the quantum protocol within additive error 1/poly(n) needs communication \Omega(n).
We also describe a candidate problem, for which an exponential gap between the one-clean-qubit communication complexity and the randomized communication complexity is likely to hold, and hence a classical simulation of the one-clean-qubit model within constant additive error might be hard in communication complexity. We describe a geometrical conjecture that implies the lower bound.
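To make the "one clean qubit" memory concrete: as a density matrix on k qubits, the initial state is |0><0| \otimes I/2^{k-1}, i.e. one pure qubit tensored with a maximally mixed register. A small numpy sketch (illustrative; the function name is ours):

```python
import numpy as np

def one_clean_qubit_state(k):
    """Density matrix of k qubits where the first qubit is the pure
    state |0> and the remaining k-1 qubits are maximally mixed."""
    pure = np.array([[1.0, 0.0], [0.0, 0.0]])     # |0><0|
    mixed = np.eye(2 ** (k - 1)) / 2 ** (k - 1)   # I / 2^(k-1)
    return np.kron(pure, mixed)

rho = one_clean_qubit_state(3)
assert np.isclose(np.trace(rho), 1.0)  # valid density matrix
```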
Cumulative Memory Lower Bounds for Randomized and Quantum Computation
Cumulative memory -- the sum of space used per step over the duration of a
computation -- is a fine-grained measure of time-space complexity that was
introduced to analyze cryptographic applications like password hashing. It is a
more accurate cost measure for algorithms that have infrequent spikes in memory
usage and are run in environments such as cloud computing that allow dynamic
allocation and de-allocation of resources during execution, or when many
instances of an algorithm are interleaved in parallel.
We prove the first lower bounds on cumulative memory complexity for both
sequential classical computation and quantum circuits. Moreover, we develop
general paradigms for bounding cumulative memory complexity inspired by the
standard paradigms for proving time-space tradeoff lower bounds that can only
lower bound the maximum space used during an execution. The resulting lower
bounds on cumulative memory that we obtain are just as strong as the best
time-space tradeoff lower bounds, which are very often known to be tight.
Although previous results for pebbling and random oracle models have yielded
time-space tradeoff lower bounds larger than the cumulative memory complexity,
our results show that in general computational models such separations cannot
follow from known lower bound techniques and are not true for many functions.
Among many possible applications of our general methods, we show that any
classical sorting algorithm with success probability at least 1/poly(n)
requires cumulative memory \tilde{\Omega}(n^2), any classical matrix
multiplication algorithm requires cumulative memory \Omega(n^6/T), any quantum
sorting circuit requires cumulative memory \Omega(n^3/T), and any quantum
circuit that finds k disjoint collisions in a random function requires
cumulative memory \Omega(k^3 n/T^2), where T is the number of steps.
Comment: 42 pages, 4 figures, accepted to track A of ICALP 202
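The difference between cumulative memory and the usual time x space cost shows up on a space profile with a brief spike. A minimal sketch, assuming the straightforward reading of "sum of space used per step" (function names are ours):

```python
def cumulative_memory(space_profile):
    """Sum of the space used at each step of a computation."""
    return sum(space_profile)

def time_times_max_space(space_profile):
    """Classical time-space cost: number of steps times peak space."""
    return len(space_profile) * max(space_profile)

# 1000 steps using 1 unit of space each, with one spike to 1000 units:
# cumulative memory stays small while T * max-space is huge.
profile = [1] * 999 + [1000]
print(cumulative_memory(profile))     # 1999
print(time_times_max_space(profile))  # 1000000
```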
Communication Memento: Memoryless Communication Complexity
We study the communication complexity of computing functions
F : \{0,1\}^n \times \{0,1\}^n \to \{0,1\} in the memoryless
communication model. Here, Alice is given x \in \{0,1\}^n, Bob is given
y \in \{0,1\}^n, and their goal is to compute F(x,y) subject to the following
constraint: at every round, Alice receives a message from Bob and her reply to
Bob solely depends on the message received and her input x; the same applies to
Bob. The cost of computing F in this model is the maximum number of bits
exchanged in any round between Alice and Bob (on the worst case input x,y). In
this paper, we also consider variants of our memoryless model wherein one party
is allowed to have memory, the parties are allowed to communicate quantum bits,
or only one player is allowed to send messages. We show that our memoryless
communication model captures the garden-hose model of computation by Buhrman et
al. (ITCS'13), space bounded communication complexity by Brody et al. (ITCS'13)
and the overlay communication complexity by Papakonstantinou et al. (CCC'14).
Thus the memoryless communication complexity model provides a unified framework
to study space-bounded communication models. We establish the following: (1) We
show that the memoryless communication complexity of F equals the logarithm of
the size of the smallest bipartite branching program computing F (up to a
factor 2); (2) We show that memoryless communication complexity equals
garden-hose complexity; (3) We exhibit various exponential separations between
these memoryless communication models.
We end with an intriguing open question: can we find an explicit function F
and universal constant c>1 for which the memoryless communication complexity is
at least c \log n? Note that such a bound would imply an \Omega(n^c) lower
bound for general formula size, improving upon the best lower bound of
\Omega(n^2/\log n) by Ne\v{c}iporuk in 1966.
Comment: 30 pages; several improvements to the presentation
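In the memoryless model all protocol state must travel inside the messages. A toy sketch of such a protocol for Equality (our own illustration, not a construction from the paper): Bob's message carries the position currently being compared plus his bit there, so each message is log n + O(1) bits and neither party keeps any state between rounds.

```python
def memoryless_eq(x, y):
    """Toy memoryless protocol for Equality on n-bit strings: the
    position counter lives inside the messages, never in the players.
    Each message is an index (log n bits) plus O(1) extra bits."""
    n = len(x)
    msg = ("ASK", 0, y[0])  # Bob's opening message depends only on y
    while True:
        # Alice's turn: her reply depends only on msg and her input x.
        _, i, ybit = msg
        if x[i] != ybit:
            return 0   # mismatch found: F(x, y) = 0
        if i + 1 == n:
            return 1   # all positions matched: F(x, y) = 1
        reply = ("NEXT", i + 1)
        # Bob's turn: his reply depends only on reply and his input y.
        j = reply[1]
        msg = ("ASK", j, y[j])
```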
Augmented Index and Quantum Streaming Algorithms for DYCK(2)
We show how two recently developed quantum information theoretic tools can be applied to obtain lower bounds on quantum information complexity. We also develop new tools with potential for broader applicability, and use them to establish a lower bound on the quantum information complexity for the Augmented Index function on an easy distribution. This approach allows us to handle superpositions rather than distributions over inputs, the main technical challenge faced previously. By providing a quantum generalization of the argument of Jain and Nayak [IEEE TIT '14], we leverage this to obtain a lower bound on the space complexity of multi-pass, unidirectional quantum streaming algorithms for the DYCK(2) language.
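DYCK(2), the language targeted by the streaming lower bound, is the set of balanced strings over two bracket types. A standard offline stack-based membership check (not the space-restricted streaming algorithm the paper studies):

```python
def is_dyck2(s):
    """Membership test for DYCK(2): balanced strings over '()' and '[]'.
    A streaming algorithm is space-restricted; this offline check may
    use a stack as deep as the input."""
    pairs = {')': '(', ']': '['}
    stack = []
    for ch in s:
        if ch in '([':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False  # invalid character
    return not stack

print(is_dyck2('([()[]])'))  # True
print(is_dyck2('([)]'))      # False
```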