Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute k independent
instances of a function, using less than k times the resources needed for one
instance, then our overall success probability will be exponentially small in
k. We establish such theorems for the classical as well as quantum query
complexity of the OR function. This implies slightly weaker direct product
results for all total functions. We prove a similar result for quantum
communication protocols computing k instances of the Disjointness function.
Our direct product theorems imply a time-space tradeoff T^2*S=Omega(N^3) for
sorting N items on a quantum computer, which is optimal up to polylog factors.
They also give several tight time-space and communication-space tradeoffs for
the problems of Boolean matrix-vector multiplication and matrix multiplication.
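To make the flavour of these statements concrete, one illustrative way to formalize a strong direct product theorem for the OR function (our paraphrase, with an unspecified constant gamma > 0; it is not a verbatim theorem statement from the paper) is:

\[
  T \;\le\; \gamma\, k\sqrt{N}
  \quad\Longrightarrow\quad
  \Pr\bigl[\text{all $k$ independent copies of } \mathrm{OR}_N \text{ are answered correctly}\bigr]
  \;\le\; 2^{-\Omega(k)},
\]

where T is the total number of quantum queries and \sqrt{N} is, up to constant factors, the bounded-error quantum query complexity of a single OR_N instance; the classical statement is analogous with N in place of \sqrt{N}.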
Conditional Lower Bounds for Space/Time Tradeoffs
In recent years much effort has been concentrated towards achieving
polynomial time lower bounds on algorithms for solving various well-known
problems. A useful technique for showing such lower bounds is to prove them
conditionally based on well-studied hardness assumptions such as 3SUM, APSP,
SETH, etc. This line of research helps to obtain a better understanding of the
complexity inside P.
A related question asks to prove conditional space lower bounds on data
structures that are constructed to solve certain algorithmic tasks after an
initial preprocessing stage. This question has received little attention in
previous research, even though it has potentially strong impact.
In this paper we address this question and show that, surprisingly, many of the
well-studied hard problems that are known to have conditional polynomial time
lower bounds are also hard in terms of space. This hardness is shown as a
tradeoff between the space consumed by the data structure and the time needed
to answer queries. The tradeoff may be either smooth or admit one or more
singularity points.
We reveal interesting connections between different space hardness
conjectures and present matching upper bounds. We also apply these hardness
conjectures to both static and dynamic problems and prove their conditional
space hardness.
We believe that this novel framework of polynomial space conjectures can play
an important role in expressing polynomial space lower bounds for many important
algorithmic problems. Moreover, it seems that it can also help in achieving a
better understanding of the hardness of the corresponding problems in terms
of time.
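As a concrete illustration of the kind of preprocessing/query tradeoff being discussed, consider a 3SUM-flavoured indexing problem: preprocess an integer array so that queries of the form "is z the sum of two array elements?" can be answered after the preprocessing stage. The sketch below (class names and parameters are ours, purely illustrative, and not taken from the paper) shows the two trivial extreme points of the space/time curve; the conditional hardness questions concern whether the quadratic space of the fast-query solution can be reduced substantially without a corresponding increase in query time:

    # Two extreme points on the space/time curve for a 3SUM-style indexing
    # problem ("is z the sum of two array elements?", reuse of an element allowed).
    # Illustrative sketch only; names are not from the paper.
    from bisect import bisect_left

    class PairSumIndexLarge:
        """Roughly n^2 words of space, O(1) expected query time."""
        def __init__(self, a):
            self.sums = {x + y for i, x in enumerate(a) for y in a[i:]}

        def query(self, z):
            return z in self.sums

    class PairSumIndexSmall:
        """O(n) words of space, O(n log n) query time."""
        def __init__(self, a):
            self.a = sorted(a)

        def query(self, z):
            for x in self.a:
                j = bisect_left(self.a, z - x)   # binary search for the complement
                if j < len(self.a) and self.a[j] == z - x:
                    return True
            return False

    if __name__ == "__main__":
        data = [3, 1, 4, 1, 5, 9, 2, 6]
        big, small = PairSumIndexLarge(data), PairSumIndexSmall(data)
        assert all(big.query(z) == small.query(z) for z in range(25))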
How proofs are prepared at Camelot
We study a design framework for robust, independently verifiable, and
workload-balanced distributed algorithms working on a common input. An
algorithm based on the framework is essentially a distributed encoding
procedure for a Reed--Solomon code, which enables (a) robustness against
byzantine failures with intrinsic error-correction and identification of failed
nodes, and (b) independent randomized verification to check the entire
computation for correctness, which takes essentially no more resources than
each node individually contributes to the computation. The framework builds on
recent Merlin--Arthur proofs of batch evaluation of Williams~[{\em Electron.\
Colloq.\ Comput.\ Complexity}, Report TR16-002, January 2016] with the
observation that {\em Merlin's magic is not needed} for batch evaluation---mere
Knights can prepare the proof, in parallel, and with intrinsic
error-correction.
The contribution of this paper is to show that in many cases the verifiable
batch evaluation framework admits algorithms that match in total resource
consumption the best known sequential algorithm for solving the problem. As our
main result, we show that the $k$-cliques in an $n$-vertex graph can be counted
{\em and} verified in per-node $O(n^{(\omega+\epsilon)k/6})$ time and space on
$O(n^{(\omega+\epsilon)k/6})$ compute nodes, for any constant $\epsilon>0$ and
positive integer $k$ divisible by $6$, where $\omega$ is the
exponent of matrix multiplication. This matches in total running time the best
known sequential algorithm, due to Ne{\v{s}}et{\v{r}}il and Poljak [{\em
Comment.~Math.~Univ.~Carolin.}~26 (1985) 415--419], and considerably improves
its space usage and parallelizability. Further results include novel algorithms
for counting triangles in sparse graphs, computing the chromatic polynomial of
a graph, and computing the Tutte polynomial of a graph.
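The coding-theoretic idea behind the framework can be conveyed with a toy sketch (a drastically simplified illustration of Reed--Solomon encoding over a prime field with randomized spot-checking; all function names, parameters, and the spot-check itself are ours and do not describe the paper's actual protocol, which relies on proper error correction and Merlin--Arthur-style batch verification): the batch of results is viewed as a low-degree polynomial, each node holds one evaluation of it, and a node that deviates from the codeword is inconsistent with the polynomial determined by the others and can be caught by evaluating at random positions.

    # Toy sketch: results as a Reed--Solomon codeword plus randomized spot-checking.
    import random

    P = (1 << 61) - 1  # a Mersenne prime used as the field modulus

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (lowest-degree coefficient first) at x mod P."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc

    def rs_encode(data, m):
        """View k results as polynomial coefficients; evaluate at points 1..m (m >= k)."""
        return [poly_eval(data, x) for x in range(1, m + 1)]

    def lagrange_at(xs, ys, x):
        """Evaluate at x the unique degree-<k polynomial through the points (xs, ys)."""
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            num, den = 1, 1
            for j, xj in enumerate(xs):
                if j != i:
                    num = num * ((x - xj) % P) % P
                    den = den * ((xi - xj) % P) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def spot_check(codeword, k, trials=50):
        """Interpolate from the first k symbols, then test random positions.
        A symbol that disagrees with the degree-<k polynomial is caught with
        probability growing with the number of trials and of corruptions."""
        xs, ys = list(range(1, k + 1)), codeword[:k]
        return all(lagrange_at(xs, ys, j + 1) == codeword[j]
                   for j in (random.randrange(len(codeword)) for _ in range(trials)))

    if __name__ == "__main__":
        results = [17, 42, 7, 99]              # k = 4 per-node results
        word = rs_encode(results, m=12)        # redundant encoding across 12 nodes
        assert spot_check(word, k=4)
        word[5] = (word[5] + 1) % P            # one Byzantine node misreports
        print("corruption detected:", not spot_check(word, k=4, trials=300))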
Time-Space Tradeoffs for the Memory Game
A single-player game of Memory is played with n distinct pairs of cards,
with the cards in each pair bearing identical pictures. The cards are laid
face-down. A move consists of revealing two cards, chosen adaptively. If these
cards match, i.e., they bear the same picture, they are removed from play;
otherwise, they are turned back to face down. The object of the game is to
clear all cards while minimizing the number of moves. Past works have
thoroughly studied the expected number of moves required, assuming optimal play
by a player that has perfect memory. In this work, we study the Memory game
in a space-bounded setting.
We prove two time-space tradeoff lower bounds on algorithms (strategies for
the player) that clear all cards in T moves while using at most S bits of
memory. First, in a simple model where the pictures on the cards may only be
compared for equality, we prove that S*T = Omega(n^2 log n). This is tight:
it is easy to achieve S*T = O(n^2 log n) essentially everywhere on this
tradeoff curve. Second, in a more general model that allows arbitrary
computations, we prove that S*T = Omega(n^2). We prove this latter tradeoff
by modeling strategies as branching programs and extending a classic counting
argument of Borodin and Cook with a novel probabilistic argument. We conjecture
that the stronger tradeoff S*T = Omega(n^2 log n) in fact holds even in
this general model.
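The upper-bound side of such a tradeoff curve comes from the natural bounded-memory strategy, sketched below as a small simulation (the model assumptions, the function name play_memory, and the FIFO forgetting rule are ours, purely for illustration; this is not code or analysis from the paper): a player who can remember only `cap` previously revealed cards needs markedly more moves as `cap` shrinks, which is the qualitative shape of the tradeoff between memory S and moves T discussed above.

    # Simulate Memory with n pairs and a player that remembers at most `cap`
    # face-down cards (picture -> position), forgetting the oldest observation.
    import random
    from collections import OrderedDict

    def play_memory(n, cap, rng):
        deck = list(range(n)) * 2            # 2n cards; picture i appears twice
        rng.shuffle(deck)
        down = set(range(2 * n))             # positions still face down
        known = OrderedDict()                # picture -> remembered position
        moves = 0
        while down:
            moves += 1
            fresh = [p for p in down if p not in set(known.values())]
            p1 = rng.choice(fresh)                          # first reveal: an unseen card
            x = deck[p1]
            if x in known:                                  # twin's position is remembered
                down -= {p1, known.pop(x)}
                continue
            p2 = rng.choice([p for p in fresh if p != p1])  # second reveal: another unseen card
            if deck[p2] == x:                               # lucky immediate match
                down -= {p1, p2}
            else:                                           # no match: memorize, evict oldest
                for picture, position in ((x, p1), (deck[p2], p2)):
                    known[picture] = position
                    known.move_to_end(picture)
                    if len(known) > cap:
                        known.popitem(last=False)
        return moves

    if __name__ == "__main__":
        rng = random.Random(0)
        n = 100
        for cap in (2, 8, 32, 100):
            avg = sum(play_memory(n, cap, rng) for _ in range(5)) / 5
            print(f"cap={cap:4d}  average moves ~ {avg:.0f}")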
A New Quantum Lower Bound Method, with Applications to Direct Product Theorems and Time-Space Tradeoffs
We give a new version of the adversary method for proving lower bounds on
quantum query algorithms. The new method is based on analyzing the eigenspace
structure of the problem at hand. We use it to prove a new and optimal strong
direct product theorem for 2-sided error quantum algorithms computing k
independent instances of a symmetric Boolean function: if the algorithm uses
significantly less than k times the number of queries needed for one instance
of the function, then its success probability is exponentially small in k. We
also use the polynomial method to prove a direct product theorem for 1-sided
error algorithms for k threshold functions with a stronger bound on the success
probability. Finally, we present a quantum algorithm for evaluating solutions
to systems of linear inequalities, and use our direct product theorems to show
that the time-space tradeoff of this algorithm is close to optimal.
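The way such direct product theorems feed into time-space tradeoffs can be sketched as follows (an informal reconstruction of the standard slicing argument with all constants suppressed, intended only to convey the shape of the reasoning, not to reproduce the proofs in these papers). Cut a T-query, S-space computation into consecutive time slices; only S bits of memory cross each slice boundary, so each slice must essentially produce its outputs on its own, and producing Theta(S) correct output values is a Theta(S)-fold direct product problem. For sorting N items, a slice then needs on the order of \sqrt{SN} quantum queries to output Theta(S) further elements of the sorted order, and about N/S slices are required, giving

\[
  T \;\gtrsim\; \frac{N}{S}\cdot\sqrt{S N} \;=\; \frac{N^{3/2}}{\sqrt{S}},
  \qquad\text{i.e.}\qquad T^{2} S \;=\; \Omega(N^{3}),
\]

which is the sorting tradeoff stated in the first abstract above.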