Computing the permanent of (some) complex matrices
We present a deterministic algorithm which, for any given 0 < epsilon < 1 and an n x n real or complex matrix A = (a_{ij}) such that |a_{ij} - 1| < 0.19 for all i, j, computes the permanent of A within relative error epsilon in n^{O(ln n - ln epsilon)} time. The method can be extended to computing hafnians and multidimensional permanents.
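For context on the running time: the best known exact algorithms are exponential, so the quasi-polynomial n^{O(ln n - ln epsilon)} bound should be read against, e.g., Ryser's formula, which computes the permanent exactly in O(2^n n^2) operations. A minimal Python sketch of Ryser's formula (a standard baseline, not the paper's algorithm):

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^{|S|} * prod_i sum_{j in S} a_{ij}.
    Naive O(2^n n^2) version; works for real or complex entries.
    """
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```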
Approximating the Permanent of a Random Matrix with Vanishing Mean
We show an algorithm for computing the permanent of a random matrix with vanishing mean in quasi-polynomial time. Among special cases are the Gaussian and biased-Bernoulli random matrices with mean 1/lnln(n)^{1/8}. In addition, we can compute the permanent of a random matrix with mean 1/poly(ln(n)) in time 2^{O(n^{epsilon})} for any small constant epsilon > 0. Our algorithm counters the intuition that the permanent is hard because of the "sign problem" - namely, the interference between entries of a matrix with different signs. A major open question remains whether one can provide an efficient algorithm for random matrices with mean 1/poly(n), whose conjectured #P-hardness is one of the baseline assumptions of the BosonSampling paradigm.
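To make "a random matrix with mean mu" concrete (the paper's algorithm itself is not reproduced here), one can generate small mean-shifted Gaussian matrices and compute their permanents exactly with the Ryser sketch above; the helper name and parameter choices below are illustrative assumptions.

```python
import numpy as np
# reuses permanent_ryser from the sketch above

def mean_mu_gaussian(n, mu, rng):
    """n x n matrix of i.i.d. N(mu, 1) entries: a Gaussian matrix with mean mu."""
    return mu + rng.standard_normal((n, n))

rng = np.random.default_rng(0)
n, trials = 10, 20
for mu in (0.0, 0.1, 1.0):
    vals = [abs(permanent_ryser(mean_mu_gaussian(n, mu, rng)))
            for _ in range(trials)]
    print(f"mu = {mu}: median |per| ~ {np.median(vals):.3g}")
```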
Simply Exponential Approximation of the Permanent of Positive Semidefinite Matrices
We design a deterministic polynomial-time c^n approximation algorithm for the permanent of positive semidefinite matrices, where c = e^{gamma+1} ≈ 4.84 (gamma is Euler's constant). We write a natural convex relaxation and show that its optimum solution gives a c^n approximation of the permanent. We further show that this factor is asymptotically tight by constructing a family of positive semidefinite matrices.
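A multiplicative approximation is well posed here because the permanent of a Hermitian positive semidefinite matrix is real and nonnegative. A quick numerical check of that fact (illustrative only, not part of the paper's relaxation), again reusing the Ryser sketch:

```python
import numpy as np
# reuses permanent_ryser from the first sketch

rng = np.random.default_rng(1)
for _ in range(5):
    B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
    A = B @ B.conj().T  # Hermitian positive semidefinite by construction
    p = permanent_ryser(A)
    print(f"per(A) = {p.real:.4f}  (imaginary part: {abs(p.imag):.1e})")
```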
The Computational Complexity of Linear Optics
We give new evidence that quantum computers -- moreover, rudimentary quantum
computers built entirely out of linear-optical elements -- cannot be
efficiently simulated by classical computers. In particular, we define a model
of computation in which identical photons are generated, sent through a
linear-optical network, then nonadaptively measured to count the number of
photons in each mode. This model is not known or believed to be universal for
quantum computation, and indeed, we discuss the prospects for realizing the
model using current technology. On the other hand, we prove that the model is
able to solve sampling problems and search problems that are classically
intractable under plausible assumptions. Our first result says that, if there
exists a polynomial-time classical algorithm that samples from the same
probability distribution as a linear-optical network, then P^{#P} = BPP^{NP}, and
hence the polynomial hierarchy collapses to the third level. Unfortunately,
this result assumes an extremely accurate simulation. Our main result suggests
that even an approximate or noisy classical simulation would already imply a
collapse of the polynomial hierarchy. For this, we need two unproven
conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is
#P-hard to approximate the permanent of a matrix A of independent N(0,1)
Gaussian entries, with high probability over A; and the "Permanent
Anti-Concentration Conjecture", which says that |Per(A)|>=sqrt(n!)/poly(n) with
high probability over A. We present evidence for these conjectures, both of
which seem interesting even apart from our application. This paper does not
assume knowledge of quantum optics. Indeed, part of its goal is to develop the
beautiful theory of noninteracting bosons underlying our model, and its
connection to the permanent function, in a self-contained way accessible to
theoretical computer scientists.
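The connection to the permanent that the abstract refers to is the standard boson sampling amplitude formula: with n single photons injected into the first n modes of an m-mode interferometer U, the probability of measuring output occupations (s_1, ..., s_m) is |Per(U_S)|^2 / (s_1! ... s_m!), where U_S repeats row j of the first n columns of U exactly s_j times. A sketch under those conventions, reusing the Ryser routine from the first code block:

```python
import math
from itertools import combinations_with_replacement

import numpy as np
# reuses permanent_ryser from the first sketch

def output_probability(U, outcome, n):
    """Pr[outcome] when n single photons enter modes 0..n-1 of an m-mode
    interferometer U and photon counts are measured at the output:
    |Per(U_S)|^2 / (s_1! ... s_m!), with U_S built by repeating row j
    of U[:, :n] exactly s_j times.
    """
    rows = [j for j, s in enumerate(outcome) for _ in range(s)]
    U_S = np.asarray(U)[np.ix_(rows, list(range(n)))]
    norm = math.prod(math.factorial(s) for s in outcome)
    return abs(permanent_ryser(U_S)) ** 2 / norm

# sanity check: probabilities over all 2-photon outcomes of a random
# 4-mode interferometer should sum to 1
m, n = 4, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
U, _ = np.linalg.qr(X)  # random unitary (Haar up to phases)
total = 0.0
for modes in combinations_with_replacement(range(m), n):
    s = [0] * m
    for j in modes:
        s[j] += 1
    total += output_probability(U, s, n)
print(f"total probability: {total:.6f}")  # ~1.0
```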
No imminent quantum supremacy by boson sampling
It is predicted that quantum computers will dramatically outperform their
conventional counterparts. However, large-scale universal quantum computers are
yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to
the platform of photons in linear optics, which has sparked interest as a rapid
way to demonstrate this quantum supremacy. Photon statistics are governed by
intractable matrix functions known as permanents, which suggests that sampling
from the distribution obtained by injecting photons into a linear-optical
network could be solved more quickly by a photonic experiment than by a
classical computer. The contrast between the apparently awesome challenge faced
by any classical sampling algorithm and the apparently near-term experimental
resources required for a large boson sampling experiment has raised
expectations that quantum supremacy by boson sampling is on the horizon. Here
we present classical boson sampling algorithms and theoretical analyses of
prospects for scaling boson sampling experiments, showing that near-term
quantum supremacy via boson sampling is unlikely. While the largest boson
sampling experiments reported so far are with 5 photons, our classical
algorithm, based on Metropolised independence sampling (MIS), allowed the boson
sampling problem to be solved for 30 photons with standard computing hardware.
We argue that the impact of experimental photon losses means that demonstrating
quantum supremacy by boson sampling would require a step change in technology.
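A minimal, generic sketch of Metropolised independence sampling, the MCMC scheme named in the abstract: proposals are drawn i.i.d. from a cheap distribution q rather than by local moves. For boson sampling, log_p would be the quantum probability log |Per(U_S)|^2 minus the factorial normalization, and a natural cheap proposal is the distinguishable-photon distribution; both densities below are placeholders, not the paper's exact implementation.

```python
import numpy as np

def metropolised_independence_sampling(log_p, log_q, draw_q, n_samples, rng):
    """Generic MIS: each proposal x' ~ q is independent of the current
    state x and is accepted with probability
        min(1, p(x') q(x) / (p(x) q(x'))),
    computed here in log space for numerical stability.
    """
    x = draw_q(rng)
    samples = []
    for _ in range(n_samples):
        x_new = draw_q(rng)
        log_alpha = (log_p(x_new) - log_p(x)) - (log_q(x_new) - log_q(x))
        if np.log(rng.random()) < log_alpha:
            x = x_new
        samples.append(x)
    return samples
```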