BosonSampling is an intermediate model of quantum computation where
linear-optical networks are used to solve sampling problems expected to be hard
for classical computers. Since these devices are not expected to be universal
for quantum computation, it remains an open question whether any
error-correction techniques can be applied to them, and thus it is important to
investigate how robust the model is under natural experimental imperfections,
such as losses and imperfect control of parameters. Here we investigate the
complexity of BosonSampling under photon losses---more specifically, the case
where an unknown subset of the photons are randomly lost at the sources. We
show that, if k out of n photons are lost, then we cannot sample
classically from a distribution that is 1/n^{Θ(k)}-close (in total
variation distance) to the ideal distribution, unless a
BPP^{NP} machine can estimate the permanents of Gaussian
matrices in n^{O(k)} time. In particular, if k is constant, this implies
that simulating lossy BosonSampling is hard for a classical computer, under
exactly the same complexity assumption used for the original lossless case.
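The hardness statement above is phrased in terms of estimating permanents of Gaussian matrices. As a brief illustration (not taken from the paper itself), the sketch below computes the permanent exactly via Ryser's formula, whose O(2^n · n) scaling is the best known for exact computation and hints at why permanent estimation is believed to be classically hard; the matrix entries are drawn i.i.d. from the complex Gaussian distribution, as in the BosonSampling setting.

```python
import itertools
import numpy as np

def permanent(A):
    """Exact permanent of an n x n matrix via Ryser's formula.

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i (sum_{j in S} A[i, j]).
    Runs in O(2^n * n^2) time, so it is tractable only for small n.
    """
    n = A.shape[0]
    total = 0.0 + 0.0j
    for mask in itertools.product([0, 1], repeat=n):
        cols = [j for j in range(n) if mask[j]]
        if not cols:
            continue
        # Product over rows of the row-sums restricted to columns in S.
        prod = np.prod(A[:, cols].sum(axis=1))
        total += (-1) ** len(cols) * prod
    return (-1) ** n * total

# An i.i.d. complex Gaussian matrix, the distribution relevant here.
rng = np.random.default_rng(0)
n = 6
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
print(permanent(G))
```

For context: the permanent of the 2x2 matrix [[1, 2], [3, 4]] is 1·4 + 2·3 = 10, which the function above reproduces, and the permanent of any identity matrix is 1.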