BosonSampling with Lost Photons
BosonSampling is an intermediate model of quantum computation where
linear-optical networks are used to solve sampling problems expected to be hard
for classical computers. Since these devices are not expected to be universal
for quantum computation, it remains an open question whether any
error-correction techniques can be applied to them, and thus it is important to
investigate how robust the model is under natural experimental imperfections,
such as losses and imperfect control of parameters. Here we investigate the
complexity of BosonSampling under photon losses---more specifically, the case
where an unknown subset of the photons are randomly lost at the sources. We
show that, if k out of n photons are lost, then we cannot sample classically
from a distribution that is 1/n^{Theta(k)}-close (in total variation distance)
to the ideal distribution, unless a BPP^NP machine can estimate the permanents
of Gaussian matrices in n^{O(k)} time. In particular, if k is constant, this
implies
that simulating lossy BosonSampling is hard for a classical computer, under
exactly the same complexity assumption used for the original lossless case.
Comment: 12 pages. v2: extended concluding section
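As an aside to the statement above (an illustrative note, not part of the
abstract): the permanent invoked in the hardness claim can be computed exactly,
in exponential time, with Ryser's formula, and the Gaussian matrices are those
with i.i.d. complex standard-normal entries. A minimal Python sketch, assuming
numpy is available; the size n = 4 and the seed are arbitrary choices, and the
hardness statement itself concerns approximate estimation by a BPP^NP machine
rather than this brute-force evaluation:

import itertools
import numpy as np

def permanent_ryser(A):
    """Exact permanent of an n x n matrix via Ryser's formula, O(2^n * n^2)."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)   # sum_{j in S} A[i, j] per row
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# A small Gaussian matrix: i.i.d. complex standard-normal entries.
rng = np.random.default_rng(0)
n = 4
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
print(permanent_ryser(A))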
The Computational Power of Non-interacting Particles
Shortened abstract: In this thesis, I study two restricted models of quantum
computing related to free identical particles.
Free fermions correspond to a set of two-qubit gates known as matchgates.
Matchgates are classically simulable when acting on nearest neighbors on a
path, but universal for quantum computing when acting on distant qubits or when
SWAP gates are available. I generalize these results in two ways. First, I show
that SWAP is only one in a large family of gates that uplift matchgates to
quantum universality. In fact, I show that the set of all matchgates plus any
nonmatchgate parity-preserving two-qubit gate is universal, and interpret this
fact in terms of local invariants of two-qubit gates. Second, I investigate the
power of matchgates in arbitrary connectivity graphs, showing they are
universal on any connected graph other than a path or a cycle, and classically
simulable on a cycle. I also prove the same dichotomy for the XY interaction.
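As an illustration of the gates discussed above (a sketch of my own, not text
from the thesis): a matchgate G(A, B) acts with A on the even-parity subspace
span{|00>, |11>} and with B on the odd-parity subspace span{|01>, |10>},
subject to det A = det B. The basis ordering |00>, |01>, |10>, |11> below is an
assumption of the sketch:

import numpy as np

def matchgate(A, B):
    """Two-qubit matchgate G(A, B); requires det A = det B."""
    assert np.isclose(np.linalg.det(A), np.linalg.det(B)), "det A must equal det B"
    G = np.zeros((4, 4), dtype=complex)      # basis order |00>, |01>, |10>, |11>
    G[np.ix_([0, 3], [0, 3])] = A            # even-parity block
    G[np.ix_([1, 2], [1, 2])] = B            # odd-parity block
    return G

Z = np.diag([1, -1]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
G = matchgate(Z, X)   # det Z = det X = -1, so G(Z, X) is a valid matchgate

# SWAP corresponds to G(I, X), but det I = 1 while det X = -1, so SWAP is
# parity-preserving without being a matchgate; gates of this kind are the ones
# the first result above shows to uplift matchgates to quantum universality.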
Free bosons give rise to a model known as BosonSampling. BosonSampling
consists of (i) preparing a Fock state of n photons, (ii) interfering these
photons in an m-mode linear interferometer, and (iii) measuring the output in
the Fock basis. Sampling approximately from the resulting distribution should
be classically hard, under reasonable complexity assumptions. Here I show that
exact BosonSampling remains hard even if the linear-optical circuit has
constant depth. I also report several experiments where three-photon
interference was observed in integrated interferometers of various sizes,
providing some of the first implementations of BosonSampling in this regime.
The experiments also focus on the bosonic bunching behavior and on validation
of BosonSampling devices. This thesis contains descriptions of the numerical
analyses done on the experimental data, omitted from the corresponding
publications.
Comment: PhD Thesis, defended at Universidade Federal Fluminense in March
2014. Final version, 208 pages. New results in Chapter 5 correspond to
arXiv:1106.1863, arXiv:1207.2126, and arXiv:1308.1463. New results in Chapter
6 correspond to arXiv:1212.2783, arXiv:1305.3188, arXiv:1311.1622 and
arXiv:1412.678
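To make steps (i)-(iii) above concrete (again an illustrative sketch, not
material from the thesis): for a collision-free input configuration S and
output configuration T of an m-mode interferometer U, the output probability
is |Perm(U_{S,T})|^2, where U_{S,T} is the n x n submatrix of U built from the
rows in S and the columns in T. The mode numbers, output pattern, and seed
below are arbitrary choices:

import itertools
import numpy as np

def permanent(A):
    """Brute-force permanent; fine for the small n used here."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix."""
    Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases

rng = np.random.default_rng(1)
n, m = 3, 9                       # (i) n single photons injected into m modes
U = haar_unitary(m, rng)          # (ii) the linear interferometer
inputs = [0, 1, 2]                # photons enter the first n modes
outputs = [3, 5, 8]               # (iii) one collision-free detection pattern
A = U[np.ix_(inputs, outputs)]
print(abs(permanent(A)) ** 2)     # probability of this detection pattern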
Regimes of classical simulability for noisy Gaussian boson sampling
As a promising candidate for exhibiting quantum computational supremacy,
Gaussian Boson Sampling (GBS) is designed to exploit the ease of experimental
preparation of Gaussian states. However, sufficiently large and inevitable
experimental noise might render GBS classically simulable. In this work, we
formalize this intuition by establishing a sufficient condition for approximate
polynomial-time classical simulation of noisy GBS --- in the form of an
inequality between the input squeezing parameter, the overall transmission rate
and the quality of photon detectors. Our result serves as a non-classicality
test that must be passed by any quantum computational supremacy demonstration
based on GBS. We show that, for most linear-optical architectures, where photon
loss increases exponentially with the circuit depth, noisy GBS loses its
quantum advantage in the asymptotic limit. Our results thus delineate
intermediate-sized regimes where GBS devices might considerably outperform
classical computers for modest noise levels. Finally, we find that increasing
the amount of input squeezing is helpful to evade our classical simulation
algorithm, which suggests a potential route to mitigate photon loss.
Comment: 13 pages, 4 figures, final version accepted for publication in
Physical Review Letters
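For orientation (a note added here, not part of the abstract; eta_layer and D
are symbols introduced only for this remark): if every layer of a depth-D
linear-optical circuit transmits a fraction eta_layer < 1 of the light, the
overall transmission obeys

    eta_total = eta_layer^D = exp(-D * ln(1/eta_layer)) --> 0 as D --> infinity,

so any fixed threshold on the overall transmission of the kind established in
this work is crossed at some finite depth, which is the sense in which noisy
GBS loses its asymptotic quantum advantage.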
Dominant two-loop electroweak corrections to the hadroproduction of a pseudoscalar Higgs boson and its photonic decay
We present the dominant two-loop electroweak corrections to the partial decay
widths of the neutral CP-odd Higgs boson A^0, with mass M_{A^0} < 2 M_W, into
gluon jets and prompt photons, in the two-Higgs-doublet model for low to
intermediate values of the ratio tan(beta) = v_2/v_1 of the vacuum expectation
values. They apply as they stand to the production cross sections in hadronic
and two-photon collisions, at the Tevatron, the LHC, and a future photon
collider. The appearance of three gamma_5 matrices in closed fermion loops
requires special care in the dimensional regularization of ultraviolet
divergences. The corrections are negative and amount to several percent, so
that they fully compensate or partly screen the enhancement due to QCD
corrections.
Comment: 9 pages, 3 figures