BosonSampling with Lost Photons
BosonSampling is an intermediate model of quantum computation where
linear-optical networks are used to solve sampling problems expected to be hard
for classical computers. Since these devices are not expected to be universal
for quantum computation, it remains an open question whether any
error-correction techniques can be applied to them, and thus it is important to
investigate how robust the model is under natural experimental imperfections,
such as losses and imperfect control of parameters. Here we investigate the
complexity of BosonSampling under photon losses---more specifically, the case
where an unknown subset of the photons are randomly lost at the sources. We
show that, if k out of n photons are lost, then we cannot sample
classically from a distribution that is 1/n^{Θ(k)}-close (in total
variation distance) to the ideal distribution, unless a
BPP^{NP} machine can estimate the permanents of Gaussian
matrices in n^{O(k)} time. In particular, if k is constant, this implies
that simulating lossy BosonSampling is hard for a classical computer, under
exactly the same complexity assumption used for the original lossless case.
Comment: 12 pages. v2: extended concluding section
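The hardness claim above rests on estimating matrix permanents, the computational core of BosonSampling. As a point of reference, a minimal sketch (assuming nothing beyond NumPy) of exact permanent computation via Ryser's inclusion-exclusion formula, which runs in O(2^n · n) time rather than the naive O(n! · n):

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Exact permanent of an n x n matrix via Ryser's
    inclusion-exclusion formula, O(2^n * n) time."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            # product over rows of the sum of the selected columns
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Sanity check: the permanent of the all-ones 3x3 matrix is 3! = 6
print(permanent_ryser(np.ones((3, 3))))  # → 6.0
```

Even this exponential-time formula is the best known exact approach up to polynomial factors, which is why hardness statements about BosonSampling are phrased in terms of permanent estimation.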
Measurement-Based Linear Optics
© 2017 American Physical Society. A major challenge in optical quantum processing is implementing large, stable interferometers. We offer a novel approach: virtual, measurement-based interferometers that are programed on the fly solely by the choice of homodyne measurement angles. The effects of finite squeezing are captured as uniform amplitude damping. We compare our proposal to existing (physical) interferometers and consider its performance for BosonSampling, which could demonstrate postclassical computational power in the near future. We prove its efficiency in time and squeezing (energy) in this setting
Boson Sampling is Robust to Small Errors in the Network Matrix
We demonstrate the robustness of BosonSampling to imperfections in the linear
optical network that cause a small deviation in the matrix it implements. We
show that applying a noisy matrix that is within ε of the
desired matrix in operator norm leads to an output distribution that is
within O(nε) of the desired distribution in variation distance, where n
is the number of photons. This lets us derive a sufficient tolerance for each
beamsplitter and phaseshifter in the network.
This result considers only errors that result from the network encoding a
different unitary than desired, and not other sources of noise such as photon
loss and partial distinguishability.
Comment: 8 pages
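The flavor of this bound can be checked numerically in the smallest nontrivial case. The sketch below is illustrative only: two photons on a 2-mode interferometer, outcome probabilities computed from permanents of submatrices, and a small extra rotation standing in for network noise:

```python
import numpy as np

def per2(M):
    # Permanent of a 2x2 matrix: per([[a, b], [c, d]]) = a*d + b*c
    return M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]

def output_dist(U):
    # Distribution over outcomes (2,0), (1,1), (0,2) for one photon
    # injected into each input mode of the 2-mode interferometer U
    p20 = abs(per2(U[[0, 0], :])) ** 2 / 2  # both photons exit mode 0
    p11 = abs(per2(U)) ** 2                 # one photon per output mode
    p02 = abs(per2(U[[1, 1], :])) ** 2 / 2  # both photons exit mode 1
    return np.array([p20, p11, p02])

def rotation(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

U = rotation(0.2)               # target beamsplitter
V = rotation(0.001) @ U         # noisy implementation
eps = np.linalg.norm(V - U, 2)  # operator-norm error, ~1e-3

tv = 0.5 * np.abs(output_dist(U) - output_dist(V)).sum()
print(tv <= 2 * eps)  # variation distance stays within n*eps for n = 2
```

The same comparison scales to larger networks, at the cost of computing permanents of all n-photon submatrices.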
Complexity Theory and its Applications in Linear Quantum Optics
This thesis is intended in part to summarize and also to contribute to the
newest developments in passive linear optics that have resulted, directly or
indirectly, from the somewhat shocking discovery in 2010 that the BosonSampling
problem is likely hard for a classical computer to simulate. In doing so, I
hope to provide a historic context for the original result, as well as an
outlook on the future of technology derived from these newer developments.
Emphasis is placed in each section on providing a broader conceptual framework for
understanding the consequences of each result in light of the others. This
framework is intended to be comprehensible even without a deep understanding of
the topics themselves.
The first three chapters focus more closely on the BosonSampling result
itself, seeking to understand the computational complexity aspects of passive
linear optical networks, and what consequences this may have. Some effort is
spent discussing a number of issues inherent in the BosonSampling problem that
limit the scope of its applicability, and that are still active topics of
research. Finally, we describe two other linear optical settings that inherit
the same complexity as BosonSampling. The final chapters focus on how an
intuitive understanding of BosonSampling has led to developments in optical
metrology and other closely related fields. These developments suggest the
exciting possibility that quantum sensors may be viable in the next few years
with only marginal improvements in technology. Lastly, some open problems are
presented which are intended to lay out a course for future research that would
allow for a more complete picture of the scalability of the architecture
developed in these chapters.
Comment: PhD thesis, 121 pages, 18 figures
Routes Towards Optical Quantum Technology --- New Architectures and Applications
This thesis is based upon the work I have done during my PhD candidature at
Macquarie University. In this work we develop quantum technologies that are
directed towards realising a quantum computer. Specifically, we have made many
theoretical advancements in a type of quantum information processing protocol
called BosonSampling. This device efficiently simulates the interaction of
quantum particles called bosons, which no classical computer can efficiently
simulate. In this thesis we explore quantum random walks, which are the basis
of how the bosons in BosonSampling interfere with each other. We explore
implementing BosonSampling using the most readily available photon source
technology. We invented a completely new architecture which implements
BosonSampling in time rather than space, and which has since been used to
perform the world's largest BosonSampling experiment. We look at variations
on the traditional BosonSampling architecture by considering other quantum
states of light. We show a world-first application inspired by BosonSampling
in quantum metrology, where measurements may be made more accurately than with
any classical method. Lastly, staying with BosonSampling, we
reformulate the formalism of BosonSampling using a quantum optics approach.
In addition, and not related to BosonSampling, we show a protocol for
efficiently generating large photon-number Fock states, which are a type of
quantum state of light useful for quantum computation. We also show a method
for generating a specific quantum state of light that is useful for quantum
error correction --- an essential component of realising a quantum computer ---
by coupling together light and atoms.
Comment: PhD thesis
Marginal probabilities in boson samplers with arbitrary input states
With the recent claim of a quantum advantage demonstration in photonics by
Zhong et al., the question of computing lower-order approximations of
boson sampling with arbitrary quantum states at arbitrary distinguishability
has come to the fore. In this work, we present results in this direction,
building on the results of Clifford and Clifford. In particular, we show:
1) How to compute marginal detection probabilities (i.e. probabilities of the
detection of some but not all photons) for arbitrary quantum states.
2) Using the first result, how to generalize the sampling algorithm of
Clifford and Clifford to arbitrary photon distinguishabilities and arbitrary
input quantum states.
3) How to incorporate truncations of the quantum interference into a sampling
algorithm.
4) A remark concerning maximum-likelihood verification of the recent
photonic quantum advantage experiment.
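For intuition on item 1: a marginal detection probability is what remains after summing the full output distribution over the unobserved modes. The brute-force sketch below is exponentially costly and only illustrates the definition (the paper computes marginals directly, without the full distribution), using the two-mode Hong-Ou-Mandel distribution as the example:

```python
def marginal(dist, modes):
    """Marginal photon-number distribution over `modes`, given the full
    output distribution as a dict {outcome tuple: probability}."""
    out = {}
    for outcome, p in dist.items():
        key = tuple(outcome[i] for i in modes)
        out[key] = out.get(key, 0.0) + p
    return out

# Hong-Ou-Mandel: two photons on a 50:50 beamsplitter always bunch
full = {(2, 0): 0.5, (1, 1): 0.0, (0, 2): 0.5}
print(marginal(full, [0]))  # → {(2,): 0.5, (1,): 0.0, (0,): 0.5}
```

The point of the paper's permanent-based formulas is precisely to avoid this sum over exponentially many outcomes.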
Spoofing cross entropy measure in boson sampling
The cross-entropy measure is a widely used benchmark for demonstrating quantum
computational advantage in sampling problems, such as random circuit sampling
using superconducting qubits and boson sampling. In this work, we propose a
heuristic classical algorithm that generates heavy outcomes of the ideal boson
sampling distribution and consequently achieves a large cross entropy. The key
idea is that there exist efficiently simulable classical samplers that
correlate with the ideal boson sampling probability distribution, and that
this correlation can be used to post-select heavy outcomes of the ideal
distribution. As a result, our algorithm achieves a large cross entropy score
by selectively generating heavy outcomes without simulating ideal boson
sampling. We first show that for
small-size circuits, the algorithm can even score a better cross entropy than
the ideal distribution of boson sampling. We then demonstrate that our method
scores a better cross entropy than the recent Gaussian boson sampling
experiments when implemented at intermediate, verifiable system sizes. Much
like current state-of-the-art experiments, we cannot verify that our spoofer
works for quantum advantage size systems. However, we demonstrate our approach
works for much larger system sizes in fermion sampling, where we can
efficiently compute output probabilities.
Comment: 14 pages, 11 figures
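The post-selection idea can be shown in miniature. In this toy sketch (not the paper's algorithm), a hypothetical proxy distribution q stands in for the efficiently simulable correlated sampler; keeping only the outcomes q rates as heavy raises the cross entropy measured against the ideal distribution p:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ideal" distribution p over 1000 outcomes (a stand-in for the boson
# sampling distribution, which a real spoofer cannot compute at scale)
p = rng.exponential(size=1000)
p /= p.sum()

# Honest benchmark: cross entropy of samples drawn from p itself
xe_ideal = np.log(p[rng.choice(1000, size=5000, p=p)]).mean()

# Spoofer: sample a cheap correlated proxy q, then keep only the outcomes
# the proxy rates as heavy (the paper's post-selection step, in miniature)
q = np.clip(p + rng.normal(scale=p.std(), size=1000), 1e-12, None)
q /= q.sum()
cand = rng.choice(1000, size=50000, p=q)
heavy = cand[np.argsort(q[cand])[-5000:]]  # top 10% of candidates by proxy
xe_spoof = np.log(p[heavy]).mean()

print(xe_spoof, xe_ideal)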
Tensor network algorithm for simulating experimental Gaussian boson sampling
Gaussian boson sampling is a promising candidate for showing experimental
quantum advantage. While there is evidence that noiseless Gaussian boson
sampling is hard to efficiently simulate using a classical computer, the
current Gaussian boson sampling experiments inevitably suffer from loss and
other noise models. Despite a high photon loss rate and the presence of noise,
they are currently claimed to be hard to classically simulate with the
best-known classical algorithm. In this work, we present a classical
tensor-network algorithm that simulates Gaussian boson sampling and whose
complexity can be significantly reduced when the photon loss rate is high. By
generalizing the existing thermal-state approximation algorithm for lossy
Gaussian boson sampling, the proposed algorithm achieves increasing accuracy
as its running time grows, whereas the algorithm that samples from the
thermal state gives only a fixed accuracy. The generalization allows us to
assess the computational power of
current lossy experiments even though their output state is not believed to be
close to a thermal state. We then simulate the largest Gaussian boson sampling
implemented in experiments so far. Much like the actual experiments,
classically verifying this large-scale simulation is challenging. To do this,
we first observe that in our smaller-scale simulations the total variation
distance, cross-entropy, and two-point correlation benchmarks all coincide.
Based on this observation, we demonstrate for large-scale experiments that our
sampler matches the ground-truth two-point and higher-order correlation
functions better than the experiment does, exhibiting evidence that our sampler
can simulate the ground-truth distribution better than the experiment can.
Comment: 20 pages, 10 figures
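The two-point correlation benchmark mentioned above compares C_ij = ⟨n_i n_j⟩ − ⟨n_i⟩⟨n_j⟩ between a sampler's output and the ground truth. A minimal sketch, using hypothetical Poisson toy data in place of real photon-click samples:

```python
import numpy as np

def two_point(samples):
    # Empirical two-point correlators C_ij = <n_i n_j> - <n_i><n_j>
    # from photon-number samples of shape (shots, modes)
    mean = samples.mean(axis=0)
    second = (samples[:, :, None] * samples[:, None, :]).mean(axis=0)
    return second - np.outer(mean, mean)

rng = np.random.default_rng(1)
truth = rng.poisson(0.5, size=(20000, 4))      # hypothetical ground truth
candidate = rng.poisson(0.5, size=(20000, 4))  # hypothetical sampler output

# A sampler "matches" the ground truth when its correlation matrix is close
dist = np.linalg.norm(two_point(truth) - two_point(candidate))
print(round(float(dist), 3))
```

Because the two toy samplers here draw from the same distribution, the distance between their correlation matrices shrinks with the number of shots; a sampler with the wrong correlations would plateau at a finite distance.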