Exact Boson Sampling using Gaussian continuous variable measurements
BosonSampling is a quantum mechanical task involving Fock basis state
preparation and detection and evolution using only linear interactions. A
classical algorithm for producing samples from this quantum task cannot be
efficient unless the polynomial hierarchy of complexity classes collapses, a
situation believe to be highly implausible. We present method for constructing
a device which uses Fock state preparations, linear interactions and Gaussian
continuous-variable measurements for which one can show exact sampling would be
hard for a classical algorithm in the same way as Boson Sampling. The detection
events used from this arrangement does not allow a similar conclusion for the
classical hardness of approximate sampling to be drawn. We discuss the details
of this result outlining some specific properties that approximate sampling
hardness requires
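As a toy illustration of the measurement side of this scheme, a Gaussian continuous-variable measurement such as homodyne detection applied to a Fock state |n> yields quadrature outcomes distributed as the squared harmonic-oscillator wavefunction. A minimal Python sketch, not taken from the paper; the function name and the hbar = 1 normalisation convention are my own choices:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def fock_quadrature_pdf(n, x):
    """|psi_n(x)|^2: density of x-quadrature (homodyne) outcomes on the
    Fock state |n>, using hbar = 1 conventions."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    Hn = hermval(x, coef)  # physicists' Hermite polynomial H_n(x)
    psi = Hn * np.exp(-x**2 / 2) / sqrt(2**n * factorial(n) * sqrt(pi))
    return psi**2

x = np.linspace(-8.0, 8.0, 4001)
p3 = fock_quadrature_pdf(3, x)
norm = p3.sum() * (x[1] - x[0])  # Riemann sum; should be close to 1
```

Sampling from such densities is easy classically for a single mode; the hardness discussed above only emerges from the combination of many-mode Fock inputs and linear interference.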
Boson Sampling from Gaussian States
We pose a generalized Boson Sampling problem. Strong evidence exists that
such a problem becomes intractable on a classical computer as a function of the
number of bosons. We describe a quantum optical processor that can solve this
problem efficiently based on Gaussian input states, a linear optical network
and non-adaptive photon counting measurements. All the elements required to
build such a processor currently exist. The demonstration of such a device
would provide the first empirical evidence that quantum computers can indeed
outperform classical computers, and could lead to applications.
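The ingredients listed above act very simply at the level of covariance matrices: a linear optical network is a symplectic orthogonal matrix S, and a Gaussian input covariance V transforms as V -> S V S^T. A minimal numpy sketch, with conventions (vacuum covariance equal to the identity, quadrature ordering x1, p1, x2, p2, a 50:50 beam splitter) chosen purely for illustration:

```python
import numpy as np

# Two single-mode squeezed vacua, squeezing parameter r, ordering (x1,p1,x2,p2).
r = 1.0
V_in = np.diag([np.exp(-2*r), np.exp(2*r), np.exp(2*r), np.exp(-2*r)])

# 50:50 beam splitter: acts identically on the x and p quadratures,
# hence symplectic and orthogonal.
t = np.sqrt(0.5)
S = np.array([[ t, 0.0,  t, 0.0],
              [0.0,  t, 0.0,  t],
              [-t, 0.0,  t, 0.0],
              [0.0, -t, 0.0,  t]])

V_out = S @ V_in @ S.T  # Gaussian state emerging from the network
```

Photon counting on the output Gaussian state is what breaks this efficient covariance-level description and makes the sampling problem hard.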
Quantum characterization of superconducting photon counters
We address the quantum characterization of photon counters based on
transition-edge sensors (TESs) and present the first experimental tomography of
the positive operator-valued measure (POVM) of a TES. We provide the reliable
tomographic reconstruction of the POVM elements up to 11 detected photons and
M=100 incoming photons, demonstrating that it is a linear detector.
Comment: 3 figures, NJP (to appear).
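The "linear detector" conclusion corresponds to a simple loss model: a counter of efficiency eta registers k of n incoming photons with binomial probability, so the POVM elements are diagonal in the Fock basis. A sketch of those diagonal entries under that model; the function name and the value eta = 0.6 are illustrative, not the reconstructed TES values:

```python
from math import comb

def linear_counter_povm(k, n, eta):
    """Diagonal Fock-basis entry <n|Pi_k|n> of a linear photon counter:
    probability of k clicks from n incident photons with efficiency eta
    (binomial loss model)."""
    if k > n:
        return 0.0
    return comb(n, k) * eta**k * (1.0 - eta)**(n - k)

# POVM element Pi_3 over Fock states |n><n|, n = 0..M
eta, M = 0.6, 100
Pi_3 = [linear_counter_povm(3, n, eta) for n in range(M + 1)]
```

Completeness is immediate: for each n the entries over k sum to one, as a POVM requires.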
Spectral thresholding quantum tomography for low rank states
The estimation of high-dimensional quantum states is an important statistical problem arising in current quantum technology applications. A key example is the tomography of multiple-ion states, employed in the validation of state preparation in ion trap experiments (Häffner et al 2005 Nature 438 643). Since full tomography becomes unfeasible even for a small number of ions, there is a need to investigate lower-dimensional statistical models which capture prior information about the state, and to devise estimation methods tailored to such models. In this paper we propose several new methods aimed at the efficient estimation of low rank states and analyse their performance for multiple ions tomography. All methods consist of first computing the least squares estimator, followed by its truncation to an appropriately chosen smaller rank. The latter is done by setting eigenvalues below a certain 'noise level' to zero, while keeping the rest unchanged, or normalising them appropriately. We show that (up to logarithmic factors in the space dimension) the mean square error of the resulting estimators scales as rd/N, where r is the rank, d is the dimension of the Hilbert space, and N is the number of quantum samples. Furthermore we establish a lower bound for the asymptotic minimax risk which shows that the above scaling is optimal. The performance of the estimators is analysed in an extensive simulation study, with emphasis on the dependence on the state rank and the number of measurement repetitions. We find that all estimators perform significantly better than the least squares estimator, with the 'physical estimator' (which is a bona fide density matrix) slightly outperforming the others.
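The truncation step described above (zeroing eigenvalues below a noise level, then optionally renormalising to unit trace) takes only a few lines of numpy. The function name and threshold value below are illustrative, not the paper's code:

```python
import numpy as np

def spectral_threshold(rho_ls, noise_level, normalize=True):
    """Truncate a least-squares state estimate: eigenvalues below
    noise_level are set to zero, the rest kept; optionally renormalise
    so the result has unit trace."""
    vals, vecs = np.linalg.eigh(rho_ls)
    vals = np.where(vals >= noise_level, vals, 0.0)
    rho = (vecs * vals) @ vecs.conj().T  # rebuild from kept eigenpairs
    if normalize and vals.sum() > 0:
        rho /= np.trace(rho).real
    return rho
```

With renormalisation this is a crude version of the 'physical estimator': the output is positive semidefinite with unit trace, hence a bona fide density matrix.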
From the Bloch sphere to phase space representations with the Gottesman-Kitaev-Preskill encoding
In this work, we study the Wigner phase-space representation of qubit states
encoded in continuous variables (CV) by using the Gottesman-Kitaev-Preskill
(GKP) mapping. We explore a possible connection between resources for universal
quantum computation in discrete-variable (DV) systems, i.e. non-stabilizer
states, and negativity of the Wigner function in CV architectures, which is a
necessary requirement for quantum advantage. In particular, we show that the
lowest Wigner logarithmic negativity of qubit states encoded in CV with the GKP
mapping corresponds to encoded stabilizer states, while the maximum negativity
is associated with the most non-stabilizer states, H-type and T-type quantum
states.
Comment: (v1) Accepted for publication in the Springer "Mathematics for
Industry" series. (v2) Typo in the abstract fixed; URL of the conference
where the paper was presented added: International Symposium on
Mathematics, Quantum Theory, and Cryptography (MQC), held in September 2019
in Fukuoka, Japan (https://www.mqc2019.org/mqc2019/program).
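The role of Wigner negativity can be illustrated with the single-photon Fock state, whose Wigner function has the closed form W(x,p) = (1/pi)(2(x^2+p^2) - 1) exp(-(x^2+p^2)) and is negative at the origin. A numerical sketch of its Wigner logarithmic negativity, log2 of the integral of |W|; the grid and conventions are chosen for illustration and are not from the paper:

```python
import numpy as np

# Wigner function of the Fock state |1> on a phase-space grid.
x = np.linspace(-6.0, 6.0, 601)
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2
W = (2*R2 - 1) * np.exp(-R2) / np.pi

dx = x[1] - x[0]
norm = W.sum() * dx * dx                         # integral of W, ~1
log_neg = np.log2(np.abs(W).sum() * dx * dx)     # Wigner logarithmic negativity
```

A Gaussian state would give log_neg = 0 (its Wigner function is a nonnegative density); the strictly positive value here is the negativity resource the abstract refers to.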
No imminent quantum supremacy by boson sampling
It is predicted that quantum computers will dramatically outperform their
conventional counterparts. However, large-scale universal quantum computers are
yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to
the platform of photons in linear optics, which has sparked interest as a rapid
way to demonstrate this quantum supremacy. Photon statistics are governed by
intractable matrix functions known as permanents, which suggests that sampling
from the distribution obtained by injecting photons into a linear-optical
network could be solved more quickly by a photonic experiment than by a
classical computer. The contrast between the apparently awesome challenge faced
by any classical sampling algorithm and the apparently near-term experimental
resources required for a large boson sampling experiment has raised
expectations that quantum supremacy by boson sampling is on the horizon. Here
we present classical boson sampling algorithms and theoretical analyses of
prospects for scaling boson sampling experiments, showing that near-term
quantum supremacy via boson sampling is unlikely. While the largest boson
sampling experiments reported so far are with 5 photons, our classical
algorithm, based on Metropolised independence sampling (MIS), allowed the boson
sampling problem to be solved for 30 photons with standard computing hardware.
We argue that the impact of experimental photon losses means that demonstrating
quantum supremacy by boson sampling would require a step change in technology.
Comment: 25 pages, 9 figures. Comments welcome.
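The permanents mentioned above can be evaluated exactly, though at exponential cost, via Ryser's inclusion-exclusion formula, which improves the naive O(n * n!) expansion to O(2^n * n^2); exact classical samplers build on such evaluations. A straightforward Python sketch (this is not the MIS sampler of the paper, just the matrix function underlying photon statistics):

```python
import numpy as np

def permanent_ryser(A):
    """Permanent of a square matrix by Ryser's formula:
    perm(A) = sum over nonempty column subsets S of
    (-1)^(n - |S|) * prod_i (sum_{j in S} A[i, j])."""
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if mask >> j & 1]
        prod = np.prod([A[i, cols].sum() for i in range(n)])
        total += (-1) ** (n - len(cols)) * prod
    return total
```

Even this "fast" exact method hits its limit around n = 30-50 photons on classical hardware, which is why the abstract's loss analysis, rather than raw permanent cost, ends up being the decisive obstacle.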
Quantum teleportation on a photonic chip
Quantum teleportation is a fundamental concept in quantum physics which now
finds important applications at the heart of quantum technology including
quantum relays, quantum repeaters and linear optics quantum computing (LOQC).
Photonic implementations have largely focussed on achieving long distance
teleportation due to its suitability for decoherence-free communication.
Teleportation also plays a vital role in the scalability of photonic quantum
computing, for which large linear optical networks will likely require an
integrated architecture. Here we report the first demonstration of quantum
teleportation in which all key parts - entanglement preparation, Bell-state
analysis and quantum state tomography - are performed on a reconfigurable
integrated photonic chip. We also show that a novel element-wise
characterisation method is critical to mitigate component errors, a key
technique which will become increasingly important as integrated circuits reach
higher complexities necessary for quantum-enhanced operation.
Comment: Originally submitted version - refer to the online journal for the
accepted manuscript; Nature Photonics (2014).
Rank-based model selection for multiple ions quantum tomography
The statistical analysis of measurement data has become a key component of
many quantum engineering experiments. As standard full state tomography becomes
unfeasible for large dimensional quantum systems, one needs to exploit prior
information and the "sparsity" properties of the experimental state in order to
reduce the dimensionality of the estimation problem. In this paper we propose
model selection as a general principle for finding the simplest, or most
parsimonious explanation of the data, by fitting different models and choosing
the estimator with the best trade-off between likelihood fit and model
complexity. We apply two well established model selection methods -- the Akaike
information criterion (AIC) and the Bayesian information criterion (BIC) -- to
models consisting of states of fixed rank and datasets such as are currently
produced in multiple ions experiments. We test the performance of AIC and BIC
on randomly chosen low rank states of 4 ions, and study the dependence of the
selected rank on the number of measurement repetitions for one-ion states. We
then apply the methods to real data from a 4 ions experiment aimed at creating
a Smolin state of rank 4. The two methods indicate that the optimal model for
describing the data lies between ranks 6 and 9, and the Pearson χ² test
is applied to validate this conclusion. Additionally, we find that the mean
square error of the maximum likelihood estimator for pure states is close to
that of the optimum over all possible measurements.
Comment: 24 pages, 6 figures, 3 tables.
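The AIC/BIC comparison used above is mechanical once the maximised log-likelihood and a per-rank parameter count are available; a commonly used count for a rank-r density matrix on a d-dimensional Hilbert space is r(2d - r) - 1. A sketch with illustrative function names (the log-likelihoods themselves would come from the measurement data):

```python
import numpy as np

def rank_r_params(r, d):
    """Real parameters of a rank-r density matrix on a d-dimensional
    Hilbert space: r*(2d - r) - 1 (Hermitian, PSD, unit trace)."""
    return r * (2 * d - r) - 1

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 log L (smaller is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n_obs):
    """Bayesian information criterion: k log(n) - 2 log L."""
    return k * np.log(n_obs) - 2 * loglik

# Model selection sketch: score each candidate rank, pick the minimum.
d, n_obs = 16, 10_000                      # e.g. 4 ions, 10^4 repetitions
logliks = {1: -5200.0, 2: -5050.0, 3: -5030.0}   # placeholder values
best_rank = min(logliks, key=lambda r: bic(logliks[r], rank_r_params(r, d), n_obs))
```

BIC penalises parameters more heavily than AIC for large n, which is why the two criteria can disagree on the selected rank, as the paper's ranks 6 to 9 window illustrates.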