Supervised Learning with Quantum Measurements
This paper presents a novel method for supervised machine learning based on
the mathematical formalism underlying quantum mechanics. The method uses
projective quantum measurement to build a prediction function.
Specifically, the relationship between input and output variables is
represented as the state of a bipartite quantum system. The state is estimated
from training samples through an averaging process that produces a density
matrix. Prediction of the label for a new sample is made by performing a
projective measurement on the bipartite system with an operator prepared from
the new input sample, and applying a partial trace to obtain the state of the
subsystem representing the output. The method can be seen as a generalization
of Bayesian inference classification and as a type of kernel-based learning
method. One remarkable characteristic of the method is that it does not require
learning any parameters through optimization. We illustrate the method with
different 2-D classification benchmark problems and different quantum
information encodings.
Comment: Supplementary material integrated into main text. Typos corrected.
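The training-by-averaging and measurement-based prediction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact construction: the `encode` feature map and the two-class setup are hypothetical stand-ins for the quantum information encodings the paper benchmarks.

```python
import numpy as np

def encode(x):
    # Hypothetical amplitude-style encoding of a 2-D input into a 4-dim state.
    v = np.array([np.cos(x[0]), np.sin(x[0]), np.cos(x[1]), np.sin(x[1])])
    return v / np.linalg.norm(v)

def train_density_matrix(X, y, n_classes=2):
    # Estimate the bipartite state by averaging the projectors
    # |psi><psi| with psi = encode(x) tensor |y> over the training set.
    d = len(encode(X[0]))
    rho = np.zeros((d * n_classes, d * n_classes))
    for xi, yi in zip(X, y):
        psi = np.kron(encode(xi), np.eye(n_classes)[yi])
        rho += np.outer(psi, psi)
    return rho / len(X)

def predict(rho, x, n_classes=2):
    # Projective measurement with (|phi><phi| tensor I), then a partial
    # trace over the input subsystem to obtain the output-label state.
    phi = encode(x)
    d = len(phi)
    P = np.kron(np.outer(phi, phi), np.eye(n_classes))
    post = P @ rho @ P
    post /= np.trace(post)
    label_rho = np.einsum('iaib->ab', post.reshape(d, n_classes, d, n_classes))
    return int(np.argmax(np.diag(label_rho)))
```

Note that, as the abstract emphasizes, no parameters are fit by optimization: training is a single averaging pass over the samples.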
Quantum machine learning with adaptive linear optics
We study supervised learning algorithms in which a quantum device is used to
perform a computational subroutine - either for prediction via probability
estimation, or to compute a kernel via estimating the overlap of quantum states. We
design implementations of these quantum subroutines using Boson Sampling
architectures in linear optics, supplemented by adaptive measurements. We then
challenge these quantum algorithms by deriving classical simulation algorithms
for the tasks of output probability estimation and overlap estimation. We
obtain different classical simulability regimes for these two computational
tasks in terms of the number of adaptive measurements and input photons. In
both cases, our results set explicit limits to the range of parameters for
which a quantum advantage can be envisaged with adaptive linear optics compared
to classical machine learning algorithms: we show that the number of input
photons and the number of adaptive measurements cannot be simultaneously small
compared to the number of modes. Interestingly, our analysis leaves open the
possibility of a near-term quantum advantage with a single adaptive
measurement.
Comment: 16 + 5 pages, presented at AQIS2020, accepted in Quantum.
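The kernel subroutine above estimates the overlap of two quantum feature states; the quantity itself can be illustrated classically. The sketch below assumes a hypothetical phase-encoding feature map (the paper implements the estimation with adaptive linear optics, which is not modeled here).

```python
import numpy as np

def feature_state(x, d=8):
    # Hypothetical feature map: a scalar encoded into the phases of a
    # d-dimensional normalized state (a stand-in for a photonic encoding).
    amps = np.exp(1j * x * np.arange(d))
    return amps / np.sqrt(d)

def overlap_kernel(x, y):
    # |<phi(x)|phi(y)>|^2 -- the quantity the quantum subroutine estimates.
    return float(np.abs(np.vdot(feature_state(x), feature_state(y))) ** 2)

def gram_matrix(xs):
    # Kernel (Gram) matrix that a kernel-based learner would consume.
    return np.array([[overlap_kernel(a, b) for b in xs] for a in xs])
```

A classical machine can compute this kernel exactly for small feature maps; the paper's question is for which parameter regimes (photons vs. adaptive measurements vs. modes) the quantum estimation cannot be efficiently simulated this way.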
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware which is very fast and
consumes little power. Machine learning has been previously applied to decode
the surface code. However, these approaches are not scalable, as training
has to be redone for every system size, which becomes increasingly difficult. In
this work the existence of local decoders for higher dimensional codes leads us
to use a low-depth convolutional neural network to locally assign a likelihood
of error on each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give a theoretical and numerical analysis showing how a
convolutional neural network differs from the 1-nearest-neighbor
algorithm, a baseline machine learning method.
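The key structural idea — a low-depth convolutional layer locally assigning an error likelihood to each qubit — can be shown in a toy NumPy sketch. The kernel weights here are hand-picked for illustration, not trained, and a real decoder for the 4D toric code operates on higher-dimensional syndromes.

```python
import numpy as np

def conv2d_same(x, k):
    # Naive 'same' 2-D convolution with zero padding (illustrative, not fast).
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def error_likelihoods(syndrome, kernel, bias=0.0):
    # One low-depth convolutional layer + sigmoid: a per-qubit error
    # probability computed from the local syndrome pattern.
    z = conv2d_same(syndrome.astype(float), kernel) + bias
    return 1.0 / (1.0 + np.exp(-z))
```

Because the same kernel slides over the whole lattice, the same weights apply to any system size — the locality that makes this class of decoder scalable without retraining for each code distance.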