Transforming Bell's Inequalities into State Classifiers with Machine Learning
Quantum information science has profoundly changed the ways we understand,
store, and process information. A major challenge in this field is to find an
efficient means of classifying quantum states. For instance, one may want to
determine if a given quantum state is entangled or not. However, the process of
a complete characterization of quantum states, known as quantum state
tomography, is a resource-consuming operation in general. An attractive
proposal would be the use of Bell's inequalities as an entanglement witness,
where only partial information of the quantum state is needed. The problem is
that entanglement is necessary but not sufficient for violating Bell's
inequalities, making them unreliable state classifiers. Here we aim to solve
this problem with machine learning. More precisely, given a family of quantum
states, we randomly picked a subset of it to train a quantum-state classifier
that requires only partial information about each state. Our results indicated
that these transformed Bell-type inequalities can
perform significantly better than the original Bell's inequalities in
classifying entangled states. We further extended our analysis to three-qubit
and four-qubit systems, classifying quantum states into multiple classes. These
results demonstrate how tools from machine learning can be applied to solving
problems in quantum information science.
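The idea of a state classifier that uses only partial information can be
illustrated with a small numeric sketch. This is my own toy illustration, not
the paper's construction: it uses a simplified two-qubit state family and a
plain logistic classifier trained on just nine squared Pauli correlators,
fewer measurements than the 15 real parameters full two-qubit tomography
requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices (the measurement settings)
PAULI = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def rand_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def rand_unitary():
    # random 2x2 unitary via QR of a complex Gaussian matrix
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q

def product_state():
    # separable pure state |a> (x) |b>
    return np.kron(rand_qubit(), rand_qubit())

def entangled_state():
    # cos(t)|00> + sin(t)|11>, concurrence sin(2t) >= 0.7,
    # then hidden behind random local unitaries
    t = rng.uniform(np.pi / 8, np.pi / 4)
    psi = np.array([np.cos(t), 0, 0, np.sin(t)], dtype=complex)
    return np.kron(rand_unitary(), rand_unitary()) @ psi

def features(psi):
    # squared two-qubit Pauli correlators <s_a (x) s_b>^2:
    # only 9 numbers of partial information per state
    return np.array([np.real(np.conj(psi) @ np.kron(PAULI[a], PAULI[b]) @ psi) ** 2
                     for a in "XYZ" for b in "XYZ"])

def make_set(n):
    X = [features(product_state()) for _ in range(n)]
    X += [features(entangled_state()) for _ in range(n)]
    return np.array(X), np.array([0] * n + [1] * n)

Xtr, ytr = make_set(100)
Xte, yte = make_set(50)

# plain logistic regression trained by gradient descent
w, b = np.zeros(9), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    w -= 0.5 * Xtr.T @ (p - ytr) / len(ytr)
    b -= 0.5 * (p - ytr).mean()

acc = np.mean(((Xte @ w + b) > 0).astype(int) == yte)
```

The sketch works because, for pure states, the nine squared correlators sum to
1 for product states and to 1 + 2C^2 for a state of concurrence C, so even a
linear classifier on this partial information separates the two families.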
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware which is very fast and
consumes little power. Machine learning has been previously applied to decode
the surface code. However, these approaches are not scalable as the training
has to be redone for every system size, which becomes increasingly difficult. In
this work, the existence of local decoders for higher-dimensional codes leads us
to use a low-depth convolutional neural network to locally assign a likelihood
of error on each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold of around when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give a theoretical and numerical analysis showing how a
convolutional neural network differs from the 1-nearest-neighbor algorithm, a
baseline machine-learning method.
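The scalability claim, that a shared local filter needs no retraining as the
code grows, can be sketched in a toy setting. The example below is my own
illustration, not the paper's code: it swaps the 4D toric code for a 1D
repetition-code ring with an assumed independent bit-flip noise model, trains a
single shared "convolutional" filter (a logistic model over a sliding syndrome
window) on a small code, and applies the same weights unchanged to a larger
one.

```python
import numpy as np

rng = np.random.default_rng(1)
P_ERR = 0.1  # assumed independent bit-flip probability per qubit

def sample(L, n):
    """n random error patterns on a length-L ring and their syndromes."""
    e = (rng.random((n, L)) < P_ERR).astype(int)
    s = e ^ np.roll(e, -1, axis=1)  # parity check s_i = e_i XOR e_{i+1}
    return e, s

def windows(s):
    """Per-qubit feature: the 4 syndrome bits nearest qubit i (a conv window)."""
    cols = [np.roll(s, k, axis=1) for k in (2, 1, 0, -1)]
    return np.stack(cols, axis=2).reshape(-1, 4)

# train the shared 4-weight filter on a SMALL code (L = 16)
e_tr, s_tr = sample(16, 300)
X, y = windows(s_tr), e_tr.reshape(-1)
w, b = np.zeros(4), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 1.0 * X.T @ (p - y) / len(y)
    b -= 1.0 * (p - y).mean()

# the SAME filter assigns per-qubit error likelihoods on a LARGER code (L = 64),
# with no retraining: the weights are independent of the system size
e_te, s_te = sample(64, 200)
pred = (windows(s_te) @ w + b > 0).astype(int).reshape(e_te.shape)
acc = (pred == e_te).mean()
```

Because the filter only sees a fixed-size syndrome neighborhood, the learned
weights transfer to any lattice size, which is the property that makes the
convolutional approach scalable where per-size retraining is not.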