Analysing correlated noise on the surface code using adaptive decoding algorithms
Laboratory hardware is rapidly progressing towards a state where quantum
error-correcting codes can be realised. As such, we must learn how to deal with
the complex nature of the noise that may occur in real physical systems. Single
qubit Pauli errors are commonly used to study the behaviour of error-correcting
codes, but in general we might expect the environment to introduce correlated
errors to a system. Given some knowledge of structures that errors commonly
take, it may be possible to adapt the error-correction procedure to compensate
for this noise, but performing full state tomography on a physical system to
analyse this structure quickly becomes impossible as the size increases beyond
a few qubits. Here we develop and test new methods to analyse blue a particular
class of spatially correlated errors by making use of parametrised families of
decoding algorithms. We demonstrate our method numerically using a diffusive
noise model. We show that information can be learnt about the parameters of the
noise model, and additionally that the logical error rates can be improved. We
conclude by discussing how our method could be utilised in a practical setting
and propose extensions of our work to study more general error models.
Comment: 19 pages, 8 figures, comments welcome; v2: minor typos corrected,
some references added; v3: accepted to Quantum
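The adaptive-decoding idea above lends itself to a compact illustration. The sketch below is not the paper's construction: it stands a distance-d repetition code in for the surface code, approximates diffusive noise with nearest-neighbour pair flips, and scans a single assumed pair-flip rate eta in a parametrised family of matching decoders, using the PyMatching library (assumed installed, version 2 API). The eta that minimises the observed logical error rate both improves decoding and estimates the true noise parameter, in the spirit of the abstract.

```python
# Toy illustration (not the paper's method): a one-parameter family of
# matching decoders for a repetition code under correlated pair-flip noise.
import numpy as np
import pymatching  # assumed available: pip install pymatching

d, p_single, p_pair, shots = 15, 0.03, 0.02, 5_000
rng = np.random.default_rng(1)

# Parity checks: detector i compares qubits i and i+1 (d-1 detectors).
H = np.zeros((d - 1, d), dtype=np.uint8)
for i in range(d - 1):
    H[i, i] = H[i, i + 1] = 1

def sample_error():
    e = (rng.random(d) < p_single).astype(np.uint8)   # independent flips
    pairs = (rng.random(d - 1) < p_pair).astype(np.uint8)  # pair flips
    e[:-1] ^= pairs
    e[1:] ^= pairs
    return e

def build_decoder(eta):
    w1 = np.log((1 - p_single) / p_single)  # single-qubit edge weight
    w2 = np.log((1 - eta) / eta)            # assumed pair-error edge weight
    m = pymatching.Matching()
    m.add_boundary_edge(0, fault_ids=0, weight=w1)
    m.add_boundary_edge(d - 2, fault_ids=d - 1, weight=w1)
    for j in range(1, d - 1):               # bulk single-qubit edges
        m.add_edge(j - 1, j, fault_ids=j, weight=w1)
    for j in range(1, d - 2):               # pair (j, j+1) skips detector j
        m.add_edge(j - 1, j + 1, fault_ids={j, j + 1}, weight=w2)
    m.add_boundary_edge(1, fault_ids={0, 1}, weight=w2)
    m.add_boundary_edge(d - 3, fault_ids={d - 2, d - 1}, weight=w2)
    return m

def logical_error_rate(eta):
    m = build_decoder(eta)
    fails = 0
    for _ in range(shots):
        e = sample_error()
        c = m.decode(H @ e % 2)
        fails += int((e ^ c)[0])  # residual is either trivial or logical
    return fails / shots

# Scan the decoder family; the minimiser estimates the true pair-flip rate.
etas = np.geomspace(0.002, 0.2, 7)
rates = [logical_error_rate(eta) for eta in etas]
print("best eta:", etas[int(np.argmin(rates))])
```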
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware which is very fast and
consumes little power. Machine learning has been previously applied to decode
the surface code. However, these approaches are not scalable, as the training
has to be redone for every system size, which becomes increasingly difficult. In
this work, the existence of local decoders for higher-dimensional codes leads us
to use a low-depth convolutional neural network to locally assign a likelihood
of error on each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold of around 7.1% when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give a theoretical and numerical analysis showing how a
convolutional neural network differs from the 1-nearest neighbor
algorithm, which is a baseline machine learning method.
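As a concrete (and necessarily simplified) picture of such a local decoder, here is a hedged PyTorch sketch: a fully convolutional network with periodic padding that maps a syndrome lattice to a per-site error likelihood. The 2D shapes and layer sizes are assumptions for illustration; the paper's 4D toric-code setting would need higher-dimensional convolutions.

```python
# Sketch of a low-depth, translationally invariant decoder: a few
# convolutions map the local syndrome pattern to a per-qubit likelihood.
import torch
import torch.nn as nn

class LocalDecoderCNN(nn.Module):
    def __init__(self, in_ch=1, hidden=32, depth=3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            # circular padding respects the periodic boundary of a toric code
            layers += [nn.Conv2d(ch, hidden, 3, padding=1,
                                 padding_mode="circular"), nn.ReLU()]
            ch = hidden
        layers += [nn.Conv2d(ch, 1, 1)]  # one likelihood logit per site
        self.net = nn.Sequential(*layers)

    def forward(self, syndrome):         # syndrome: (batch, 1, L, L)
        return torch.sigmoid(self.net(syndrome))

model = LocalDecoderCNN()
probs = model(torch.randint(0, 2, (8, 1, 16, 16)).float())
print(probs.shape)  # torch.Size([8, 1, 16, 16])
```

Because the network is purely convolutional, the same trained weights can be evaluated on any lattice size L, which is the scalability argument made above.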
The Devil is in the Decoder: Classification, Regression and GANs
Many machine vision applications, such as semantic segmentation and depth
prediction, require predictions for every pixel of the input image. Models for
such problems usually consist of encoders which decrease spatial resolution
while learning a high-dimensional representation, followed by decoders that
recover the original input resolution and produce low-dimensional
predictions. While encoders have been studied rigorously, relatively few
studies address the decoder side. This paper presents an extensive comparison
of a variety of decoders for a variety of pixel-wise tasks ranging from
classification and regression to synthesis. Our contributions are: (1) Decoders
matter: we observe significant variance in results between different types of
decoders on various problems. (2) We introduce new residual-like connections
for decoders. (3) We introduce a novel decoder: bilinear additive upsampling.
(4) We explore prediction artifacts.
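Of the contributions listed, (3) is the most self-contained, so here is a hedged PyTorch sketch of bilinear additive upsampling as the abstract describes it: upsample spatially by bilinear interpolation, then collapse groups of consecutive channels, giving a parameter-free upsampling step that can also serve as a residual path. The group size of 4 is an assumption, not necessarily the paper's choice.

```python
# Hedged sketch of bilinear additive upsampling: bilinear interpolation
# followed by averaging groups of consecutive channels.
import torch
import torch.nn.functional as F

def bilinear_additive_upsample(x, scale=2, group=4):
    b, c, h, w = x.shape
    assert c % group == 0, "channels must split evenly into groups"
    x = F.interpolate(x, scale_factor=scale, mode="bilinear",
                      align_corners=False)
    # (b, c, H, W) -> (b, c//group, group, H, W) -> average out group axis
    return x.view(b, c // group, group, h * scale, w * scale).mean(dim=2)

x = torch.randn(1, 64, 8, 8)
print(bilinear_additive_upsample(x).shape)  # torch.Size([1, 16, 16, 16])
```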
Learning to Reconstruct Texture-less Deformable Surfaces from a Single View
Recent years have seen the development of mature solutions for reconstructing
deformable surfaces from a single image, provided that they are relatively
well-textured. By contrast, recovering the 3D shape of texture-less surfaces
remains an open problem, and essentially relates to Shape-from-Shading. In this
paper, we introduce a data-driven approach to this problem. We introduce a
general framework that can predict diverse 3D representations, such as meshes,
normals, and depth maps. Our experiments show that meshes are ill-suited to
handle texture-less 3D reconstruction in our context. Furthermore, we
demonstrate that our approach generalizes well to unseen objects, and that it
yields higher-quality reconstructions than a state-of-the-art SfS technique,
particularly in terms of normal estimates. Our reconstructions accurately model
the fine details of the surfaces, such as the creases of a T-shirt worn by a
person.
Comment: Accepted to 3DV 2018
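The "general framework" is described only at a high level, so the following PyTorch sketch is purely schematic and not the authors' architecture: a single image encoder feeding separate heads for the mesh, normal-map, and depth-map representations the paper compares. All layer choices and sizes are invented for illustration.

```python
# Schematic multi-representation network: one encoder, three output heads.
import torch
import torch.nn as nn

class Texture3DNet(nn.Module):
    def __init__(self, n_vertices=100, feat=128, map_hw=64):
        super().__init__()
        self.n_vertices, self.map_hw = n_vertices, map_hw
        self.encoder = nn.Sequential(   # stand-in image backbone
            nn.Conv2d(3, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat), nn.ReLU())
        self.mesh_head = nn.Linear(feat, n_vertices * 3)         # xyz/vertex
        self.normal_head = nn.Linear(feat, 3 * map_hw * map_hw)  # normal map
        self.depth_head = nn.Linear(feat, map_hw * map_hw)       # depth map

    def forward(self, img):
        z, hw = self.encoder(img), self.map_hw
        return (self.mesh_head(z).view(-1, self.n_vertices, 3),
                self.normal_head(z).view(-1, 3, hw, hw),
                self.depth_head(z).view(-1, 1, hw, hw))

mesh, normals, depth = Texture3DNet()(torch.randn(2, 3, 224, 224))
print(mesh.shape, normals.shape, depth.shape)
```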
Data-driven decoding of quantum error correcting codes using graph neural networks
To leverage the full potential of quantum error-correcting stabilizer codes
it is crucial to have an efficient and accurate decoder. Accurate
maximum-likelihood decoders are computationally very expensive, whereas
decoders based on more efficient algorithms give sub-optimal performance. In addition, the
accuracy will depend on the quality of models and estimates of error rates for
idling qubits, gates, measurements, and resets, and will typically assume
symmetric error channels. In this work, instead, we explore a model-free,
data-driven approach to decoding, using a graph neural network (GNN). The
decoding problem is formulated as a graph classification task in which a set of
stabilizer measurements is mapped to an annotated detector graph for which the
neural network predicts the most likely logical error class. We show that the
GNN-based decoder can outperform a matching decoder for circuit level noise on
the surface code given only simulated experimental data, even if the matching
decoder is given full information of the underlying error model. Although
training is computationally demanding, inference is fast and scales
approximately linearly with the space-time volume of the code. We also find
that we can use large, but more limited, datasets of real experimental data
[Google Quantum AI, Nature {\bf 614}, 676 (2023)] for the repetition code,
giving decoding accuracies that are on par with minimum weight perfect
matching. The results show that a purely data-driven approach to decoding may
be a viable future option for practical quantum error correction, which is
competitive in terms of speed, accuracy, and versatility.
Comment: 15 pages, 12 figures
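To make the graph-classification framing concrete, here is a hedged, dependency-free sketch in plain PyTorch (the layer choices are assumptions, not the paper's model): detector events become nodes carrying space-time coordinates, a few rounds of mean-aggregation message passing are pooled into one vector, and a linear classifier outputs logits over logical error classes.

```python
# Sketch of decoding as graph classification over a detector graph.
import torch
import torch.nn as nn

class GNNDecoder(nn.Module):
    def __init__(self, in_dim=3, hidden=64, n_classes=2, rounds=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.msg = nn.ModuleList(nn.Linear(hidden, hidden)
                                 for _ in range(rounds))
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, x, adj):
        # x: (nodes, 3) detector coords (row, col, round); adj: (nodes, nodes)
        h = torch.relu(self.embed(x))
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        for lin in self.msg:
            h = torch.relu(lin(adj @ h / deg) + h)  # mean aggregation + skip
        return self.cls(h.mean(dim=0))              # global pool -> logits

# One detector graph with 5 events; in training, the logits are matched
# against the logical class observed in simulation (no error model assumed).
x = torch.randn(5, 3)
adj = (torch.rand(5, 5) < 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
print(GNNDecoder()(x, adj))
```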
Evaluating Neural Network Decoder Performance for Quantum Error Correction Using Various Data Generation Models
Neural networks have been shown to perform quantum error correction (QEC)
decoding with greater accuracy and efficiency than algorithmic decoders.
Because the qubits in a quantum computer are volatile and only usable on the
order of milliseconds before they decohere, fast quantum error correction is
necessary to correct data-qubit errors within the time budget of a quantum
algorithm. Algorithmic decoders are good at resolving errors on logical
qubits with only a few data qubits, but are less efficient in systems
containing more data qubits. With neural network decoders, practical quantum
computation becomes much more realizable, since the corrective operations are
calculated much faster than with minimum-weight perfect matching (MWPM) or
partial lookup table implementations. This work furthers neural network QEC
decoder research by generating exhaustive and randomly sampled data sets
using high-performance computing algorithms, in order to evaluate how the
data set generation method affects the performance of these networks
compared to similar models. The results show that the choice of data set
affects several performance metrics, including accuracy, F1 score, area
under the receiver operating characteristic curve, and QEC cycles.
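A minimal NumPy sketch of the two data-generation strategies being compared, on a toy distance-3 repetition code (an assumption for brevity; the study's codes and HPC pipeline are larger): an exhaustive set enumerates every error pattern exactly once, while a sampled set draws from the physical error distribution, so likely errors dominate the training data.

```python
# Exhaustive vs. randomly sampled training data for a toy decoder.
import itertools
import numpy as np

H = np.array([[1, 1, 0], [0, 1, 1]], dtype=np.uint8)  # repetition-code checks

def exhaustive_dataset():
    # every one of the 2^3 error patterns, labelled with its syndrome
    errors = np.array(list(itertools.product([0, 1], repeat=3)),
                      dtype=np.uint8)
    return errors, errors @ H.T % 2

def sampled_dataset(p=0.1, shots=10_000, rng=np.random.default_rng(0)):
    # i.i.d. errors at physical rate p; duplicates reflect likelihood
    errors = (rng.random((shots, 3)) < p).astype(np.uint8)
    return errors, errors @ H.T % 2

for name, (e, s) in [("exhaustive", exhaustive_dataset()),
                     ("sampled", sampled_dataset())]:
    print(name, e.shape, "unique patterns:", len(np.unique(e, axis=0)))
```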