Analysing correlated noise on the surface code using adaptive decoding algorithms
Laboratory hardware is rapidly progressing towards a state where quantum
error-correcting codes can be realised. As such, we must learn how to deal with
the complex nature of the noise that may occur in real physical systems. Single
qubit Pauli errors are commonly used to study the behaviour of error-correcting
codes, but in general we might expect the environment to introduce correlated
errors to a system. Given some knowledge of structures that errors commonly
take, it may be possible to adapt the error-correction procedure to compensate
for this noise, but performing full state tomography on a physical system to
analyse this structure quickly becomes impossible as the size increases beyond
a few qubits. Here we develop and test new methods to analyse a particular
class of spatially correlated errors by making use of parametrised families of
decoding algorithms. We demonstrate our method numerically using a diffusive
noise model. We show that information can be learnt about the parameters of the
noise model, and additionally that the logical error rates can be improved. We
conclude by discussing how our method could be utilised in a practical setting
and propose extensions of our work to study more general error models.
Comment: 19 pages, 8 figures, comments welcome; v2 - minor typos corrected,
some references added; v3 - accepted to Quantum
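The core idea of the paper — sweeping a parametrised family of decoders and reading information about the noise off the logical-error minimum — can be illustrated on a toy model. The sketch below is our own illustration, not the paper's code: a 3-bit repetition code under i.i.d. single-bit flips (rate `p`) plus a nearest-neighbour pair-flip channel (rate `q`, standing in for spatially correlated errors), decoded by a MAP decoder that *assumes* a pair-flip rate `q_hat`. Sweeping `q_hat` traces out a logical-error curve whose shape carries information about the true `q`.

```python
import itertools
import random

def pattern_probs(p, q, n=3):
    """Exact distribution over n-bit error patterns for a model with
    i.i.d. single-bit flips (prob p) plus nearest-neighbour pair flips
    (prob q), obtained by enumerating all flip/pair combinations."""
    probs = {}
    for flips in itertools.product([0, 1], repeat=n):
        pf = 1.0
        for f in flips:
            pf *= p if f else (1 - p)
        for pairs in itertools.product([0, 1], repeat=n - 1):
            pp = pf
            for x in pairs:
                pp *= q if x else (1 - q)
            e = list(flips)
            for i, x in enumerate(pairs):
                if x:  # pair event i flips bits i and i+1 together
                    e[i] ^= 1
                    e[i + 1] ^= 1
            key = tuple(e)
            probs[key] = probs.get(key, 0.0) + pp
    return probs

def syndrome(e):
    """Parity checks between neighbouring bits (repetition code)."""
    return tuple(e[i] ^ e[i + 1] for i in range(len(e) - 1))

def map_decoder(p_hat, q_hat, n=3):
    """Lookup table syndrome -> most likely error pattern under the
    *assumed* parameters (p_hat, q_hat)."""
    probs = pattern_probs(p_hat, q_hat, n)
    table = {}
    for e, pr in probs.items():
        s = syndrome(e)
        if s not in table or pr > probs[table[s]]:
            table[s] = e
    return table

def logical_error_rate(p, q, q_hat, trials=20000, seed=0):
    """Monte Carlo logical error rate when the true pair-flip rate is q
    but the decoder assumes q_hat (and the true p)."""
    rng = random.Random(seed)
    table = map_decoder(p, q_hat)
    probs = pattern_probs(p, q)
    patterns = list(probs)
    weights = [probs[e] for e in patterns]
    fails = 0
    for e in rng.choices(patterns, weights=weights, k=trials):
        guess = table[syndrome(e)]
        residual = tuple(a ^ b for a, b in zip(e, guess))
        fails += residual == (1, 1, 1)  # residual equals the logical flip
    return fails / trials

# Sweep the assumed correlation strength q_hat against a fixed true noise.
rates = {qh: logical_error_rate(p=0.05, q=0.15, q_hat=qh)
         for qh in (0.0, 0.05, 0.15, 0.3)}
```

Note how the assumption changes decisions: for the syndrome that fires both checks, a decoder with `q_hat = 0` blames a single middle-bit flip, while a decoder assuming strong pair correlations blames two simultaneous pair events, so the curve over `q_hat` is genuinely sensitive to the correlated part of the noise.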
Multi-path Summation for Decoding 2D Topological Codes
Fault tolerance is a prerequisite for scalable quantum computing.
Architectures based on 2D topological codes are effective for near-term
implementations of fault tolerance. To obtain high performance with these
architectures, we require a decoder which can adapt to the wide variety of
error models present in experiments. The typical approach to the problem of
decoding the surface code is to reduce it to minimum-weight perfect matching in
a way that provides a suboptimal threshold error rate, and is specialized to
correct a specific error model. Recently, optimal threshold error rates for a
variety of error models have been obtained by methods which do not use
minimum-weight perfect matching, showing that such thresholds can be achieved
in polynomial time. It is an open question whether these results can also be
achieved by minimum-weight perfect matching. In this work, we use belief
propagation and a novel algorithm for producing edge weights to increase the
utility of minimum-weight perfect matching for decoding surface codes. This
allows us to correct depolarizing errors using the rotated surface code,
obtaining a threshold of . This is larger than the threshold
achieved by previous matching-based decoders (), though
still below the known upper bound of .
Comment: 19 pages, 13 figures, published in Quantum, available at
https://quantum-journal.org/papers/q-2018-10-19-102
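The reduction to minimum-weight perfect matching rests on a standard weighting: for i.i.d. flips of probability p, a correction path of length d has log-likelihood cost d·(−log(p/(1−p))), so the most probable pairing of syndrome defects is the minimum-weight one. The sketch below is our own stdlib-only illustration of that weighting with a brute-force matcher; it does not reproduce the paper's belief-propagation reweighting, and real decoders use Blossom-style polynomial-time matching rather than enumeration.

```python
import math

def pairings(items):
    """Yield all perfect matchings of an even-sized list of defects."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def mwpm(defects, p):
    """Minimum-weight perfect matching of syndrome defects on a lattice.
    Edge weight = Manhattan distance * (-log(p / (1 - p))), the
    log-likelihood cost of a shortest error chain under i.i.d. flips."""
    w0 = -math.log(p / (1 - p))  # positive for p < 1/2

    def weight(u, v):
        return (abs(u[0] - v[0]) + abs(u[1] - v[1])) * w0

    return min(pairings(list(defects)),
               key=lambda m: sum(weight(u, v) for u, v in m))
```

For example, `mwpm([(0, 0), (0, 1), (5, 5), (5, 6)], 0.1)` pairs each defect with its nearby partner rather than matching across the lattice, since the two short edges cost far less total weight than any crossing pairing.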
Principles and Parameters: a coding theory perspective
We propose an approach to Longobardi's parametric comparison method (PCM) via
the theory of error-correcting codes. One associates to a collection of
languages to be analyzed with the PCM a binary (or ternary) code, with one
code word for each language in the family and each word consisting of the
binary values of the syntactic parameters of that language; the ternary case
allows for an additional parameter state that takes into account phenomena of
entailment of parameters. The code parameters of the resulting code can be
compared with some classical bounds in coding theory: the asymptotic bound, the
Gilbert-Varshamov bound, etc. The position of the code parameters with respect
to some of these bounds provides quantitative information on the variability of
syntactic parameters within and across historical-linguistic families. While
computations carried out for languages belonging to the same family yield codes
below the GV curve, comparisons across different historical families can give
examples of isolated codes lying above the asymptotic bound.
Comment: 11 pages, LaTeX
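The comparison against the Gilbert-Varshamov bound used here can be made concrete: in the asymptotic picture a code is summarised by its rate R = k/n and relative distance δ = d/n, and the GV curve is R = 1 − H₂(δ) for δ ≤ 1/2, where H₂ is the binary entropy. The short sketch below (our illustration, not the paper's computation) places a code's (δ, R) point relative to that curve.

```python
import math

def h2(x):
    """Binary entropy in bits, H2(x) = -x log2 x - (1-x) log2 (1-x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_rate(delta):
    """Asymptotic Gilbert-Varshamov curve R = 1 - H2(delta)."""
    return 1.0 - h2(delta)

def position_vs_gv(n, k, d):
    """Where the code's point (delta, R) = (d/n, k/n) sits relative
    to the GV curve: 'above' or 'below'."""
    R, delta = k / n, d / n
    return "above" if R > gv_rate(delta) else "below"
```

For instance, the [7, 4, 3] Hamming code has R = 4/7 ≈ 0.571 at δ = 3/7 ≈ 0.429, where the GV curve sits near 0.015, so its point lies well above the curve; the abstract's language-family codes, by contrast, are reported to fall below it.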
Fault-Tolerant Measurement-Based Quantum Computing with Continuous-Variable Cluster States
A long-standing open question about Gaussian continuous-variable cluster
states is whether they enable fault-tolerant measurement-based quantum
computation. The answer is yes. Initial squeezing in the cluster above a
threshold value of 20.5 dB ensures that errors from finite squeezing acting on
encoded qubits are below the fault-tolerance threshold of known qubit-based
error-correcting codes. By concatenating with one of these codes and using
ancilla-based error correction, fault-tolerant measurement-based quantum
computation of theoretically indefinite length is possible with finitely
squeezed cluster states.
Comment: (v3) consistent with published version, more accessible for general
audience; (v2) condensed presentation, added references on GKP state
generation and a comparison of currently achievable squeezing to the
threshold; (v1) 13 pages, a few figures
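The 20.5 dB figure connects squeezing to residual Gaussian noise on the encoded qubits. A back-of-envelope sketch of that connection, under common textbook conventions we are assuming here (vacuum quadrature variance 1/2 with ħ = 1, square-lattice GKP code whose correctable shift range is |δ| < √π/2), is below; the paper's actual threshold analysis, which accounts for noise accumulation in the cluster and concatenation with a qubit code, is considerably more involved.

```python
import math

VACUUM_VAR = 0.5  # quadrature variance of the vacuum, hbar = 1 convention

def noise_variance(squeezing_db):
    """Quadrature variance after squeezing_db decibels of squeezing."""
    return VACUUM_VAR * 10 ** (-squeezing_db / 10)

def gkp_shift_error(squeezing_db):
    """Probability that a zero-mean Gaussian quadrature shift exceeds
    sqrt(pi)/2, the correctable range of the square-lattice GKP code
    (a per-quadrature simplification, not the paper's full analysis).
    For X ~ N(0, sigma^2): P(|X| > a) = erfc(a / (sigma * sqrt(2)))."""
    sigma = math.sqrt(noise_variance(squeezing_db))
    return math.erfc(math.sqrt(math.pi) / (2 * math.sqrt(2) * sigma))
```

Under these assumptions the shift-error probability falls off very rapidly with squeezing, which is why crossing a fixed squeezing threshold pushes the effective qubit error rate below the fault-tolerance threshold of the concatenated code.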