
    The XYZ² hexagonal stabilizer code

    We consider a topological stabilizer code on a honeycomb grid, the "XYZ²" code. The code is inspired by the Kitaev honeycomb model and is a simple realization of a "matching code" discussed by Wootton [J. Phys. A: Math. Theor. 48, 215302 (2015)], with a specific implementation of the boundary. It utilizes weight-six (XYZXYZ) plaquette stabilizers and weight-two (XX) link stabilizers on a planar hexagonal grid composed of 2d² qubits for code distance d, with weight-three stabilizers at the boundary, stabilizing one logical qubit. We study the properties of the code using maximum-likelihood decoding, assuming perfect stabilizer measurements. For pure X, Y, or Z noise, we can solve for the logical failure rate analytically, giving a threshold of 50%. In contrast to the rotated surface code and the XZZX code, which have code distance d² only for pure Y noise, here the code distance is 2d² for both pure Z and pure Y noise. Thresholds for noise with finite Z bias are similar to the XZZX code, but with markedly lower sub-threshold logical failure rates. The code possesses distinctive syndrome properties, with unidirectional pairs of plaquette defects along the three directions of the triangular lattice for isolated errors, which may be useful for efficient matching-based or other approximate decoding. Comment: 15 pages, 7 figures
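    As a rough illustration of the stabilizer structure described above (the qubit indexing and edge placement below are hypothetical, not the paper's actual layout), a short Python sketch can check that a weight-six XYZXYZ plaquette operator commutes with a weight-two XX link operator on a shared edge, while a single-qubit error anticommutes with the plaquette and so creates a defect:

# Illustrative sketch only; qubit indexing is a made-up example, not the code's layout.
# Pauli products are stored as {qubit_index: 'X' | 'Y' | 'Z'} (identity omitted).

def commute(op_a, op_b):
    """Two Pauli products commute iff they differ on an even number of shared qubits."""
    anti = sum(1 for q, p in op_a.items() if q in op_b and op_b[q] != p)
    return anti % 2 == 0

plaquette = {0: 'X', 1: 'Y', 2: 'Z', 3: 'X', 4: 'Y', 5: 'Z'}  # hexagon, XYZXYZ ordering
link = {1: 'X', 2: 'X'}   # XX link on the edge where the plaquette acts with Y and Z
error = {1: 'Z'}          # single-qubit Z error on qubit 1

print(commute(plaquette, link))   # True  -> link and plaquette are compatible stabilizers
print(commute(plaquette, error))  # False -> the error triggers the plaquette (defect)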

    Data-driven decoding of quantum error correcting codes using graph neural networks

    To leverage the full potential of quantum error-correcting stabilizer codes, it is crucial to have an efficient and accurate decoder. Accurate maximum-likelihood decoders are computationally very expensive, whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of models and estimates of error rates for idling qubits, gates, measurements, and resets, and will typically assume symmetric error channels. In this work, we instead explore a model-free, data-driven approach to decoding, using a graph neural network (GNN). The decoding problem is formulated as a graph classification task in which a set of stabilizer measurements is mapped to an annotated detector graph for which the neural network predicts the most likely logical error class. We show that the GNN-based decoder can outperform a matching decoder for circuit-level noise on the surface code given only simulated experimental data, even if the matching decoder is given full information about the underlying error model. Although training is computationally demanding, inference is fast and scales approximately linearly with the space-time volume of the code. We also find that we can use large, but more limited, datasets of real experimental data [Google Quantum AI, Nature 614, 676 (2023)] for the repetition code, giving decoding accuracies on par with minimum-weight perfect matching. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction that is competitive in terms of speed, accuracy, and versatility. Comment: 15 pages, 12 figures
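    To make the graph-classification formulation concrete, here is a minimal, untrained sketch (random weights; the node features, graph size, and layer widths are illustrative assumptions, not the architecture from the paper): detector nodes carry annotated feature vectors, two rounds of mean-neighbour message passing produce node embeddings, and global pooling followed by a linear head scores the logical error classes.

# Toy forward pass of a message-passing graph classifier (untrained, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_nodes, feat_dim, hidden, n_classes = 5, 4, 8, 4
X = rng.normal(size=(n_nodes, feat_dim))       # annotated detector-node features
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # edges of the detector graph
    A[i, j] = A[j, i] = 1.0
A += np.eye(n_nodes)                           # self-loops
A /= A.sum(axis=1, keepdims=True)              # row-normalize -> mean aggregation

W1 = rng.normal(size=(feat_dim, hidden))
W2 = rng.normal(size=(hidden, hidden))
W_out = rng.normal(size=(hidden, n_classes))

H = np.maximum(A @ X @ W1, 0)                  # message-passing layer 1 + ReLU
H = np.maximum(A @ H @ W2, 0)                  # message-passing layer 2 + ReLU
graph_embedding = H.mean(axis=0)               # global mean pooling over nodes
logits = graph_embedding @ W_out               # scores for the logical error classes
print("predicted logical class:", int(np.argmax(logits)))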

    Error-rate-agnostic decoding of topological stabilizer codes

    Efficient high-performance decoding of topological stabilizer codes has the potential to crucially improve the balance between logical failure rates and the number and individual error rates of the constituent qubits. High-threshold maximum-likelihood decoders require an explicit error model for Pauli errors to decode a specific syndrome, whereas lower-threshold heuristic approaches such as minimum-weight matching are "error agnostic". Here we consider an intermediate approach, formulating a decoder that depends on the bias, i.e., the relative probability of phase-flip to bit-flip errors, but is agnostic to the error rate. Our decoder is based on counting the number and effective weight of the most likely error chains in each equivalence class of a given syndrome. We use Metropolis-based Monte Carlo sampling to explore the space of error chains and find unique chains, which are identified efficiently using a hash table. Owing to the error-rate invariance, the decoder can sample chains effectively at an error rate higher than the physical error rate, without the need for "thermalization" between chains in different equivalence classes. Applied to the surface code and the XZZX code, the decoder matches maximum-likelihood decoders for moderate code sizes or low error rates. We anticipate that, because of the compressed information content per syndrome, the approach can be used to full advantage in combination with machine-learning methods to extrapolate Monte Carlo-generated data. Comment: 15 pages, 9 figures; V2: Added analysis of low error-rate performance
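    As a toy illustration of the sampling idea (not the paper's implementation; the code family, noise model, and parameters below are simplified assumptions), consider a d-qubit repetition code under bit-flip noise: Metropolis moves flip adjacent qubit pairs, i.e. stabilizers, so both the syndrome and the equivalence class are preserved; unique chains are collected in a dictionary serving as the hash table, sampling is done at a rate above the physical error rate, and each class is scored by summing the physical-rate probabilities of its unique chains.

# Toy Metropolis sampler over error chains of a repetition code (illustrative only).
import random

d = 5            # number of data qubits
p_phys = 0.1     # physical bit-flip rate, used for scoring
p_sample = 0.3   # sampling can use a higher rate than the physical one
random.seed(1)

def score(chain, p):
    """Probability of a given error chain under i.i.d. bit-flip noise of rate p."""
    w = sum(chain)
    return (p ** w) * ((1 - p) ** (d - w))

def sample_class(start_chain, n_steps=2000):
    """Metropolis sampling within one equivalence class; returns unique chains with scores."""
    chain = list(start_chain)
    unique = {}
    for _ in range(n_steps):
        i = random.randrange(d - 1)   # pick a ZZ stabilizer on qubits (i, i+1)
        proposal = list(chain)
        proposal[i] ^= 1              # flipping a stabilizer pair preserves
        proposal[i + 1] ^= 1          # the syndrome and the equivalence class
        ratio = score(proposal, p_sample) / score(chain, p_sample)
        if random.random() < min(1.0, ratio):
            chain = proposal
        unique[tuple(chain)] = score(chain, p_phys)   # hash table of unique chains
    return unique

# Example: a single bit flip on qubit 2 as the observed error; the second start
# chain has the same syndrome but differs by the logical (all-qubit) flip.
class_I = sample_class([0, 0, 1, 0, 0])
class_L = sample_class([1, 1, 0, 1, 1])

print("score(class I):", sum(class_I.values()))
print("score(class L):", sum(class_L.values()))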