32 research outputs found

    Performance Analysis of Quantum Error-Correcting Codes via MacWilliams Identities

    One of the main challenges for an efficient implementation of quantum information technologies is how to counteract quantum noise. Quantum error-correcting codes are therefore of primary interest for the evolution towards quantum computing and the quantum Internet. We analyze the performance of stabilizer codes, one of the most important classes for practical implementations, on both symmetric and asymmetric quantum channels. To this aim, we first derive the weight enumerator (WE) for the undetectable errors of stabilizer codes based on the quantum MacWilliams identities. The WE is then used to evaluate the error rate of quantum codes under maximum likelihood decoding or, in the case of surface codes, under minimum weight perfect matching (MWPM) decoding. Our findings lead to analytical formulas for the performance of generic stabilizer codes, including the Shor code, the Steane code, as well as surface codes. For example, on a depolarizing channel with physical error rate $\rho \to 0$ it is found that the logical error rate $\rho_\mathrm{L}$ is asymptotically $\rho_\mathrm{L} \to 16.2\,\rho^2$ for the $[[9,1,3]]$ Shor code, $\rho_\mathrm{L} \to 16.38\,\rho^2$ for the $[[7,1,3]]$ Steane code, $\rho_\mathrm{L} \to 18.74\,\rho^2$ for the $[[13,1,3]]$ surface code, and $\rho_\mathrm{L} \to 149.24\,\rho^3$ for the $[[41,1,5]]$ surface code.
    Comment: 25 pages, 5 figures, submitted to an IEEE journal. arXiv admin note: substantial text overlap with arXiv:2302.1301
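
    The asymptotic formulas quoted above lend themselves to a quick numerical comparison. The sketch below is only an illustration: the coefficients and exponents come straight from the abstract, the dictionary keys are labels of my choosing, and the approximations are valid only as $\rho \to 0$.

    ```python
    # Asymptotic logical error rates quoted in the abstract for a depolarizing
    # channel, each of the form rho_L ~ A * rho**t as rho -> 0.
    CODES = {
        "[[9,1,3]] Shor":     (16.2,   2),
        "[[7,1,3]] Steane":   (16.38,  2),
        "[[13,1,3]] surface": (18.74,  2),
        "[[41,1,5]] surface": (149.24, 3),
    }

    def logical_error_rate(code: str, rho: float) -> float:
        """Evaluate the small-rho approximation rho_L ~ A * rho**t."""
        a, t = CODES[code]
        return a * rho ** t

    # At rho = 1e-3 the distance-5 surface code is already orders of magnitude
    # better than the distance-3 codes, despite its larger prefactor.
    rates = {name: logical_error_rate(name, 1e-3) for name in CODES}
    ```

    Note how the exponent matches (d+1)/2 for a distance-d code, which is why the larger prefactor of the distance-5 code stops mattering at small physical error rates.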

    Quantum weight enumerators and tensor networks

    We examine the use of weight enumerators for analyzing tensor network constructions, and specifically the quantum lego framework recently introduced. We extend the notion of quantum weight enumerators to so-called tensor enumerators, and prove that the trace operation on tensor networks is compatible with a trace operation on tensor enumerators. This allows us to compute quantum weight enumerators of larger codes, such as those constructed through tensor network methods, more efficiently. We also provide an analogue of the MacWilliams identity for tensor enumerators.
    Comment: 21 pages, 3 figures. Sets up the tensor enumerator formalism
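
    The MacWilliams identity referenced here has a well-known classical prototype: for a binary linear code $C$ of length $n$, the dual code's weight enumerator satisfies $W_{C^\perp}(x,y) = \frac{1}{|C|} W_C(x+y,\, x-y)$. The brute-force sketch below verifies the classical identity on a toy code; it is background intuition only, not the paper's tensor enumerator machinery.

    ```python
    from itertools import product

    def weight_enum(codewords, x, y):
        """Evaluate W(x, y) = sum over codewords c of x^(n - wt(c)) * y^wt(c)."""
        n = len(next(iter(codewords)))
        return sum(x ** (n - sum(c)) * y ** sum(c) for c in codewords)

    def dual(codewords, n):
        """Brute-force dual code: all length-n vectors orthogonal (mod 2) to every codeword."""
        return [v for v in product((0, 1), repeat=n)
                if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in codewords)]

    C = [(0, 0, 0), (1, 1, 1)]   # [3,1] repetition code
    Cd = dual(C, 3)              # its dual: the [3,2] even-weight code
    x, y = 2.0, 0.5
    # MacWilliams: W_dual(x, y) == W_C(x + y, x - y) / |C|
    assert abs(weight_enum(Cd, x, y) - weight_enum(C, x + y, x - y) / len(C)) < 1e-9
    ```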

    Equivalence of Classical and Quantum Codes

    In classical and quantum information theory, various types of error-correcting codes are used. We study the equivalence of codes via a classification of their isometries. The isometries of various codes over Frobenius alphabets endowed with various weights typically have a rich and predictable structure. On the other hand, when the alphabet is not Frobenius, the isometry group behaves unpredictably. We use character theory to develop a duality theory of partitions over Frobenius bimodules, which is then used to study the equivalence of codes. We also consider instances of codes over non-Frobenius alphabets and establish their isometry groups. Second, we focus on quantum stabilizer codes over local Frobenius rings. We estimate their minimum distance and conjecture that they do not underperform quantum stabilizer codes over fields. We introduce symplectic isometries. Isometry groups of binary quantum stabilizer codes are established and then applied to the LU-LC conjecture.
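
    For binary codes under the Hamming weight, the MacWilliams extension theorem implies every isometry is induced by a coordinate permutation, so code equivalence reduces to permutation equivalence. A minimal brute-force check (suitable for tiny codes only; the function name is mine):

    ```python
    from itertools import permutations

    def equivalent(C1, C2):
        """Is there a coordinate permutation (a Hamming isometry over F_2)
        mapping code C1 onto code C2?"""
        n = len(next(iter(C1)))
        C2 = {tuple(c) for c in C2}
        return any({tuple(c[i] for i in perm) for c in C1} == C2
                   for perm in permutations(range(n)))

    # Swapping coordinates maps {000, 110} onto {000, 011}: equivalent.
    assert equivalent({(0, 0, 0), (1, 1, 0)}, {(0, 0, 0), (0, 1, 1)})
    # Different weight distributions can never be permutation-equivalent.
    assert not equivalent({(0, 0, 0), (1, 1, 1)}, {(0, 0, 0), (1, 1, 0)})
    ```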

    Shor–Laflamme distributions of graph states and noise robustness of entanglement

    The Shor–Laflamme distribution (SLD) of a quantum state is a collection of local unitary invariants that quantify k-body correlations. We show that the SLD of graph states can be derived by solving a graph-theoretical problem. In this way, the mean and variance of the SLD are obtained as simple functions of efficiently computable graph properties. Furthermore, this formulation enables us to derive closed expressions of SLDs for some graph state families. For cluster states, we observe that the SLD is very similar to a binomial distribution, and we argue that this property is typical for graph states in general. Finally, we derive an SLD-based entanglement criterion from the purity criterion and apply it to derive meaningful noise thresholds for entanglement. Our new entanglement criterion is easy to use and also applies to the case of higher-dimensional qudits. In the bigger picture, our results foster the understanding both of quantum error-correcting codes, where a closely related notion of SLDs plays an important role, and of the geometry of quantum states, where SLDs are known as sector length distributions.
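
    For a graph state, each stabilizer element corresponds (up to phase) to a subset $S$ of vertices: qubit $q$ carries a non-identity Pauli iff $q \in S$ or $q$ has an odd number of neighbours in $S$. This reduces the SLD to counting vertex subsets, which the sketch below brute-forces for the triangle graph (local-unitary equivalent to the 3-qubit GHZ state). It illustrates the graph-theoretic reformulation only, not the paper's closed-form results.

    ```python
    from itertools import combinations

    def sector_lengths(n, edges):
        """Brute-force the SLD (A_0, ..., A_n) of a graph state:
        A_k counts stabilizer elements whose support has size k."""
        nbrs = {v: set() for v in range(n)}
        for a, b in edges:
            nbrs[a].add(b)
            nbrs[b].add(a)
        A = [0] * (n + 1)
        for r in range(n + 1):
            for S in map(set, combinations(range(n), r)):
                # qubit q is in the support iff X acts there (q in S)
                # or an odd number of Z's meet there (odd |N(q) & S|)
                w = sum(1 for q in range(n) if q in S or len(nbrs[q] & S) % 2 == 1)
                A[w] += 1
        return A

    # Triangle graph: the familiar GHZ-class sector lengths A = (1, 0, 3, 4).
    print(sector_lengths(3, [(0, 1), (1, 2), (0, 2)]))  # [1, 0, 3, 4]
    ```

    The entries always sum to 2^n, the size of the stabilizer group, which is a handy sanity check for larger graphs.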

    Sector length distributions of graph states

    The sector length distribution (SLD) of a quantum state is a collection of local unitary invariants that quantify k-body correlations. We show that the SLD of graph states can be derived by solving a graph-theoretical problem. In this way, the mean and variance of the SLD are obtained as simple functions of efficiently computable graph properties. Furthermore, this formulation enables us to derive closed expressions of SLDs for some graph state families. For cluster states, we observe that the SLD is very similar to a binomial distribution, and we argue that this property is typical for graph states in general. Finally, we derive an SLD-based entanglement criterion from the majorization criterion and apply it to derive meaningful noise thresholds for entanglement.
    Comment: 20+21 pages, 8+8 figures

    Towards Fault-Tolerant Quantum Computation with Higher-Dimensional Systems

    The main focus of this thesis is to explore the advantages of using higher-dimensional quantum systems (qudits) as building blocks for fault-tolerant quantum computation. In particular, we investigate the two main essential ingredients of many state-of-the-art fault-tolerant schemes [133], which are magic state distillation and topological error correction. The theory for both of these components is well established for the qubit case, but little has been known for the generalised qudit case. For magic state distillation, we first present a general numerical approach that can be used to investigate the distillation properties of any stabilizer code. We use this approach to study small three-dimensional (qutrit) codes and classify, for the first time, new types of qutrit magic states. We then provide an analytic study of a family of distillation protocols based on the quantum Reed-Muller codes. We discover a particular five-dimensional code that, by many measures, outperforms all known qubit codes. For topological error correction, we study the qudit toric code serving as a quantum memory. For this purpose, we examine an efficient renormalization group decoder to estimate the error correction threshold. We find that when the qudit toric code is subject to generalised bit-flip noise, and for a sufficiently high dimension, a threshold of 30% can be obtained under perfect decoding.
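
    The qudit generalisation replaces the qubit Paulis with the shift and clock operators $X|j\rangle = |j{+}1 \bmod d\rangle$ and $Z|j\rangle = \omega^j|j\rangle$, $\omega = e^{2\pi i/d}$, which obey the Weyl relation $ZX = \omega XZ$. A small self-contained check for qutrits (pure-Python matrices; this is textbook background, not the thesis's distillation or decoding machinery):

    ```python
    import cmath

    d = 3                                   # qutrit dimension
    w = cmath.exp(2j * cmath.pi / d)        # primitive d-th root of unity

    # Generalised Paulis: X|j> = |j+1 mod d>,  Z|j> = w^j |j>
    X = [[1 if r == (c + 1) % d else 0 for c in range(d)] for r in range(d)]
    Z = [[w ** r if r == c else 0 for c in range(d)] for r in range(d)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
                for i in range(d)]

    ZX, XZ = matmul(Z, X), matmul(X, Z)
    # Weyl commutation relation: ZX = w * XZ
    assert all(abs(ZX[i][j] - w * XZ[i][j]) < 1e-12
               for i in range(d) for j in range(d))
    ```

    For d = 2 this reduces to the familiar anticommutation ZX = -XZ of the qubit Paulis.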

    Part I:


    Trellis Decoding And Applications For Quantum Error Correction

    Compact, graphical representations of error-correcting codes called trellises are a crucial tool in classical coding theory, establishing both theoretical properties and performance metrics for practical use. The idea was extended to quantum error-correcting codes by Ollivier and Tillich in 2005. Here, we use their foundation to establish a practical decoder able to compute the most likely error for any stabilizer code over a finite field of prime dimension. We define a canonical form for the stabilizer group and use it to classify the internal structure of the graph. Similarities and differences between the classical and quantum theories are discussed throughout. Numerical results are presented which match or outperform current state-of-the-art decoding techniques. New construction techniques for large trellises are developed and practical implementations discussed. We then define a dual trellis and use algebraic graph theory to solve the maximum-likelihood coset problem for any stabilizer code over a finite field of prime dimension at minimum added cost. Classical trellis theory makes occasional theoretical use of a graph product called the trellis product. We establish the relationship between the trellis product and the standard graph products and use it to provide a closed-form expression for the resulting graph, allowing it to be used in practice. We explore its properties and classify all idempotents. The special structure of the trellis allows us to present a factorization procedure for the product, which is much simpler than that of the standard products. Finally, we turn to an algorithmic study of the trellis and explore what coding-theoretic information can be extracted assuming no other information about the code is available. In the process, we present a state-of-the-art algorithm for computing the minimum distance of any stabilizer code over a finite field of prime dimension. We also define a new weight enumerator for stabilizer codes over F_2 incorporating the phases of each stabilizer and provide a trellis-based algorithm to compute it.
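
    As a point of reference for what a minimum-distance algorithm must compute, here is a naive brute force (exponential in n, nothing like the trellis approach) that recovers the distance of the five-qubit code from its cyclic stabilizer generators. Phases are dropped throughout, since only Pauli supports matter for the distance.

    ```python
    from itertools import product

    n = 5
    shift = lambda s, k: s[-k:] + s[:-k] if k else s
    gens = [shift("XZZXI", k) for k in range(4)]   # [[5,1,3]] code generators

    def commutes(p, q):
        """Symplectic test: Pauli strings commute iff x1.z2 + z1.x2 = 0 mod 2."""
        x = lambda s: [c in "XY" for c in s]
        z = lambda s: [c in "ZY" for c in s]
        return (sum(a & b for a, b in zip(x(p), z(q)))
                + sum(a & b for a, b in zip(z(p), x(q)))) % 2 == 0

    # single-qubit Pauli multiplication table, phases dropped
    TBL = {a + b: (b if a == "I" else a if b == "I" else "I" if a == b
                   else ({"X", "Y", "Z"} - {a, b}).pop())
           for a in "IXYZ" for b in "IXYZ"}
    mul = lambda p, q: "".join(TBL[a + b] for a, b in zip(p, q))

    group = {"I" * n}                 # build the 16-element stabilizer group
    for g in gens:
        group |= {mul(g, h) for h in group}

    # distance = min weight of a Pauli that commutes with all generators
    # yet lies outside the stabilizer group (i.e. a logical operator)
    dist = min(sum(c != "I" for c in p)
               for p in map("".join, product("IXYZ", repeat=n))
               if p not in group and all(commutes(p, g) for g in gens))
    print(dist)  # 3
    ```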

    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
    Comment: EPFL PhD Thesis
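
    The group testing setting described above can be made concrete with the simplest nonadaptive decoder, usually called COMP: any item that appears in a pool that tested negative cannot be defective. The sketch below uses a random pooling design rather than the condenser-based constructions of the thesis, and the population and test sizes are illustrative numbers of my choosing.

    ```python
    import random

    random.seed(0)
    N = 12                      # population size
    defectives = {2, 7}         # ground truth, unknown to the decoder
    T = 24                      # number of pooled tests

    # Random pooling design: each test includes each item independently.
    pools = [{i for i in range(N) if random.random() < 0.3} for _ in range(T)]
    outcomes = [bool(pool & defectives) for pool in pools]  # noiseless OR channel

    # COMP decoding: eliminate every item seen in a negative pool.
    candidates = set(range(N))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool

    # COMP never produces false negatives, so all defectives survive.
    assert defectives <= candidates
    ```

    With enough tests the surviving candidate set typically shrinks to exactly the defective set; the thesis's explicit constructions achieve this with provable guarantees even under unreliable outcomes.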