
    Entanglement-assisted quantum low-density parity-check codes

    This article develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes with many desirable properties: they require only one initial entanglement bit, and they offer high error-correction performance, high rates, and low decoding complexity. The proposed method produces several infinite families of codes with a wide variety of parameters and entanglement requirements. Our framework encompasses the previously known entanglement-assisted quantum LDPC codes having the best error-correction performance, as well as many other codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.
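    The "only one initial entanglement bit" above refers to the number of pre-shared ebits the code consumes. In the entanglement-assisted stabilizer formalism, for a CSS-type code built from binary check matrices H_X and H_Z, this count equals the rank of H_X H_Z^T over GF(2). The sketch below illustrates that known formula; the specific matrices are toy examples, not the article's constructions.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M % 2  # operates on a copy
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # clear the column mod 2
        rank += 1
    return rank

def ebits_needed(HX, HZ):
    """Pre-shared ebits for a CSS-type entanglement-assisted code:
    c = rank over GF(2) of HX @ HZ^T."""
    return gf2_rank((HX @ HZ.T) % 2)

H_even = np.array([[1, 1, 1, 1]])  # even-weight check: dual-containing, no ebits
H_odd  = np.array([[1, 1, 1]])     # odd-weight check: one ebit suffices
print(ebits_needed(H_even, H_even))  # 0
print(ebits_needed(H_odd, H_odd))    # 1
```

When the classical check matrix is self-orthogonal the rank vanishes and no entanglement is needed; the one-ebit case mirrors the "single initial entanglement bit" regime the abstract highlights.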

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message-passing algorithms. Since then, it has become a focal point of research to develop powerful LDPC and turbo-like codes. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and by good code properties, such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity, as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication.

    This thesis contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes, a subclass of turbo-like codes, by presenting new combinatorial construction techniques and algebraic methods for an improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms. A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs.
Properly configured, these codes reveal excellent decoding performance under iterative decoding, in particular with very low error floors. The approach for lowering these error floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to derive conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performance. The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications. Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights into the decoding failures of such codes under iterative decoding. These examinations ultimately help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
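    The stopping sets analyzed above have a simple operational meaning on the BEC: the iterative (peeling) decoder halts exactly when the residual erasures form a stopping set, i.e. a set of bit positions that every check meets zero or at least two times. A minimal sketch of this behavior, using a generic parity-check matrix (the [7,4] Hamming code as an illustrative stand-in, not one of the thesis's constructions):

```python
import numpy as np

def peel(H, erased):
    """BEC peeling decoder: repeatedly use any check that touches exactly
    one erased bit to recover that bit. The residual set, if non-empty,
    is a stopping set."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for check in H:
            touched = [j for j in np.flatnonzero(check) if j in erased]
            if len(touched) == 1:       # degree-1 check pins down one bit
                erased.remove(touched[0])
                progress = True
    return erased

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

print(peel(H, {0}))           # recoverable erasure -> empty residual set
print(peel(H, {2, 4, 5, 6}))  # stopping set: every check meets it >= 2 times
```

Avoiding small stopping sets, as the thesis does via the code structure, directly lowers the BEC error floor because small erasure patterns can no longer stall the peeling process.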

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding, with implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed-automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss-function landscape in training DNNs. This similarity creates a comparable problem with trapping-set pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA corresponds to the number of belief propagation decoding iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from information theory, DNN architecture design (sparse and structured prior graph topology) and efficient hardware design for quantum and classical DPU/TPU (graph, quantization and shift-register architectures) to materials science and beyond.
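    The Ising-ground-state/codeword correspondence that underlies the paper can be made concrete: each parity-check row becomes a multi-spin interaction whose energy is minimized exactly when the check is satisfied, so the codewords map onto the ground states of the Ising MRF. A minimal sketch under that standard mapping (the 3-bit repetition code here is an illustrative stand-in, not one of the paper's QC-LDPC matrices):

```python
import itertools
import numpy as np

# Each parity-check row defines a multi-spin Ising interaction; a check is
# satisfied iff the product of the spins in its support is +1.
H = np.array([[1, 1, 0],
              [0, 1, 1]])  # 3-bit repetition code

def energy(spins):
    """Number of violated checks for a spin configuration s_i in {+1, -1}."""
    return sum((1 - np.prod(spins[np.flatnonzero(row)])) // 2 for row in H)

# Ground states of the Ising MRF = codewords, under the map s_i = (-1)^{x_i}.
ground = [s for s in itertools.product([1, -1], repeat=3)
          if energy(np.array(s)) == 0]
print(ground)  # [(1, 1, 1), (-1, -1, -1)]  <->  codewords 000 and 111
```

The two ferromagnetic configurations are exactly the two codewords of the repetition code; for a QC-LDPC parity-check matrix the same construction yields an exponentially large ground space indexed by the codewords.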

    The Physics of (good) LDPC Codes I. Gauging and dualities

    Low-density parity-check (LDPC) codes are a paradigm of error correction that allows for spatially non-local interactions between (qu)bits, while still enforcing that each (qu)bit interacts only with finitely many others. On expander graphs, they can give rise to "good codes" that combine a finite encoding rate with an optimal scaling of the code distance, which governs the code's robustness against noise. Such codes have garnered much recent attention due to two breakthrough developments: the construction of good quantum LDPC codes and good locally testable classical LDPC codes, using similar methods. Here we explore these developments through a physics lens, establishing connections between LDPC codes and ordered phases of matter defined for systems with non-local interactions and on non-Euclidean geometries. We generalize the physical notions of Kramers-Wannier (KW) dualities and gauge theories to this context, using the notion of chain complexes as an organizing principle. We discuss gauge theories based on generic classical LDPC codes and make a distinction between two classes, based on whether their excitations are point-like or extended. For the former, we describe KW dualities, analogous to the 1D Ising model, and describe the role played by "boundary conditions". For the latter we generalize Wegner's duality to obtain generic quantum LDPC codes within the deconfined phase of a Z_2 gauge theory. We show that all known examples of good quantum LDPC codes are obtained by gauging locally testable classical codes. We also construct cluster Hamiltonians from arbitrary classical codes, related to the Higgs phase of the gauge theory, and formulate generalizations of the Kennedy-Tasaki duality transformation. We use the chain-complex language to discuss edge modes and non-local order parameters for these models, initiating the study of SPT phases in non-Euclidean geometries.
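    The chain-complex organizing principle invoked here has a compact standard form (standard CSS conventions, not notation specific to this paper): a three-term chain complex over F_2, with qubits placed on the middle term, yields a quantum CSS code whose stabilizer commutativity is exactly the chain-complex condition.

```latex
C_2 \xrightarrow{\ \partial_2\ } C_1 \xrightarrow{\ \partial_1\ } C_0,
\qquad \partial_1 \circ \partial_2 = 0,
\qquad H_X = \partial_1, \quad H_Z = \partial_2^{\mathsf{T}},
\qquad H_X H_Z^{\mathsf{T}} = \partial_1 \partial_2 = 0 \pmod{2}.
```

Gauging a classical code in the paper's sense extends its single check map into such a complex, which is why good quantum LDPC codes can arise from gauging locally testable classical ones.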

    Geometrically Local Quantum and Classical Codes from Subdivision

    A geometrically local quantum code is an error-correcting code situated within $\mathbb{R}^D$, where the checks only act on qubits within a fixed spatial distance. The main question is: what is the optimal dimension and distance for a geometrically local code? This question was recently answered by Portnoy, who constructed codes with optimal dimension and distance up to polylogs. This paper extends Portnoy's work by constructing a code which additionally has an optimal energy barrier up to polylogs. The key ingredient is a simpler code construction obtained by subdividing balanced product codes. We also discuss applications to classical codes.

    Non-Clifford and parallelizable fault-tolerant logical gates on constant and almost-constant rate homological quantum LDPC codes via higher symmetries

    We study parallel fault-tolerant quantum computing for families of homological quantum low-density parity-check (LDPC) codes defined on 3-manifolds with constant or almost-constant encoding rate. We derive a generic formula for a transversal $T$ gate of color codes on general 3-manifolds, which acts as collective non-Clifford logical CCZ gates on any triplet of logical qubits whose logical-$X$ membranes have a $\mathbb{Z}_2$ triple intersection at a single point. The triple intersection number is a topological invariant, which also arises in the path integral of the emergent higher symmetry operator in a topological quantum field theory: the $\mathbb{Z}_2^3$ gauge theory. Moreover, the transversal $S$ gate of the color code corresponds to a higher-form symmetry supported on a codimension-1 submanifold, giving rise to exponentially many addressable and parallelizable logical CZ gates. We have developed a generic formalism to compute the triple intersection invariants for 3-manifolds and also study the scaling of the Betti number and systoles with volume for various 3-manifolds, which translates to the encoding rate and distance. We further develop three types of LDPC codes supporting such logical gates: (1) a quasi-hyperbolic code from the product of a 2D hyperbolic surface and a circle, with almost-constant rate $k/n = O(1/\log(n))$ and $O(\log(n))$ distance; (2) a homological fibre bundle code with $O(1/\log^{1/2}(n))$ rate and $O(\log^{1/2}(n))$ distance; (3) a specific family of 3D hyperbolic codes: the Torelli mapping torus code, constructed from mapping tori of a pseudo-Anosov element in the Torelli subgroup, which has constant rate while the distance scaling is currently unknown. We then show a generic constant-overhead scheme for applying a parallelizable universal gate set with the aid of logical-$X$ measurements.
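    The transversal-$T$/triple-intersection statement can be summarized in one line; this is a paraphrase of the abstract in standard cup-product notation, not a formula quoted from the paper. A transversal $T$ on a color code derived from a 3-manifold $M$ induces a logical CCZ on a triplet $(i,j,k)$ precisely when the corresponding logical-$X$ membranes intersect an odd number of times:

```latex
\bar{T} \;\sim\; \prod_{i<j<k} \mathrm{CCZ}_{ijk}^{\,\#(M_i \cap M_j \cap M_k) \bmod 2},
\qquad
\#(M_i \cap M_j \cap M_k) \;\equiv\; \int_M m_i \smile m_j \smile m_k \pmod{2},
```

where $m_i$ denotes the cohomology class Poincaré-dual to the membrane $M_i$; the right-hand side is the topological invariant the abstract identifies with the higher-symmetry path integral.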

    Topological Order, Quantum Codes and Quantum Computation on Fractal Geometries

    We investigate topological order on fractal geometries embedded in $n$ dimensions. In particular, we diagnose the existence of the topological order through the lens of quantum information and geometry, i.e., via its equivalence to a quantum error-correcting code with a macroscopic code distance or the presence of macroscopic systoles in systolic geometry. We first prove a no-go theorem that $\mathbb{Z}_N$ topological order cannot survive on any fractal embedded in 2D. For fractal lattice models embedded in 3D or higher spatial dimensions, $\mathbb{Z}_N$ topological order survives if the boundaries of the interior holes condense only loop or membrane excitations. Moreover, for a class of models containing only loop or membrane excitations, which are hence self-correcting on an $n$-dimensional manifold, we prove that topological order survives on a large class of fractal geometries independent of the type of hole boundaries. We further construct fault-tolerant logical gates using their connection to global and higher-form topological symmetries. In particular, we have discovered a logical CCZ gate corresponding to a global symmetry in a class of fractal codes embedded in 3D with Hausdorff dimension asymptotically approaching $D_H = 2+\epsilon$ for arbitrarily small $\epsilon$, which hence only requires a space overhead $\Omega(d^{2+\epsilon})$ with $d$ being the code distance. This in turn leads to the surprising discovery of certain exotic gapped boundaries that only condense the combination of loop excitations and gapped domain walls. We further obtain logical $\mathrm{C}^{p}\mathrm{Z}$ gates with $p \le n-1$ on fractal codes embedded in $n$D. In particular, for the logical $\mathrm{C}^{n-1}\mathrm{Z}$ in the $n^{\text{th}}$ level of the Clifford hierarchy, we can reduce the space overhead to $\Omega(d^{n-1+\epsilon})$. Mathematically, our findings correspond to macroscopic relative systoles in fractals.

    Topological quantum error-correcting codes beyond dimension 2

    Error correction is the set of techniques used in order to store, process and transmit information reliably in a noisy context. The classical theory of error correction is based on encoding classical information redundantly. A major endeavor of the theory is to find optimal trade-offs between redundancy, which we try to minimize, and noise tolerance, which we try to maximize. The quantum theory of error correction cannot directly imitate the redundant schemes of the classical theory because it has to cope with the no-cloning theorem: quantum information cannot be copied. Quantum error correction is nonetheless possible by spreading the information over more quantum memory elements than would otherwise be necessary. In quantum information theory, dilution of the information replaces redundancy, since copying is forbidden by the laws of quantum mechanics. Besides this conceptual difference, quantum error correction inherits a lot from its classical counterpart. In this PhD thesis, we are concerned with a class of quantum error-correcting codes whose classical counterpart was defined in 1961 by Gallager [Gal62]. At that time, quantum information was not even a research domain yet. This class is the family of low-density parity-check (LDPC) codes. Informally, a code is said to be LDPC if the constraints imposed to ensure redundancy in the classical setting or dilution in the quantum setting are local. More precisely, this PhD thesis focuses on a subset of the LDPC quantum error-correcting codes: the homological quantum error-correcting codes. These codes take their name from the mathematical field of homology, whose objects of study are sequences of linear maps such that the kernel of a map contains the image of its left neighbour. Originally introduced to study the topology of geometric shapes, homology theory now encompasses more algebraic branches as well, where the focus is more abstract and combinatorial.
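    The defining property just mentioned ("the kernel of a map contains the image of its left neighbour") is the standard chain-complex condition, and homology measures its failure to be an equality:

```latex
\cdots \longrightarrow C_{k+1} \xrightarrow{\ \partial_{k+1}\ } C_{k} \xrightarrow{\ \partial_{k}\ } C_{k-1} \longrightarrow \cdots,
\qquad
\partial_{k} \circ \partial_{k+1} = 0
\;\Longleftrightarrow\;
\operatorname{im} \partial_{k+1} \subseteq \ker \partial_{k},
\qquad
H_{k} = \ker \partial_{k} / \operatorname{im} \partial_{k+1}.
```

For a homological quantum code the physical qubits sit on one term of such a complex over F_2, and nontrivial classes of the corresponding homology group label the logical degrees of freedom.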
The same is true of homological codes: they were introduced in 1997 by Kitaev [Kit03] with a quantum code that has the shape of a torus. They now form a vast family of quantum LDPC codes, some more inspired by geometry than others. Homological quantum codes have been designed from spherical, Euclidean and hyperbolic geometries, from 2-dimensional, 3-dimensional and 4-dimensional objects, from objects of increasing and unbounded dimension, and from hypergraph or homological products. After introducing some general quantum information concepts in the first chapter of this manuscript, we focus in the two following ones on families of quantum codes based on 4-dimensional hyperbolic objects. We highlight the interplay between their geometric side, given by manifolds, and their combinatorial side, given by abstract polytopes. We use both sides to analyze the corresponding quantum codes. In the fourth and last chapter we analyze a family of quantum codes based on spherical objects of arbitrary dimension. To have more flexibility in the design of quantum codes, we use combinatorial objects that realize this spherical geometry: hypercube complexes. This setting allows us to introduce a new link between classical and quantum error correction, where classical codes are used to introduce homology in hypercube complexes.

    Quantum memory is made of materials exhibiting quantum effects such as superposition. It is this possibility of superposition that distinguishes the elementary unit of quantum memory, the qubit, from its classical analogue, the bit. Unlike a classical bit, a qubit can be in a state other than the state 0 and the state 1. A major difficulty in the physical realization of quantum memory is the need to isolate the system from its environment. Indeed, the interaction of a quantum system with its environment leads to a phenomenon called decoherence, which manifests itself as errors on the state of the quantum system. In other words, because of decoherence, the qubits may not be in the state they are expected to be in. When these errors accumulate, the result of a quantum computation is very likely not to be the expected one. Quantum error correction is a set of techniques for protecting quantum information from these errors. It amounts to a trade-off between the number of qubits and their quality: an error-correcting code uses N noisy physical qubits to simulate a smaller number K of less noisy logical, i.e. virtual, qubits. The best-known code family is probably the one discovered by the physicist Alexei Kitaev: the toric code. This construction can be generalized to geometric shapes (manifolds) other than the torus. In 2014, Larry Guth and Alexander Lubotzky proposed a family of codes defined from 4-dimensional hyperbolic manifolds and showed that it offers an interesting trade-off between the number K of logical qubits and the number of errors that can be corrected. In this thesis, we start from the Guth-Lubotzky construction and give a more explicit and more regular version of it. To define a regular tessellation of 4-dimensional hyperbolic space, we use the symmetry group with Schläfli symbol {4, 3, 3, 5}. We give its matrix representation corresponding to the hyperboloid model and to a hypercube centered at the origin whose faces are orthogonal to the four coordinate axes. This construction yields a family of quantum codes encoding a number of logical qubits proportional to the number of physical qubits, with a minimum distance growing at least like N^0.1. Although these parameters match those of the Guth-Lubotzky construction, the regularity of the present construction makes it possible to build explicit examples of reasonable size and to envisage decoding algorithms that exploit this regularity. In a second chapter we consider a family of 4D hyperbolic quantum codes with Schläfli symbol {5, 3, 3, 5}. After describing a way of taking quotients of the corresponding groups that preserves the local structure of the group, we construct the parity-check matrices of quantum codes with 144, 720, 9792, 18,000 and 90,000 physical qubits. We apply a belief propagation algorithm to the decoding of these codes and analyze the results numerically. In a third and final chapter we define a new family of quantum codes from cubes of arbitrarily large dimension. Taking the quotient of an n-dimensional cube by a classical code with parameters [n, k, d] and identifying the physical qubits with the p-dimensional faces of the resulting quotient polytope yields a quantum code. The originality of this family is that it takes quotients by classical codes; in this it departs from topology and belongs rather to the family of homological codes.
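    The toric code mentioned above, the starting point that Guth and Lubotzky generalized to hyperbolic 4-manifolds, is easy to write down explicitly. The sketch below (indexing conventions are my own) builds its parity-check matrices on an L x L torus and verifies the two facts the summary relies on: the X- and Z-checks commute, and the number of logical qubits K is 2 regardless of lattice size.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def toric_code(L):
    """Parity-check matrices of Kitaev's toric code on an L x L torus:
    2*L*L qubits on edges, X-checks on vertices, Z-checks on plaquettes."""
    n = 2 * L * L
    h = lambda i, j: (i % L) * L + (j % L)          # horizontal edge at (i, j)
    v = lambda i, j: L * L + (i % L) * L + (j % L)  # vertical edge at (i, j)
    HX = np.zeros((L * L, n), dtype=int)
    HZ = np.zeros((L * L, n), dtype=int)
    for i in range(L):
        for j in range(L):
            c = i * L + j
            HX[c, [h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)]] = 1  # star
            HZ[c, [h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)]] = 1  # plaquette
    return HX, HZ

L = 4
HX, HZ = toric_code(L)
commute = not ((HX @ HZ.T) % 2).any()             # CSS condition
K = 2 * L * L - gf2_rank(HX) - gf2_rank(HZ)       # logical qubits
print(commute, K)  # True 2
```

K = 2 here reflects the first homology of the torus; the hyperbolic constructions of the thesis replace the torus by 4-manifolds whose homology grows with the number of physical qubits, which is what makes K proportional to N.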

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm