58 research outputs found

    Hardness of decoding quantum stabilizer codes

    In this article we address the computational hardness of optimally decoding a quantum stabilizer code. Much like for classical linear codes, errors are detected by measuring certain check operators which yield an error syndrome, and the decoding problem consists of determining the most likely recovery given the syndrome. The corresponding classical problem is known to be NP-complete, and a similar decoding problem for quantum codes is also known to be NP-complete. However, this decoding strategy is not optimal in the quantum setting as it does not take into account error degeneracy, which causes distinct errors to have the same effect on the code. Here, we show that optimal decoding of stabilizer codes is computationally much harder than optimal decoding of classical linear codes: it is #P-complete.
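
    To make the role of degeneracy concrete, here is a minimal sketch (our own illustration, not taken from the article) comparing the two decoding notions for Z-only (dephasing) errors on Shor's 9-qubit code: QMLD picks the single most probable error consistent with the syndrome, while the optimal degenerate decoder aggregates probability over whole stabilizer cosets. The error probability p and the target syndrome below are illustrative choices.

    from itertools import product
    import numpy as np

    # X-type stabilizer generators of Shor's code: their syndrome detects Z errors.
    H_X = np.array([[1, 1, 1, 1, 1, 1, 0, 0, 0],
                    [0, 0, 0, 1, 1, 1, 1, 1, 1]])
    # Z-type stabilizer generators: Z-error patterns differing by a sum of these
    # rows act identically on the code space (they are degenerate).
    H_Z = np.array([[1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [0, 1, 1, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 1, 1, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 1, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 1, 1, 0],
                    [0, 0, 0, 0, 0, 0, 0, 1, 1]])

    p = 0.05                      # per-qubit phase-flip probability (illustrative)
    syndrome = np.array([1, 0])   # an example syndrome

    def prob(e):
        """Probability of the Z-error pattern e under i.i.d. dephasing noise."""
        w = int(e.sum())
        return (p ** w) * ((1 - p) ** (e.size - w))

    # All Z-only elements of the stabilizer group = GF(2) row space of H_Z.
    stabilizers = {tuple(np.mod(np.array(c) @ H_Z, 2))
                   for c in product([0, 1], repeat=H_Z.shape[0])}

    best_error, cosets = (None, 0.0), {}
    for bits in product([0, 1], repeat=9):
        e = np.array(bits)
        if not np.array_equal(H_X @ e % 2, syndrome):
            continue                             # inconsistent with the syndrome
        pe = prob(e)
        if pe > best_error[1]:                   # QMLD: most likely single error
            best_error = (bits, pe)
        # Degenerate decoding: add the probability to e's stabilizer coset.
        rep = min(tuple((e + np.array(s)) % 2) for s in stabilizers)
        cosets[rep] = cosets.get(rep, 0.0) + pe

    best_coset = max(cosets.items(), key=lambda kv: kv[1])
    print("QMLD  (most likely error):", best_error)
    print("DQMLD (most likely coset): rep =", best_coset[0], "prob =", best_coset[1])

    For small p, the winning coset is dominated by the three degenerate weight-one Z errors on the first block, so the degenerate decoder assigns its answer roughly three times the probability of the single best error found by QMLD.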

    The Encoding and Decoding Complexities of Entanglement-Assisted Quantum Stabilizer Codes

    Quantum error-correcting codes are used to protect quantum information from decoherence. A raw state is mapped, by an encoding circuit, to a codeword so that the most likely quantum errors from a noisy quantum channel can be removed after a decoding process. A good encoding circuit should have some desired features, such as low depth, few gates, and so on. In this paper, we show how to practically implement an encoding circuit of gate complexity $O(n(n-k+c)/\log n)$ for an $[[n,k;c]]$ quantum stabilizer code with the help of $c$ pairs of maximally-entangled states. For the special case of an $[[n,k]]$ stabilizer code with $c=0$, the encoding complexity is $O(n(n-k)/\log n)$, which was previously known to be $O(n^2/\log n)$. For $c>0$, this suggests that the benefits from shared entanglement come at an additional cost in encoding complexity. Finally, we discuss decoding of entanglement-assisted quantum stabilizer codes and extend previously known computational hardness results on decoding quantum stabilizer codes. Comment: accepted by the 2019 IEEE International Symposium on Information Theory (ISIT 2019).
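
    As a rough, purely illustrative reading of these bounds (our numbers, not from the paper): taking $n = 1024$, $k = 512$, $c = 0$ and logarithms to base 2, the new bound gives $n(n-k)/\log n = 1024 \cdot 512 / 10 \approx 5.2 \times 10^4$ elementary-gate units, versus $n^2/\log n \approx 1.0 \times 10^5$ for the earlier bound; for $c > 0$ the factor $n-k$ becomes $n-k+c$, which is the extra encoding cost attributed to the $c$ shared entangled pairs.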

    Stabilizer codes from modified symplectic form

    Stabilizer codes form an important class of quantum error correcting codes which have an elegant theory, efficient error detection, and many known examples. Constructing stabilizer codes of length $n$ is equivalent to constructing subspaces of $\mathbb{F}_p^n \times \mathbb{F}_p^n$ which are "isotropic" under the symplectic bilinear form defined by $\left\langle (\mathbf{a},\mathbf{b}),(\mathbf{c},\mathbf{d}) \right\rangle = \mathbf{a}^{\mathrm{T}} \mathbf{d} - \mathbf{b}^{\mathrm{T}} \mathbf{c}$. As a result, many, but not all, ideas from the theory of classical error correction can be translated to quantum error correction. One of the main theoretical contributions of this article is to study stabilizer codes starting from a different symplectic form. In this paper, we concentrate on cyclic codes. Modifying the symplectic form allows us to generalize the previously known construction for linear cyclic stabilizer codes and, in the process, circumvent some of the Galois-theoretic no-go results proved there. More importantly, this tweak in the symplectic form allows us to make use of well-known error correcting algorithms for cyclic codes to give efficient quantum error correcting algorithms. Cyclicity of error correcting codes is a "basis dependent" property. Our codes are no longer "cyclic" when they are derived using the standard symplectic form (if we ignore error correcting properties such as distance, all such symplectic forms can be converted to each other via a basis transformation). Hence this change of perspective is crucial from the point of view of designing efficient decoding algorithms for this family of codes. In this context, recall that for general codes, efficient decoding algorithms do not exist if some widely believed complexity-theoretic assumptions are true.
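
    As a concrete instance of the isotropy condition (our own sketch, using the standard symplectic form from the abstract with $p = 2$ and the well-known $[[5,1,3]]$ code rather than anything specific to this paper), the check below verifies that the four stabilizer generators span an isotropic subspace, i.e. that every pair has symplectic product zero and hence the corresponding Pauli operators commute.

    import numpy as np

    # Stabilizer generators XZZXI, IXZZX, XIXZZ, ZXIXZ written as pairs (a | b):
    # a marks the X-support, b marks the Z-support of each generator.
    A = np.array([[1, 0, 0, 1, 0],
                  [0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0]])
    B = np.array([[0, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 1],
                  [1, 0, 0, 0, 1]])

    def symplectic(a, b, c, d):
        """<(a,b),(c,d)> = a^T d - b^T c, reduced mod p (here p = 2)."""
        return (a @ d - b @ c) % 2

    # Isotropy: every pair of generators has symplectic product 0,
    # which is exactly the statement that the corresponding Paulis commute.
    for i in range(len(A)):
        for j in range(len(A)):
            assert symplectic(A[i], B[i], A[j], B[j]) == 0
    print("The generators span an isotropic subspace of F_2^5 x F_2^5.")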

    Improved belief propagation decoding algorithm based on decoupling representation of Pauli operators for quantum LDPC codes

    We propose a new method, called the decoupling representation, for representing Pauli operators as vectors over GF(2), and on this basis we propose partially decoupled and fully decoupled belief propagation decoding algorithms for quantum low-density parity-check codes. Under the assumption that there are no measurement errors, and compared with the traditional belief propagation algorithm in the symplectic representation over GF(2) within the same number of iterations, the decoding accuracy of the partially decoupled and fully decoupled belief propagation algorithms is significantly improved on the pure-Y noise channel and the depolarizing noise channel. This supports the view that decoding algorithms for quantum error correcting codes might perform better in the decoupling representation than in the symplectic representation. The impressive performance of the fully decoupled belief propagation algorithm might promote the practical realization of quantum error correcting codes.
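
    For reference, here is a minimal sketch (ours) of the baseline the abstract compares against: the symplectic representation of Pauli operators over GF(2), where I, X, Z, Y map to the bit pairs (0,0), (1,0), (0,1), (1,1). The paper's decoupling representation is a different encoding and is not reproduced here.

    import numpy as np

    PAULI_TO_BITS = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}

    def to_symplectic(pauli_string):
        """Map an n-qubit Pauli string to its length-2n GF(2) vector (x | z)."""
        x = [PAULI_TO_BITS[p][0] for p in pauli_string]
        z = [PAULI_TO_BITS[p][1] for p in pauli_string]
        return np.array(x + z)

    def commute(p, q):
        """True iff two Pauli strings commute (symplectic product is 0 mod 2)."""
        n = len(p)
        u, v = to_symplectic(p), to_symplectic(q)
        return (u[:n] @ v[n:] + u[n:] @ v[:n]) % 2 == 0

    print(to_symplectic("XYZI"))   # -> [1 1 0 0 0 1 1 0]
    print(commute("XX", "ZZ"))     # True: XX and ZZ commute
    print(commute("XI", "ZI"))     # False: X and Z on the same qubit anticommute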

    Hardness results for decoding the surface code with Pauli noise

    Real quantum computers will be subject to complicated, qubit-dependent noise, instead of simple noise such as depolarizing noise with the same strength for all qubits. We can do quantum error correction more effectively if our decoding algorithms take into account this prior information about the specific noise present. This motivates us to consider the complexity of surface code decoding where the input to the decoding problem is not only the syndrome-measurement results, but also a noise model in the form of probabilities of single-qubit Pauli errors for every qubit. In this setting, we show that Maximum Probability Error (MPE) decoding and Maximum Likelihood (ML) decoding for the surface code are NP-hard and #P-hard, respectively. We reduce directly from SAT for MPE decoding, and from #SAT for ML decoding, by showing how to transform a boolean formula into a qubit-dependent Pauli noise model and a set of syndromes that encode the satisfiability properties of the formula. We also give hardness-of-approximation results for MPE and ML decoding. These are worst-case hardness results that do not contradict the empirical fact that many efficient surface code decoders are correct in the average case (i.e., for most sets of syndromes and for most reasonable noise models). These hardness results are nicely analogous to the known hardness results for MPE and ML decoding of arbitrary stabilizer codes with independent $X$ and $Z$ noise. Comment: 37 pages, 18 figures. 26 pages, 12 figures in main text.
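
    In standard notation (our paraphrase of the two tasks named in the abstract), for a syndrome $s$ and a Pauli noise model assigning probability $\Pr(E)$ to each error $E$:
    \[
    E_{\mathrm{MPE}} \;=\; \arg\max_{E \,:\, \sigma(E)=s} \Pr(E),
    \qquad
    \bar{L}_{\mathrm{ML}} \;=\; \arg\max_{\bar{L}} \; \sum_{E \,:\, \sigma(E)=s,\; E \in \bar{L}} \Pr(E),
    \]
    where $\sigma(E)$ denotes the syndrome of $E$ and the sum in the ML case runs over the logical equivalence class $\bar{L}$, i.e. over errors that differ only by stabilizers. Intuitively, the #P-hardness of ML decoding reflects that this sum aggregates exponentially many error configurations.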

    Hardness of decoding stabilizer codes

    This thesis deals with the study of the computational complexity of decoding stabilizer codes. The first three chapters contain all the necessary background to understand the main result of this thesis. First, we explain the necessary notions in computational complexity, introducing the P, NP, and #P classes of problems, along with some examples intended for physicists. Then, we explain the decoding problem in classical error correction, for linear codes on the binary symmetric channel, and discuss the celebrated result of McEliece et al. in [1]. In the third chapter, we study the problem of quantum communication over Pauli channels. Here, using the stabilizer formalism, we discuss the concept of degenerate errors. The decoding problem for stabilizer codes which simply neglects the presence of degenerate errors is called quantum maximum likelihood decoding (QMLD), and it was shown to be NP-complete by Min-Hsiu Hsieh et al. in [2]. We focus on the problem of optimal decoding, called degenerate quantum maximum likelihood decoding (DQMLD), which accounts for the presence of degenerate errors. We highlight some instances of stabilizer codes where the presence of degenerate errors causes drastic differences between the performances of DQMLD and QMLD.
    The main contribution of this thesis is to demonstrate that the optimal decoding problem for stabilizer codes is much harder than what previous results had anticipated. In the last chapter, we present our main result (Thm. 5.1.1), establishing that the optimal decoding problem for stabilizer codes is #P-complete. To prove this, we demonstrate that the problem of evaluating the weight enumerator of a binary linear code, which is #P-complete, can be reduced (in polynomial time) to the DQMLD problem (see Sec. 5.1). Our principal result is also presented as an article in [3], which is currently under review for publication in IEEE Transactions on Information Theory. In addition to the main result, we also show that, under certain conditions, the outputs of DQMLD and QMLD always agree. We consider the conditions developed by us to be an improvement over the ones in [4, 5].
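
    For reference, the reduction above starts from the weight enumerator of a binary linear code, whose standard definition (not specific to [3]) for an $[n,k]$ code $C \subseteq \mathbb{F}_2^n$ is
    \[
    W_C(x, y) \;=\; \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} \;=\; \sum_{w=0}^{n} A_w\, x^{\,n-w} y^{\,w},
    \]
    where $\mathrm{wt}(c)$ is the Hamming weight of $c$ and $A_w$ counts the codewords of weight $w$; evaluating this enumerator is the #P-complete problem that is reduced, in polynomial time, to DQMLD.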

    Exploiting Degeneracy in Belief Propagation Decoding of Quantum Codes

    Quantum information needs to be protected by quantum error-correcting codes due to imperfect physical devices and operations. One would like to have an efficient and high-performance decoding procedure for the class of quantum stabilizer codes. A potential candidate is Pearl's belief propagation (BP), but its performance suffers from the many short cycles inherent in a quantum stabilizer code, especially highly degenerate codes. A general impression exists that BP is not effective for topological codes. In this paper, we propose a decoding algorithm for quantum codes based on quaternary BP with additional memory effects (called MBP). This MBP is like a recursive neural network with inhibitions between neurons (edges with negative weights), which enhance the perception capability of a network. Moreover, MBP exploits the degeneracy of a quantum code so that the most probable error or its degenerate errors can be found with high probability. The decoding performance is significantly improved over conventional BP for various quantum codes, including quantum bicycle, hypergraph-product, surface, and toric codes. For MBP on the surface and toric codes over depolarizing errors, we observe error thresholds of 16% and 17.5%, respectively. Comment: 22 pages, 25 figures, 3 tables, and 3 algorithms.