
    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correction algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
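
    As a concrete illustration of local decodability (a standard textbook example, not one taken from the survey itself), the Hadamard code admits a two-query local decoder: the codeword lists the inner products of the message x with every index a over GF(2), so querying positions r and r XOR e_i for a random r and XORing the two answers recovers the message bit x_i with high probability, even from a mildly corrupted word. A minimal Python sketch, assuming this Hadamard-code setting:

```python
import random

def hadamard_encode(x_bits):
    """Encode a k-bit message as the 2^k inner products <x, a> over GF(2)."""
    k = len(x_bits)
    return [
        sum(x_bits[i] & ((a >> i) & 1) for i in range(k)) % 2
        for a in range(2 ** k)
    ]

def local_decode_bit(word, k, i, trials=31):
    """Two-query local decoder for bit i of a Hadamard-encoded message.

    Each trial picks a uniformly random index r, queries positions
    r and r XOR e_i, and XORs the answers; a majority vote over the
    trials is correct with high probability when the fraction of
    corrupted positions is well below 1/4.
    """
    votes = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes += word[r] ^ word[r ^ (1 << i)]
    return 1 if 2 * votes > trials else 0

# Usage: encode, flip a couple of positions, and recover one message bit
# while reading only O(trials) positions of the corrupted codeword.
msg = [1, 0, 1, 1]
codeword = hadamard_encode(msg)
corrupted = codeword[:]
for pos in random.sample(range(len(corrupted)), 2):  # a little noise
    corrupted[pos] ^= 1
print(local_decode_bit(corrupted, k=len(msg), i=2))  # prints msg[2] == 1 w.h.p.
```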

    Stabilizer codes from modified symplectic form

    Stabilizer codes form an important class of quantum error-correcting codes which have an elegant theory, efficient error detection, and many known examples. Constructing stabilizer codes of length $n$ is equivalent to constructing subspaces of $\mathbb{F}_p^n \times \mathbb{F}_p^n$ which are "isotropic" under the symplectic bilinear form defined by $\left\langle (\mathbf{a},\mathbf{b}),(\mathbf{c},\mathbf{d}) \right\rangle = \mathbf{a}^{\mathrm{T}} \mathbf{d} - \mathbf{b}^{\mathrm{T}} \mathbf{c}$. As a result, many, but not all, ideas from the theory of classical error correction can be translated to quantum error correction. One of the main theoretical contributions of this article is to study stabilizer codes starting from a different symplectic form. In this paper, we concentrate on cyclic codes. Modifying the symplectic form allows us to generalize the previously known construction for linear cyclic stabilizer codes and, in the process, circumvent some of the Galois-theoretic no-go results proved there. More importantly, this tweak in the symplectic form allows us to make use of well-known error-correcting algorithms for cyclic codes to give efficient quantum error-correcting algorithms. Cyclicity of error-correcting codes is a "basis-dependent" property: our codes are no longer "cyclic" when they are derived using the standard symplectic form (if we ignore error-correcting properties such as distance, all such symplectic forms can be converted into one another via a basis transformation). Hence this change of perspective is crucial for designing efficient decoding algorithms for this family of codes. In this context, recall that for general codes, efficient decoding algorithms do not exist if some widely believed complexity-theoretic assumptions are true.
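
    To make the isotropy condition concrete, the sketch below (an illustration of the standard form quoted above, not code from the paper) checks whether a set of generators in F_p^n x F_p^n pairs to zero under <(a,b),(c,d)> = a^T d - b^T c; the paper's modified form would slot in as a different `symplectic_form` function.

```python
import itertools

def symplectic_form(a, b, c, d, p):
    """Standard symplectic form <(a,b),(c,d)> = a^T d - b^T c over F_p."""
    return (sum(ai * di for ai, di in zip(a, d))
            - sum(bi * ci for bi, ci in zip(b, c))) % p

def is_isotropic(generators, p):
    """Check that every pair of generators (each an (a, b) tuple of
    length-n vectors over F_p) pairs to zero under the symplectic form.
    A subspace spanned by such generators defines a stabilizer code."""
    return all(
        symplectic_form(a1, b1, a2, b2, p) == 0
        for (a1, b1), (a2, b2) in itertools.combinations_with_replacement(generators, 2)
    )

# Usage: the generators of the 3-qubit bit-flip code, written as
# (X-part, Z-part) pairs: Z1 Z2 -> (000 | 110), Z2 Z3 -> (000 | 011).
gens = [((0, 0, 0), (1, 1, 0)), ((0, 0, 0), (0, 1, 1))]
print(is_isotropic(gens, p=2))  # True: the stabilizer generators commute
```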

    Local Testing for Membership in Lattices

    Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory of locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following: 1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes and obtain nearly tight bounds. 2. We show that in order to achieve low query complexity, it is sufficient to design one-sided, non-adaptive, canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21).
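
    For the classical-code setting that the canonical-test result builds on, a one-sided, non-adaptive local test can be as simple as sampling a random parity check and reading only the coordinates in its support. The sketch below (an illustrative example, not the lattice tester of the paper) never rejects a codeword and fixes its queries before seeing any answers.

```python
import random

def make_parity_check_tester(H):
    """One-sided, non-adaptive local tester for the binary linear code
    {x : Hx = 0 (mod 2)}: pick a random parity check, query only the
    coordinates in its support, accept iff the check is satisfied.
    Codewords are always accepted (one-sided error); how strongly far
    words are rejected depends on the structure of the code."""
    def test(query):
        row = random.choice(H)
        support = [j for j, hj in enumerate(row) if hj]  # chosen before any answers: non-adaptive
        answers = [query(j) for j in support]
        return sum(answers) % 2 == 0
    return test

# Usage: the [3,1] repetition code with checks x1+x2 = 0 and x2+x3 = 0.
H = [[1, 1, 0], [0, 1, 1]]
word = [1, 1, 0]                       # not a codeword
tester = make_parity_check_tester(H)
rejections = sum(not tester(lambda j: word[j]) for _ in range(1000))
print(rejections)                      # roughly half of the runs reject
```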

    Entanglement-assisted quantum low-density parity-check codes

    This paper develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, which is based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes (EAQECCs) with many desirable properties. These properties include the requirement of only one initial entanglement bit, high error correction performance, high rates, and low decoding complexity. The proposed method produces infinitely many new codes with a wide variety of parameters and entanglement requirements. Our framework encompasses various codes including the previously known entanglement-assisted quantum LDPC codes having the best error correction performance and many new codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases. (Comment: 20 pages, 5 figures. Final version appearing in Physical Review.)
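
    The "only one initial entanglement bit" property can be checked directly for a candidate classical code: in the standard construction of an entanglement-assisted code from a single binary parity-check matrix H, the number of ebits consumed equals the rank of H H^T over GF(2). The sketch below illustrates that known formula; the matrix used is a toy example, not one of the paper's designs.

```python
def gf2_rank(M):
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    M = [row[:] for row in M]
    rank, cols = 0, len(M[0]) if M else 0
    for c in range(cols):
        pivot = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [x ^ y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def ebits_required(H):
    """Ebits needed when a binary parity-check matrix H is turned into an
    entanglement-assisted quantum code: rank of H * H^T over GF(2)."""
    n_rows, n_cols = len(H), len(H[0])
    HHt = [[sum(H[i][k] & H[j][k] for k in range(n_cols)) % 2
            for j in range(n_rows)] for i in range(n_rows)]
    return gf2_rank(HHt)

# Usage: a tiny toy matrix; rank(H H^T) = 1 means a single ebit suffices.
H = [[1, 1, 1],
     [1, 0, 0]]
print(ebits_required(H))  # 1
```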

    Some partial-unit-memory convolutional codes

    The results of a study on a class of error-correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry and that offer both increased performance and decreased implementation complexity over current coding systems.

    High-Rate Quantum Low-Density Parity-Check Codes Assisted by Reliable Qubits

    Quantum error correction is an important building block for reliable quantum information processing. A challenging hurdle in the theory of quantum error correction is that it is significantly more difficult to design error-correcting codes with desirable properties for quantum information processing than for traditional digital communications and computation. A typical obstacle to constructing a variety of strong quantum error-correcting codes is the complicated set of restrictions imposed on the structure of a code. Recently, promising solutions to this problem have been proposed in quantum information science, where in principle any binary linear code can be turned into a quantum error-correcting code by assuming a small number of reliable quantum bits. This paper studies how best to take advantage of these latest ideas to construct desirable quantum error-correcting codes of very high information rate. Our methods exploit structured high-rate low-density parity-check codes available in the classical domain and provide quantum analogues that inherit their characteristic low decoding complexity and high error correction performance even at moderate code lengths. Our approach to designing high-rate quantum error-correcting codes also allows for making direct use of other major syndrome decoding methods for linear codes, making it possible to deal with situations where promising quantum analogues of low-density parity-check codes are difficult to find.
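
    As a reminder of the classical machinery referred to here, syndrome decoding maps the syndrome of a received word to an error estimate. The toy table-based decoder below is a generic sketch of that interface (not the decoding method of the paper); iterative LDPC decoders expose the same syndrome-in, error-estimate-out contract.

```python
import itertools

def syndrome(H, word):
    """Syndrome of a received word under parity-check matrix H, over GF(2)."""
    return tuple(sum(h & w for h, w in zip(row, word)) % 2 for row in H)

def build_syndrome_table(H, max_weight=1):
    """Map each syndrome to a minimum-weight error pattern (here up to weight 1).
    Table-based syndrome decoding is practical only for short codes, but it
    shows the generic interface shared by other syndrome decoders."""
    n = len(H[0])
    table = {tuple(0 for _ in H): tuple(0 for _ in range(n))}
    for weight in range(1, max_weight + 1):
        for support in itertools.combinations(range(n), weight):
            err = tuple(1 if j in support else 0 for j in range(n))
            table.setdefault(syndrome(H, err), err)
    return table

# Usage: the [7,4] Hamming code corrects any single bit flip.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
table = build_syndrome_table(H)
received = [1, 0, 1, 1, 0, 0, 1]            # a codeword with one flipped bit
err = table[syndrome(H, received)]
corrected = [r ^ e for r, e in zip(received, err)]
print(syndrome(H, corrected))               # (0, 0, 0): back in the code
```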

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.