Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
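A concrete way to see what "sub-linear time error-correcting" means is the Hadamard code, the textbook locally decodable code: any single message bit can be recovered from just two queries to a corrupted codeword. A minimal sketch in Python (the standard construction, not code from the survey):

```python
def hadamard_encode(msg):
    """Codeword position a holds the GF(2) inner product <msg, bits(a)>."""
    n = len(msg)
    word = []
    for a in range(2 ** n):
        p = 0
        for i in range(n):
            if (a >> i) & 1:
                p ^= msg[i]
        word.append(p)
    return word

def two_query_decode(word, i, r):
    """Local decoder: x_i = word[r] XOR word[r XOR e_i] if both queries are clean."""
    return word[r] ^ word[r ^ (1 << i)]

msg = [1, 0, 1]
word = hadamard_encode(msg)
word[5] ^= 1  # corrupt one of the 8 positions

# A true local decoder picks r at random (2 queries, correct w.p. >= 3/4 here).
# Tallying every r shows 6 of the 8 choices still recover each bit correctly.
recovered = []
for i in range(len(msg)):
    votes = sum(two_query_decode(word, i, r) for r in range(len(word)))
    recovered.append(1 if votes > len(word) // 2 else 0)
```

The two-query decoder works because `word[r] ^ word[r ^ e_i]` equals the bit `x_i` whenever neither queried position is corrupted, which happens with probability at least 1/2 when less than a quarter of positions are in error.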
Construction algorithm for network error-correcting codes attaining the Singleton bound
We give a centralized deterministic algorithm for constructing linear network
error-correcting codes that attain the Singleton bound of network
error-correcting codes. The proposed algorithm is based on the algorithm by
Jaggi et al. We give estimates on the time complexity and the required symbol
size of the proposed algorithm. We also estimate the probability of a random
choice of local encoding vectors by all intermediate nodes giving a network
error-correcting code attaining the Singleton bound. We also clarify the
relationship between robust network coding and network error-correcting codes
with known locations of errors.
Comment: To appear in IEICE Trans. Fundamentals
(http://ietfec.oxfordjournals.org/), vol. E90-A, no. 9, Sept. 2007. LaTeX2e,
7 pages, using ieice.cls and pstricks.sty. Version 4 adds a randomized
construction of network error-correcting codes, comparisons of the proposed
methods to existing methods, and additional explanations in the proofs.
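The success probability of random local encoding vectors can be illustrated on the classic butterfly network (a toy setup for intuition, not the paper's construction): the bottleneck node mixes the two source symbols with random coefficients over GF(p), and each receiver can decode iff its relevant coefficient is nonzero, so both succeed with probability ((p-1)/p)^2.

```python
import random

p = 17  # work over GF(p); larger fields make random coding succeed more often

def butterfly_trial():
    # Local encoding at the bottleneck node: m = f1*a + f2*b over GF(p).
    f1 = random.randrange(p)
    f2 = random.randrange(p)
    # Receiver 1 sees (a, m) and solves for b  -> needs f2 != 0
    # Receiver 2 sees (b, m) and solves for a  -> needs f1 != 0
    return f1 != 0 and f2 != 0

trials = 100_000
est = sum(butterfly_trial() for _ in range(trials)) / trials
exact = ((p - 1) / p) ** 2  # probability that both coefficients are nonzero
```

The Monte Carlo estimate `est` concentrates around `exact`, matching the intuition that a sufficiently large symbol size makes a random choice of local encoding vectors succeed with high probability.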
Stabilizer codes from modified symplectic form
Stabilizer codes form an important class of quantum error correcting codes
which have an elegant theory, efficient error detection, and many known
examples. Constructing stabilizer codes of length n is equivalent to
constructing subspaces of F_q^n × F_q^n which are "isotropic" under the
symplectic bilinear form defined by ⟨(a, b), (c, d)⟩ = a · d − b · c. As a
result, many, but not all, ideas from the theory of classical error correction
can be translated to quantum error correction. One of the main theoretical
contributions of this article is to study stabilizer codes starting with a
different symplectic form.
In this paper, we concentrate on cyclic codes. Modifying the symplectic form
allows us to generalize the previously known construction for linear cyclic
stabilizer codes and, in the process, circumvent some of the Galois-theoretic
no-go results proved there. More importantly, this tweak in the symplectic
form allows us to make use of well-known error-correcting algorithms for
cyclic codes to give efficient quantum error-correcting algorithms. Cyclicity
of error-correcting codes is a "basis dependent" property. Our codes are no
longer "cyclic" when they are derived using the standard symplectic forms (if
we ignore error-correcting properties such as distance, all such symplectic
forms can be converted into one another via a basis transformation). Hence
this change of perspective is crucial from the point of view of designing
efficient decoding algorithms for this family of codes. In this context,
recall that for general codes, efficient decoding algorithms do not exist if
some widely believed complexity-theoretic assumptions are true.
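The isotropy condition under the standard symplectic form can be checked mechanically: write each Pauli generator as a binary (a|b) vector (X contributes to a, Z to b) and verify that all pairwise symplectic products vanish mod 2. A sketch using the well-known [[5,1,3]] five-qubit code (standard background, not the modified form introduced in the paper):

```python
import numpy as np

# Stabilizer generators of the [[5,1,3]] code
paulis = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def to_symplectic(s):
    a = [1 if c in "XY" else 0 for c in s]  # X part
    b = [1 if c in "ZY" else 0 for c in s]  # Z part
    return np.array(a + b)

def symplectic_product(u, v):
    # Standard form: <(a1|b1), (a2|b2)> = a1.b2 + b1.a2 (mod 2)
    n = len(u) // 2
    return int(u[:n] @ v[n:] + u[n:] @ v[:n]) % 2

gens = [to_symplectic(s) for s in paulis]
# The generators define a valid stabilizer iff the set is isotropic:
isotropic = all(symplectic_product(g, h) == 0 for g in gens for h in gens)
```

A symplectic product of zero is exactly the condition that the corresponding Pauli operators commute, which is why isotropic subspaces and stabilizer codes are in correspondence.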
Local Testing for Membership in Lattices
Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory of locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following: 1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes, and obtain nearly-tight bounds. 2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21).
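As a toy illustration of what membership in a code formula lattice involves, consider the common two-level formula L = C0 + 2·C1 + 4·Z^n with nested binary codes C0 ⊆ C1; membership can be tested residue by residue. The codes below are small stand-ins chosen for illustration, not the Reed-Muller instantiation from the paper:

```python
import numpy as np

# Nested binary codes on n = 4 (stand-ins for a Reed-Muller chain):
# C0 = repetition code {0000, 1111} is contained in C1 = even-weight code.
def in_C0(c):
    return all(x == c[0] for x in c)

def in_C1(c):
    return sum(c) % 2 == 0

def in_lattice(x):
    """Membership in L = C0 + 2*C1 + 4*Z^n, tested one residue level at a time."""
    x = np.asarray(x)
    c0 = x % 2
    if not in_C0(c0.tolist()):      # level 0: x mod 2 must lie in C0
        return False
    y = (x - c0) // 2               # peel off the first residue level
    return in_C1((y % 2).tolist())  # level 1: the next residue must lie in C1
```

A local tester would probe only a few coordinates of x instead of reading it entirely; this full-read check just makes the membership condition itself concrete.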
MDS array codes for correcting a single criss-cross error
We present a family of maximum-distance separable (MDS) array codes of size (p-1)×(p-1), p a prime number, and minimum criss-cross distance 3, i.e., the code is capable of correcting any row or column in error, without a priori knowledge of what type of error occurred. The complexity of the encoding and decoding algorithms is lower than that of known codes with the same error-correcting power, since our algorithms are based on exclusive-OR operations over lines of different slopes, as opposed to algebraic operations over a finite field. We also provide efficient encoding and decoding algorithms for errors and erasures.
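The idea of decoding with XOR parities over lines of different slopes can be sketched on a simpler problem: locating a single flipped bit in a p×p binary array from row (slope-0) and wraparound-diagonal (slope-1) parities. This is a hypothetical simplification for intuition only, not the paper's criss-cross decoder:

```python
import numpy as np

p = 5
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(p, p))  # stored array with known parities

def parities(M):
    rows = M.sum(axis=1) % 2                   # slope-0 lines
    diags = np.zeros(p, dtype=int)             # slope-1 (wraparound) lines
    for i in range(p):
        for j in range(p):
            diags[(i + j) % p] ^= int(M[i, j])
    return rows, diags

row0, diag0 = parities(A)

B = A.copy()
B[2, 4] ^= 1                                   # inject a single-bit error
row1, diag1 = parities(B)

r = int(np.flatnonzero(row0 != row1)[0])       # row syndrome -> row index
d = int(np.flatnonzero(diag0 != diag1)[0])     # diagonal syndrome -> diagonal
c = (d - r) % p                                # the two lines intersect at (r, c)
```

Because the two families of lines intersect in exactly one cell, the pair of syndromes pinpoints the error using only XOR operations, with no finite-field arithmetic.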
Update-Efficient Regenerating Codes with Minimum Per-Node Storage
Regenerating codes provide an efficient way to recover data at failed nodes
in distributed storage systems. It has been shown that regenerating codes can
be designed to minimize the per-node storage (called MSR) or minimize the
communication overhead for regeneration (called MBR). In this work, we propose
a new encoding scheme for [n,d] error-correcting MSR codes that generalizes
our earlier work on error-correcting regenerating codes. We show that by
choosing a suitable diagonal matrix, any generator matrix of the [n,α]
Reed-Solomon (RS) code can be integrated into the encoding matrix. Hence, MSR
codes with the least update complexity can be found. An efficient decoding
scheme is also proposed that utilizes the [n,α] RS code to perform data
reconstruction. The proposed decoding scheme has better error correction
capability and incurs the least number of node accesses when errors are
present.
Comment: Submitted to IEEE ISIT 201
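The MDS property of the RS generator matrix that such a scheme builds on can be checked directly on a small example: every k columns of a k×n Vandermonde matrix with distinct evaluation points form an invertible matrix. A sketch over the prime field GF(7), with parameters chosen purely for illustration:

```python
from itertools import combinations

p, k, n = 7, 3, 5
points = list(range(1, n + 1))  # distinct evaluation points mod 7
# k x n Vandermonde generator matrix of an RS-style code over GF(7)
G = [[pow(x, r, p) for x in points] for r in range(k)]

def det3_mod_p(M):
    # Cofactor expansion of a 3x3 determinant, reduced mod p
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

# MDS check: every choice of k columns must be invertible (nonzero determinant)
mds = all(
    det3_mod_p([[G[r][j] for j in cols] for r in range(k)]) != 0
    for cols in combinations(range(n), k)
)
```

Invertibility of every k-column submatrix is what lets a decoder reconstruct the data from any k intact symbols, the property the abstract's data-reconstruction scheme relies on.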
Concatenated Quantum Codes Constructible in Polynomial Time: Efficient Decoding and Error Correction
A method for concatenating quantum error-correcting codes is presented. The
method is applicable to a wide class of quantum error-correcting codes known as
Calderbank-Shor-Steane (CSS) codes. As a result, codes that achieve a high rate
in the Shannon theoretic sense and that are decodable in polynomial time are
presented. The rate is the highest among those known to be achievable by CSS
codes. Moreover, the best known lower bound on the greatest minimum distance of
codes constructible in polynomial time is improved for a wide range.
Comment: 16 pages, 3 figures. Ver.4: Title changed. Ver.3: Due to a request of
the AE of the journal, the present version has become a combination of
(thoroughly revised) quant-ph/0610194 and the former quant-ph/0610195.
Problem formulations of polynomial complexity are strictly followed. An
erroneous instance of a lower bound on minimum distance was removed.
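The CSS construction mentioned here starts from a classical code that contains its dual. A minimal sketch verifying this condition for the [7,4] Hamming code, which yields the well-known Steane [[7,1,3]] code (this is standard background, not the concatenation scheme of the paper):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary)
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# The dual code is contained in the code iff H @ H.T = 0 (mod 2):
# every row of H (a dual codeword) must itself satisfy all parity checks.
dual_containing = not np.any((H @ H.T) % 2)

# The CSS construction with C1 = C2 = Hamming encodes n - 2*rank(H) qubits
n_phys, r = H.shape[1], H.shape[0]
logical_qubits = n_phys - 2 * r  # -> the Steane [[7,1,3]] code
```

The same check, applied to each level of a concatenation, is what guarantees that the composed code is again a CSS code.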
Improved Decoding of Staircase Codes: The Soft-aided Bit-marking (SABM) Algorithm
Staircase codes (SCCs) are typically decoded using iterative bounded-distance
decoding (BDD) and hard decisions. In this paper, a novel decoding algorithm is
proposed, which partially uses soft information from the channel. The proposed
algorithm is based on marking a certain number of highly reliable and highly
unreliable bits. These marked bits are used to improve the
miscorrection-detection capability of the SCC decoder and the error-correcting
capability of BDD. For SCCs with -error-correcting
Bose-Chaudhuri-Hocquenghem component codes, our algorithm improves upon
standard SCC decoding by up to ~dB at a bit-error rate (BER) of
. The proposed algorithm is shown to achieve almost half of the gain
achievable by an idealized decoder with this structure. A complexity analysis
based on the number of additional calls to the component BDD decoder shows that
the relative complexity increase is only around at a BER of .
This additional complexity is shown to decrease as the channel quality
improves. Our algorithm is also extended (with minor modifications) to product
codes. The simulation results show that in this case, the algorithm offers
gains of up to ~dB at a BER of .
Comment: 10 pages, 12 figures
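The bit-marking step can be pictured as simple thresholding of the channel's soft information: bits with large log-likelihood-ratio (LLR) magnitude are marked highly reliable, and bits with small magnitude highly unreliable. A hypothetical sketch; the thresholds, channel model, and variable names are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=20)
# BPSK over an AWGN channel: the LLR magnitude acts as a per-bit reliability
llr = (1 - 2 * bits) * 4.0 + rng.normal(0.0, 1.5, size=20)

T_HI, T_LO = 5.0, 1.0              # illustrative thresholds, not from the paper
reliable = np.abs(llr) >= T_HI     # trusted when screening for miscorrections
unreliable = np.abs(llr) <= T_LO   # candidates to flip before re-running BDD
hard = (llr < 0).astype(int)       # hard decisions handed to the BDD decoder
```

Only the marked positions require soft information, which is why the extra complexity over plain hard-decision staircase decoding stays small and shrinks as the channel improves.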