New Set of Codes for the Maximum-Likelihood Decoding Problem
The maximum-likelihood decoding problem is known to be NP-hard for general
linear and Reed-Solomon codes. In this paper, we introduce the notion of
A-covered codes, that is, codes that can be decoded through a polynomial time
algorithm A whose decoding bound is beyond the covering radius. For these
codes, we show that the maximum-likelihood decoding problem is solvable in
polynomial time in the code parameters. Focusing on binary BCH codes, we were
able to find several examples of A-covered codes, including two codes for which
the maximum-likelihood decoding problem can be solved in quasi-quadratic time.
Comment: in Yet Another Conference on Cryptography, Porquerolles, France (2010)
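The maximum-likelihood decoding problem discussed throughout these abstracts can be stated concretely: given a received word, return the codeword nearest to it in Hamming distance. A minimal brute-force sketch follows; the [7,4] Hamming code used as input is an illustrative assumption (it is not from the paper), and the enumeration is exponential in the dimension k, which is exactly why polynomial-time decoders such as the algorithm A above matter:

```python
from itertools import product

def ml_decode(G, r):
    """Brute-force maximum-likelihood decoding over the BSC: return the
    codeword of the [n, k] binary code generated by G that is nearest
    to the received word r in Hamming distance, plus that distance."""
    k, n = len(G), len(G[0])
    best, best_dist = None, n + 1
    for msg in product([0, 1], repeat=k):
        # encode: codeword = msg * G over GF(2)
        cw = [sum(m * g for m, g in zip(msg, col)) % 2
              for col in zip(*G)]
        dist = sum(c != x for c, x in zip(cw, r))
        if dist < best_dist:
            best, best_dist = cw, dist
    return best, best_dist

# [7,4] Hamming code generator matrix (one standard systematic form)
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
r = [1,0,0,0,0,1,0]          # codeword 1000011 with its last bit flipped
print(ml_decode(G, r))       # → ([1, 0, 0, 0, 0, 1, 1], 1)
```

Since the minimum distance of this code is 3, the single flipped bit is corrected and the nearest codeword is unique.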
Adaptive Cut Generation Algorithm for Improved Linear Programming Decoding of Binary Linear Codes
Linear programming (LP) decoding approximates maximum-likelihood (ML)
decoding of a linear block code by relaxing the equivalent ML integer
programming (IP) problem into a more easily solved LP problem. The LP problem
is defined by a set of box constraints together with a set of linear
inequalities called "parity inequalities" that are derived from the constraints
represented by the rows of a parity-check matrix of the code and can be added
iteratively and adaptively. In this paper, we first derive a new necessary
condition and a new sufficient condition for a violated parity inequality
constraint, or "cut," at a point in the unit hypercube. Then, we propose a new
and effective algorithm to generate parity inequalities derived from certain
additional redundant parity check (RPC) constraints that can eliminate
pseudocodewords produced by the LP decoder, often significantly improving the
decoder error-rate performance. The cut-generating algorithm is based upon a
specific transformation of an initial parity-check matrix of the linear block
code. We also design two variations of the proposed decoder to make it more
efficient when it is combined with the new cut-generating algorithm. Simulation
results for several low-density parity-check (LDPC) codes demonstrate that the
proposed decoding algorithms significantly narrow the performance gap between
LP decoding and ML decoding.
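The adaptive loop described above can be sketched as follows. This is a generic cut-search in the Taghavi–Siegel style (candidate set V taken as the coordinates above 1/2, parity-adjusted by the coordinate nearest 1/2), not this paper's RPC-based algorithm or its parity-check-matrix transformation; the [7,4] Hamming code and the BSC cost vector in the demo are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, cost, max_iters=100):
    """Adaptive LP decoding sketch: solve the box-constrained LP, search
    each check for a violated parity inequality ("cut"), add it, and
    re-solve until no cut is violated."""
    H = np.asarray(H)
    m, n = H.shape
    A_ub, b_ub = [], []
    x = None
    for _ in range(max_iters):
        res = linprog(cost,
                      A_ub=np.array(A_ub) if A_ub else None,
                      b_ub=np.array(b_ub) if b_ub else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        found_cut = False
        for j in range(m):
            N = list(np.flatnonzero(H[j]))
            # most-violated candidate: coordinates above 1/2,
            # parity-adjusted by the coordinate closest to 1/2
            V = {i for i in N if x[i] > 0.5}
            if len(V) % 2 == 0:
                V ^= {min(N, key=lambda i: abs(x[i] - 0.5))}
            # parity inequality: sum_{i in V} x_i - sum_{i in N\V} x_i <= |V| - 1
            a = np.zeros(n)
            for i in N:
                a[i] = 1.0 if i in V else -1.0
            if a @ x > len(V) - 1 + 1e-7:   # violated at the current x
                A_ub.append(a)
                b_ub.append(len(V) - 1)
                found_cut = True
        if not found_cut:
            break          # x satisfies every parity inequality
    return x

H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]                 # [7,4] Hamming code (illustrative)
r = [1,0,0,0,0,1,0]                   # received word, one bit flipped
cost = [1.0 if ri == 0 else -1.0 for ri in r]   # BSC objective coefficients
x = lp_decode(H, cost)
print(round(float(np.dot(cost, x)), 6))         # LP optimum, -1.0 here
```

Note that the returned point need not be integral: for this input the LP optimum is attained both at the transmitted codeword and at fractional pseudocodewords of equal cost, which is precisely the failure mode that minimum pseudo-weight quantifies.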
Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard
Maximum-likelihood decoding is one of the central algorithmic problems in
coding theory. It has been known for over 25 years that maximum-likelihood
decoding of general linear codes is NP-hard. Nevertheless, it was so far
unknown whether maximum-likelihood decoding remains hard for any specific
family of codes with nontrivial algebraic structure. In this paper, we prove
that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon
codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes
remains hard even with unlimited preprocessing, thereby strengthening a result
of Bruck and Naor.
Comment: 16 pages, no figures
On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: A New Perspective
The problem of exact maximum-likelihood (ML) decoding of general linear codes is well known to be NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good low density parity check codes, namely expander codes, over binary symmetric channels (BSCs) is possible with average-case polynomial complexity. This offers a new way of looking at the complexity of exact ML decoding for communication systems in which the randomness of the channel plays a fundamental role. More precisely, for any bit-flipping probability p in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes which achieve error probability exponentially decaying in the coding length n while admitting exact ML decoding in average-case polynomial time. As p approaches zero, this rate region approaches the Shannon channel capacity region. Similar results extend to AWGN channels, suggesting it may be feasible to eliminate the error floor phenomenon associated with belief-propagation decoding of LDPC codes in the high-SNR regime. The derivations are based on a hierarchy of ML certificate decoding algorithms adaptive to the channel realization. In this process, we propose a new, efficient O(n^2) ML certificate algorithm based on the max-flow algorithm. Moreover, exact ML decoding of the considered class of codes constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard, giving an interesting contrast between the worst-case and average-case complexities.
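The paper's ML-certificate hierarchy and max-flow construction are not reproduced here; as background on the code family, the classic Sipser–Spielman bit-flipping decoder for expander codes can be sketched as follows. The small [7,4] Hamming parity-check matrix in the demo is only a toy stand-in (Hamming codes are not good expanders, so the decoder's usual guarantees do not apply to it):

```python
def flip_decode(H, y, max_rounds=100):
    """Bit-flipping decoding in the Sipser-Spielman style: while some
    bit participates in strictly more unsatisfied than satisfied
    checks, flip it; stop when the syndrome is zero (success) or no
    bit qualifies (failure, return None)."""
    y = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_rounds):
        syndrome = [sum(H[j][i] * y[i] for i in range(n)) % 2
                    for j in range(m)]
        if not any(syndrome):
            return y                      # all checks satisfied
        flipped = False
        for i in range(n):
            checks = [j for j in range(m) if H[j][i]]
            unsat = sum(syndrome[j] for j in checks)
            if 2 * unsat > len(checks):   # strict majority unsatisfied
                y[i] ^= 1
                flipped = True
                break                     # recompute syndrome after each flip
        if not flipped:
            return None                   # stuck: no bit left to flip
    return None

H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]                    # [7,4] Hamming code (toy stand-in)
print(flip_decode(H, [1,0,0,0,0,1,0]))   # → [1, 0, 0, 0, 0, 1, 1]
```

On a sufficiently good expander, each flip strictly reduces the number of unsatisfied checks, which is what yields the linear-time worst-case guarantee; the ML-certificate machinery in the paper is a separate, stronger construction.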
Efficient Maximum-Likelihood Decoding of Linear Block Codes on Binary Memoryless Channels
In this work, we consider efficient maximum-likelihood decoding of linear
block codes for small-to-moderate block lengths. The presented approach is a
branch-and-bound algorithm using the cutting-plane approach of Zhang and Siegel
(IEEE Trans. Inf. Theory, 2012) for obtaining lower bounds. We have compared
our proposed algorithm to the state-of-the-art commercial integer program
solver CPLEX, and for all considered codes our approach is faster for both low
and high signal-to-noise ratios. For instance, for the benchmark (155,64)
Tanner code our algorithm is more than 11 times as fast as CPLEX for an SNR of
1.0 dB on the additive white Gaussian noise channel. By a small modification,
our algorithm can be used to calculate the minimum distance, which we have
again verified to be much faster than using the CPLEX solver.
Comment: Submitted to 2014 International Symposium on Information Theory. 5 pages. Accepted
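The branch-and-bound scheme can be illustrated with a deliberately simplified sketch: branch on bit values and prune against the incumbent using only a trivial lower bound (each undecided bit takes its cheaper value). The paper instead uses the much stronger Zhang–Siegel cutting-plane LP bounds, which is what makes it competitive; the [7,4] Hamming code below is an illustrative assumption:

```python
def bnb_ml_decode(H, c):
    """Branch-and-bound ML decoding sketch.  Branch on bit values in
    order; lower-bound a partial assignment by the cost committed so
    far plus the cheapest value of every undecided bit; prune branches
    that cannot beat the incumbent.  Codeword feasibility (H x = 0
    over GF(2)) is checked at the leaves."""
    m, n = len(H), len(H[0])
    best = {"x": None, "cost": float("inf")}

    def rec(x, depth, cost_so_far):
        # optimistic completion: every free bit takes its cheaper value
        bound = cost_so_far + sum(min(0.0, ci) for ci in c[depth:])
        if bound >= best["cost"]:
            return                          # prune: cannot improve
        if depth == n:
            if all(sum(H[j][i] * x[i] for i in range(n)) % 2 == 0
                   for j in range(m)):      # x is a codeword
                best["x"], best["cost"] = x[:], cost_so_far
            return
        for b in (0, 1):
            x.append(b)
            rec(x, depth + 1, cost_so_far + c[depth] * b)
            x.pop()

    rec([], 0, 0.0)
    return best["x"], best["cost"]

H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]              # [7,4] Hamming code (illustrative)
r = [1,0,0,0,0,1,0]                # received word, one bit flipped
c = [1.0 if ri == 0 else -1.0 for ri in r]   # BSC objective coefficients
print(bnb_ml_decode(H, c))         # → ([1, 0, 0, 0, 0, 1, 1], -1.0)
```

With this trivial bound the search is still exponential in the worst case; the quality of the lower bound is the whole game in practical branch-and-bound decoding.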
Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes
Just as the Hamming weight spectrum of a linear block code sheds light on the
performance of a maximum likelihood decoder, the pseudo-weight spectrum
provides insight into the performance of a linear programming decoder. Using
properties of polyhedral cones, we find the pseudo-weight spectrum of some
short codes. We also present two general lower bounds on the minimum
pseudo-weight. The first bound is based on the column weight of the
parity-check matrix. The second bound is computed by solving an optimization
problem. In some cases, this bound is more tractable to compute than previously
known bounds and thus can be applied to longer codes.
Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
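For context on the quantity being bounded: the abstract does not spell out the definition, but the standard AWGN-channel pseudo-weight of a nonnegative vector is a minimal computation, sketched below (the example vectors are illustrative, not from the paper):

```python
def awgn_pseudoweight(w):
    """AWGN-channel pseudo-weight of a nonnegative vector w:
        w_p(w) = (sum_i w_i)^2 / (sum_i w_i^2).
    For a 0/1 codeword this is exactly the Hamming weight, so the
    minimum pseudo-weight plays the role for LP decoding that the
    minimum distance plays for ML decoding."""
    s = sum(w)
    return s * s / sum(wi * wi for wi in w)

print(awgn_pseudoweight([1, 1, 1]))              # weight-3 codeword → 3.0
print(awgn_pseudoweight([2, 1, 1, 1, 1, 1, 1]))  # non-binary vector → 6.4
```

A fractional pseudocodeword can thus have a pseudo-weight well below the code's minimum distance, which is why lower bounds on the minimum pseudo-weight, like those in this paper, directly bound LP-decoder performance.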