Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard
Maximum-likelihood decoding is one of the central algorithmic problems in
coding theory. It has been known for over 25 years that maximum-likelihood
decoding of general linear codes is NP-hard. Nevertheless, it was so far
unknown whether maximum-likelihood decoding remains hard for any specific
family of codes with nontrivial algebraic structure. In this paper, we prove
that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon
codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes
remains hard even with unlimited preprocessing, thereby strengthening a result
of Bruck and Naor.
Comment: 16 pages, no figures
Statistical Pruning for Near-Maximum Likelihood Decoding
In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point x ∈ C^N. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, showing that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than N^3 computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed, showing that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
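As a point of reference for the abstract above, ML decoding as a closest-lattice-point search can be sketched by exhaustive enumeration (a minimal illustrative sketch, not the paper's algorithm; the sphere decoder and the statistical pruning scheme exist precisely to avoid this exponential search):

```python
import itertools
import math

def ml_lattice_decode(H, x, alphabet):
    """Brute-force ML decoding: among all symbol vectors s over the given
    alphabet, return the one whose lattice point H s is closest to x.

    H        : rows of an N x N real generator matrix (the "skewed" basis)
    x        : received point in R^N
    alphabet : finite symbol set, e.g. a small PAM constellation

    Exhaustive search visits |alphabet|^N candidates, i.e. it is
    exponential in N, which is what sphere decoding and pruning avoid.
    """
    n = len(H[0])
    best_s, best_d = None, math.inf
    for s in itertools.product(alphabet, repeat=n):
        # Lattice point y = H s, then Euclidean distance to x.
        y = [sum(H[i][j] * s[j] for j in range(n)) for i in range(len(H))]
        d = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
        if d < best_d:
            best_s, best_d = s, d
    return best_s, best_d
```

For example, with H the 2x2 identity and x = [0.2, 0.9] over a binary alphabet, the closest lattice point is (0, 1).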
New Set of Codes for the Maximum-Likelihood Decoding Problem
The maximum-likelihood decoding problem is known to be NP-hard for general
linear and Reed-Solomon codes. In this paper, we introduce the notion of
A-covered codes, that is, codes that can be decoded through a polynomial time
algorithm A whose decoding bound is beyond the covering radius. For these
codes, we show that the maximum-likelihood decoding problem can be solved in
polynomial time in the code parameters. Focusing on binary BCH codes, we were
able to find several examples of A-covered codes, including two codes for which
the maximum-likelihood decoding problem can be solved in quasi-quadratic time.
Comment: in Yet Another Conference on Cryptography, Porquerolle, France (2010)
Matched Metrics and Channels
The most common decision criteria for decoding are maximum likelihood
decoding and nearest neighbor decoding. It is well-known that maximum
likelihood decoding coincides with nearest neighbor decoding with respect to
the Hamming metric on the binary symmetric channel. In this work we study
channels and metrics for which those two criteria do and do not coincide for
general codes.
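The coincidence mentioned above is easy to see concretely for a binary code: on a binary symmetric channel with crossover probability p < 1/2, the likelihood of a received word given a codeword at Hamming distance d is p^d (1-p)^(n-d), which is strictly decreasing in d, so maximizing likelihood is the same as minimizing Hamming distance. A minimal sketch (the repetition-code example is illustrative):

```python
def hamming_distance(a, b):
    """Number of positions in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbor_decode(received, codebook):
    """Nearest-neighbor decoding under the Hamming metric.

    On a binary symmetric channel with crossover probability p < 1/2,
    the likelihood p**d * (1-p)**(n-d) of `received` given a codeword
    at Hamming distance d is strictly decreasing in d, so this rule
    coincides with maximum-likelihood decoding on that channel.
    """
    return min(codebook, key=lambda c: hamming_distance(received, c))
```

For the length-3 repetition code {000, 111}, receiving (1, 0, 1) decodes to (1, 1, 1), matching the majority-vote ML decision.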
Enhanced Recursive Reed-Muller Erasure Decoding
Recent work has shown that Reed-Muller (RM) codes achieve the erasure
channel capacity. However, this performance is obtained with maximum-likelihood
decoding which can be costly for practical applications. In this paper, we
propose an encoding/decoding scheme for Reed-Muller codes on the packet erasure
channel based on Plotkin construction. We present several improvements over the
generic decoding. At a modest cost, these improvements approach
maximum-likelihood decoding performance, especially on high-rate codes, while
significantly outperforming it in terms of speed.
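The Plotkin construction underlying this recursion can be sketched as follows (a minimal bit-level GF(2) version; the paper's packet erasure decoder applies the same identity symbol-wise to packets):

```python
def plotkin(u, v):
    """(u | u + v) construction over GF(2).

    Reed-Muller codes satisfy RM(r, m) = {(u, u + v) : u in RM(r, m-1),
    v in RM(r-1, m-1)}, which is what recursive erasure decoders exploit:
    since v = u + (u + v) and u = (u + v) + v over GF(2), erasures in one
    half of a codeword can often be filled in from the other half.
    """
    assert len(u) == len(v)
    return list(u) + [a ^ b for a, b in zip(u, v)]
```

For example, plotkin([1, 0], [1, 1]) yields the length-4 word [1, 0, 0, 1].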
On Low Complexity Maximum Likelihood Decoding of Convolutional Codes
This paper considers the average complexity of maximum likelihood (ML)
decoding of convolutional codes. ML decoding can be modeled as finding the most
probable path taken through a Markov graph. Integrated with the Viterbi
algorithm (VA), complexity reduction methods such as the sphere decoder often
use the sum log likelihood (SLL) of a Markov path as a bound to disprove the
optimality of other Markov path sets and to consequently avoid exhaustive path
search. In this paper, it is shown that SLL-based optimality tests are
inefficient if one fixes the coding memory and takes the codeword length to
infinity. Alternatively, optimality of a source symbol at a given time index
can be tested using bounds derived from log likelihoods of the neighboring
symbols. It is demonstrated that such neighboring log likelihood (NLL)-based
optimality tests, whose efficiency does not depend on the codeword length, can
bring significant complexity reduction to ML decoding of convolutional codes.
The results are generalized to ML sequence detection in a class of
discrete-time hidden Markov systems.
Comment: Submitted to IEEE Transactions on Information Theory
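The "most probable path through a Markov graph" formulation above is the standard Viterbi recursion over sum log likelihoods; a minimal sketch in generic HMM notation (illustrative only, not the paper's convolutional-code specifics):

```python
def viterbi(obs, states, log_init, log_trans, log_emit):
    """Most probable state path through a Markov graph, i.e. the path
    maximizing the sum log likelihood (SLL) over all paths.

    log_init[s]     : log probability of starting in state s
    log_trans[s][t] : log probability of transition s -> t
    log_emit[s][o]  : log probability of observing o in state s
    """
    # Forward pass: best score for each state at each time, with
    # back-pointers to the best predecessor.
    V = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev = V[-1]
        col, ptr = {}, {}
        for t in states:
            best_s = max(states, key=lambda s: prev[s] + log_trans[s][t])
            col[t] = prev[best_s] + log_trans[best_s][t] + log_emit[t][o]
            ptr[t] = best_s
        V.append(col)
        back.append(ptr)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

On a sticky two-state chain that emits its own label with high probability, the observation sequence 0, 0, 1, 1 decodes to the state path A, A, B, B, as expected.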