    The hardness of decoding linear codes with preprocessing

    The problem of maximum-likelihood decoding of linear block codes is known to be hard. It is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm. The hardness rests on the fact that the existence of a polynomial-time decoding algorithm would imply a collapse of the polynomial hierarchy. Thus, some linear block codes probably do not have an efficient decoder. The proof is based on results in complexity theory that relate uniform and nonuniform complexity classes.
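
    For context, the decision problem behind this and the following hardness results is the standard syndrome-decoding formulation of maximum-likelihood decoding (a textbook restatement, not part of the abstract); in the preprocessing variant studied here, $H$ is fixed and may be analyzed for arbitrarily long, and only $(s, w)$ arrives as input:

        \textsc{Maximum-Likelihood Decoding} (decision version)
        Instance: a parity-check matrix $H \in \mathbb{F}_2^{(n-k) \times n}$,
                  a syndrome $s \in \mathbb{F}_2^{n-k}$, and an integer $w \ge 0$.
        Question: is there an error vector $e \in \mathbb{F}_2^n$ with
                  $H e^{\top} = s$ and Hamming weight $\mathrm{wt}(e) \le w$?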

    Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard

    Full text link
    Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.
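
    As background (a standard definition, not restated in the abstract), the Reed-Solomon code of dimension $k$ over $\mathbb{F}_q$ with distinct evaluation points $x_1, \dots, x_n \in \mathbb{F}_q$ is

        $\mathrm{RS}_{q,k}(x_1, \dots, x_n) = \{\, (f(x_1), \dots, f(x_n)) : f \in \mathbb{F}_q[X],\ \deg f < k \,\}$,

    and maximum-likelihood decoding asks, given a received word $y \in \mathbb{F}_q^n$, for a codeword nearest to $y$; this is precisely the algebraically structured family for which the paper establishes NP-hardness.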

    On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: A New Perspective

    Get PDF
    The problem of exact maximum-likelihood (ML) decoding of general linear codes is well-known to be NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good low density parity check codes (expander codes) over binary symmetric channels (BSCs) is possible with average-case polynomial complexity. This offers a new way of looking at the complexity of exact ML decoding for communication systems where the randomness in the channel plays a central role. More precisely, for any bit-flipping probability p in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes which achieve error probability exponentially decaying in the code length n while admitting exact ML decoding in average-case polynomial time. As p approaches zero, this rate region approaches the Shannon channel capacity region. Similar results can be extended to AWGN channels, suggesting it may be feasible to eliminate the error floor phenomenon associated with belief-propagation decoding of LDPC codes in the high-SNR regime. The derivations are based on a hierarchy of ML certificate decoding algorithms adaptive to the channel realization. Along the way, we propose a new, efficient O(n^2) ML certificate algorithm based on the max-flow algorithm. Moreover, exact ML decoding of the considered class of codes constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard, giving an interesting contrast between the worst-case and average-case complexities.
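
    The abstract does not reproduce the max-flow certificate construction; purely as background, the following is a minimal numpy sketch of the classic sequential bit-flipping decoder for expander codes (the Sipser-Spielman style decoder whose output an ML certificate algorithm would then try to certify), assuming a dense 0/1 parity-check matrix:

        import numpy as np

        def flip_decode(H, y):
            """Sequential bit-flipping decoding of an expander code.

            H: (m, n) binary parity-check matrix as a 0/1 numpy array.
            y: length-n received word as a 0/1 numpy array.
            Returns a codeword on success, or None if decoding stalls.
            """
            x = y.copy()
            degree = H.sum(axis=0)            # number of checks touching each bit
            for _ in range(H.shape[0] + 1):   # each flip lowers the unsatisfied count
                syndrome = H @ x % 2          # 1 marks an unsatisfied check
                if not syndrome.any():
                    return x                  # all checks satisfied: x is a codeword
                unsat = H.T @ syndrome        # unsatisfied checks per bit
                worst = int(np.argmax(unsat))
                if 2 * unsat[worst] <= degree[worst]:
                    return None               # no bit fails a strict majority of its checks
                x[worst] ^= 1                 # flip the most suspicious bit
            return None

    With sufficient expansion, every flip strictly decreases the number of unsatisfied checks, which bounds the loop by the number of check nodes.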

    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    Get PDF
    The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's embedding technique is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a bounded distance decoding (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra-Lenstra-Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. (To appear in IEEE Transactions on Information Theory.)
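
    A minimal sketch of the embedding technique itself (assuming integer inputs and the fpylll package for LLL reduction; this illustrates the construction analyzed in the paper, not the authors' implementation):

        import numpy as np
        from fpylll import IntegerMatrix, LLL  # assumed available: pip install fpylll

        def embedding_decode(B, t, beta=1):
            """Kannan's embedding for bounded distance decoding.

            B: (n, n) integer lattice basis, rows are basis vectors.
            t: length-n integer target vector.
            beta: embedding parameter (beta = 1 is a common choice).
            Returns a lattice vector close to t, or None.
            """
            n = B.shape[0]
            # Embedded (n+1)-dimensional basis:
            #     [ B  0    ]
            #     [ t  beta ]
            A = np.zeros((n + 1, n + 1), dtype=int)
            A[:n, :n] = B
            A[n, :n] = t
            A[n, n] = beta
            M = IntegerMatrix.from_matrix(A.tolist())
            LLL.reduction(M)
            # Within the correct decoding radius, some reduced basis vector
            # equals +-(t - v, beta) with v the closest lattice point.
            for i in range(n + 1):
                row = np.array([M[i, j] for j in range(n + 1)])
                if abs(row[n]) == beta:
                    e = row[:n] * int(np.sign(row[n]))
                    return t - e
            return None

    Per the paper's analysis, with LLL this recovers the closest point whenever the noise norm is below $\lambda_1/(2\gamma)$ with $\gamma \approx O(2^{n/4})$.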

    Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity

    Full text link
    This work identifies the first lattice decoding solution that achieves, in the general outage-limited MIMO setting and in the high-rate and high-SNR limit, both a vanishing gap to the error performance of the (DMT-optimal) exact solution of preprocessed lattice decoding and a computational complexity that is subexponential in the number of codeword bits. The proposed solution employs lattice-reduction (LR)-aided regularized (lattice) sphere decoding together with proper timeout policies. These performance and complexity guarantees hold for most MIMO scenarios, all reasonable fading statistics, all channel dimensions, and all full-rate lattice codes. In sharp contrast to this manageable complexity, the complexity of other standard preprocessed lattice decoding solutions is shown here to be extremely high. Specifically, the work is the first to quantify the complexity of these lattice (sphere) decoding solutions and to prove the surprising result that the complexity required to achieve a certain rate-reliability performance is exponential in the lattice dimensionality and in the number of codeword bits, and in fact matches, in common scenarios, the complexity of ML-based solutions. Through this sharp contrast, the work rigorously quantifies, for the first time, the pivotal role of lattice reduction as a special complexity-reducing ingredient. Finally, the work analytically refines transceiver DMT analysis, which generally fails to address potentially massive gaps between theory and practice. The adopted vanishing-gap condition instead guarantees that, given a sufficiently high SNR, the decoder's error curve is arbitrarily close to the optimal error curve of exact solutions; this is a much stronger condition than DMT optimality, which only guarantees an error gap that is subpolynomial in SNR and can thus be unbounded and generally unacceptable in practical settings. (Submitted to IEEE Transactions on Information Theory.)
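
    For concreteness, a minimal numpy sketch of depth-first sphere decoding with a node-count budget standing in for the paper's timeout policies (the basis would be LR-reduced and regularized beforehand; the budget value is an assumption):

        import numpy as np

        def sphere_decode(B, y, radius, node_budget=100_000):
            """Depth-first sphere decoder over the lattice {B @ z : z integer}.

            B: (n, n) real basis, columns are basis vectors (assumed LR-reduced).
            y: length-n target vector.
            radius: initial search radius.
            node_budget: crude timeout policy; abort past this many visited nodes.
            Returns (best coefficient vector or None, True if the budget held).
            """
            n = B.shape[1]
            Q, R = np.linalg.qr(B)
            y_t = Q.T @ y
            z = np.zeros(n, dtype=int)
            best = None
            best_dist = radius ** 2
            nodes = 0

            def search(level, dist):
                nonlocal best, best_dist, nodes
                nodes += 1
                if nodes > node_budget:     # timeout: stop expanding the tree
                    return
                if level < 0:               # full lattice point inside the sphere
                    best, best_dist = z.copy(), dist
                    return
                r = R[level, level]
                center = (y_t[level] - R[level, level + 1:] @ z[level + 1:]) / r
                half = np.sqrt(best_dist - dist) / abs(r)
                for cand in range(int(np.ceil(center - half)),
                                  int(np.floor(center + half)) + 1):
                    d = dist + (r * (cand - center)) ** 2
                    if d < best_dist:       # prune branches outside current sphere
                        z[level] = cand
                        search(level - 1, d)

            search(n - 1, 0.0)
            return best, nodes <= node_budget

    In the paper's setting, it is such a budget, chosen together with the lattice-reduction preprocessing, that keeps the overall complexity subexponential while preserving the vanishing-gap guarantee.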

    On the Closest Vector Problem with a Distance Guarantee

    Get PDF
    We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most $\alpha$ times the length of the shortest nonzero lattice vector, requires as preprocessing advice only $N \approx \widetilde{O}(n \exp(\alpha^2 n/(1-2\alpha)^2))$ vectors, and runs in time $\widetilde{O}(nN)$. As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of $O(n/\sqrt{\log n})$ for search CVPP, improving on the previous best of $O(n^{1.5})$ due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm, we obtain, somewhat surprisingly, that only $O(n)$ vectors of preprocessing advice are sufficient to solve CVPP with the (only slightly worse) approximation factor of $O(n)$. (An early version of this paper was titled "On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing"; Conference on Computational Complexity, 2014.)
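
    As a rough illustration of how preprocessing advice is consumed (a minimal sketch in the spirit of the iterative slicer that algorithms in this line build on; not the paper's exact procedure), the advice is a list of short lattice vectors and the target's residual is repeatedly shortened:

        import numpy as np

        def slicer_cvpp(t, advice, max_rounds=1000):
            """CVPP via an iterative slicer with preprocessed short vectors.

            t: length-n target vector.
            advice: iterable of short lattice vectors (the preprocessing advice).
            Returns a lattice vector near t (t minus the final residual).
            """
            r = np.asarray(t, dtype=float).copy()
            for _ in range(max_rounds):
                improved = False
                for v in advice:
                    if np.linalg.norm(r - v) < np.linalg.norm(r):
                        r = r - v            # subtracting a lattice vector keeps
                        improved = True      # t - r in the lattice
                if not improved:
                    break                    # r lies in the approximate Voronoi
            return t - r                     # cell defined by the advice

    The quality and number $N$ of advice vectors govern both the distance guarantee and the $\widetilde{O}(nN)$ running time quoted above.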