
    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used for decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions also apply to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature give polytope projection algorithms whose complexity is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
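    As a toy illustration of the objective being approximated, the following sketch implements brute-force minimum-entropy syndrome decoding for the block-compression setting: among all source words consistent with the syndrome, it returns one whose empirical type has the lowest entropy. The parity-check matrix, source word, and exhaustive search are illustrative placeholders; the paper's contribution is replacing this exponential search with polynomial-time linear programming machinery.

```python
import itertools
import numpy as np

def empirical_entropy(x):
    """Binary entropy (in bits) of the type (empirical distribution) of a 0/1 vector x."""
    p = x.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def min_entropy_syndrome_decode(H, s):
    """Brute-force minimum-entropy decoder: among all x with H @ x = s (mod 2),
    return one whose empirical type has minimum entropy (cost exponential in n)."""
    n = H.shape[1]
    best, best_h = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=np.uint8)
        if np.array_equal(H.dot(x) % 2, s):
            h = empirical_entropy(x)
            if h < best_h:
                best, best_h = x, h
    return best

# Toy usage: compress a low-entropy length-6 source word to a 3-bit syndrome, then decode.
H = np.array([[1, 0, 1, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]], dtype=np.uint8)
x_true = np.array([0, 0, 1, 0, 0, 0], dtype=np.uint8)
s = H.dot(x_true) % 2
print(min_entropy_syndrome_decode(H, s))   # recovers x_true on this toy instance
```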

    A Simple Message-Passing Algorithm for Compressed Sensing

    We consider the recovery of a nonnegative vector x from measurements y = Ax, where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when A corresponds to the adjacency matrix of a bipartite graph with sufficient expansion, a simple message-passing algorithm produces an estimate \hat{x} of x satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k)) computation in total, and the number of measurements required is m = O(k log(n/k)). In the special case when x is k-sparse, the algorithm recovers x exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step toward more formally developing the broader role of message-passing algorithms in solving compressed sensing problems.
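    The specific message-passing updates and their expansion-based analysis are the paper's core contribution; the sketch below only illustrates the measurement model (a sparse 0/1 matrix acting on a nonnegative sparse vector) and uses generic nonnegative least squares as a plainly labeled stand-in for recovery, not the paper's algorithm. The matrix and signal are toy placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Toy 0/1 measurement matrix (biadjacency of a small bipartite graph, 2 ones per column)
# and a nonnegative 2-sparse signal; y = A x is the measurement vector.
A = np.array([[1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]], dtype=float)
x = np.array([0.0, 3.0, 0.0, 0.0, 0.0, 1.0])
y = A @ x

# One min-style check-to-variable pass: for nonnegative x and a 0/1 matrix A, each entry
# below is an upper bound on the corresponding x_j (minimum over the measurements touching j).
upper = np.array([y[A[:, j] == 1].min() for j in range(A.shape[1])])
print(upper)

# Generic stand-in for recovery: nonnegative least squares (not the paper's message passing).
x_hat, residual = nnls(A, y)
print(np.round(x_hat, 6))   # recovers x exactly on this toy instance
```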

    On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: A New Perspective

    The problem of exact maximum-likelihood (ML) decoding of general linear codes is well known to be NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good low density parity check codes, namely expander codes, over binary symmetric channels (BSCs) is possible with average-case polynomial complexity. This offers a new way of looking at the complexity of exact ML decoding for communication systems in which the randomness of the channel plays a fundamental role. More precisely, for any bit-flipping probability p in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes which achieve error probability exponentially decaying in the coding length n while admitting exact ML decoding in average-case polynomial time. As p approaches zero, this rate region approaches the Shannon capacity of the channel. Similar results extend to AWGN channels, suggesting it may be feasible to eliminate the error floor phenomenon associated with belief-propagation decoding of LDPC codes in the high-SNR regime. The derivations are based on a hierarchy of ML certificate decoding algorithms that adapt to the channel realization. In this process, we propose a new and efficient O(n^2) ML certificate algorithm based on the max-flow algorithm. Moreover, exact ML decoding of the broader class of codes constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard, giving an interesting contrast between worst-case and average-case complexity.
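    The max-flow-based certificate construction is specific to the paper; as a hedged illustration of the general ML-certificate idea it builds on, the sketch below runs Feldman-style LP decoding of a small code over the BSC. Whenever the LP optimum is integral, that optimum is guaranteed to be the ML codeword, which is the kind of certificate an adaptive hierarchy of decoders can exploit. The (7,4) Hamming code, crossover probability, and solver choice are illustrative assumptions, not the codes or algorithm of the paper.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode_bsc(H, y, p=0.05):
    """Feldman-style LP decoding over the BSC. If the LP optimum is integral,
    it is guaranteed to be the maximum-likelihood codeword (ML certificate)."""
    n = H.shape[1]
    llr = np.log((1 - p) / p)
    # Bit costs: minimizing gamma . f over the codewords is exactly ML decoding on the BSC.
    gamma = np.where(y == 0, llr, -llr).astype(float)
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for r in range(1, len(nbrs) + 1, 2):              # odd-sized subsets S of each check
            for S in itertools.combinations(nbrs, r):
                a = np.zeros(n)
                a[list(S)] = 1.0
                a[[j for j in nbrs if j not in S]] = -1.0
                A_ub.append(a)                            # sum_S f - sum_{N(c)\S} f <= |S| - 1
                b_ub.append(len(S) - 1.0)
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    f = res.x
    certified = bool(np.all(np.isclose(f, np.round(f))))  # integral optimum => ML certificate
    return np.round(f).astype(int), certified

# Toy usage: (7,4) Hamming code, all-zero codeword sent, one bit flipped by the BSC.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] = 1
x_hat, certified = lp_decode_bsc(H, y)
print(x_hat, certified)   # expect the all-zero codeword with certified == True
```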

    Linear Programming Decoding of Spatially Coupled Codes

    For a given family of spatially coupled codes, we prove that the LP threshold on the BSC of the graph cover ensemble is the same as the LP threshold on the BSC of the derived spatially coupled ensemble. This result is in contrast with the fact that the BP threshold of the derived spatially coupled ensemble is believed to be larger than the BP threshold of the graph cover ensemble, as noted in the work of Kudekar et al. (2011, 2012). To prove this, we establish some properties of the dual witness for LP decoding, which was introduced by Feldman et al. (2007) and simplified by Daskalakis et al. (2008). More precisely, we prove that the existence of a dual witness, which was previously known to be sufficient for LP decoding success, is also necessary and is equivalent to the existence of certain acyclic hyperflows. We also derive a sublinear (in the block length) upper bound on the weight of any edge in such hyperflows, both for regular LDPC codes and for spatially coupled codes, and we prove that the bound is asymptotically tight for regular LDPC codes. Moreover, we show how to trade crossover probability for "LP excess" on all the variable nodes, for any binary linear code.
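    For readers unfamiliar with how a spatially coupled ensemble is derived from an uncoupled one, the sketch below builds a protograph-style coupling pattern by uniform edge spreading: each of L variable positions sends its dv edges to w consecutive check positions, and the extra boundary positions with reduced connectivity are what typically improve the BP threshold. The parameters and the uniform-spreading rule are common textbook choices, not necessarily the exact ensemble studied in the paper.

```python
import numpy as np

def coupled_edge_pattern(dv=3, L=10, w=3):
    """Edge-spreading pattern for a spatially coupled ensemble: entry [i, t] counts how
    many of a position-t variable node's dv edges go to check position i. Uniform
    spreading over w consecutive positions (one common construction)."""
    assert dv % w == 0, "this simple sketch spreads dv edges evenly over w positions"
    B = np.zeros((L + w - 1, L), dtype=int)      # w - 1 extra boundary check positions
    for t in range(L):
        for k in range(w):
            B[t + k, t] = dv // w                # edges from variable position t to check position t + k
    return B

B = coupled_edge_pattern(dv=3, L=6, w=3)
print(B)
print(B.sum(axis=1))   # per-check-position edge counts taper at the boundary (the coupling effect)
```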