
    Polytope of Correct (Linear Programming) Decoding and Low-Weight Pseudo-Codewords

    We analyze Linear Programming (LP) decoding of graphical binary codes operating over soft-output, symmetric, log-concave channels. We show that the error-surface, separating the domain of correct decoding from the domain of erroneous decoding, is a polytope. We formulate the problem of finding the lowest-weight pseudo-codeword as a non-convex optimization (maximization of a convex function) over a polytope, with the cost function defined by the channel and the polytope defined by the structure of the code. This formulation suggests new provably convergent heuristics for finding the lowest-weight pseudo-codewords that improve in quality upon those previously discussed. The algorithm's performance is tested on the example of the Tanner [155, 64, 20] code over the Additive White Gaussian Noise (AWGN) channel.
    Comment: 6 pages, 2 figures, accepted for IEEE ISIT 201
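    As a quick illustration of the LP decoding relaxation this abstract builds on, the sketch below minimizes a channel LLR cost over the fundamental polytope of a toy code. The parity-check matrix H and the llr vector are illustrative assumptions, not data from the paper; an integral optimum carries the usual ML-certificate property.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    # toy parity-check matrix and channel LLRs (illustrative assumptions)
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    llr = np.array([1.2, -0.4, 0.8, 2.0, -1.1, 0.3])

    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        # forbidden-set inequalities: for every odd-size subset S of a
        # check's neighborhood N, sum_{S} x_i - sum_{N\S} x_i <= |S| - 1
        for k in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, k):
                a = np.zeros(n)
                a[nbrs] = -1.0
                a[list(S)] = 1.0
                A_ub.append(a)
                b_ub.append(len(S) - 1)

    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    print("LP optimum:", res.x)  # integral optimum => ML codeword certificate
    ```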

    On Universal Properties of Capacity-Approaching LDPC Ensembles

    This paper is focused on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, graphical complexity and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds are general for any decoding algorithm, and some others are proved under belief propagation (BP) decoding. Proving these bounds under a certain decoding algorithm automatically validates them under any sub-optimal decoding algorithm. A proper modification of these bounds makes them universal for the set of all MBIOS channels that exhibit a given capacity. Bounds on the degree distributions and graphical complexity apply to finite-length LDPC codes and to the asymptotic case of infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding, and they are shown to be informative and easy to calculate. Finally, some interesting open problems are considered.
    Comment: Published in the IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 2956–2990, July 200
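    Since the abstract bounds the number of fundamental cycles in the bipartite (Tanner) graph, a minimal sketch may help fix that quantity: the cycle rank of any graph is |E| - |V| + c, where c is the number of connected components. The toy parity-check matrix below is an assumption for illustration only.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    # toy parity-check matrix (illustrative assumption)
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    m, n = H.shape                     # m check nodes, n variable nodes
    E = int(H.sum())                   # edges of the bipartite Tanner graph

    # adjacency matrix of the (m + n)-node bipartite graph
    adj = np.zeros((m + n, m + n), dtype=int)
    adj[:m, m:] = H
    adj[m:, :m] = H.T
    c, _ = connected_components(csr_matrix(adj), directed=False)

    # cycle rank: number of independent (fundamental) cycles
    print("fundamental cycles:", E - (m + n) + c)
    ```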

    Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression

    This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce new practical tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
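    A minimal sketch of the syndrome-former idea, assuming a [7,4] Hamming code and a toy source/side-information pair: the encoder transmits only the syndrome Hx, and the decoder combines it with the side information to recover the source. A practical system would replace the brute-force search below with iterative or LP decoding as discussed in the abstract.

    ```python
    import itertools
    import numpy as np

    # [7,4] Hamming parity-check matrix (illustrative choice)
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    x = np.array([1, 0, 1, 1, 0, 0, 1])  # source sequence (assumption)
    y = np.array([1, 0, 1, 0, 0, 0, 1])  # correlated side information

    s = H @ x % 2                        # encoder transmits only the syndrome

    # decoder: find the minimum-weight e with H e = s XOR H y,
    # then reconstruct x_hat = y XOR e
    target = (s + H @ y) % 2
    best = None
    for w in range(H.shape[1] + 1):
        for idx in itertools.combinations(range(H.shape[1]), w):
            e = np.zeros(H.shape[1], dtype=int)
            e[list(idx)] = 1
            if np.array_equal(H @ e % 2, target):
                best = e
                break
        if best is not None:
            break

    x_hat = (y + best) % 2
    print("recovered correctly:", np.array_equal(x_hat, x))
    ```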

    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
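    To make the embedded-description property concrete, here is a minimal sketch, assuming a toy dyadic (tree-structured) scalar quantizer rather than a full vector quantizer: decoding a longer prefix of the same bit stream yields a strictly finer reproduction.

    ```python
    # Each additional bit halves the quantization cell on [0, 1), so any
    # prefix of the encoded string decodes to a coarser reproduction and
    # longer prefixes refine it. All values here are illustrative.
    def encode(x, depth):
        """Return `depth` bits locating x in a dyadic partition of [0, 1)."""
        bits, lo, hi = [], 0.0, 1.0
        for _ in range(depth):
            mid = (lo + hi) / 2
            if x < mid:
                bits.append(0); hi = mid
            else:
                bits.append(1); lo = mid
        return bits

    def decode(bits):
        """Reproduce the source from any prefix of the embedded description."""
        lo, hi = 0.0, 1.0
        for b in bits:
            mid = (lo + hi) / 2
            lo, hi = (lo, mid) if b == 0 else (mid, hi)
        return (lo + hi) / 2             # cell midpoint as the reproduction

    bits = encode(0.6180, depth=8)
    for k in (2, 4, 8):                  # decode progressively longer prefixes
        print(k, "bits ->", decode(bits[:k]))
    ```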