
    Side-Information Coding with Turbo Codes and its Application to Quantum Key Distribution

    Turbo coding is a powerful class of forward error-correcting codes that can achieve performance close to the Shannon limit. The turbo principle can be applied to the problem of side-information source coding, and we investigate here its application to the reconciliation problem occurring in a continuous-variable quantum key distribution protocol.
    Comment: 3 pages, submitted to ISITA 200
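
    To make the coding idea concrete, the following minimal Python sketch illustrates the syndrome (binning) principle that underlies side-information source coding: the encoder transmits only the syndrome of its source word, and the decoder searches the corresponding coset for the word closest to its side information. The toy parity-check matrix and brute-force search are illustrative stand-ins for the turbo codes used in the paper.

        import numpy as np

        # Toy syndrome-based side-information coding. The 3x6 matrix and
        # exhaustive coset search stand in for a practical (e.g., turbo) code.
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

        def encode(x):
            """Encoder sends only the syndrome of x (6 bits -> 3 bits)."""
            return H @ x % 2

        def decode(s, y):
            """Search the coset {x : Hx = s} for the word closest to the
            side information y (ML decoding under a BSC correlation model)."""
            n = H.shape[1]
            best, best_dist = None, n + 1
            for i in range(2 ** n):
                x = np.array([(i >> k) & 1 for k in range(n)], dtype=np.uint8)
                if np.array_equal(H @ x % 2, s):
                    d = int(np.sum(x ^ y))
                    if d < best_dist:
                        best, best_dist = x, d
            return best

        x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)  # source word
        y = x.copy()
        y[2] ^= 1                                          # correlated side information
        assert np.array_equal(decode(encode(x), y), x)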

    Improved Modeling of the Correlation Between Continuous-Valued Sources in LDPC-Based DSC

    Accurate modeling of the correlation between the sources plays a crucial role in the efficiency of distributed source coding (DSC) systems. This correlation is commonly modeled in the binary domain by a single binary symmetric channel (BSC), both for binary and for continuous-valued sources. We show that "one" BSC cannot accurately capture the correlation between continuous-valued sources; a more accurate model requires "multiple" BSCs, as many as the number of bits used to represent each sample. We incorporate this new model into a DSC system that uses low-density parity-check (LDPC) codes for compression. The standard Slepian-Wolf LDPC decoder requires only a slight modification so that the parameters of all BSCs are integrated into the log-likelihood ratios (LLRs). Further, an interleaver shuffles the data belonging to different bit-planes, introducing randomness in the binary domain. The new system has the same complexity and delay as the standard one. Simulation results demonstrate the effectiveness of the proposed model and system.
    Comment: 5 pages, 4 figures; presented at the Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 201
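
    As a concrete illustration of how per-bit-plane crossover probabilities would enter such a decoder, the Python sketch below initializes the channel LLRs of a Slepian-Wolf LDPC decoder under the multiple-BSC model; the bit-planes and probabilities shown are illustrative placeholders, not values from the paper.

        import numpy as np

        def bitplane_llrs(y_bits, p):
            """Channel LLRs under one BSC per bit-plane: for plane k with
            crossover probability p[k] and side-information bit y,
            LLR = (1 - 2*y) * log((1 - p[k]) / p[k])."""
            llrs = []
            for k, plane in enumerate(y_bits):
                scale = np.log((1.0 - p[k]) / p[k])
                llrs.append((1 - 2 * np.asarray(plane, dtype=float)) * scale)
            return llrs  # fed as channel LLRs into the LDPC decoder

        # Example: three bit-planes; the MSB plane is far more reliable
        # than the LSB plane, so its LLRs have much larger magnitude.
        y_bits = [[0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]]
        p = [0.01, 0.10, 0.35]
        for k, llr in enumerate(bitplane_llrs(y_bits, p)):
            print(f"plane {k}: {np.round(llr, 2)}")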

    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have illustrated polytope projection algorithms with complexity that is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error correcting codes to construct polynomial complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
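
    In method-of-types notation, the decoding rule in question can be sketched as follows (shown here for a single source compressed to the syndrome s of a linear code with parity-check matrix H, with \hat{P}_x denoting the empirical type of x; the paper's multiterminal variants differ in detail):

        \hat{x} \;=\; \arg\min_{x \,:\, Hx = s} H\!\bigl(\hat{P}_x\bigr),
        \qquad
        \bigl|\{\hat{P}_x : x \in \mathcal{X}^n\}\bigr| \;\le\; (n+1)^{|\mathcal{X}|} .

    The right-hand bound is what makes type-based reasoning tractable: the number of candidate types, unlike the number of candidate sequences, is polynomial in n.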

    On some new approaches to practical Slepian-Wolf compression inspired by channel coding

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
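
    For reference, the Ahlswede-Körner region is usually stated as follows: the rate pair (R_X, R_Y) is achievable if and only if

        R_X \ge H(X \mid U), \qquad R_Y \ge I(Y; U)

    for some auxiliary random variable U forming the Markov chain X -- Y -- U (with a bounded alphabet for U). The paper's contribution is to identify the optimal U over part of this region.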

    LDPCA code construction for Slepian-Wolf coding

    Error correcting codes used for Distributed Source Coding (DSC) generally assume a random distribution of errors. However, in certain DSC applications the error distribution can be predicted, so this assumption fails, resulting in sub-optimal performance. This letter considers the construction of rate-adaptive Low-Density Parity-Check (LDPC) codes in which the edges of the variable nodes receiving unreliable information are distributed evenly among all the check nodes. Simulation results show that the proposed codes can reduce the gap to the theoretical bounds by up to 56% compared to traditional codes.
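
    A hedged Python sketch of the stated construction idea: edges emanating from variable nodes known to receive unreliable information are assigned to check nodes round-robin, so that no single check node accumulates several unreliable inputs. The node counts and degree below are illustrative, not the letter's actual code parameters.

        def assign_unreliable_edges(n_unreliable, degree, n_checks):
            """For each unreliable variable node, pick its check nodes by
            cycling through all checks, spreading unreliable edges evenly."""
            edges, c = [], 0
            for _ in range(n_unreliable):
                checks = []
                for _ in range(degree):
                    checks.append(c)
                    c = (c + 1) % n_checks
                edges.append(checks)
            return edges

        print(assign_unreliable_edges(n_unreliable=4, degree=3, n_checks=6))
        # [[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]
        # 12 unreliable edges over 6 checks: exactly 2 per check node.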

    Low-density parity-check codes for asymmetric distributed source coding

    The research work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union - European Social Fund (ESF 1.25).
    Low-Density Parity-Check (LDPC) codes achieve good performance, tending towards the Slepian-Wolf bound, when used as channel codes in Distributed Source Coding (DSC). Most LDPC codes found in the literature are designed assuming a random distribution of transmission errors. However, certain DSC applications can predict the error locations within a certain level of accuracy. This feature can be exploited to design application-specific LDPC codes that enhance the performance of traditional LDPC codes. This paper proposes a novel architecture for asymmetric DSC where the encoder is able to estimate the location of the errors within the side information. It then interleaves the bits having a high probability of error to the beginning of the codeword, and the LDPC codes are designed to provide a higher level of protection to these front bits. Simulation results show that correct localization of errors pushes the performance of the system on average 13.3% closer to the Slepian-Wolf bound compared to randomly constructed LDPC codes. If the error localization prediction fails, such that the errors are randomly distributed, the performance remains in line with that of the traditional DSC architecture.
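
    The interleaving step described above can be sketched in a few lines of Python: bits the encoder predicts are likely to be erroneous in the side information are permuted to the front of the codeword, where the (separately designed) LDPC code protects most strongly. The bit values and error-probability estimates are illustrative placeholders.

        def interleave_by_reliability(bits, p_err):
            """Permute bits so the least reliable come first; return the
            permuted bits and the permutation needed to undo it."""
            order = sorted(range(len(bits)), key=lambda i: -p_err[i])
            return [bits[i] for i in order], order

        def deinterleave(bits, order):
            out = [0] * len(bits)
            for pos, i in enumerate(order):
                out[i] = bits[pos]
            return out

        bits  = [1, 0, 1, 1, 0, 0]
        p_err = [0.02, 0.40, 0.05, 0.30, 0.01, 0.10]  # predicted error probabilities
        shuffled, order = interleave_by_reliability(bits, p_err)
        assert deinterleave(shuffled, order) == bits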

    Error Resilience Performance Evaluation of a Distributed Video Codec

    Distributed Video Coding (DVC), one of the most active research fields in the video coding community, is based on the combination of Slepian-Wolf coding techniques with the idea of performing the prediction at the decoder side rather than at the encoder side. Besides its main property, which is the flexible allocation of computational complexity between encoder and decoder, the distributed approach has other interesting properties. One of the most promising DVC characteristics is its intrinsic robustness to transmission errors. In this work we evaluate the error resilience performance of a video codec based on the DVC scheme proposed by Stanford, and we carry out a preliminary comparison with traditional H.264 encoding, showing that at high error probabilities and high bitrates the distributed approach can also outperform the traditional one.

    Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression

    This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding, so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructable “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
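
    A sketch of how source-splitting reaches non-corner rate points, assuming the split X = (X^a, X^b): the decoder pipeline first recovers X^a from an entropy-coded description, then Y with X^a as side information, then X^b with (X^a, Y) as side information, giving

        R_X = R_{X^a} + R_{X^b} \ge H(X^a) + H(X^b \mid X^a, Y),
        \qquad
        R_Y \ge H(Y \mid X^a) .

    These rates sum to H(X, Y), and sweeping the split traces out intermediate points of the Slepian–Wolf region without time-sharing. This is a simplified reading of the strategy, not the paper's exact formulation.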

    Lossless and near-lossless source coding for multiple access networks

    A multiple access source code (MASC) is a source code designed for the following network configuration: a pair of correlated information sequences {X_i} and {Y_i}, i = 1, 2, ..., is drawn independent and identically distributed (i.i.d.) according to a joint probability mass function (p.m.f.) p(x, y); the encoder for each source operates without knowledge of the other source; the decoder jointly decodes the encoded bit streams from both sources. The work of Slepian and Wolf describes all rates achievable by MASCs of infinite coding dimension (n -> infinity) and asymptotically negligible error probabilities (P_e^(n) -> 0). In this paper, we consider the properties of optimal instantaneous MASCs with finite coding dimension (n < infinity) and both lossless (P_e^(n) = 0) and near-lossless (P_e^(n) > 0) performance. The interest in near-lossless codes is inspired by the discontinuity in the limiting rate region at P_e^(n) = 0 and the resulting performance benefits achievable by using near-lossless MASCs as entropy codes within lossy MASCs. Our central results include generalizations of Huffman and arithmetic codes to the MASC framework for arbitrary p(x, y), n, and P_e^(n), and polynomial-time design algorithms that approximate these optimal solutions.
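
    For reference, the limiting Slepian–Wolf region invoked above is

        R_X \ge H(X \mid Y), \qquad
        R_Y \ge H(Y \mid X), \qquad
        R_X + R_Y \ge H(X, Y),

    achievable as the coding dimension grows without bound; the paper contrasts finite-dimension instantaneous codes against this limiting region.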