    Distributed coding using punctured quasi-arithmetic codes for memory and memoryless sources

    This correspondence considers the use of punctured quasi-arithmetic (QA) codes for the Slepian–Wolf problem. These entropy codes are defined by finite state machines for memoryless and first-order memory sources. Puncturing an entropy-coded bit-stream leads to an ambiguity at the decoder side. The decoder uses a correlated version of the original message to remove this ambiguity. A complete distributed source coding (DSC) scheme based on QA encoding with side information at the decoder is presented, together with iterative structures based on QA codes. The proposed schemes are adapted to memoryless and first-order memory sources. Simulation results show that, for short sequences, the proposed schemes achieve good decoding performance compared with well-known DSC solutions based on channel codes.
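
    The puncturing-plus-side-information principle can be sketched without the QA finite-state machine itself. The following is a minimal toy example, assuming an uncoded bit-stream, an illustrative puncturing rate and a Bernoulli correlation model (none of these values come from the paper): the decoder fills punctured positions from the correlated side information.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000          # source length
p_corr = 0.05       # P(X != Y): source/side-information correlation (assumed)
punct_rate = 0.3    # fraction of encoded bits dropped by the puncturer (assumed)

# Source X and correlated side information Y available only at the decoder.
x = rng.integers(0, 2, n)
y = x ^ (rng.random(n) < p_corr)

# "Encoder": here the bit-stream is the source itself; puncturing removes bits,
# which is what creates the decoder-side ambiguity in a real QA-coded stream.
punctured = rng.random(n) < punct_rate
received = np.where(punctured, -1, x)   # -1 marks a missing (punctured) bit

# Decoder: transmitted bits are kept; ambiguous (punctured) positions are
# resolved by trusting the correlated side information.
x_hat = np.where(received == -1, y, received)

ber = np.mean(x_hat != x)
print(f"punctured fraction: {punctured.mean():.2f}, residual BER: {ber:.4f}")
# Residual errors occur only where a bit was punctured AND the side
# information disagrees with the source, i.e. roughly punct_rate * p_corr.
```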

    Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression

    This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding, so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it significantly simplifies the decoding process. We demonstrate this approach on synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation of maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
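
    The syndrome-former idea in the last part can be illustrated with a tiny linear code. The sketch below is a hedged toy: the (7,4) Hamming parity-check matrix and a brute-force minimum-weight search stand in for the paper's expander-style LDPC codes and LP/iterative decoding.

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code used as a syndrome-former
# (illustrative choice; the paper uses expander-style LDPC codes).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

rng = np.random.default_rng(1)
n = H.shape[1]

x = rng.integers(0, 2, n)                 # source word at the encoder
e = (rng.random(n) < 0.05).astype(int)    # sparse "virtual channel" noise (assumed)
y = x ^ e                                 # correlated side information at the decoder

# Encoder: transmit only the 3-bit syndrome instead of the 7-bit word.
s = H @ x % 2

# Decoder: H @ x = s and y = x ^ e imply H @ e = s ^ (H @ y) over GF(2).
# Brute-force ML search for the lowest-weight error pattern
# (LP relaxation / min-sum decoding would replace this at realistic block lengths).
target = (s + H @ y) % 2
best = min((err for err in product([0, 1], repeat=n)
            if np.array_equal(H @ np.array(err) % 2, target)),
           key=sum)
x_hat = y ^ np.array(best)

print("recovered correctly:", np.array_equal(x_hat, x))
```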

    TTCM-aided rate-adaptive distributed source coding for Rayleigh fading channels

    Adaptive turbo-trellis-coded modulation (TTCM)-aided asymmetric distributed source coding (DSC) is proposed, where two correlated sources are transmitted to a destination node. The first source sequence is TTCM encoded and further compressed before being transmitted through a Rayleigh fading channel, whereas the second source signal is assumed to be perfectly decoded and hence flawlessly available at the destination, where it is exploited as side information to improve the decoding of the first source. The proposed scheme is capable of reliable communication within 0.80 dB of the Slepian-Wolf/Shannon (SW/S) theoretical limit at a bit error rate (BER) of 10⁻⁵. Furthermore, its encoder is capable of accommodating time-variant short-term correlation between the two sources.
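
    A minimal sketch of the system model assumed by such asymmetric DSC schemes follows; the correlation and fading parameters are illustrative assumptions, and the TTCM encoding, compression and iterative decoding stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096

# First source and a correlated second source; the flip probability varies
# over short blocks to mimic time-variant short-term correlation (assumed values).
x1 = rng.integers(0, 2, n)
p_flip = np.repeat(rng.uniform(0.01, 0.15, n // 256), 256)
x2 = x1 ^ (rng.random(n) < p_flip)

# BPSK transmission of the first source over a flat Rayleigh fading channel
# (the TTCM encoding/compression stage is omitted in this sketch).
es_n0_db = 5.0
sigma = np.sqrt(1.0 / (2 * 10 ** (es_n0_db / 10)))
h = np.sqrt(rng.normal(size=n) ** 2 + rng.normal(size=n) ** 2) / np.sqrt(2)  # E[h^2] = 1
r = h * (1 - 2 * x1) + sigma * rng.normal(size=n)

# At the destination, x2 is assumed perfectly known and would feed the TTCM
# decoder as side information; here we only report raw hard decisions.
hard = (np.sign(r / h) < 0).astype(int)
print("uncoded BER over Rayleigh:", np.mean(hard != x1))
print("average source disagreement P(x1 != x2):", np.mean(x1 != x2))
```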

    Non-linear graph-based codes for joint source-channel coding

    We study the behavior of a new family of nonlinear graph-based codes, previously introduced for the compression of asymmetric binary memoryless sources, in the joint source-channel coding scenario in which the codewords are transmitted through an additive white Gaussian noise channel. We focus on low-entropy sources (with high redundancy) and compression rates. Monte Carlo simulation and density evolution results show that the proposed family, with a regular and simple parametrization of the degree profiles, outperforms linear codes.
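
    For context, the information-theoretic benchmark against which such joint source-channel simulations are measured can be computed directly. The sketch below is a hedged illustration assuming a Bernoulli source and an unconstrained real-valued AWGN channel (not the modulation or code family of the paper); the source parameter and rate are arbitrary examples.

```python
import numpy as np

def binary_entropy(p):
    """Entropy H(p) in bits of a Bernoulli(p) source."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def min_snr_db(p, source_bits_per_channel_use):
    """Smallest SNR at which H(p) * r information bits per real channel use
    fit under the unconstrained AWGN capacity 0.5 * log2(1 + SNR)."""
    info_rate = binary_entropy(p) * source_bits_per_channel_use
    return 10 * np.log10(2 ** (2 * info_rate) - 1)

# A low-entropy (highly redundant) source, e.g. P(X = 1) = 0.05 (assumed value):
p, r = 0.05, 2.0   # r = source bits per channel use
print(f"H({p}) = {binary_entropy(p):.3f} bits")
print(f"JSCC Shannon limit at r = {r}: {min_snr_db(p, r):.2f} dB SNR")
```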

    Practical distributed source coding with impulse-noise degraded side information at the decoder

    This paper introduces a practical method for distributed lossy compression (Wyner-Ziv quantization) with side information available only at the decoder, where the side information is the original signal corrupted by background noise and additional impulse noise. At the core of the method is an LDPC-based lossless distributed (Slepian-Wolf) source code for q-ary alphabets, which is matched to the impulse probability and makes it possible to remove the scalar-quantized impulse noise. Applications of this method to the distributed compressed sensing of signals that differ in a sparse set of locations are also discussed, as are some differences and similarities between variable- and fixed-length coding of sparse signals.
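
    A minimal sketch of the side-information model targeted by the method follows; the quantizer size, noise levels and impulse probability are illustrative assumptions, and the LDPC-based q-ary Slepian-Wolf stage itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2048
q = 16                       # quantizer alphabet size (assumed)
p_impulse = 0.02             # impulse probability (assumed)

# Original signal at the encoder, in [0, 1).
x = rng.random(n)

# Side information at the decoder: background noise plus sparse impulse noise.
background = 0.02 * rng.normal(size=n)
impulses = (rng.random(n) < p_impulse) * rng.uniform(-0.5, 0.5, n)
y = x + background + impulses

# Uniform scalar quantization to a q-ary alphabet; the Slepian-Wolf stage of
# the paper would then transmit syndromes of the quantized x so the decoder
# can correct the (mostly impulse-induced) symbol disagreements in quantized y.
qx = np.clip((x * q).astype(int), 0, q - 1)
qy = np.clip((y * q).astype(int), 0, q - 1)

disagree = np.mean(qx != qy)
print(f"q-ary symbol disagreement rate between source and side info: {disagree:.3f}")
```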