69 research outputs found
On some new approaches to practical Slepian-Wolf compression inspired by channel coding
This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
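For reference, the Ahlswede–Körner achievable region in its standard formulation (stated here from the general literature, not from this paper's own notation) characterizes the rate pairs via an auxiliary random variable U:

```latex
% Ahlswede--Körner region for lossless coding with coded side information:
% (R_X, R_Y) is achievable iff there exists an auxiliary U with
% Markov chain U -- Y -- X such that
\begin{align*}
R_X &\ge H(X \mid U), \\
R_Y &\ge I(Y; U).
\end{align*}
% A cardinality bound on U (e.g. |\mathcal{U}| \le |\mathcal{Y}| + 2)
% suffices for the optimization over auxiliary variables.
```

The difficulty the paper addresses is precisely the optimization over U, which is what makes the region implicit in general.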
Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression
This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check codes (LDPCs) as syndrome-formers admits a positive error exponent and therefore provably good performance.
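The syndrome-former idea above can be illustrated with a toy sketch: the encoder compresses a binary source word to its syndrome under a linear code, and the decoder recovers the word from the syndrome plus correlated side information. The 3×6 matrix H below is an illustrative stand-in, not an expander LDPC, and the decoder does brute-force ML search rather than the paper's LP relaxation or iterative decoding.

```python
import numpy as np

# Hypothetical toy parity-check matrix (not an actual expander LDPC).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def encode(x):
    """Compress x (length 6) to its 3-bit syndrome s = H x (mod 2)."""
    return H @ x % 2

def decode(s, y):
    """Recover x from syndrome s and side information y by brute-force ML
    under a binary symmetric correlation model: among all sequences with
    syndrome s, pick the one closest to y in Hamming distance."""
    best, best_dist = None, None
    for i in range(2 ** 6):
        cand = np.array([(i >> k) & 1 for k in range(6)], dtype=np.uint8)
        if np.array_equal(H @ cand % 2, s):
            d = int(np.sum(cand ^ y))
            if best is None or d < best_dist:
                best, best_dist = cand, d
    return best

x = np.array([1, 0, 1, 1, 0, 0], dtype=np.uint8)
y = x.copy()
y[2] ^= 1                      # side information: x with one bit flipped
assert np.array_equal(decode(encode(x), y), x)
```

The compression is from 6 bits down to the 3 syndrome bits; the code's minimum distance (3 here) determines how much disagreement between x and y the decoder can tolerate.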
Distributed coding using punctured quasi-arithmetic codes for memory and memoryless sources
This correspondence considers the use of punctured quasi-arithmetic (QA) codes for the Slepian–Wolf problem. These entropy codes are defined by finite state machines for memoryless and first-order memory sources. Puncturing an entropy-coded bit-stream leads to an ambiguity at the decoder side. The decoder makes use of a correlated version of the original message in order to remove this ambiguity. A complete distributed source coding (DSC) scheme based on QA encoding with side information at the decoder is presented, together with iterative structures based on QA codes. The proposed schemes are adapted to memoryless and first-order memory sources. Simulation results reveal that the proposed schemes are efficient in terms of decoding performance for short sequences compared to well-known DSC solutions using channel codes.
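The puncturing principle can be sketched in miniature: the encoder drops bits at agreed positions, and the decoder enumerates the possible fill-ins and resolves the ambiguity with a correlated copy of the message. This toy uses a trivial identity "code" rather than a real QA code, so it only illustrates the ambiguity-resolution step; all names here are illustrative.

```python
import itertools

PUNCTURE = [1, 4]                  # bit positions dropped by the encoder

def puncture(bits):
    """Drop the bits at the agreed puncturing positions."""
    return [b for i, b in enumerate(bits) if i not in PUNCTURE]

def depuncture_with_side_info(received, side_info):
    """Enumerate every fill-in of the punctured positions and keep the
    candidate closest to the side information in Hamming distance."""
    best, best_dist = None, None
    for fill in itertools.product([0, 1], repeat=len(PUNCTURE)):
        fills = dict(zip(PUNCTURE, fill))
        cand, it = [], iter(received)
        for i in range(len(received) + len(PUNCTURE)):
            cand.append(fills[i] if i in fills else next(it))
        d = sum(a != b for a, b in zip(cand, side_info))
        if best is None or d < best_dist:
            best, best_dist = cand, d
    return best

x = [1, 0, 1, 1, 0, 1]
y = [1, 0, 1, 0, 0, 1]             # correlated side information (one flip)
assert depuncture_with_side_info(puncture(x), y) == x
```

With a real QA code the ambiguity is resolved over the entropy coder's state machine (e.g. by soft-input trellis decoding) rather than by exhaustive enumeration, but the role of the side information is the same.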
Towards practical minimum-entropy universal decoding
Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have produced polytope projection algorithms whose complexity is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
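The decoding rule itself is simple to state: among all sequences consistent with the received syndrome, output one whose empirical distribution (type) has minimum entropy. The sketch below implements that rule by exhaustive search, which is exponential in n; it is the brute-force baseline, not the paper's polynomial LP-based approximation, and the 3×6 matrix H is an illustrative toy rather than a code from the paper.

```python
import numpy as np

# Hypothetical toy parity-check matrix used as a syndrome-former.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def empirical_entropy(x):
    """Entropy of the type (empirical distribution) of a binary sequence."""
    p = float(np.mean(x))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def min_entropy_decode(s, n=6):
    """Among all length-n sequences with syndrome s, return one whose
    empirical entropy is minimal (the universal decoding rule)."""
    best, best_h = None, None
    for i in range(2 ** n):
        cand = np.array([(i >> k) & 1 for k in range(n)], dtype=np.uint8)
        if np.array_equal(H @ cand % 2, s):
            h = empirical_entropy(cand)
            if best is None or h < best_h:
                best, best_h = cand, h
    return best

x = np.array([0, 0, 0, 1, 0, 0], dtype=np.uint8)   # a low-entropy source word
assert np.array_equal(min_entropy_decode(H @ x % 2), x)
```

Note that the rule needs no knowledge of the source statistics, which is what makes it universal; the paper's contribution is replacing the exhaustive search with a tractable linear-programming relaxation.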
- …