A Universal Scheme for Wyner–Ziv Coding of Discrete Sources
We consider the Wyner–Ziv (WZ) problem of lossy compression in which the decompressor observes a noisy version of the source and the statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of the side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
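As a toy illustration of the sliding-window idea behind the decompressor, the sketch below counts centre-symbol statistics per context and then applies a majority rule. This is a simplification for illustration only: the actual DUDE additionally inverts the channel statistics and here uses the side information, and the function name and majority rule are my own.

```python
from collections import Counter, defaultdict

def context_denoise(noisy, k=2):
    """Two-pass sliding-window denoiser (simplified DUDE flavour).

    Pass 1 counts, for every context of k symbols on each side, how
    often each centre symbol occurs.  Pass 2 replaces each centre
    symbol by the majority centre symbol of its context.  The real
    DUDE applies a channel-dependent correction instead of a plain
    majority vote; this sketch keeps only the counting skeleton.
    """
    n = len(noisy)
    counts = defaultdict(Counter)
    for i in range(k, n - k):
        ctx = (noisy[i - k:i], noisy[i + 1:i + k + 1])
        counts[ctx][noisy[i]] += 1
    out = list(noisy)
    for i in range(k, n - k):
        ctx = (noisy[i - k:i], noisy[i + 1:i + k + 1])
        out[i] = counts[ctx].most_common(1)[0][0]
    return "".join(out)
```

On a long run of zeros with a single flipped bit, the flipped position's context has been seen many times with centre '0', so the majority rule corrects it.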
How to Achieve the Capacity of Asymmetric Channels
We survey coding techniques that enable reliable transmission at rates that
approach the capacity of an arbitrary discrete memoryless channel. In
particular, we take the point of view of modern coding theory and discuss how
recent advances in coding for symmetric channels help provide more efficient
solutions for the asymmetric case. We consider, in more detail, three basic
coding paradigms.
The first one is Gallager's scheme that consists of concatenating a linear
code with a non-linear mapping so that the input distribution can be
appropriately shaped. We explicitly show that both polar codes and spatially
coupled codes can be employed in this scenario. Furthermore, we derive a
scaling law between the gap to capacity, the cardinality of the input and
output alphabets, and the required size of the mapper.
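The shaping step in this first paradigm can be illustrated with a small sketch: uniform b-bit blocks are pushed through a many-to-one lookup table so that each channel symbol appears with the best dyadic (multiple of 2^-b) approximation of the target input distribution. The function names are illustrative, not the paper's construction; the point is only that a larger mapper gives a finer approximation, in line with the scaling law mentioned above.

```python
def build_mapper(target, b):
    """Many-to-one mapper in the spirit of Gallager's scheme.

    Maps the 2**b equiprobable values of a uniform b-bit block onto
    channel symbols so that each symbol's frequency is a dyadic
    approximation of the target input distribution.
    """
    total = 2 ** b
    # Round each target probability to a count out of 2**b blocks.
    counts = {s: round(p * total) for s, p in target.items()}
    # Repair any rounding mismatch on the most likely symbol.
    counts[max(target, key=target.get)] += total - sum(counts.values())
    table = []
    for sym, c in counts.items():
        table.extend([sym] * c)
    return table  # index with the integer value of a b-bit block

def induced_distribution(table):
    """Empirical symbol frequencies produced by uniform b-bit blocks."""
    n = len(table)
    dist = {}
    for sym in table:
        dist[sym] = dist.get(sym, 0) + 1
    return {s: c / n for s, c in dist.items()}
```

For example, a target of (0.11, 0.89) on a binary input with b = 3 is quantised to (1/8, 7/8); doubling b roughly halves the quantisation error, which is the mapper-size/gap-to-capacity trade-off in miniature.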
The second one is an integrated scheme in which the code is used both for
source coding, in order to create codewords distributed according to the
capacity-achieving input distribution, and for channel coding, in order to
provide error protection. Such a technique has been recently introduced by
Honda and Yamamoto in the context of polar codes, and we show how to apply it
also to the design of sparse graph codes.
The third paradigm is based on an idea of Böcherer and Mathar, and
separates the two tasks of source coding and channel coding by a chaining
construction that binds together several codewords. We present conditions for
the source code and the channel code, and we describe how to combine any source
code with any channel code that fulfill those conditions, in order to provide
capacity-achieving schemes for asymmetric channels. In particular, we show that
polar codes, spatially coupled codes, and homophonic codes are suitable as
basic building blocks of the proposed coding strategy.
Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published in IEEE Trans. Inform. Theory.
Universal coding for transmission of private information
We consider the scenario in which Alice transmits private classical messages
to Bob via a classical-quantum channel, part of whose output is intercepted by
an eavesdropper, Eve. We prove the existence of a universal coding scheme under
which Alice's messages can be inferred correctly by Bob, and yet Eve learns
nothing about them. The code is universal in the sense that it does not depend
on specific knowledge of the channel. Prior knowledge of the probability
distribution on the input alphabet of the channel, and bounds on the
corresponding Holevo quantities of the output ensembles at Bob's and Eve's end
suffice.
Comment: 31 pages, no figures. Published version.
Zero-error channel capacity and simulation assisted by non-local correlations
Shannon's theory of zero-error communication is re-examined in the broader
setting of using one classical channel to simulate another exactly, and in the
presence of various resources that are all classes of non-signalling
correlations: Shared randomness, shared entanglement and arbitrary
non-signalling correlations. Specifically, when the channel being simulated is
noiseless, this reduces to the zero-error capacity of the channel, assisted by
the various classes of non-signalling correlations. When the resource channel
is noiseless, it results in the "reverse" problem of simulating a noisy channel
exactly by a noiseless one, assisted by correlations. In both cases, 'one-shot'
separations between the power of the different assisting correlations are
exhibited. The most striking result of this kind is that entanglement can
assist in zero-error communication, in stark contrast to the standard setting
of communication with asymptotically vanishing error in which entanglement does
not help at all. In the asymptotic case, shared randomness is shown to be just
as powerful as arbitrary non-signalling correlations for noisy channel
simulation, which is not true for the asymptotic zero-error capacities. For
assistance by arbitrary non-signalling correlations, linear programming
formulas for capacity and simulation are derived, the former being equal (for
channels with non-zero unassisted capacity) to the feedback-assisted zero-error
capacity originally derived by Shannon to upper bound the unassisted zero-error
capacity. Finally, a kind of reversibility between non-signalling-assisted
capacity and simulation is observed, mirroring the famous "reverse Shannon
theorem".
Comment: 18 pages, 1 figure. Small changes to text in v2. Removed an unnecessarily strong requirement in the premise of Theorem 1.
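To make the zero-error setting concrete before the assisted variants above: one-shot unassisted zero-error transmission amounts to finding a largest set of pairwise non-confusable inputs, i.e. a maximum independent set in the confusability graph. The brute-force sketch below (my own illustration, not the paper's linear-programming formulas) recovers the classical fact that one use of Shannon's pentagon "typewriter" channel carries exactly 2 zero-error messages.

```python
from itertools import combinations

def confusable(x, y, channel):
    """Two inputs are confusable if some output is reachable from both."""
    return bool(channel[x] & channel[y])

def max_zero_error_messages(channel):
    """Largest set of pairwise non-confusable inputs for one channel use.

    channel maps each input to the set of outputs it can produce.
    This is a maximum independent set in the confusability graph,
    found by brute force over all subsets -- fine for toy channels.
    """
    inputs = list(channel)
    for r in range(len(inputs), 0, -1):
        for subset in combinations(inputs, r):
            if all(not confusable(x, y, channel)
                   for x, y in combinations(subset, 2)):
                return r
    return 0

# Pentagon channel: input i can produce output i or i+1 (mod 5).
pentagon = {i: {i, (i + 1) % 5} for i in range(5)}
```

The linear programs in the paper replace this combinatorial search when non-signalling assistance is allowed, which is what makes the assisted capacity efficiently computable.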
Competitive minimax universal decoding for several ensembles of random codes
Universally achievable error exponents pertaining to certain families of
channels (most notably, discrete memoryless channels (DMC's)), and various
ensembles of random codes, are studied by combining the competitive minimax
approach, proposed by Feder and Merhav, with Chernoff bound and Gallager's
techniques for the analysis of error exponents. In particular, we derive a
single-letter expression for the largest universally achievable fraction
of the optimum error exponent pertaining to optimum maximum likelihood (ML)
decoding. Moreover, a simpler single-letter expression for a lower bound on
this fraction is presented. To demonstrate the tightness of this lower
bound, we use it to show that the entire optimum error exponent is
universally achievable for the binary symmetric channel (BSC) when the
random coding distribution is uniform over: (i) all codes (of a given
rate), and (ii) all linear codes, in agreement with well-known results. We
also show that the same holds for the uniform ensemble of systematic linear
codes, and for that of time-varying convolutional codes in the
bit-error-rate sense. For the latter case, we also show how the
corresponding universal decoder can be efficiently implemented using a
slightly modified version of the Viterbi algorithm which employs two
trellises.
Comment: 41 pages; submitted to IEEE Transactions on Information Theory.
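A standard example of a universal decoder of the kind studied here is the maximum mutual information (MMI) rule: with no knowledge of the channel, decode to the codeword whose empirical mutual information with the received word is largest. The sketch below is a generic illustration of that rule, not the specific decoders of this paper.

```python
from math import log2

def empirical_mi(x, y):
    """Empirical mutual information between two equal-length sequences."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    # I(X;Y) = sum p(a,b) log [ p(a,b) / (p(a) p(b)) ] over observed pairs.
    return sum((c / n) * log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def mmi_decode(received, codebook):
    """Universal decoding rule: pick the codeword with largest
    empirical mutual information with the received word."""
    return max(range(len(codebook)),
               key=lambda i: empirical_mi(codebook[i], received))
```

Because the rule depends only on empirical statistics, the same decoder works for every DMC, which is the sense in which the error exponents above are "universally achievable".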