8 research outputs found
Intrinsic Capacity
Every channel can be expressed as a convex combination of deterministic
channels with each deterministic channel corresponding to one particular
intrinsic state. Such convex combinations are in general not unique, each
giving rise to a specific intrinsic-state distribution. In this paper we study
the maximum and the minimum capacities of a channel when the realization of its
intrinsic state is causally available at the encoder and/or the decoder.
Several conclusive results are obtained for binary-input channels and
binary-output channels. Byproducts of our investigation include a
generalization of the Birkhoff-von Neumann theorem and a condition on the
uselessness of causal state information at the encoder.
Comment: v0.6.3.677d35, 28 pages, 5 figures, submitted for publication, to be
presented in part at ISIT 201
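The decomposition the abstract refers to can be made concrete for the binary-input binary-output case: any row-stochastic 2x2 transition matrix is a convex combination of the four deterministic channels. The sketch below (an illustration, not the paper's construction; the particular choice of weights is one of many, which is exactly the non-uniqueness the abstract mentions) computes such a decomposition and verifies it.

```python
import numpy as np

def decompose_binary_channel(p, q):
    """Decompose a binary channel with p = P(y=0|x=0), q = P(y=0|x=1)
    into a convex combination of the four deterministic channels.
    Keys (a, b) denote the deterministic map x=0 -> a, x=1 -> b."""
    lam00 = min(p, q)          # both inputs mapped to output 0
    lam01 = p - lam00          # x=0 -> 0, x=1 -> 1 (identity-like)
    lam10 = q - lam00          # x=0 -> 1, x=1 -> 0 (flipping)
    lam11 = 1.0 - max(p, q)    # both inputs mapped to output 1
    return {(0, 0): lam00, (0, 1): lam01, (1, 0): lam10, (1, 1): lam11}

weights = decompose_binary_channel(0.8, 0.3)

# Reconstruct the transition matrix from the deterministic channels.
P = np.zeros((2, 2))
for (a, b), w in weights.items():
    D = np.zeros((2, 2))
    D[0, a] = 1.0  # deterministic: input 0 -> output a
    D[1, b] = 1.0  # deterministic: input 1 -> output b
    P += w * D
print(P)  # rows: [0.8, 0.2] and [0.3, 0.7]
```

Each key in `weights` plays the role of one intrinsic state; the weights form the intrinsic-state distribution for this particular decomposition.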
A Technique for Deriving One-Shot Achievability Results in Network Information Theory
This paper proposes a novel technique to prove a one-shot version of
achievability results in network information theory. The technique is not based
on covering and packing lemmas. In this technique, we use a stochastic encoder
and decoder with a particular structure for coding that resembles both the ML
and the joint-typicality coders. Although stochastic encoders and decoders do
not usually enhance the capacity region, their use simplifies the analysis. The
Jensen inequality lies at the heart of error analysis, which enables us to deal
with the expectation of many terms coming from stochastic encoders and decoders
at once. The technique is illustrated via several examples: point-to-point
channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung,
Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel
coding over a MAC. Most of our one-shot results are new. The asymptotic forms
of these expressions are the same as those of the classical results. Our
one-shot bounds, in conjunction with the multi-dimensional Berry-Esseen CLT,
imply new results in the finite blocklength regime. In particular, applying the
one-shot result for the memoryless broadcast channel in the asymptotic case, we
obtain the entire region of Marton's inner bound without any need for
time-sharing.
Comment: A short version has been submitted to ISIT 201
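The flavor of a stochastic decoder that "resembles both the ML and the joint-typicality coders" can be illustrated with a likelihood decoder that samples a message in proportion to the channel likelihood instead of taking an argmax. The simulation below is a hedged sketch of that idea for a point-to-point BSC with a random codebook (the parameters `eps`, `n`, `M` and the codebook construction are illustrative assumptions, not the paper's exact scheme).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: BSC crossover prob., blocklength, message count.
eps, n, M = 0.11, 200, 16
codebook = rng.integers(0, 2, size=(M, n))  # i.i.d. random binary codebook

def transmit(m):
    """Send codeword m over a BSC(eps)."""
    noise = rng.random(n) < eps
    return codebook[m] ^ noise

def stochastic_decode(y):
    """Sample a message with probability proportional to P(y | x_m),
    rather than picking the ML codeword deterministically."""
    d = (codebook != y).sum(axis=1)              # Hamming distances
    loglik = d * np.log(eps) + (n - d) * np.log(1 - eps)
    p = np.exp(loglik - loglik.max())
    p /= p.sum()
    return rng.choice(M, p=p)

trials = 500
errors = sum(stochastic_decode(transmit(m)) != m
             for m in rng.integers(0, M, size=trials))
print(f"empirical error rate: {errors / trials:.3f}")
```

At this rate (4 bits over 200 channel uses, well below the capacity of BSC(0.11)), the empirical error rate is essentially zero, matching the intuition that randomizing the decoder does not hurt in this regime while making the error analysis (via Jensen's inequality) tractable.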
A Source-Channel Separation Theorem with Application to the Source Broadcast Problem
A converse method is developed for the source broadcast problem.
Specifically, it is shown that the separation architecture is optimal for a
variant of the source broadcast problem and the associated source-channel
separation theorem can be leveraged, via a reduction argument, to establish a
necessary condition for the original problem, which unifies several existing
results in the literature. Somewhat surprisingly, this method, albeit based on
the source-channel separation theorem, can be used to prove the optimality of
non-separation based schemes and determine the performance limits in certain
scenarios where the separation architecture is suboptimal.
Comment: 10 page
Robust Distributed Compression of Symmetrically Correlated Gaussian Sources
Consider a lossy compression system with distributed encoders and a
centralized decoder. Each encoder compresses its observed source and forwards
the compressed data to the decoder for joint reconstruction of the target
signals under the mean squared error distortion constraint. It is assumed that
the observed sources can be expressed as the sum of the target signals and the
corruptive noises, which are generated independently from two symmetric
multivariate Gaussian distributions. Depending on the parameters of such
distributions, the rate-distortion limit of this system is characterized either
completely or at least for sufficiently low distortions. The results are
further extended to the robust distributed compression setting, where the
outputs of a subset of encoders may also be used to produce a non-trivial
reconstruction of the corresponding target signals. In particular, we obtain in
the high-resolution regime a precise characterization of the minimum achievable
reconstruction distortion based on the outputs of a given number of encoders or
more, when every subset of encoders of that size is operated collectively in
the same mode that is greedy in the sense of minimizing the distortion incurred
by the reconstruction of the corresponding target signals with respect to the
average rate of these encoders.
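A useful benchmark for results of this kind is the centralized remote rate-distortion function: a single encoder observes S = X + N and the decoder reconstructs X under MSE. The sketch below computes this classical single-encoder baseline (it is an assumed point of comparison, not a formula from the abstract; variances `vx`, `vn` are illustrative).

```python
import math

def remote_rd(D, vx, vn):
    """Centralized remote rate-distortion function (in bits) for a scalar
    Gaussian target X ~ N(0, vx) observed through S = X + N, N ~ N(0, vn),
    under mean squared error distortion D."""
    d_min = vx * vn / (vx + vn)   # MMSE floor: distortion at infinite rate
    v_est = vx - d_min            # variance of the MMSE estimate E[X | S]
    if D <= d_min:
        return math.inf           # below the noise floor: unachievable
    if D >= vx:
        return 0.0                # achievable with no coding at all
    return 0.5 * math.log2(v_est / (D - d_min))

# Example: unit-variance target, noise variance 0.25, target distortion 0.5.
print(remote_rd(0.5, 1.0, 0.25))
```

Distortions below `d_min` are unreachable at any rate because even a noiseless description of the observation leaves the MMSE estimation error; the distributed limits characterized in the paper are compared against this kind of centralized benchmark.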
Lattice-based Robust Distributed Source Coding
In this paper, we propose a lattice-based robust distributed source coding
system for two correlated sources and provide a detailed performance analysis
under the high resolution assumption. It is shown, among other things, that, in
the asymptotic regime where 1) the side distortion approaches 0 and 2) the
ratio between the central and side distortions approaches 0, our scheme is
capable of achieving the information-theoretic limit of quadratic multiple
description coding when the two sources are identical, whereas a variant of the
random coding scheme by Chen and Berger with Gaussian codes has a performance
loss of 0.5 bits relative to this limit.
Combinatorial Message Sharing and a New Achievable Region for Multiple Descriptions
This paper presents a new achievable rate-distortion region for the general
L-channel multiple descriptions problem. A well-known general region for this
problem is due to Venkataramani, Kramer and Goyal (VKG) [1]. Their encoding
scheme is an extension of the El Gamal-Cover (EC) and Zhang-Berger (ZB) coding
schemes to the L channel case and includes a combinatorial number of refinement
codebooks, one for each subset of the descriptions. As in ZB, the scheme also
allows for a single common codeword to be shared by all descriptions. This
paper proposes a novel encoding technique involving Combinatorial Message
Sharing (CMS), where every subset of the descriptions may share a distinct
common message. This introduces a combinatorial number of shared codebooks
along with the refinement codebooks of [1]. We derive an achievable
rate-distortion region for the proposed technique, and show that it subsumes
the VKG region for general sources and distortion measures. We further show
that CMS provides a strict improvement of the achievable region for any source
and distortion measures for which some 2-description subset is such that ZB
achieves points outside the EC region. We then show a more surprising result:
CMS outperforms VKG for a general class of sources and distortion measures,
including scenarios where the ZB and EC regions coincide for all 2-description
subsets. In particular, we show that CMS strictly improves on VKG, for the
L-channel quadratic Gaussian MD problem, for all L greater than or equal to 3,
despite the fact that the EC region is complete for the corresponding
2-descriptions problem. Using the encoding principles derived, we show that the
CMS scheme achieves the complete rate-distortion region for several asymmetric
cross-sections of the L-channel quadratic Gaussian MD problem.
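The "combinatorial number" of codebooks in VKG and CMS is easy to make concrete: VKG keeps one refinement codebook per non-empty subset of the L descriptions, and CMS additionally lets subsets share distinct common messages. The bookkeeping sketch below (an illustration of the codebook counts only, not of rate allocations; restricting shared codebooks to subsets of size at least 2 is an assumption, since a message "shared" by a single description is just private) enumerates both families.

```python
from itertools import combinations

def cms_codebooks(L):
    """Enumerate the refinement codebooks (one per non-empty subset of the
    L descriptions, as in VKG) and the additional shared codebooks that
    CMS introduces (one per subset of size >= 2)."""
    descriptions = range(1, L + 1)
    subsets = [frozenset(c) for r in range(1, L + 1)
               for c in combinations(descriptions, r)]
    refinement = subsets                          # as in VKG: 2^L - 1 of them
    shared = [s for s in subsets if len(s) >= 2]  # new shared messages in CMS
    return refinement, shared

ref, shared = cms_codebooks(3)
print(len(ref), len(shared))  # → 7 4
```

For L = 3 this gives the 7 refinement codebooks of VKG plus 4 shared codebooks ({1,2}, {1,3}, {2,3}, {1,2,3}); the single ZB common codeword corresponds to sharing only over the full set.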
An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes
We consider the problem of multiple descriptions (MD) source coding and
propose new coding strategies involving both unstructured and structured coding
layers. Previously, the most general achievable rate-distortion (RD) region for
the L-descriptions problem was the Combinatorial Message Sharing with Binning
(CMSB) region. The CMSB scheme utilizes unstructured quantizers and
unstructured binning. In the first part of the paper, we show that this
strategy can be improved upon using more general unstructured quantizers and a
more general unstructured binning method. In the second part, structured coding
strategies are considered. First, structured coding strategies are developed by
considering specific MD examples involving three or more descriptions. We show
that application of structured quantizers results in strict RD improvements
when there are more than two descriptions. Furthermore, we show that structured
binning also yields improvements. These improvements are in addition to the
ones derived in the first part of the paper. This suggests that structured
coding is essential when coding over more than two descriptions. Using the
ideas developed through these examples we provide a new unified coding strategy
by considering several structured coding layers. Finally, we characterize its
performance in the form of an inner bound to the optimal rate-distortion region
using computable single-letter information quantities. The new RD region
strictly contains all of the previously known achievable regions.
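The structured binning the abstract contrasts with unstructured binning can be pictured with a coset code: quantizer outputs are binned by the syndrome of a linear code, so every bin is a coset and inherits the code's algebraic structure. The toy below (the parity-check matrix `H` is an arbitrary illustrative choice, not one from the paper) partitions all length-5 binary words into such cosets.

```python
import numpy as np

# Arbitrary 2x5 parity-check matrix over GF(2); its kernel is a linear code
# with 2^(5-2) = 8 codewords, and the 2^2 = 4 syndromes index the cosets.
H = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1]])

def bin_index(word):
    """Syndrome of the word = index of the coset (bin) it falls in."""
    return tuple(H.dot(word) % 2)

words = [np.array(list(np.binary_repr(i, 5)), dtype=int) for i in range(32)]
bins = {}
for w in words:
    bins.setdefault(bin_index(w), []).append(w)

print(len(bins), [len(v) for v in bins.values()])  # 4 cosets of 8 words each
```

Because the bins are cosets of one linear code rather than independently drawn random bins, sums and differences of words in corresponding bins stay inside predictable cosets, which is what structured binning exploits.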
On the Role of the Refinement Layer in Multiple Description Coding and Scalable Coding
We clarify the relationship among several existing achievable multiple
description rate-distortion regions by investigating the role of the refinement
layer in multiple description coding. Specifically, we show that the refinement
layer in the El Gamal-Cover (EGC) scheme and the Venkataramani-Kramer-Goyal
(VKG) scheme can be removed; as a consequence, the EGC region is equivalent to
the EGC* region (an antecedent version of the EGC region), while the VKG region
(when specialized to the 2-description case) is equivalent to the Zhang-Berger
(ZB) region. Moreover, we prove that for multiple description coding with
individual and hierarchical distortion constraints, the number of layers in the
VKG scheme can be significantly reduced when only certain weighted sum rates
are of concern. The role of the refinement layer in scalable coding (a special
case of multiple description coding) is also studied.
Index Terms: Contra-polymatroid, multiple description coding, rate-distortion
region, scalable coding, successive refinement.