Multiuser Successive Refinement and Multiple Description Coding
We consider the multiuser successive refinement (MSR) problem, where the
users are connected to a central server via links with different noiseless
capacities, and each user wishes to reconstruct in a successive-refinement
fashion. An achievable region is given for the two-user two-layer case and it
provides the complete rate-distortion region for the Gaussian source under the
MSE distortion measure. The key observation is that this problem includes the
multiple description (MD) problem (with two descriptions) as a subsystem, and
the techniques useful in the MD problem can be extended to this case. We show
that the coding scheme based on the universality of random binning is
suboptimal: multiple Gaussian side informations available only at the decoders
do incur a performance loss, in contrast to the case of a single side
information at the decoder. We further show that, unlike in the single-user
case, when there
are multiple users, the loss of performance by a multistage coding approach can
be unbounded for the Gaussian source. The result suggests that in such a
setting, the benefit of using successive refinement is not likely to justify
the accompanying performance loss. The MSR problem is also related to the
source coding problem where each decoder has its individual side information,
while the encoder has the complete set of the side informations. The MSR
problem further includes several variations of the MD problem, for which the
specialization of the general result is investigated and the implications are
discussed.
Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information Theory. References updated and typos corrected.
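For context on the single-user baseline invoked above, recall the classical Equitz-Cover result (standard background, not a claim of this paper): a Gaussian source of variance sigma^2 is successively refinable under MSE, i.e. the two-stage rates

    R_1 = (1/2) log(sigma^2 / D_1),    R_1 + R_2 = (1/2) log(sigma^2 / D_2),    D_2 <= D_1,

are simultaneously achievable with no rate loss at either stage. The abstract's point is that this no-loss property can fail, in an unbounded way, once multiple users share the layered descriptions.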
Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/log n)^2), for a fixed distortion level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
Comment: 14 pages, to appear in IEEE Transactions on Information Theory.
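As a rough illustration of the sequential column-selection step described above, here is a minimal sketch in Python (hypothetical sizes and stage coefficients; the paper's exact coefficient schedule and parameterization differ):

    import numpy as np

    def sparc_encode(x, A, L, M, c):
        # A is an n x (L*M) design matrix viewed as L sections of M columns;
        # the codeword picks one column per section, scaled by c[i]. This is
        # an illustrative greedy sketch, not the paper's exact algorithm.
        residual = x.astype(float).copy()
        chosen = []
        for i in range(L):
            section = A[:, i * M:(i + 1) * M]           # columns of section i
            j = int(np.argmax(section.T @ residual))    # best-matching column
            chosen.append(i * M + j)
            residual = residual - c[i] * section[:, j]  # successive approximation
        return chosen, residual

    # Usage with hypothetical sizes:
    rng = np.random.default_rng(0)
    n, L, M = 64, 8, 16
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
    x = rng.normal(size=n)
    idx, res = sparc_encode(x, A, L, M, c=np.full(L, 0.5))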
Sparse Linear Representation
This paper studies the question of how well a signal can be represented by a
sparse linear combination of reference signals from an overcomplete dictionary.
When the dictionary size is exponential in the dimension of the signal, an
exact characterization of the optimal distortion is given as a function of the
dictionary size exponent and the number of reference signals used in the linear
representation. Roughly speaking, every signal is sparse if the dictionary size
is exponentially large, no matter how small the exponent is. Furthermore, an
iterative method similar to matching pursuit that successively finds the best
reference signal at each stage gives asymptotically optimal representations.
This method is essentially equivalent to successive refinement for multiple
descriptions and provides a simple alternative proof of the successive
refinability of white Gaussian sources.
Comment: 5 pages, to appear in proc. IEEE ISIT, June 200
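The iterative method mentioned above resembles standard matching pursuit; a minimal sketch of that generic procedure (not the paper's exact construction) is:

    import numpy as np

    def matching_pursuit(x, D, k):
        # Generic matching pursuit: at each stage pick the dictionary column
        # most correlated with the residual and subtract its projection.
        residual = x.astype(float).copy()
        atoms, coeffs = [], []
        for _ in range(k):
            corr = D.T @ residual
            j = int(np.argmax(np.abs(corr)))
            a = D[:, j]
            coef = corr[j] / (a @ a)         # projection coefficient
            atoms.append(j)
            coeffs.append(coef)
            residual = residual - coef * a   # successively refine the residual
        return atoms, coeffs, residual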
Successive Refinement of Shannon Cipher System Under Maximal Leakage
We study the successive refinement setting of the Shannon cipher system (SCS)
under the maximal leakage constraint for discrete memoryless sources under
bounded distortion measures. Specifically, we generalize the threat model for
the point-to-point rate-distortion setting of Issa, Wagner and Kamath (T-IT
2020) to the multiterminal successive refinement setting. Under mild conditions
that correspond to partial secrecy, we characterize the asymptotically optimal
normalized maximal leakage region for both the joint excess-distortion
probability (JEP) and the expected distortion reliability constraints. Under
JEP, in the achievability part, we propose a type-based coding scheme, analyze
the reliability guarantee for JEP and bound the leakage of the information
source through compressed versions. In the converse part, by analyzing a
guessing scheme of the eavesdropper, we prove the optimality of our
achievability result. Under expected distortion, the achievability part is
established similarly to the JEP counterpart. The converse proof proceeds by
generalizing the corresponding results for the rate-distortion setting of SCS
by Schieler and Cuff (T-IT 2014) to the successive refinement setting. Somewhat
surprisingly, the normalized maximal leakage regions under both JEP and
expected distortion constraints are identical under certain conditions,
although JEP appears to be a stronger reliability constraint.
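For reference, for discrete X and Y the maximal leakage of Issa, Wagner and Kamath is L(X -> Y) = log sum_y max_{x: P_X(x) > 0} P_{Y|X}(y|x). A small sketch computing it from a joint pmf (the function name and example are illustrative):

    import numpy as np

    def maximal_leakage_bits(P_xy):
        # P_xy: joint pmf matrix with rows indexed by x, columns by y.
        # Returns log2 of the sum over y of max_x P(y|x), over the support of X.
        P_x = P_xy.sum(axis=1)
        support = P_x > 0
        P_y_given_x = P_xy[support] / P_x[support, None]  # conditional rows
        return np.log2(P_y_given_x.max(axis=0).sum())

    # Example: observing one fair bit of a uniform 2-bit secret leaks 1 bit.
    P = np.array([[0.25, 0.0],
                  [0.25, 0.0],
                  [0.0, 0.25],
                  [0.0, 0.25]])
    print(maximal_leakage_bits(P))  # -> 1.0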
Successive Refinement of Abstract Sources
In successive refinement of information, the decoder refines its
representation of the source progressively as it receives more encoded bits.
The rate-distortion region of successive refinement describes the minimum rates
required to attain the target distortions at each decoding stage. In this
paper, we derive a parametric characterization of the rate-distortion region
for successive refinement of abstract sources. Our characterization extends
Csiszár's result to successive refinement, and generalizes a result by Tuncel
and Rose, applicable for finite alphabet sources, to abstract sources. This
characterization spawns a family of outer bounds to the rate-distortion region.
It also enables an iterative algorithm for computing the rate-distortion
region, which generalizes Blahut's algorithm to successive refinement. Finally,
it leads to a new nonasymptotic converse bound. In all the scenarios where the
dispersion is known, this bound is second-order optimal.
In our proof technique, we avoid Karush-Kuhn-Tucker conditions of optimality,
and we use basic tools of probability theory. We leverage the Donsker-Varadhan
lemma for the minimization of relative entropy on abstract probability spaces.
Comment: Extended version of a paper presented at ISIT 201
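For background, the single-stage Blahut algorithm that this work generalizes alternates between the optimal test channel and the reproduction marginal. A minimal sketch for a discrete source (the standard algorithm, not the paper's successive-refinement extension):

    import numpy as np

    def blahut_arimoto_rd(p_x, d, beta, iters=200):
        # p_x: source pmf; d[x, xhat]: distortion matrix; beta: Lagrange
        # multiplier selecting a point on the rate-distortion curve.
        m = d.shape[1]
        q = np.full(m, 1.0 / m)                  # reproduction marginal
        for _ in range(iters):
            w = q[None, :] * np.exp(-beta * d)   # unnormalized P(xhat|x)
            w /= w.sum(axis=1, keepdims=True)    # optimal test channel for q
            q = p_x @ w                          # update reproduction marginal
        R = np.sum(p_x[:, None] * w * np.log2(w / q[None, :]))
        D = np.sum(p_x[:, None] * w * d)
        return R, D

    # Binary source with Hamming distortion: R(D) = 1 - h(D) for D <= 1/2.
    p = np.array([0.5, 0.5])
    d = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(blahut_arimoto_rd(p, d, beta=3.0))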