The Likelihood Encoder for Lossy Compression
A likelihood encoder is studied in the context of lossy source compression.
The analysis of the likelihood encoder is based on the soft-covering lemma. It
is demonstrated that the use of a likelihood encoder together with the
soft-covering lemma yields simple achievability proofs for classical source
coding problems. The cases of the point-to-point rate-distortion function, the
rate-distortion function with side information at the decoder (i.e. the
Wyner-Ziv problem), and the multi-terminal source coding inner bound (i.e. the
Berger-Tung problem) are examined in this paper. Furthermore, a non-asymptotic
analysis is used for the point-to-point case to examine the upper bound on the
excess distortion provided by this method. The likelihood encoder is also
related to a recent alternative technique using properties of random binning.
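For context, the two single-letter expressions this abstract refers to can be stated as follows (these are the standard textbook forms, not quoted from the paper itself):

```latex
R(D) = \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),
\qquad
R_{\mathrm{WZ}}(D) = \min_{\substack{p(u\mid x),\ \hat{x}(u,y)\,:\\ \mathbb{E}[d(X,\hat{X})]\le D}} I(X;U\mid Y),
```

where in the Wyner-Ziv case $U - X - Y$ forms a Markov chain, $Y$ is the side information available at the decoder, and $\hat{x}(u,y)$ is the decoder's reconstruction function.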
The Likelihood Encoder for Lossy Source Compression
In this work, a likelihood encoder is studied in the context of lossy source
compression. The analysis of the likelihood encoder is based on a soft-covering
lemma. It is demonstrated that the use of a likelihood encoder together with
the soft-covering lemma gives alternative achievability proofs for classical
source coding problems. The case of the rate-distortion function with side
information at the decoder (i.e. the Wyner-Ziv problem) is carefully examined
and an application of the likelihood encoder to the multi-terminal source
coding inner bound (i.e. the Berger-Tung region) is outlined.
Comment: 5 pages, 2 figures, ISIT 201
Lossy Compression with Near-uniform Encoder Outputs
It is well known that lossless compression of a discrete memoryless source
with near-uniform encoder output is possible at a rate above its entropy if and
only if the encoder is randomized. This work focuses on deriving conditions for
near-uniform encoder output(s) in the Wyner-Ziv and the distributed lossy
compression problems. We show that in the Wyner-Ziv problem, near-uniform
encoder output and operation close to the WZ-rate limit are simultaneously
possible, whereas in the distributed lossy compression problem, jointly
near-uniform outputs are achievable in the interior of the distributed lossy
compression rate region if the sources share non-trivial Gács-Körner
common information.
Comment: Submitted to the 2016 IEEE International Symposium on Information Theory (11 pages, 3 figures)
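The Gács-Körner condition the abstract invokes has a simple combinatorial characterization: the common part of (X, Y) is given by the connected components of the bipartite graph whose edges are the (x, y) pairs with positive probability, and the common information is non-trivial exactly when there is more than one component. The sketch below (a hypothetical helper, not code from the paper) counts those components with a union-find:

```python
def gacs_korner_components(joint):
    """joint: dict mapping (x, y) -> probability.  Returns the number of
    connected components among positive-probability symbols; more than
    one component means non-trivial Gacs-Korner common information."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (x, y), p in joint.items():
        if p <= 0:
            continue
        nx, ny = ("x", x), ("y", y)   # keep x- and y-alphabets disjoint
        parent.setdefault(nx, nx)
        parent.setdefault(ny, ny)
        union(nx, ny)

    return len({find(a) for a in parent})

# X = (C, A), Y = (C, B): a shared uniform bit C plus independent private bits.
joint = {((c, a), (c, b)): 0.125 for c in (0, 1) for a in (0, 1) for b in (0, 1)}
print(gacs_korner_components(joint))  # 2: the common part is the bit C

# A doubly symmetric binary source (all pairs positive) has no common part:
joint2 = {(x, y): 0.3 if x == y else 0.2 for x in (0, 1) for y in (0, 1)}
print(gacs_korner_components(joint2))  # 1: Gacs-Korner common information is zero
```

The second example illustrates why the condition is restrictive: even strongly correlated sources have zero Gács-Körner common information unless their joint support decomposes.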
A Rate-Distortion Based Secrecy System with Side Information at the Decoders
A secrecy system with side information at the decoders is studied in the
context of lossy source compression over a noiseless broadcast channel. The
decoders have access to different side information sequences that are
correlated with the source. The fidelity of the communication to the legitimate
receiver is measured by a distortion metric, as is traditionally done in the
Wyner-Ziv problem. The secrecy performance of the system is also evaluated
under a distortion metric. An achievable rate-distortion region is derived for
the general case of arbitrarily correlated side information. Exact bounds are
obtained for several special cases in which the side information satisfies
certain constraints. An example is considered in which the side information
sequences come from a binary erasure channel and a binary symmetric channel.
Comment: 8 pages. Allerton 201
Generative Compression
Traditional image and video compression algorithms rely on hand-crafted
encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the
data being compressed. Here we describe the concept of generative compression,
the compression of data using generative models, and suggest that it is a
direction worth pursuing to produce more accurate and visually pleasing
reconstructions at much deeper compression levels for both image and video
data. We also demonstrate that generative compression is orders-of-magnitude
more resilient to bit error rates (e.g. from noisy wireless channels) than
traditional variable-length coding schemes.
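The core idea can be illustrated with a toy sketch (this is not the paper's model): only a short latent code is transmitted, and the decoder reconstructs the signal with a generative model. Here a fixed random linear map stands in for a trained deep generative network, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generative model: a fixed linear decoder G
# mapping an 8-dim latent code to a 64-dim "image".  In generative
# compression G would be a learned deep network.
latent_dim, data_dim = 8, 64
G = rng.normal(size=(data_dim, latent_dim))

def encode(x):
    """Project the signal onto the model's latent space (least squares)
    and coarsely quantize, so only latent_dim small numbers are sent."""
    z, *_ = np.linalg.lstsq(G, x, rcond=None)
    return np.round(z * 4) / 4  # 2 fractional bits per latent coordinate

def decode(z_q):
    return G @ z_q

# A signal that lies on the model's manifold compresses well: the only
# distortion comes from quantizing the 8 latent coordinates.
x = G @ rng.normal(size=latent_dim)
x_hat = decode(encode(x))
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative distortion: {rel_err:.3f}")
```

The resilience claim in the abstract has an intuitive reading in this picture: flipping bits of the latent code moves the reconstruction to a nearby point on the model's manifold rather than producing the catastrophic desynchronization typical of variable-length bitstreams.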