Block coding for stationary Gaussian sources with memory under a square-error fidelity criterion
In this paper, we present a new version of the source coding theorem for block coding of stationary Gaussian sources with memory under a square-error distortion criterion. For both time-discrete and time-continuous Gaussian sources, the average square-error distortion of the optimum block source code of rate R > R(D) is shown to converge to D at least exponentially in the block length, where R(D) is the square-error rate-distortion function of the stationary Gaussian source with memory. In both cases, the exponent of convergence of the average distortion is derived explicitly.
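As a point of reference for the quantities D and R(D) above: for a memoryless Gaussian source of variance sigma^2, the square-error rate-distortion function has the well-known closed form R(D) = (1/2) log(sigma^2 / D) for 0 < D <= sigma^2. A minimal numerical sketch (the function name is illustrative, not from the paper):

```python
import math

def gaussian_rate_distortion(sigma2, D):
    """R(D) = 0.5 * log(sigma^2 / D) nats for 0 < D <= sigma^2, else 0."""
    if D >= sigma2:
        return 0.0
    return 0.5 * math.log(sigma2 / D)

# Example: unit-variance source, distortion target D = 0.25
R = gaussian_rate_distortion(1.0, 0.25)
# R = 0.5 * ln(4) ≈ 0.693 nats per source symbol
```

For stationary Gaussian sources with memory, the same formula generalizes to an integral over the source power spectrum (the reverse water-filling solution).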
Computing the Rate-Distortion Function of Gray-Wyner System
In this paper, the rate-distortion theory of the Gray-Wyner lossy source coding system is investigated. An iterative algorithm is proposed to compute the rate-distortion function for general successive sources. For the case of jointly Gaussian distributed sources, the Lagrangian analysis of scalable source coding in [1] is generalized to the Gray-Wyner instance. Building on the existing single-letter characterization of the rate-distortion region, we compute and determine an analytical expression for the rate-distortion function under quadratic distortion constraints. Based on this rate-distortion function, another approach, different from that used by Viswanatha et al., is provided to compute Wyner's common information. Finally, the convergence of the proposed iterative algorithm, the RD function under different parameters, and the projection plane of the RD region are illustrated via numerical simulations.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
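The abstract does not spell out the iteration, but the classic Blahut-Arimoto algorithm is the standard template for computing a rate-distortion function numerically and illustrates what such an iterative computation looks like. A self-contained sketch for a discrete source (all names and parameter choices here are illustrative, not taken from the paper):

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, iters=500):
    """Compute one point (D, R) on the rate-distortion curve of a discrete
    source via the Blahut-Arimoto iteration.
    p_x: source pmf, d: |X| x |Y| distortion matrix, beta: slope parameter."""
    q_y = np.full(d.shape[1], 1.0 / d.shape[1])    # output marginal, init uniform
    for _ in range(iters):
        # conditional q(y|x) proportional to q(y) * exp(-beta * d(x, y))
        w = q_y[None, :] * np.exp(-beta * d)
        q_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x                    # update output marginal
    D = float(np.sum(p_x[:, None] * q_y_given_x * d))
    R = float(np.sum(p_x[:, None] * q_y_given_x *
                     np.log(q_y_given_x / q_y[None, :])))
    return D, R

# Binary symmetric source, Hamming distortion: known answer R(D) = ln 2 - H_b(D)
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
D, R = blahut_arimoto_rd(p, d, beta=2.0)
```

Sweeping beta traces out the whole RD curve, one (D, R) point per slope.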
Second-Order Coding Rates for Conditional Rate-Distortion
This paper characterizes the second-order coding rates for lossy source coding with side information available at both the encoder and the decoder. We first provide non-asymptotic bounds for this problem and then specialize them to three different scenarios: discrete memoryless sources, Gaussian sources, and Markov sources, obtaining the second-order coding rates for these settings. It is interesting to observe that the second-order coding rate for Gaussian source coding with Gaussian side information available at both the encoder and the decoder is the same as that for Gaussian source coding without side information. Furthermore, regardless of the variance of the side information, the dispersion is 1/2 nats squared per source symbol.
Comment: 20 pages, 2 figures, second-order coding rates, finite blocklength, network information theory.
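The dispersion approximation behind such second-order results can be made concrete: the minimum rate at blocklength n with excess-distortion probability eps is approximately R(D) + sqrt(V/n) * Q^{-1}(eps), where V is the source dispersion (V = 1/2 nats^2 for a Gaussian source under squared error). A hedged sketch, with an illustrative function name:

```python
import math
from statistics import NormalDist

def second_order_rate(R_D, V, n, eps):
    """Gaussian (dispersion) approximation to the finite-blocklength rate
    needed to meet distortion D with excess-distortion probability eps."""
    q_inv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)
    return R_D + math.sqrt(V / n) * q_inv

# Unit-variance Gaussian source, D = 0.25: R(D) = 0.5*ln(4), V = 0.5 nats^2
R_D = 0.5 * math.log(4)
rate_needed = second_order_rate(R_D, 0.5, n=1000, eps=0.01)
```

The sqrt(V/n) term quantifies the backoff from the asymptotic rate-distortion function at finite blocklength.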
Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance against encoding complexity. An example of such a trade-off as a function of the block length n is the following: with computational resource (space or time) per source sample of O((n/log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
Comment: 14 pages, to appear in IEEE Transactions on Information Theory.
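The sequential column-selection step described above can be sketched as follows; the coefficient schedule and matrix scaling here are simplified placeholders rather than the paper's actual parameter choices:

```python
import numpy as np

def sparc_encode(source, A, L, M, coeffs):
    """Greedy successive encoder for a sparse regression code: in each of
    L sections of M columns, pick the column most correlated with the
    current residual and subtract its scaled copy from the residual."""
    residual = source.copy()
    chosen = []
    for i in range(L):
        section = A[:, i * M:(i + 1) * M]           # columns of section i
        j = int(np.argmax(section.T @ residual))    # best-matching column
        chosen.append(j)
        residual = residual - coeffs[i] * section[:, j]
    return chosen, residual

# Toy example: n = 32 samples, L = 4 sections of M = 8 columns each.
rng = np.random.default_rng(0)
n, L, M = 32, 4, 8
A = rng.standard_normal((n, L * M)) / np.sqrt(n)    # roughly unit-norm columns
x = rng.standard_normal(n)                          # i.i.d. Gaussian source
coeffs = np.full(L, 0.5)   # illustrative; the paper uses a decaying schedule
idx, res = sparc_encode(x, A, L, M, coeffs)
```

The compressed representation is the list of chosen column indices (log2(M) bits per section), and the codeword is the sum of the scaled chosen columns.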