Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding
We study a new class of codes for lossy compression with the squared-error
distortion criterion, designed using the statistical framework of
high-dimensional linear regression. Codewords are linear combinations of
subsets of columns of a design matrix. Called a Sparse Superposition or Sparse
Regression codebook, this structure is motivated by an analogous construction
proposed recently by Barron and Joseph for communication over an AWGN channel.
For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a
code can attain the Shannon rate-distortion function with the optimal error
exponent, for all distortions below a specified value. It is also shown that
sparse regression codes are robust in the following sense: a codebook designed
to compress an i.i.d. Gaussian source of variance \sigma^2 with (squared-error)
distortion D can compress any ergodic source of variance less than \sigma^2 to
within distortion D. Thus the sparse regression ensemble
retains many of the good covering properties of the i.i.d. random Gaussian
ensemble, while having a compact representation in terms of a matrix whose size
is a low-order polynomial in the block length.
Comment: This version corrects a typo in the statement of Theorem 2 of the published paper.
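
To make the construction concrete, here is a minimal sketch of a sparse regression codebook with brute-force minimum-distance encoding. The section structure (L sections of M columns, one column active per section) follows the abstract; the parameter values and the coefficient scaling c are illustrative assumptions rather than the paper's prescriptions, and the exhaustive search is exponential in L.

```python
import itertools

import numpy as np

def sparc_codebook(n, L, M, rng):
    # Design matrix with i.i.d. N(0, 1) entries; a codeword is c times the
    # sum of one column picked from each of the L sections of M columns.
    return rng.standard_normal((n, L * M))

def min_distance_encode(A, x, L, M, c):
    # Brute-force minimum-distance encoding: search all M**L
    # section-index combinations for the codeword closest to x.
    best_idx, best_dist = None, np.inf
    for idx in itertools.product(range(M), repeat=L):
        cols = [sec * M + i for sec, i in enumerate(idx)]
        codeword = c * A[:, cols].sum(axis=1)
        d = float(np.sum((x - codeword) ** 2))
        if d < best_dist:
            best_idx, best_dist = idx, d
    return best_idx, best_dist / len(x)

rng = np.random.default_rng(0)
n, L, M = 32, 4, 8                # rate R = L * log2(M) / n bits per sample
A = sparc_codebook(n, L, M, rng)
x = rng.standard_normal(n)        # unit-variance i.i.d. Gaussian source
# This c gives the codeword roughly unit variance; the paper derives the
# scaling appropriate for a given target distortion.
idx, mse = min_distance_encode(A, x, L, M, c=1.0 / np.sqrt(L))
print(f"chosen indices {idx}, per-sample distortion {mse:.3f}")
```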
Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/\log n)^2), for a fixed distortion level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
Comment: 14 pages, to appear in IEEE Transactions on Information Theory.
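
A minimal sketch of this kind of one-pass encoder, assuming the greedy rule suggested by the abstract: sweep the sections in order, and in each section pick the column most correlated with the current residual. The flat coefficient schedule below is a placeholder; the paper derives the coefficient schedule (and the scaling of the codebook parameters with n) needed to achieve the Gaussian distortion-rate function.

```python
import numpy as np

def successive_encode(A, x, L, M, coeffs):
    # Greedy one-pass encoding: in each section, choose the column with
    # the largest inner product with the residual, subtract its fixed
    # scaled contribution, and move on to the next section.
    residual = x.copy()
    chosen = []
    for sec in range(L):
        block = A[:, sec * M:(sec + 1) * M]
        j = int(np.argmax(block.T @ residual))
        chosen.append(j)
        residual = residual - coeffs[sec] * block[:, j]
    return chosen, float(np.mean(residual ** 2))

rng = np.random.default_rng(1)
n, L, M = 64, 8, 16
A = rng.standard_normal((n, L * M))
x = rng.standard_normal(n)
coeffs = np.full(L, 1.0 / np.sqrt(L))  # placeholder; not the paper's schedule
idx, mse = successive_encode(A, x, L, M, coeffs)
print(f"selected columns {idx}, residual MSE {mse:.3f}")
```

Each section costs one matrix-vector product, so the whole pass is polynomial in the block length, in contrast to the exponential search of minimum-distance encoding.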
Minimum Distortion Variance Concatenated Block Codes for Embedded Source Transmission
Some state-of-the-art multimedia source encoders produce embedded source bit
streams such that, upon reliable reception of only a fraction of the total bit
stream, the decoder is able to reconstruct the source up to a basic quality.
Reliable reception of later source bits gradually improves the reconstruction
quality. Examples include scalable extensions of H.264/AVC and progressive
image coders such as JPEG2000. To provide efficient protection for embedded
source bit streams, a concatenated block coding scheme using a minimum mean
distortion criterion was considered in the past. Although the original design
was shown to achieve better mean distortion characteristics than previous
studies, the proposed coding structure led to dramatic quality fluctuations.
In this paper, a modification of the original design is first presented, and
then the second-order statistics of the distortion are taken into
account in the optimization. More specifically, an extension scheme is proposed
using a minimum distortion variance optimization criterion. This robust system
design is tested for an image transmission scenario. Numerical results show
that the proposed extension achieves significantly lower variance than the
original design, while showing similar mean distortion performance using both
convolutional codes and low-density parity-check codes.
Comment: 6 pages, 4 figures, In Proc. of International Conference on Computing, Networking and Communications, ICNC 2014, Hawaii, USA.
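
As a rough illustration of the minimum-distortion-variance criterion (not the paper's actual codes or channel model), the sketch below models an embedded stream in which decoding stops at the first corrupted block, then, for a fixed budget of one strong, one medium, and one weak channel code, assigns codes to blocks so as to minimize the variance of the end-to-end distortion. All numbers are made up for illustration.

```python
import itertools

import numpy as np

def distortion_stats(p_fail, D):
    # Mean and variance of the reconstruction distortion for an embedded
    # stream: decoding stops at the first failed block, so distortion is
    # D[i] when exactly i leading blocks are decoded (D[-1] if all are).
    probs, surv = [], 1.0
    for p in p_fail:
        probs.append(surv * p)   # first failure occurs at this block
        surv *= 1.0 - p
    probs.append(surv)           # every block decoded
    probs = np.array(probs)
    mean = probs @ D
    var = probs @ (D - mean) ** 2
    return mean, var

# Hypothetical per-block failure probabilities for a strong, medium, and
# weak code, and distortion as a function of how many blocks decode.
budget = [0.02, 0.05, 0.10]
D = np.array([1.0, 0.4, 0.15, 0.05])
best = min(itertools.permutations(budget),
           key=lambda alloc: distortion_stats(alloc, D)[1])
m, v = distortion_stats(best, D)
print(f"min-variance assignment {best}: mean {m:.3f}, variance {v:.4f}")
```

With these numbers the strongest code lands on the earliest block: a failure there yields the worst-case distortion, which dominates the variance.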
Variable dimension weighted universal vector quantization and noiseless coding
A new algorithm for variable dimension weighted universal coding is introduced. Combining the multi-codebook system of weighted universal vector quantization (WUVQ), the partitioning technique of variable dimension vector quantization, and the optimal design strategy common to both, variable dimension WUVQ allows mixture sources to be effectively carved into their component subsources, each of which can then be encoded with the codebook best matched to that subsource. Application of variable dimension WUVQ to a sequence of medical images provides up to 4.8 dB improvement in signal-to-quantization-noise ratio over WUVQ and up to 11 dB improvement over a standard full-search vector quantizer followed by an entropy code. The optimal partitioning technique can likewise be applied with a collection of noiseless codes, as found in weighted universal noiseless coding (WUNC). The resulting algorithm for variable dimension WUNC is also described.
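
A minimal sketch of the multi-codebook selection step, assuming two hypothetical codebooks of dimensions 2 and 4: each block is encoded with whichever codebook yields the lowest squared error. Using random codebooks and ignoring the rate of the noiseless code are simplifications; in WUVQ the codebooks, partition, and entropy codes are jointly optimized.

```python
import numpy as np

def encode_block(block, codebooks):
    # Try every codebook whose dimension divides the block length and
    # return the (error, codebook id, codeword indices) triple with the
    # lowest total squared error over the block's segments.
    best = None
    for cb_id, cb in enumerate(codebooks):
        dim = cb.shape[1]
        if len(block) % dim:
            continue
        segs = block.reshape(-1, dim)
        d2 = ((segs[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)  # nearest codeword for each segment
        err = float(d2[np.arange(len(segs)), idx].sum())
        if best is None or err < best[0]:
            best = (err, cb_id, idx)
    return best

rng = np.random.default_rng(2)
codebooks = [rng.standard_normal((8, 2)),   # 8 codewords of dimension 2
             rng.standard_normal((8, 4))]   # 8 codewords of dimension 4
block = rng.standard_normal(8)
err, cb_id, idx = encode_block(block, codebooks)
print(f"codebook {cb_id}, indices {idx.tolist()}, squared error {err:.3f}")
```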