A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that "quantizes" the input data of length $n$ to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as $(k/2)\,n^{-1}\log n$, when the universe of sources has finite dimension $k$. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as $O(n^{-1})$ when the universe of sources is countable, and as $O(n^{-1+\epsilon})$ when the universe of sources is infinite-dimensional, under appropriate conditions.
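Below is a minimal sketch of the two-stage design loop the abstract describes, for scalar data and squared-error distortion: the first stage "quantizes" each length-$n$ block to whichever candidate codebook has the lowest induced Lagrangian (distortion plus $\lambda\cdot$rate) cost, and a Lloyd step then re-fits each codebook on the blocks assigned to it. The function names, the fixed-rate cost model, and the k-means-style update are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of two-stage code design via a generalized Lloyd alternation.
# Assumed/illustrative: scalar samples, squared error, fixed-rate codebooks.
import numpy as np

rng = np.random.default_rng(0)

def block_cost(block, codebook, lagrange_mult):
    # Induced cost: nearest-codeword distortion plus lambda * rate per letter.
    dists = (block[:, None] - codebook[None, :]) ** 2
    distortion = dists.min(axis=1).mean()
    rate = np.log2(len(codebook))          # fixed-rate second-stage index cost
    return distortion + lagrange_mult * rate

def design_two_stage(blocks, num_codes=4, codebook_size=8,
                     lagrange_mult=0.05, iters=10):
    # Initialize each candidate codebook from a randomly chosen block.
    init = rng.choice(len(blocks), size=num_codes, replace=False)
    codebooks = [rng.choice(blocks[i], size=codebook_size) for i in init]
    assign = [0] * len(blocks)
    for _ in range(iters):
        # First stage: "quantize" each block to the cheapest codebook.
        assign = [min(range(num_codes),
                      key=lambda c: block_cost(b, codebooks[c], lagrange_mult))
                  for b in blocks]
        # Second stage: one Lloyd step per codebook on its assigned blocks.
        for c in range(num_codes):
            data = np.concatenate(
                [b for b, a in zip(blocks, assign) if a == c] or [np.empty(0)])
            if data.size == 0:
                continue
            idx = np.abs(data[:, None] - codebooks[c][None, :]).argmin(axis=1)
            for j in range(codebook_size):
                if np.any(idx == j):
                    codebooks[c][j] = data[idx == j].mean()
    return codebooks, assign

# Demo: blocks drawn from two sources with different statistics; the first
# stage learns to route each block to a codebook matched to its source.
blocks = [rng.normal(0, 1, 64) for _ in range(50)] + \
         [rng.normal(5, 0.3, 64) for _ in range(50)]
codebooks, assign = design_two_stage(blocks)
```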
On Match Lengths, Zero Entropy and Large Deviations - with Application to Sliding Window Lempel-Ziv Algorithm
The Sliding Window Lempel-Ziv (SWLZ) algorithm that makes use of recurrence
times and match lengths has been studied from various perspectives in
information theory literature. In this paper, we undertake a finer study of
these quantities under two different scenarios, i) \emph{zero entropy} sources
that are characterized by strong long-term memory, and ii) the processes with
weak memory as described through various mixing conditions.
For zero entropy sources, a general statement on match length is obtained. It
is used in the proof of almost sure optimality of the Fixed Shift Variant of
Lempel-Ziv (FSLZ) and SWLZ algorithms given in the literature. Through an example
of stationary and ergodic processes generated by an irrational rotation we
establish that for a window of size $n_w$, a compression ratio of order
$\frac{\log n_w}{n_w^{\alpha}}$, where $\alpha$ depends on $n_w$ and approaches $1$ as
$n_w \to \infty$, is obtained under the application of FSLZ and SWLZ
algorithms. Also, we give a general expression for the compression ratio for a
class of stationary and ergodic processes with zero entropy.
Next, we extend the study of Ornstein and Weiss on the asymptotic behavior of
the \emph{normalized} version of recurrence times and establish the \emph{large
deviation property} (LDP) for a class of mixing processes. Also, an estimator
of entropy based on recurrence times is proposed, for which a large deviation
principle is proved for sources satisfying similar mixing conditions.
Comment: accepted to appear in IEEE Transactions on Information Theory
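As a concrete illustration of the quantities studied above, the toy script below computes the longest match length of the upcoming phrase inside a sliding window and the classical recurrence/match-length entropy estimate $\log_2 n_w / L$ built from it; for an i.i.d. fair-coin source the estimate should be close to 1 bit/symbol. The window size, the source, and the restriction to matches contained in the window are simplifying assumptions for the demo, not the paper's setup.

```python
# Toy demo: sliding-window match length and the entropy estimate log2(n_w)/L.
import math, random

def match_length(seq, pos, n_w):
    # Longest L such that seq[pos:pos+L] appears starting somewhere in the
    # window seq[pos-n_w:pos].  For simplicity we keep matches inside the
    # window; sliding-window LZ variants also allow overlap into the phrase.
    best = 0
    for start in range(max(0, pos - n_w), pos):
        l = 0
        while (start + l < pos and pos + l < len(seq)
               and seq[start + l] == seq[pos + l]):
            l += 1
        best = max(best, l)
    return best

random.seed(1)
seq = [random.randint(0, 1) for _ in range(1 << 15)]  # i.i.d. fair bits, h = 1
n_w = 1 << 12
L = max(match_length(seq, n_w, n_w), 1)
print("match length:", L)
print("entropy estimate (bits/symbol):", math.log2(n_w) / L)
```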
Lossy compression of discrete sources via Viterbi algorithm
We present a new lossy compressor for discrete-valued sources. For coding a
sequence $x^n$, the encoder starts by assigning a certain cost to each possible
reconstruction sequence. It then finds the one that minimizes this cost and
describes it losslessly to the decoder via a universal lossless compressor. The
cost of each sequence is a linear combination of its distance from the sequence
$x^n$ and a linear function of its $k$th-order empirical distribution.
The structure of the cost function allows the encoder to employ the Viterbi
algorithm to recover the minimizer of the cost. We identify a choice of the
coefficients comprising the linear function of the empirical distribution used
in the cost function which ensures that the algorithm universally achieves the
optimum rate-distortion performance of any stationary ergodic source in the
limit of large $n$, provided that $k$ diverges as $o(\log n)$. Iterative
techniques for approximating the coefficients, which alleviate the
computational burden of finding the optimal coefficients, are proposed and
studied.
Comment: 26 pages, 6 figures, submitted to IEEE Transactions on Information Theory
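The sketch below illustrates why the Viterbi algorithm applies here: with $k = 1$ and a binary alphabet, the cost of a reconstruction sequence decomposes into per-step terms (a Hamming distortion plus a coefficient indexed by the last two reconstruction symbols), so dynamic programming over the previous symbol recovers the global minimizer exactly. The coefficient table `gamma` is a placeholder; choosing these coefficients as the paper prescribes is what yields the universality result.

```python
# Minimal Viterbi search over binary reconstruction sequences with an
# additive cost: per-letter distortion + a linear function of pair counts.
import numpy as np

def viterbi_reconstruction(x, gamma, dist_weight=1.0):
    """Minimize sum_t [ dist_weight * (x_t != y_t) + gamma[y_{t-1}, y_t] ]."""
    n, A = len(x), 2
    cost = np.zeros(A)                  # best cost ending in state y_{t-1}
    back = np.zeros((n, A), dtype=int)  # argmin previous state per step
    for t in range(n):
        new = np.full(A, np.inf)
        for y in range(A):
            step = dist_weight * (x[t] != y)
            cands = cost + gamma[:, y] + step
            back[t, y] = int(cands.argmin())
            new[y] = cands.min()
        cost = new
    # Trace back the optimal reconstruction sequence.
    y = [int(cost.argmin())]
    for t in range(n - 1, 0, -1):
        y.append(back[t, y[-1]])
    return y[::-1]

x = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1])
gamma = np.array([[0.0, 0.4], [0.4, 0.0]])  # placeholder: penalize y-flips
print(viterbi_reconstruction(x, gamma))
```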
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate several techniques to accelerate the algorithm while
providing reconstruction quality comparable to, and in many cases better than,
that of existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures
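A hedged sketch of the MAP-via-MCMC idea follows: candidate quantized signals $w$ are scored by a data-fit term $\|y - Aw\|^2$ plus a universal-prior proxy, here the empirical-entropy codelength of the signal's symbols, and a single-site Metropolis search looks for a low-energy (high-posterior) candidate. All names, the two-level alphabet, and the simple proposal are illustrative assumptions; the paper's algorithm differs in detail.

```python
# Sketch of complexity-matching MAP estimation via single-site Metropolis.
import numpy as np

rng = np.random.default_rng(3)

def coding_length(w, levels):
    # Empirical-entropy codelength (bits): a stand-in for a universal prior.
    counts = np.array([(w == v).sum() for v in levels], dtype=float)
    p = counts / counts.sum()
    nz = p > 0
    return -(counts[nz] * np.log2(p[nz])).sum()

def universal_map(y, A, levels, beta=2.0, iters=20000, sigma2=0.01):
    n = A.shape[1]
    w = rng.choice(levels, n)
    def energy(w):  # negative log-posterior up to constants
        return ((y - A @ w) ** 2).sum() / (2 * sigma2) + coding_length(w, levels)
    e = energy(w)
    for _ in range(iters):
        i = rng.integers(n)              # propose changing one coordinate
        w_new = w.copy()
        w_new[i] = rng.choice(levels)
        e_new = energy(w_new)
        if e_new < e or rng.random() < np.exp(beta * (e - e_new)):
            w, e = w_new, e_new
    return w

# Demo: a sparse binary signal measured with m < n random projections.
n, m = 60, 30
levels = np.array([0.0, 1.0])
w_true = np.zeros(n); w_true[rng.choice(n, 5, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ w_true + 0.01 * rng.standard_normal(m)
print("L1 recovery error:", np.abs(universal_map(y, A, levels) - w_true).sum())
```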
Empirical processes, typical sequences and coordinated actions in standard Borel spaces
This paper proposes a new notion of typical sequences on a wide class of
abstract alphabets (so-called standard Borel spaces), which is based on
approximations of memoryless sources by empirical distributions uniformly over
a class of measurable "test functions." In the finite-alphabet case, we can
take all uniformly bounded functions and recover the usual notion of strong
typicality (or typicality under the total variation distance). For a general
alphabet, however, this function class turns out to be too large, and must be
restricted. With this in mind, we define typicality with respect to any
Glivenko-Cantelli function class (i.e., a function class that admits a Uniform
Law of Large Numbers) and demonstrate its power by giving simple derivations of
the fundamental limits on the achievable rates in several source coding
scenarios, in which the relevant operational criteria pertain to reproducing
empirical averages of a general-alphabet stationary memoryless source with
respect to a suitable function class.
Comment: 14 pages, 3 PDF figures; accepted to IEEE Transactions on Information Theory
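The toy example below makes the typicality notion concrete: a sequence is declared typical for a source $P$, relative to a class $\mathcal{F}$ of test functions, when every empirical average over the sequence lies uniformly within $\epsilon$ of the true expectation. The small set of bounded functions used here is an illustrative stand-in for a genuine Glivenko-Cantelli class on a standard Borel space.

```python
# Typicality relative to a class of test functions, on a continuous alphabet.
import numpy as np

rng = np.random.default_rng(7)

# Test functions: bounded "moments" on the real line (a toy GC class).
test_fns = [np.tanh, lambda x: np.tanh(x) ** 2, lambda x: np.sin(x)]

def is_typical(sample, true_means, eps):
    """F-typicality: sup over f of |empirical mean - expectation| < eps."""
    return all(abs(f(sample).mean() - mu) < eps
               for f, mu in zip(test_fns, true_means))

# Ground-truth expectations under P = N(0, 1), via a large Monte Carlo run.
ref = rng.standard_normal(10 ** 6)
true_means = [f(ref).mean() for f in test_fns]

x = rng.standard_normal(5000)          # a sample actually drawn from P
z = rng.standard_normal(5000) + 0.5    # a sample from a shifted source
print(is_typical(x, true_means, eps=0.05))  # True (with high probability)
print(is_typical(z, true_means, eps=0.05))  # False: fails on the tanh mean
```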