Stochastic Interpretation for the Arimoto Algorithm
The Arimoto algorithm computes the Gallager function E_0(rho, Q) for a given channel and parameter rho, by means of alternating maximization. Along the way, it generates a sequence of input distributions Q_0, Q_1, Q_2, ..., that converges to the maximizing input Q*. We propose a stochastic interpretation for the Arimoto algorithm. We show that for a random (i.i.d.) codebook with a distribution Q_k, the next distribution Q_{k+1} in the Arimoto algorithm is equal to the type (empirical distribution) of the feasible transmitted codeword that maximizes the conditional Gallager exponent (conditioned on a specific transmitted codeword type Q_k). This interpretation is a first step toward finding a stochastic mechanism for on-line channel input adaptation.
Comment: 5 pages, 1 figure, accepted for 2015 IEEE Information Theory Workshop, Jerusalem, Israel
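The alternating-maximization iteration the abstract refers to can be sketched in a few lines of NumPy. This is a minimal implementation of the standard multiplicative Arimoto update for maximizing E_0(rho, Q), not code from the paper; the function name and the convention W[x, y] = W(y|x) are illustrative choices:

```python
import numpy as np

def arimoto_E0(W, rho, n_iter=200, tol=1e-12):
    """Maximize the Gallager function E_0(rho, Q) over input distributions Q
    via Arimoto's alternating-maximization iteration (sketch).

    W   : |X| x |Y| channel matrix with W[x, y] = W(y|x).
    rho : Gallager parameter, rho > 0.
    Returns (E_0 value, maximizing input distribution Q)."""
    nx = W.shape[0]
    Q = np.full(nx, 1.0 / nx)            # start from the uniform input
    beta = 1.0 / (1.0 + rho)
    Wb = W ** beta                        # W(y|x)^{1/(1+rho)}
    for _ in range(n_iter):
        a = Q @ Wb                        # a_y = sum_x Q(x) W(y|x)^beta
        g = Wb @ (a ** rho)               # per-letter contribution to sum_y a_y^{1+rho}
        Q_new = Q * g ** (-1.0 / rho)     # multiplicative Arimoto update
        Q_new /= Q_new.sum()
        if np.max(np.abs(Q_new - Q)) < tol:
            Q = Q_new
            break
        Q = Q_new
    a = Q @ Wb
    E0 = -np.log(np.sum(a ** (1.0 + rho)))
    return E0, Q
```

For a binary symmetric channel the iteration stays at the uniform input and returns the cutoff rate when rho = 1; in the rho -> 0 limit the update reduces to the familiar Blahut-Arimoto iteration for channel capacity.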
Asymptotically Optimal Stochastic Lossy Coding of Markov Sources
An effective 'on-the-fly' mechanism for stochastic lossy coding of Markov
sources using string matching techniques is proposed in this paper. Earlier
work has shown that the rate-distortion bound can be asymptotically achieved by
a 'natural type selection' (NTS) mechanism which iteratively encodes
asymptotically long source strings (from an unknown source distribution P) and
regenerates the codebook according to a maximum likelihood distribution
framework, after observing a set of K codewords to 'd-match' (i.e., satisfy the
distortion constraint for) a respective set of K source words. This result was
later generalized for sources with memory under the assumption that the source
words must contain a sequence of asymptotic-length vectors (or super-symbols)
over the source super-alphabet, i.e., the source is considered a vector source.
However, the earlier result suffers from a significant practical flaw: it requires expanding the super-symbol (and correspondingly the super-alphabet) length to infinity in order to achieve the rate-distortion bound, even for finite-memory sources, e.g., Markov sources. This implies that the complexity of the NTS iteration explodes beyond any practical capability, compromising the promise of the NTS algorithm in practical scenarios for sources with memory. This work describes a considerably more
efficient and tractable mechanism to achieve asymptotically optimal performance
given a prescribed memory constraint, within a practical framework tailored to
Markov sources. More specifically, the algorithm asymptotically finds the optimal codebook reproduction distribution, within a constrained set of distributions having the Markov property of a prescribed order, that achieves the minimum per-letter coding rate while maintaining a specified distortion level.
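One round of the basic NTS mechanism the abstract builds on (the memoryless, fixed-length variant) can be sketched as follows. This is a toy illustration, not the paper's algorithm: for each source word, i.i.d. codewords are drawn from the current reproduction distribution Q until one d-matches it (normalized Hamming distortion within threshold), and Q is then re-estimated as the empirical type of the matching codewords. The function name, the Hamming distortion measure, and all parameter names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def nts_iteration(source_words, Q, dist_threshold, max_draws=100000):
    """One toy 'natural type selection' round for a memoryless source.

    source_words   : K x L array of source symbols.
    Q              : current reproduction distribution over the alphabet.
    dist_threshold : maximum allowed normalized Hamming distortion.
    Returns the re-estimated reproduction distribution (the empirical
    type of the first d-matching codeword found for each source word)."""
    L = source_words.shape[1]
    matches = []
    for w in source_words:
        for _ in range(max_draws):
            c = rng.choice(len(Q), size=L, p=Q)   # i.i.d. codeword from Q
            if np.mean(c != w) <= dist_threshold:  # d-match found
                matches.append(c)
                break
    matches = np.array(matches)
    counts = np.bincount(matches.ravel(), minlength=len(Q))
    return counts / counts.sum()                   # empirical type of matches
```

Iterating this round drives Q toward a reproduction distribution better matched to the source at the given distortion level; the contribution of the paper above is to make the analogous iteration tractable for Markov sources by constraining the search to Markov distributions of a prescribed order instead of letting the super-symbol length grow without bound.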