Source Coding for Quasiarithmetic Penalties
Huffman coding finds a prefix code that minimizes mean codeword length for a
given probability distribution over a finite number of items. Campbell
generalized the Huffman problem to a family of problems in which the goal is to
minimize not mean codeword length but rather a generalized mean known as a
quasiarithmetic or quasilinear mean. Such generalized means have a number of
diverse applications, including applications in queueing. Several
quasiarithmetic-mean problems have novel simple redundancy bounds in terms of a
generalized entropy. A related property involves the existence of optimal
codes: For "well-behaved" cost functions, optimal codes always exist for
(possibly infinite-alphabet) sources having finite generalized entropy. Solving
finite instances of such problems is done by generalizing an algorithm for
finding length-limited binary codes to a new algorithm for finding optimal
binary codes for any quasiarithmetic mean with a convex cost function. This
algorithm can be performed using quadratic time and linear space, and can be
extended to other penalty functions, some of which are solvable with similar
space and time complexity, and others of which are solvable with slightly
greater complexity. This reduces the computational complexity of a problem
involving minimum delay in a queue, allows combinations of previously
considered problems to be optimized, and greatly expands the space of problems
solvable in quadratic time and linear space. The algorithm can be extended for
purposes such as breaking ties among possibly different optimal codes, as with
bottom-merge Huffman coding.
Comment: 22 pages, 3 figures, submitted to IEEE Trans. Inform. Theory, revised
per suggestions of reader
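
As a concrete illustration of the quantity being minimized (not of the paper's
quadratic-time algorithm), the Python sketch below evaluates a quasiarithmetic
mean $\varphi^{-1}(\sum_i p_i \varphi(l_i))$ of codeword lengths for a
caller-supplied cost function $\varphi$; the function names and the example
distribution are illustrative assumptions, not taken from the paper.

    import math

    def quasiarithmetic_mean(probs, lengths, phi, phi_inv):
        # Quasiarithmetic (quasilinear) mean of codeword lengths:
        # phi_inv( sum_i p_i * phi(l_i) ).
        return phi_inv(sum(p * phi(l) for p, l in zip(probs, lengths)))

    # Lengths of an optimal binary prefix code for a dyadic source.
    probs = [0.5, 0.25, 0.125, 0.125]
    lengths = [1, 2, 3, 3]

    # phi(x) = x recovers the ordinary mean codeword length (1.75 bits);
    # phi(x) = 2**x gives an exponential mean (2.0 bits for this code).
    print(quasiarithmetic_mean(probs, lengths, lambda x: x, lambda y: y))
    print(quasiarithmetic_mean(probs, lengths, lambda x: 2.0 ** x, math.log2))

Convex cost functions such as the exponential above are exactly the case the
abstract's quadratic-time, linear-space algorithm covers.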
Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs
Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countable number of inputs prevents usage
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $a$-exponential means, those
of the form $\log_a \sum_i p(i)\, a^{n(i)}$, where $n(i)$ is the length of the
$i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a novel problem of maximizing the chance of message
receipt in single-shot communications ($a < 1$) and a previously known problem
of minimizing the chance of buffer overflow in a queueing system ($a > 1$). This
paper introduces methods for finding codes optimal for such exponential means.
One method applies to geometric distributions, while another applies to
distributions with lighter tails. The latter algorithm is applied to Poisson
distributions and both are extended to alphabetic codes, as well as to
minimizing maximum pointwise redundancy. The aforementioned application of
minimizing the chance of buffer overflow is also considered.
Comment: 14 pages, 6 figures, accepted to IEEE Trans. Inform. Theory
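
As background on the exponential-mean objective (and not a reproduction of the
paper's infinite-alphabet methods), the sketch below applies the classical
finite-alphabet, Huffman-like rule in which the two smallest weights $w_1$ and
$w_2$ are repeatedly merged into a compound item of weight $a(w_1 + w_2)$; it
is shown for $a > 1$, the buffer-overflow regime, and all names and the example
distribution are illustrative.

    import heapq

    def exponential_huffman_lengths(probs, a):
        # Huffman-like construction for the exponential mean
        # log_a(sum_i p_i * a**l_i) over a finite alphabet: repeatedly merge
        # the two smallest weights w1, w2 into a compound weight a * (w1 + w2).
        heap = [(p, i, [i]) for i, p in enumerate(probs)]  # (weight, id, symbols)
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        next_id = len(probs)
        while len(heap) > 1:
            w1, _, syms1 = heapq.heappop(heap)
            w2, _, syms2 = heapq.heappop(heap)
            merged = syms1 + syms2
            for s in merged:          # every codeword under the new internal
                lengths[s] += 1       # node gains one bit
            heapq.heappush(heap, (a * (w1 + w2), next_id, merged))
            next_id += 1
        return lengths

    print(exponential_huffman_lengths([0.4, 0.3, 0.2, 0.1], a=2.0))

For these weights the exponential penalty with $a = 2$ yields the flat code
lengths [2, 2, 2, 2] rather than the ordinary Huffman lengths [1, 2, 3, 3]:
penalizing long codewords more heavily flattens the code tree, which is the
behavior relevant to the buffer-overflow application.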
Prefix Codes for Power Laws with Countable Support
In prefix coding over an infinite alphabet, methods that consider specific
distributions generally consider those that decline more quickly than a power
law (e.g., Golomb coding). Particular power-law distributions, however, model
many random variables encountered in practice. For such random variables,
compression performance is judged via estimates of expected bits per input
symbol. This correspondence introduces a family of prefix codes with an eye
towards near-optimal coding of known distributions. Compression performance is
precisely estimated for well-known probability distributions using these codes
and using previously known prefix codes. One application of these near-optimal
codes is an improved representation of rational numbers.
Comment: 5 pages, 2 tables, submitted to Transactions on Information Theory
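
The near-optimal codes introduced in the correspondence are not reproduced
here; purely as an illustrative baseline, the sketch below implements Elias
gamma coding, a classic prefix code for the positive integers whose codeword
for $n$ costs $2\lfloor\log_2 n\rfloor + 1$ bits, and estimates its expected
bits per input symbol under an assumed truncated Zipf (power-law) distribution.

    def elias_gamma(n):
        # Elias gamma code: a unary prefix encoding the bit length of n,
        # followed by the binary expansion of n without its leading 1.
        assert n >= 1
        binary = bin(n)[2:]                      # e.g. 9 -> '1001'
        return '0' * (len(binary) - 1) + binary  # 9 -> '0001001'

    def expected_bits_zipf(s, N):
        # Expected Elias gamma codeword length when the encoded integer n is
        # drawn from a truncated Zipf law p(n) proportional to n**(-s), n <= N.
        weights = [n ** (-s) for n in range(1, N + 1)]
        total = sum(weights)
        return sum((w / total) * len(elias_gamma(n))
                   for n, w in enumerate(weights, start=1))

    print(elias_gamma(9))                               # '0001001' (7 bits)
    print(round(expected_bits_zipf(s=2.0, N=10**5), 2))

Comparing such estimates against the entropy of the assumed power law is the
kind of expected-bits-per-symbol evaluation the correspondence performs for its
own codes.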
…