Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs
Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countable number of inputs prevents usage
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $\beta$-exponential means,
those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of
the $i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a novel problem of maximizing the chance of message
receipt in single-shot communications ($a < 1$) and a previously known problem
of minimizing the chance of buffer overflow in a queueing system ($a > 1$). This
paper introduces methods for finding codes optimal for such exponential means.
One method applies to geometric distributions, while another applies to
distributions with lighter tails. The latter algorithm is applied to Poisson
distributions and both are extended to alphabetic codes, as well as to
minimizing maximum pointwise redundancy. The aforementioned application of
minimizing the chance of buffer overflow is also considered. Comment: 14 pages, 6 figures, accepted to IEEE Trans. Inform. Theory
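A minimal numerical sketch of the objective above (not the paper's construction): it evaluates the $\beta$-exponential mean $\log_a \sum_i p(i) a^{n(i)}$ for a unary code on a geometric source, once with $a < 1$ and once with $a > 1$. The distribution, truncation point, and code are illustrative assumptions, not taken from the paper.

```python
import math

def exponential_mean(p, n, a):
    """beta-exponential mean log_a(sum_i p(i) * a**n(i)) of codeword
    lengths n under probabilities p, for a positive constant a != 1."""
    return math.log(sum(pi * a**ni for pi, ni in zip(p, n)), a)

# Illustrative setup (not from the paper): a geometric source
# p(i) = (1-q) * q**i, encoded with the unary code n(i) = i + 1.
q = 0.4
N = 60  # truncation: the tail beyond N is numerically negligible here
p = [(1 - q) * q**i for i in range(N)]
n = [i + 1 for i in range(N)]

for a in (0.5, 2.0):  # a < 1: single-shot receipt; a > 1: buffer overflow
    print(f"a = {a}: exponential mean = {exponential_mean(p, n, a):.4f}")
print(f"expected length (a -> 1 limit) = {sum(pi * ni for pi, ni in zip(p, n)):.4f}")
```

As $a \to 1$ the objective reduces to the ordinary expected codeword length, which is why these codes can be seen as generalizing expected-length optimality.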
On nonlinear compression costs: when Shannon meets R\'enyi
Shannon entropy is the shortest average codeword length a lossless compressor
can achieve by encoding i.i.d. symbols. However, there are cases in which the
objective is to minimize the \textit{exponential} average codeword length, i.e.
when the cost of encoding/decoding scales exponentially with the length of
codewords. The optimum is reached by all strategies that map each symbol
generated with probability into a codeword of length
. This leads to the
minimum exponential average codeword length, which equals the R\'enyi, rather
than Shannon, entropy of the source distribution. We generalize the established
Arithmetic Coding (AC) compressor to this framework. We analytically show that
our generalized algorithm provides an exponential average length which is
arbitrarily close to the R\'enyi entropy, if the symbols to encode are i.i.d.
We then apply our algorithm to both simulated (i.i.d. generated) and real (a
piece of Wikipedia text) datasets. While, as expected, we find that the
application to i.i.d. data confirms our analytical results, we also find that,
when applied to the real dataset (composed of highly correlated symbols), our
algorithm is still able to significantly reduce the exponential average
codeword length with respect to the classical `Shannonian' one. Moreover, we
provide another justification of the use of the exponential average: namely, we
show that by minimizing the exponential average length it is possible to
minimize the probability that codewords exceed a certain threshold length. This
relation relies on the connection between the exponential average and the
cumulant generating function of the source distribution, which is in turn
related to the probability of large deviations. We test and confirm our results
again on both simulated and real datasets. Comment: 22 pages, 9 figures
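A minimal sketch checking this abstract's central claim numerically (assumptions: a toy four-symbol source and $t = 1$; nothing here is the authors' code): with the ideal real-valued lengths $\ell_i^{(t)}$ above, the exponential average codeword length $\frac{1}{t}\log_2 \sum_i p_i 2^{t\ell_i}$ equals the R\'enyi entropy of order $1/(1+t)$, and it is strictly smaller than the exponential average obtained from the classical Shannon lengths $-\log_2 p_i$.

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (in bits), alpha != 1."""
    return math.log2(sum(pi**alpha for pi in p)) / (1 - alpha)

def exp_avg_length(p, lengths, t):
    """Exponential average codeword length (1/t) * log2(sum_i p_i * 2**(t*l_i))."""
    return math.log2(sum(pi * 2**(t * li) for pi, li in zip(p, lengths))) / t

# Illustrative source distribution (not from the paper)
p = [0.5, 0.25, 0.125, 0.125]
t = 1.0                      # exponential-cost parameter, t > 0
alpha = 1 / (1 + t)          # corresponding Renyi order

# Ideal (real-valued) codeword lengths; an arithmetic-coding-style scheme
# gets arbitrarily close to these without integer-length rounding.
Z = sum(pi**alpha for pi in p)
escort = [pi**alpha / Z for pi in p]            # escort distribution
l_opt = [-math.log2(qi) for qi in escort]       # optimal lengths l_i^(t)
l_shannon = [-math.log2(pi) for pi in p]        # classical 'Shannonian' lengths

print(f"Renyi entropy of order {alpha:.2f} = {renyi_entropy(p, alpha):.4f} bits")
print(f"exp. avg, optimal lengths  = {exp_avg_length(p, l_opt, t):.4f} bits")
print(f"exp. avg, Shannon lengths  = {exp_avg_length(p, l_shannon, t):.4f} bits")
```

The quantity $t L_t$, with $L_t$ the exponential average above, is (up to the base of the logarithm) the cumulant generating function of the codeword length evaluated at $t$, which is what links its minimization to Chernoff-style bounds on the probability that a codeword exceeds a threshold length.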