
    On the Entropy of Sums of Bernoulli Random Variables via the Chen-Stein Method

    This paper considers the entropy of the sum of (possibly dependent and non-identically distributed) Bernoulli random variables. Upper bounds are derived on the error incurred by approximating this entropy with the entropy of a Poisson random variable of the same mean. The derivation of these bounds combines elements of information theory with the Chen-Stein method for Poisson approximation. The resulting bounds are easy to compute, and their applicability is exemplified. This conference paper presents in part the first half of the paper entitled "An information-theoretic perspective of the Poisson approximation via the Chen-Stein method" (see arXiv:1206.6811). The full paper generalizes the bounds by considering the accuracy of the Poisson approximation for the entropy of a sum of non-negative, integer-valued and bounded random variables, and also derives lower bounds on the total variation distance, relative entropy and other measures that are not considered in this conference paper. Comment: a conference paper of 5 pages that appears in the Proceedings of the 2012 IEEE International Workshop on Information Theory (ITW 2012), pp. 542--546, Lausanne, Switzerland, September 2012.
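
    As a minimal numerical sketch of the quantity being bounded (an illustration, not the paper's bounds themselves), the snippet below computes the exact entropy of a sum of independent Bernoulli variables and compares it with the entropy of a Poisson variable of the same mean. Independence is assumed only to make the exact distribution cheap to compute by convolution; the paper also covers dependent summands. All function names are illustrative.

    import numpy as np
    from scipy.stats import poisson

    def entropy_nats(pmf):
        """Shannon entropy (in nats) of a probability mass function."""
        pmf = pmf[pmf > 0]
        return -np.sum(pmf * np.log(pmf))

    def bernoulli_sum_pmf(ps):
        """Exact pmf of S = X_1 + ... + X_n, X_i ~ Bernoulli(p_i) independent."""
        pmf = np.array([1.0])
        for p in ps:
            pmf = np.convolve(pmf, [1.0 - p, p])  # fold in one more summand
        return pmf

    ps = np.random.default_rng(0).uniform(0.0, 0.1, size=50)  # small success probabilities
    lam = ps.sum()                                            # Poisson mean matched to E[S]

    h_sum = entropy_nats(bernoulli_sum_pmf(ps))
    h_poi = entropy_nats(poisson.pmf(np.arange(200), lam))    # tail mass beyond 200 is negligible
    print(f"H(S) = {h_sum:.6f}, H(Poisson) = {h_poi:.6f}, gap = {abs(h_sum - h_poi):.2e}")

    The gap shrinks as the success probabilities shrink, which is the regime in which the Poisson approximation is accurate.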

    Exponentiated Extended Weibull-Power Series Class of Distributions

    In this paper, we introduce a new class of distributions by compounding the exponentiated extended Weibull family with the power series family. This class contains several lifetime models as special cases, including the complementary extended Weibull-power series, generalized exponential-power series, generalized linear failure rate-power series, exponentiated Weibull-power series, generalized modified Weibull-power series, generalized Gompertz-power series and exponentiated extended Weibull distributions. We obtain several properties of this new class of distributions, such as its Shannon entropy, mean residual life, hazard rate function, quantiles and moments. A maximum likelihood estimation procedure via an EM algorithm is presented. Comment: accepted for publication in Ciencia e Natura Journal.
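
    To illustrate the compounding construction (a sketch under stated assumptions, not the paper's estimator), the snippet below simulates one concrete member of such a class: a count N from a geometric distribution, which belongs to the power series family, and a lifetime taken as the maximum of N independent exponentiated Weibull draws. Whether a given member uses the maximum (the complementary construction) or the minimum depends on the definition in the paper, and all parameter names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def exp_weibull_sample(n, shape_k, scale_lam, power_alpha):
        """Inverse-CDF sampling from F(t) = (1 - exp(-(t/lam)^k))^alpha."""
        u = rng.uniform(size=n)
        return scale_lam * (-np.log1p(-u ** (1.0 / power_alpha))) ** (1.0 / shape_k)

    def compound_sample(size, theta, shape_k, scale_lam, power_alpha):
        """X = max of N iid exponentiated Weibull draws, N ~ Geometric(theta) on {1, 2, ...}."""
        counts = rng.geometric(theta, size=size)
        return np.array([
            exp_weibull_sample(n, shape_k, scale_lam, power_alpha).max()
            for n in counts
        ])

    x = compound_sample(10_000, theta=0.3, shape_k=1.5, scale_lam=2.0, power_alpha=2.0)
    print(f"sample mean {x.mean():.3f}, sample 0.9-quantile {np.quantile(x, 0.9):.3f}")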

    Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs

    Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial $P$ for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, $\beta$-exponential means, those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of the $i$th codeword and $a$ is a positive constant. Applications of such minimizations include a novel problem of maximizing the chance of message receipt in single-shot communications ($a<1$) and a previously known problem of minimizing the chance of buffer overflow in a queueing system ($a>1$). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions, and both are extended to alphabetic codes, as well as to minimizing maximum pointwise redundancy. The aforementioned application of minimizing the chance of buffer overflow is also considered. Comment: 14 pages, 6 figures, accepted to IEEE Trans. Inform. Theory.
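
    For intuition about the objective (the paper's algorithms target infinite alphabets), the sketch below applies the known finite-alphabet exponential Huffman procedure, which merges the two smallest weights $w_i, w_j$ into $a(w_i + w_j)$; treat it as a reference point rather than as the paper's method.

    import heapq
    import math

    def exponential_huffman_lengths(probs, a):
        """Codeword lengths minimizing log_a sum_i p(i) a^{n(i)} over binary prefix codes."""
        heap = [(p, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            w1, ids1 = heapq.heappop(heap)
            w2, ids2 = heapq.heappop(heap)
            for i in ids1 + ids2:              # every symbol under the merge gains one bit
                lengths[i] += 1
            heapq.heappush(heap, (a * (w1 + w2), ids1 + ids2))
        return lengths

    def exponential_mean(probs, lengths, a):
        return math.log(sum(p * a ** n for p, n in zip(probs, lengths)), a)

    probs = [0.5, 0.25, 0.15, 0.1]
    for a in (0.5, 2.0):                       # a < 1: single-shot receipt; a > 1: buffer overflow
        n = exponential_huffman_lengths(probs, a)
        print(f"a = {a}: lengths = {n}, L_a = {exponential_mean(probs, n, a):.4f}")

    As $a \to 1$, the merge rule reduces to ordinary Huffman coding and the objective tends to the expected codeword length.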

    A simple derivation and classification of common probability distributions based on information symmetry and measurement scale

    Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation generated the common distributions, but it offered no underlying conceptual basis for why common distributions take particular forms or what explains their familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude with information in different ways, an association we describe in terms of the relation between information invariance and measurement scale. Our framework relates the different continuous probability distributions through the variations in measurement scale that change each family of maximum entropy distributions into a distinct family. Comment: 17 pages, 0 figures.
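
    As a small numerical companion to the first proposition (my illustration, not taken from the paper), the snippet below maximizes entropy on a discrete grid subject to a fixed mean and recovers the exponential (Gibbs) form $p(x) \propto e^{-\lambda x}$, the discrete analogue of the continuous result that maximum entropy under a mean constraint yields the exponential distribution.

    import numpy as np
    from scipy.optimize import minimize

    x = np.arange(50)                          # support grid; tail mass beyond it is negligible here
    target_mean = 5.0

    def neg_entropy(p):
        p = np.clip(p, 1e-300, 1.0)            # guard the log at the boundary
        return np.sum(p * np.log(p))

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # normalization
        {"type": "eq", "fun": lambda p: p @ x - target_mean},    # fixed mean
    ]
    p0 = np.full(len(x), 1.0 / len(x))
    res = minimize(neg_entropy, p0, constraints=constraints,
                   bounds=[(0.0, 1.0)] * len(x), method="SLSQP")

    # Closed form: a geometric (discrete exponential) pmf with the same mean.
    lam = np.log(1.0 + 1.0 / target_mean)
    q = np.exp(-lam * x)
    q /= q.sum()
    print("max |numeric - closed form| =", np.abs(res.x - q).max())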