
    Large alphabets: Finite, infinite, and scaling models

    How can we effectively model situations with large alphabets? On a pragmatic level, any engineered system, be it for inference, communication, or encryption, requires working with a finite number of symbols. Therefore, the most straightforward model is a finite alphabet. However, to emphasize the disproportionate size of the alphabet, one may want to compare its finite size with the length of the data at hand. More generally, this gives rise to scaling models that strive to capture regimes of operation where one anticipates such imbalance. Large alphabets may also be idealized as infinite. The caveat then is that such generality strips away much of the convenient machinery of finite settings. However, some of it may be salvaged by refocusing the tasks of interest, such as by moving from sequence to pattern compression, or by minimally restricting the classes of infinite models, such as via tail properties. In this paper we present an overview of models for large alphabets, some recent results, and possible directions in this area.
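    As a quick illustration of the sequence-to-pattern idea mentioned in this abstract, the sketch below (ours, not from the paper) computes the pattern of a sequence by replacing each symbol with the index of its first appearance; the function name and example strings are illustrative assumptions.

```python
def pattern(sequence):
    """Map each symbol to the order in which it first appears.

    The pattern discards symbol identities but keeps the repetition
    structure, which is what pattern compression targets.
    """
    first_seen = {}  # symbol -> index of first appearance (1-based)
    out = []
    for symbol in sequence:
        if symbol not in first_seen:
            first_seen[symbol] = len(first_seen) + 1
        out.append(first_seen[symbol])
    return out

# "abracadabra" and "xyzxwxuxyzx" share the same pattern, so a pattern
# compressor treats them identically even over a huge symbol alphabet.
print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```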

    Generalized Error Exponents For Small Sample Universal Hypothesis Testing

    The small sample universal hypothesis testing problem is investigated in this paper, in which the number of samples $n$ is smaller than the number of possible outcomes $m$. The goal of this work is to find an appropriate criterion to analyze statistical tests in this setting. A suitable model for analysis is the high-dimensional model in which both $n$ and $m$ increase to infinity, and $n = o(m)$. A new performance criterion based on large deviations analysis is proposed, and it generalizes the classical error exponent applicable for large sample problems (in which $m = O(n)$). This generalized error exponent criterion provides insights that are not available from asymptotic consistency or central limit theorem analysis. The following results are established for the uniform null distribution: (i) The best achievable probability of error $P_e$ decays as $P_e = \exp\{-(n^2/m)\, J\, (1+o(1))\}$ for some $J > 0$. (ii) A class of tests based on separable statistics, including the coincidence-based test, attains the optimal generalized error exponents. (iii) Pearson's chi-square test has a zero generalized error exponent, and thus its probability of error is asymptotically larger than that of the optimal test. Comment: 43 pages, 4 figures.
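    To make the two statistics contrasted in this abstract concrete, here is a minimal sketch (ours, not the authors' code), assuming samples take values in {0, ..., m-1}: the coincidence-based statistic counts pairs of samples that fall in the same bin, while Pearson's chi-square compares observed bin counts with the uniform expectation n/m.

```python
import random
from collections import Counter

def coincidence_statistic(samples):
    """Number of coinciding pairs: sum over bins of C(count, 2)."""
    counts = Counter(samples)
    return sum(c * (c - 1) // 2 for c in counts.values())

def pearson_chi_square(samples, m):
    """Pearson's chi-square statistic against the uniform null on m bins."""
    n = len(samples)
    expected = n / m
    counts = Counter(samples)
    observed_terms = sum((c - expected) ** 2 / expected for c in counts.values())
    zero_bins = m - len(counts)        # empty bins each contribute `expected`
    return observed_terms + zero_bins * expected

# Toy usage in the regime the paper studies, with n much smaller than m.
m, n = 10_000, 200
samples = [random.randrange(m) for _ in range(n)]
print(coincidence_statistic(samples), pearson_chi_square(samples, m))
```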

    On inference about rare events

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 75-77).
    Despite the increasing volume of data in modern statistical applications, critical patterns and events often have little, if any, representation. This is not unreasonable, given that such variables are critical precisely because they are rare. This raises the natural question: when can we infer something meaningful in such contexts? The focal point of this thesis is the archetypal problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown discrete distribution. Our first contribution is to show that the classical Good-Turing estimator used in this problem has performance guarantees that are asymptotically non-trivial only in a heavy-tail setting. This explains the success of this method in natural language modeling, where one often has Zipf law behavior. We then study the strong consistency of estimators, in the sense of ratios converging to one. We first show that the Good-Turing estimator is not universally consistent. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being strongly consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly. This framework is a close parallel to extreme value theory, and many of the techniques therein can be adopted into the model set forth in this thesis. Lastly, we consider a different model that captures situations of data scarcity and large alphabets, which was recently suggested by Wagner, Viswanath and Kulkarni. In their rare-events regime, one scales the finite support of the distribution with the number of samples, in a manner akin to high-dimensional statistics. In that context, we propose an approach that allows us to easily establish consistent estimators for a large class of canonical estimation problems. These include estimating entropy, the size of the alphabet, and the range of the probabilities.
    by Mesrob I. Ohannessian. Ph.D.
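    For readers unfamiliar with the Good-Turing estimator discussed in this abstract, the sketch below (ours, not from the thesis) implements the classical formula: the total probability of all symbols seen exactly r times in a sample of size n is estimated as (r + 1) N_{r+1} / n, where N_{r+1} is the number of distinct symbols seen r + 1 times; r = 0 gives the missing-mass estimate.

```python
from collections import Counter

def good_turing_mass(samples, r):
    """Good-Turing estimate of the total probability of symbols that
    appear exactly r times in the sample: (r + 1) * N_{r+1} / n."""
    n = len(samples)
    counts = Counter(samples)             # symbol -> occurrence count
    occupancy = Counter(counts.values())  # occurrence count -> number of symbols
    return (r + 1) * occupancy.get(r + 1, 0) / n

samples = "the cat sat on the mat near the cat".split()
print(good_turing_mass(samples, 0))  # missing mass estimate: N_1 / n
print(good_turing_mass(samples, 1))  # estimated mass of once-seen words
```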