    Rare Probability Estimation under Regularly Varying Heavy Tails

    This paper studies the problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown, possibly infinite, discrete distribution. In particular, we study the multiplicative consistency of estimators, defined as the ratio of the estimate to the true quantity converging to one. We first show that the classical Good-Turing estimator is not universally consistent in this sense, despite enjoying favorable additive properties. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly and have the absolute discounting structure of many heuristic algorithms. This also establishes a discrete parallel to extreme value theory, and many of the techniques therein can be adapted to the framework that we set forth.

    National Science Foundation (U.S.) (Grant 6922470); United States. Office of Naval Research (Grant 6918937)
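    As a concrete illustration of the quantities in this abstract, here is a minimal Python sketch (not code from the paper; the Zipf exponent, support size, and sample size are arbitrary illustrative choices) comparing the Good-Turing missing-mass estimate, i.e., the fraction of the sample consisting of singletons, to the true missing mass under a heavy-tailed distribution:

```python
# Minimal sketch, assuming a Zipf-like distribution as a stand-in for a
# regularly varying heavy tail. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Zipf-like distribution on a large finite support.
alpha, support = 1.5, 100_000
p = np.arange(1, support + 1, dtype=float) ** (-alpha)
p /= p.sum()

n = 10_000
sample = rng.choice(support, size=n, p=p)
counts = np.bincount(sample, minlength=support)

# Good-Turing estimate of the missing mass: N1 / n, where N1 is the
# number of symbols observed exactly once.
n1 = np.sum(counts == 1)
m0_hat = n1 / n

# True missing mass: total probability of the unseen symbols.
m0_true = p[counts == 0].sum()

print(f"Good-Turing estimate: {m0_hat:.4f}")
print(f"True missing mass:    {m0_true:.4f}")
print(f"Ratio:                {m0_hat / m0_true:.3f}")
```

    Under such regularly varying tails the ratio tends to concentrate near one, which is the multiplicative consistency the paper studies.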

    On inference about rare events

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 75-77).

    Despite the increasing volume of data in modern statistical applications, critical patterns and events often have little, if any, representation. This is not unreasonable, given that such variables are critical precisely because they are rare. This raises a natural question: when can we infer something meaningful in such contexts? The focal point of this thesis is the archetypal problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown discrete distribution. Our first contribution is to show that the classical Good-Turing estimator used in this problem has performance guarantees that are asymptotically non-trivial only in a heavy-tail setting. This explains the success of the method in natural language modeling, where one often observes Zipf-law behavior. We then study the strong consistency of estimators, in the sense of ratios converging to one. We first show that the Good-Turing estimator is not universally consistent. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being strongly consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly. This framework is a close parallel to extreme value theory, and many of the techniques therein can be adopted into the model set forth in this thesis. Lastly, we consider a different model that captures situations of data scarcity and large alphabets, which was recently suggested by Wagner, Viswanath and Kulkarni. In their rare-events regime, one scales the finite support of the distribution with the number of samples, in a manner akin to high-dimensional statistics. In that context, we propose an approach that allows us to easily establish consistent estimators for a large class of canonical estimation problems, including estimating the entropy, the size of the alphabet, and the range of the probabilities.

    By Mesrob I. Ohannessian. Ph.D.
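    The rare-events regime mentioned at the end of this abstract can be illustrated with a short simulation (my own sketch, not the Wagner-Viswanath-Kulkarni construction; the scaling constant and uniform distribution are arbitrary simplifications): when the support grows proportionally with the sample size, a constant fraction of the alphabet remains unseen no matter how much data arrives.

```python
# Rough illustration of a rare-events regime: the alphabet size scales
# linearly with the number of samples, so unseen symbols never vanish.
import numpy as np

rng = np.random.default_rng(1)

for n in [1_000, 10_000, 100_000]:
    k = 2 * n                              # support scales with sample size
    sample = rng.integers(0, k, size=n)    # uniform, for simplicity
    unseen = k - np.unique(sample).size
    print(f"n={n:>7}, alphabet={k:>7}, fraction unseen={unseen / k:.3f}")
```

    Here the unseen fraction stabilizes near exp(-1/2) rather than vanishing, which is what makes consistent estimation in this regime a non-trivial question.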

    Missing g-mass: Investigating the Missing Parts of Distributions

    Estimating the underlying distribution from i.i.d. samples is a classical and important problem in statistics. When the alphabet size is large compared to the number of samples, a portion of the distribution is highly likely to be unobserved or sparsely observed. The missing mass, defined as the sum of the probabilities $\text{Pr}(x)$ over the missing letters $x$, and the Good-Turing estimator for the missing mass have been important tools in large-alphabet distribution estimation. In this article, given a positive function $g$ from $[0,1]$ to the reals, the missing $g$-mass, defined as the sum of $g(\text{Pr}(x))$ over the missing letters $x$, is introduced and studied. The missing $g$-mass can be used to investigate the structure of the missing part of the distribution. Specific applications for special cases, such as the order-$\alpha$ missing mass ($g(p)=p^{\alpha}$) and the missing Shannon entropy ($g(p)=-p\log p$), include estimating the distance from uniformity of the missing distribution and its partial estimation. Minimax estimation is studied for the order-$\alpha$ missing mass for integer values of $\alpha$, and exact minimax convergence rates are obtained. Concentration is studied for a class of functions $g$, and specific results are derived for the order-$\alpha$ missing mass and the missing Shannon entropy. Sub-Gaussian tail bounds with near-optimal worst-case variance factors are derived. Two new notions of concentration, named strongly sub-Gamma and filtered sub-Gaussian concentration, are introduced and shown to yield right tail bounds that are better than those obtained from sub-Gaussian concentration.
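    To make the definitions concrete, the following Python sketch (assuming a known true distribution; the tail exponent, support size, and sample size are illustrative choices, not from the article) computes the missing $g$-mass directly from its definition for the two special cases named above:

```python
# Minimal sketch: missing g-mass = sum of g(Pr(x)) over unseen letters x,
# computed against a known heavy-tailed distribution.
import numpy as np

rng = np.random.default_rng(2)

support = 50_000
p = np.arange(1, support + 1, dtype=float) ** (-1.2)  # heavy-tailed example
p /= p.sum()

n = 5_000
counts = np.bincount(rng.choice(support, size=n, p=p), minlength=support)
missing = p[counts == 0]            # probabilities of the unseen letters

def order_alpha_mass(q, a):
    """Order-alpha missing mass: g(p) = p**a."""
    return np.sum(q ** a)

def missing_entropy(q):
    """Missing Shannon entropy: g(p) = -p * log(p)."""
    return np.sum(-q * np.log(q))

print(f"missing mass (alpha=1):  {order_alpha_mass(missing, 1):.4f}")
print(f"order-2 missing mass:    {order_alpha_mass(missing, 2):.2e}")
print(f"missing Shannon entropy: {missing_entropy(missing):.4f}")
```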

    Strong Consistency of the Good-Turing Estimator
