
    Missing $g$-mass: Investigating the Missing Parts of Distributions

    Estimating the underlying distribution from i.i.d. samples is a classical and important problem in statistics. When the alphabet size is large compared to the number of samples, a portion of the distribution is highly likely to be unobserved or sparsely observed. The missing mass, defined as the sum of probabilities $\Pr(x)$ over the missing letters $x$, and the Good-Turing estimator for missing mass have been important tools in large-alphabet distribution estimation. In this article, given a positive function $g$ from $[0,1]$ to the reals, the missing $g$-mass, defined as the sum of $g(\Pr(x))$ over the missing letters $x$, is introduced and studied. The missing $g$-mass can be used to investigate the structure of the missing part of the distribution. Specific applications for special cases such as the order-$\alpha$ missing mass ($g(p)=p^{\alpha}$) and the missing Shannon entropy ($g(p)=-p\log p$) include estimating the distance from uniformity of the missing distribution and its partial estimation. Minimax estimation is studied for the order-$\alpha$ missing mass for integer values of $\alpha$, and exact minimax convergence rates are obtained. Concentration is studied for a class of functions $g$, and specific results are derived for the order-$\alpha$ missing mass and the missing Shannon entropy. Sub-Gaussian tail bounds with near-optimal worst-case variance factors are derived. Two new notions of concentration, named strongly sub-Gamma and filtered sub-Gaussian concentration, are introduced and shown to yield right-tail bounds that are better than those obtained from sub-Gaussian concentration.
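    The quantities defined above are easy to make concrete in a short simulation. The sketch below (the power-law distribution, alphabet size, sample size, and seed are illustrative assumptions, not taken from the article) computes the true missing mass and the true missing Shannon entropy from the known distribution, and compares the former with the classical Good-Turing estimate, the number of singletons divided by $n$.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Illustrative large alphabet with power-law probabilities (an assumption for the demo).
k = 10_000
p = 1.0 / np.arange(1.0, k + 1)
p /= p.sum()

n = 5_000
sample = rng.choice(k, size=n, p=p)
counts = Counter(sample.tolist())

# True missing mass: sum of Pr(x) over letters x unseen in the sample.
unseen = np.array([x not in counts for x in range(k)])
true_missing_mass = p[unseen].sum()

# Good-Turing estimator of the missing mass: (# letters seen exactly once) / n.
n1 = sum(1 for c in counts.values() if c == 1)
good_turing = n1 / n

# Missing g-mass for g(p) = -p log p, i.e. the missing Shannon entropy.
missing_entropy = -(p[unseen] * np.log(p[unseen])).sum()

print(true_missing_mass, good_turing, missing_entropy)
```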

    Optimal estimation of high-order missing masses, and the rare-type match problem

    Consider a random sample $(X_{1},\ldots,X_{n})$ from an unknown discrete distribution $P=\sum_{j\geq1}p_{j}\delta_{s_{j}}$ on a countable alphabet $\mathbb{S}$, and let $(Y_{n,j})_{j\geq1}$ be the empirical frequencies of the distinct symbols $s_{j}$ in the sample. We consider the problem of estimating the $r$-order missing mass, a discrete functional of $P$ defined as $\theta_{r}(P;\mathbf{X}_{n})=\sum_{j\geq1}p^{r}_{j}I(Y_{n,j}=0)$. This is a generalization of the missing mass, whose estimation is a classical problem in statistics and the subject of numerous studies in both theory and methods. First, we introduce a nonparametric estimator of $\theta_{r}(P;\mathbf{X}_{n})$ and a corresponding non-asymptotic confidence interval through concentration properties of $\theta_{r}(P;\mathbf{X}_{n})$. Then, we investigate minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, which is the main contribution of our work. We show that minimax estimation is not feasible over the class of all discrete distributions on $\mathbb{S}$, nor even over distributions with regularly varying tails, which only guarantee that our estimator is consistent for $\theta_{r}(P;\mathbf{X}_{n})$. This leads us to introduce the stronger assumption of second-order regular variation for the tail behaviour of $P$, which is proved to be sufficient for minimax estimation of $\theta_r(P;\mathbf{X}_{n})$, making the proposed estimator an optimal minimax estimator of $\theta_{r}(P;\mathbf{X}_{n})$. Our interest in the $r$-order missing mass arises from forensic statistics, where the estimation of the $2$-order missing mass appears in connection with the estimation of the likelihood ratio $T(P,\mathbf{X}_{n})=\theta_{1}(P;\mathbf{X}_{n})/\theta_{2}(P;\mathbf{X}_{n})$, known as the "fundamental problem of forensic mathematics". We present theoretical guarantees for nonparametric estimation of $T(P,\mathbf{X}_{n})$.
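    To make $\theta_{r}$ and the ratio $T$ concrete, here is a small simulation sketch. The geometric-type tail, truncation level, and sample size are illustrative assumptions, and the closing estimate is a heuristic Good-Turing-style plug-in, not the estimator proposed in the paper.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# Illustrative distribution p_j proportional to 2^{-j}, truncated at J for simulation.
J = 60
p = 0.5 ** np.arange(1, J + 1)
p /= p.sum()

n = 100
sample = rng.choice(J, size=n, p=p)
Y = np.bincount(sample, minlength=J)          # empirical frequencies Y_{n,j}

def theta(r):
    """r-order missing mass: sum of p_j^r over symbols unseen in the sample."""
    return (p[Y == 0] ** r).sum()

# Likelihood ratio from the rare-type match problem: T = theta_1 / theta_2.
T = theta(1) / theta(2)

# Heuristic Good-Turing-style estimate (not the paper's estimator): since
# E[N_r] = sum_j C(n,r) p_j^r (1-p_j)^{n-r} is close to C(n,r) * E[theta_r]
# when all p_j are small, N_r / C(n,r) roughly tracks theta_r.
N = lambda r: int((Y == r).sum())
theta_hat = lambda r: N(r) / comb(n, r)

print(theta(1), theta_hat(1), theta(2), theta_hat(2), T)
```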

    Statistical and Information-Theoretic Methods for Data Analysis (Tilastollisia ja informaatioteoreettisia data-analyysimenetelmiä)

    In this Thesis, we develop theory and methods for computational data analysis. The problems in data analysis are approached from three perspectives: statistical learning theory, the Bayesian framework, and the information-theoretic minimum description length (MDL) principle. Contributions in statistical learning theory address the possibility of generalization to unseen cases, and regression analysis with partially observed data with an application to mobile device positioning. In the second part of the Thesis, we discuss so-called Bayesian network classifiers and show that they are closely related to logistic regression models. In the final part, we apply the MDL principle to tracing the history of old manuscripts and to noise reduction in digital signals.

    "Data is a representation that has no meaning in itself. When data is processed and given a meaning, it can become information and eventually knowledge." [Wikipedia] Transforming data into information is data analysis, which includes learning from data and drawing conclusions based on it. Among the disciplines central to modern data analysis is computer science, whose role is to develop efficient rules and algorithms that can be executed by a computer. Data analysis also draws on expertise from other fields, for example mathematics, statistics, the philosophy of science, and many applied disciplines such as engineering and bioinformatics. The data under analysis may be, for instance, measurement results, written text, or images; all of these forms of data appear in this Thesis, whose Finnish title, "Tilastollisia ja informaatioteoreettisia data-analyysimenetelmiä", translates to "Statistical and information-theoretic methods for data analysis". The Thesis approaches data-analysis problems from three perspectives: statistical learning theory, Bayesian methods, and the information-theoretic minimum description length (MDL) principle. Within statistical learning theory, we consider the possibility of making inductive (generalizing) inferences about cases that have so far been entirely unobserved, as well as learning a linear model from only partially observed data. The latter study enables efficient modeling of radio-wave propagation, which in turn facilitates, among other things, the positioning of mobile devices. The second part of the Thesis establishes a close connection between so-called Bayesian network classifiers and logistic regression. By combining the best aspects of the two, we derive a new family of efficient classification algorithms through which a balance can be struck between classifier complexity and learning speed. The final part of the Thesis applies the MDL principle to two problems of quite different kinds. The first is to reconstruct the genesis of a text that survives in several differing copies; the material consists of roughly 50 versions of the Latin legend of St. Henry, and the resulting "family tree" of the text versions offers interesting information about the medieval history of Finland and the Nordic countries. The second problem is improving the quality of digital signals, such as digital photographs, by reducing noise; the ability to use a signal of originally poor quality is valuable in, for example, medical imaging applications.

    Population size estimation via alternative parametrizations for Poisson mixture models

    We exploit a suitable moment-based reparametrization of Poisson mixture distributions to develop classical and Bayesian inference for the unknown size of a finite population in the presence of count data. We put particular emphasis on suitable mappings between ordinary moments and recurrence coefficients, which allow us to implement standard maximization routines and MCMC routines in a more convenient parameter space. We assess the comparative performance of our approach in real data applications and in a simulation study.
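    As background for the setup (not the paper's reparametrization), the one-component special case admits a simple moment-based sketch: if detections follow a Poisson($\lambda$) law and only individuals detected at least once are observed, matching the zero-truncated mean identifies $\lambda$, and the observed count is inflated by the detection probability $1-e^{-\lambda}$. The data below are made up for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical detection counts (>= 1) for the individuals observed at least once,
# e.g. capture counts in a capture-recapture study.
x = np.array([1, 1, 2, 1, 3, 1, 1, 2, 1, 4, 1, 2, 1, 1, 1])
n_obs = len(x)

# Zero-truncated Poisson moment equation: E[X | X > 0] = lam / (1 - exp(-lam)).
lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - x.mean(), 1e-8, 50.0)

# Horvitz-Thompson-style inflation: N_hat = n_obs / P(detected at least once).
N_hat = n_obs / (1.0 - np.exp(-lam))
print(lam, N_hat)
```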

    Polynomial methods in statistical inference: Theory and practice

    Recent advances in genetics, computer vision, and text mining are accompanied by the need to analyze data coming from a large domain, where the domain size is comparable to or larger than the number of samples. In this dissertation, we apply polynomial methods to several statistical questions with a rich history and wide applications. The goal is to understand the fundamental limits of these problems in the large-domain regime, and to design sample-optimal and time-efficient algorithms with provable guarantees.

    The first part investigates the problem of property estimation. Consider the problem of estimating the Shannon entropy of a distribution over $k$ elements from $n$ independent samples. We obtain the minimax mean-square error within universal multiplicative constant factors if $n$ exceeds a constant factor of $k/\log k$; otherwise there exists no consistent estimator. This refines the recent result on the minimal sample size for consistent entropy estimation. The apparatus of best polynomial approximation plays a key role both in the construction of optimal estimators and, via a duality argument, in the minimax lower bound. We also consider the problem of estimating the support size of a discrete distribution whose minimum non-zero mass is at least $\frac{1}{k}$. Under the independent sampling model, we show that the sample complexity, i.e., the minimal sample size needed to achieve an additive error of $\epsilon k$ with probability at least 0.1, is within universal constant factors of $\frac{k}{\log k}\log^2\frac{1}{\epsilon}$, which improves the state-of-the-art result of $\frac{k}{\epsilon^2 \log k}$. A similar characterization of the minimax risk is also obtained. Our procedure is a linear estimator based on the Chebyshev polynomial and its approximation-theoretic properties, which can be evaluated in $O(n+\log^2 k)$ time and attains the sample complexity within constant factors. The superiority of the proposed estimator in terms of accuracy, computational efficiency, and scalability is demonstrated on a variety of synthetic and real datasets. When the distribution is supported on a discrete set, estimating the support size is also known as the distinct elements problem, where the goal is to estimate the number of distinct colors in an urn containing $k$ balls based on $n$ samples drawn with replacement. Based on discrete polynomial approximation and interpolation, we propose an estimator with an additive error guarantee that achieves the optimal sample complexity within $O(\log\log k)$ factors, and in fact within constant factors in most cases. The estimator can be computed in $O(n)$ time for an accurate estimate. The result also applies to sampling without replacement provided the sample size is a vanishing fraction of the urn size. One of the key auxiliary results is a sharp bound on the minimum singular value of a real rectangular Vandermonde matrix, which might be of independent interest.

    The second part studies the problem of learning Gaussian mixtures. The method of moments is one of the most widely used methods in statistics for parameter estimation; it works by solving the system of equations that match the population and estimated moments. However, in practice, and especially for the important case of mixture models, one frequently needs to contend with the non-existence or non-uniqueness of statistically meaningful solutions, as well as the high computational cost of solving large polynomial systems. Moreover, theoretical analysis of the method of moments is mainly confined to asymptotic-normality-style results established under strong assumptions. We consider estimating a $k$-component Gaussian location mixture with a common (possibly unknown) variance parameter. To overcome the aforementioned theoretical and algorithmic hurdles, a crucial step is to denoise the moment estimates by projecting them onto the truncated moment space (via semidefinite programming) before solving the method-of-moments equations. Not only does this regularization ensure the existence and uniqueness of solutions, it also yields fast solvers by means of Gauss quadrature. Furthermore, by proving new moment comparison theorems in the Wasserstein distance via polynomial interpolation and majorization techniques, we establish the statistical guarantees and adaptive optimality of the proposed procedure, as well as an oracle inequality in misspecified models. These results can also be viewed as provable algorithms for the generalized method of moments, which involves non-convex optimization and has lacked theoretical guarantees.
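    The Chebyshev-based support size estimator itself involves carefully tuned interpolation coefficients and is not reproduced here; as a point of reference, the sketch below implements the classical plug-in and Chao1 baselines that such polynomial estimators are designed to improve upon (the uniform data-generating distribution and sample size are illustrative assumptions).

```python
import numpy as np

def support_size_baselines(counts):
    """Classical baselines for support size from per-symbol sample counts.

    counts: array with one entry per observed distinct symbol (all >= 1).
    Returns (plug-in, Chao1); neither attains the (k / log k) * log^2(1/eps)
    sample complexity of the polynomial estimator, they are reference points.
    """
    s_obs = len(counts)                    # plug-in: number of distinct symbols seen
    f1 = int(np.sum(counts == 1))          # singletons
    f2 = int(np.sum(counts == 2))          # doubletons
    if f2 > 0:
        chao1 = s_obs + (f1 * f1) / (2.0 * f2)
    else:
        chao1 = s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when f2 = 0
    return s_obs, chao1

# Illustrative demo: uniform distribution over k symbols, n = k / 2 samples.
rng = np.random.default_rng(2)
k, n = 1000, 500
sample = rng.integers(0, k, size=n)
_, cnts = np.unique(sample, return_counts=True)
print(support_size_baselines(cnts), "true support =", k)
```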
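    The Gauss-quadrature step mentioned above, recovering a $k$-atom mixing distribution from its first $2k$ moments, can be sketched directly. The moments in the demo are exact and hypothetical; in the procedure described in the abstract, estimated moments would first be projected onto the truncated moment space before this step.

```python
import numpy as np

def quadrature_from_moments(m, k):
    """Recover the atoms and weights of a k-atom distribution from its
    moments m = (m_0, ..., m_{2k-1}), with m_0 = 1.

    The atoms are the roots of the degree-k monic orthogonal polynomial
    x^k + c_{k-1} x^{k-1} + ... + c_0, whose coefficients solve the
    Hankel system sum_l m_{j+l} c_l = -m_{j+k} for j = 0, ..., k-1.
    """
    m = np.asarray(m, dtype=float)
    H = np.array([[m[i + j] for j in range(k)] for i in range(k)])
    c = np.linalg.solve(H, -m[k:2 * k])
    atoms = np.real_if_close(np.roots(np.concatenate(([1.0], c[::-1]))))
    # Weights solve the Vandermonde moment equations V w = (m_0, ..., m_{k-1}).
    V = np.vander(atoms, k, increasing=True).T
    weights = np.linalg.solve(V, m[:k])
    return atoms, weights

# Two atoms at -1 and 2 with weights 0.3 and 0.7: m_j = 0.3*(-1)^j + 0.7*2^j.
print(quadrature_from_moments([1.0, 1.1, 3.1, 5.3], k=2))
```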

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications
