
    Missing $g$-mass: Investigating the Missing Parts of Distributions

    Estimating the underlying distribution from iid samples is a classical and important problem in statistics. When the alphabet size is large compared to the number of samples, a portion of the distribution is highly likely to be unobserved or sparsely observed. The missing mass, defined as the sum of probabilities $\text{Pr}(x)$ over the missing letters $x$, and the Good-Turing estimator for missing mass have been important tools in large-alphabet distribution estimation. In this article, given a positive function $g$ from $[0,1]$ to the reals, the missing $g$-mass, defined as the sum of $g(\text{Pr}(x))$ over the missing letters $x$, is introduced and studied. The missing $g$-mass can be used to investigate the structure of the missing part of the distribution. Specific applications for special cases such as the order-$\alpha$ missing mass ($g(p)=p^{\alpha}$) and the missing Shannon entropy ($g(p)=-p\log p$) include estimating the distance from uniformity of the missing distribution and its partial estimation. Minimax estimation is studied for the order-$\alpha$ missing mass for integer values of $\alpha$, and exact minimax convergence rates are obtained. Concentration is studied for a class of functions $g$, and specific results are derived for the order-$\alpha$ missing mass and the missing Shannon entropy. Sub-Gaussian tail bounds with near-optimal worst-case variance factors are derived. Two new notions of concentration, named strongly sub-Gamma and filtered sub-Gaussian concentration, are introduced and shown to yield right tail bounds that are better than those obtained from sub-Gaussian concentration.
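The Good-Turing estimator mentioned in this abstract admits a very short implementation. As a minimal sketch (the function name is ours, not the paper's), the classical missing-mass case $g(p)=p$ estimates the unseen probability by $N_1/n$, where $N_1$ is the number of letters observed exactly once:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the missing mass: the total probability
    of letters not observed in the sample, estimated by N1/n, where N1
    is the number of letters seen exactly once."""
    n = len(sample)
    n1 = sum(1 for count in Counter(sample).values() if count == 1)
    return n1 / n

# Example: in "aabc", the letters b and c are singletons, so N1 = 2, n = 4.
print(good_turing_missing_mass(list("aabc")))  # 0.5
```

The intuition is that singletons are the observed letters closest to being unseen, so their aggregate frequency proxies the unobserved probability mass.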

    Selective machine learning of doubly robust functionals

    While model selection is a well-studied topic in parametric and nonparametric regression or density estimation, selection of possibly high-dimensional nuisance parameters in semiparametric problems is far less developed. In this paper, we propose a selective machine learning framework for making inferences about a finite-dimensional functional defined on a semiparametric model, when the latter admits a doubly robust estimating function and several candidate machine learning algorithms are available for estimating the nuisance parameters. We introduce two new selection criteria for bias reduction in estimating the functional of interest, each based on a novel definition of pseudo-risk for the functional that embodies the double robustness property and is therefore used to select the pair of learners nearest to fulfilling this property. We establish an oracle property for a multi-fold cross-validation version of the new selection criteria, which states that our empirical criteria perform nearly as well as an oracle with a priori knowledge of the pseudo-risk for each pair of candidate learners. We also describe a smooth approximation to the selection criteria which allows for valid post-selection inference. Finally, we apply the approach to model selection of a semiparametric estimator of the average treatment effect given an ensemble of candidate machine learners to account for confounding in an observational study.
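The average-treatment-effect application in this abstract rests on the AIPW (augmented inverse probability weighting) doubly robust estimating function. Below is a minimal illustration, assuming nuisance predictions are supplied as arrays; the pair-selection rule shown is a simplified stand-in inspired by the double-robustness property, not the paper's exact pseudo-risk definition:

```python
import numpy as np

def aipw_ate(y, a, m1, m0, e):
    """Doubly robust (AIPW) estimate of the average treatment effect.
    y: outcomes, a: binary treatment indicators, m1/m0: predicted
    outcomes under treatment/control, e: predicted propensity scores.
    Consistent if either the outcome model or the propensity model
    is correctly specified."""
    y, a, m1, m0, e = map(np.asarray, (y, a, m1, m0, e))
    return float(np.mean(
        m1 - m0 + a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e)))

def select_pair(y, a, outcome_models, propensity_models):
    """Toy selection rule (illustration only, NOT the paper's criterion):
    pick the (outcome, propensity) learner pair whose AIPW estimate moves
    least when its companion nuisance estimate is swapped out."""
    best, best_risk = None, float("inf")
    for j, (m1, m0) in enumerate(outcome_models):
        for k, e in enumerate(propensity_models):
            base = aipw_ate(y, a, m1, m0, e)
            risk = max(
                [abs(base - aipw_ate(y, a, m1b, m0b, e))
                 for m1b, m0b in outcome_models]
                + [abs(base - aipw_ate(y, a, m1, m0, eb))
                   for eb in propensity_models])
            if risk < best_risk:
                best, best_risk = (j, k), risk
    return best
```

In the paper's framework the learner pairs are additionally cross-fit over multiple folds; the sketch above omits sample splitting for brevity.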

    Optimal estimation of high-order missing masses, and the rare-type match problem

    Consider a random sample $(X_{1},\ldots,X_{n})$ from an unknown discrete distribution $P=\sum_{j\geq1}p_{j}\delta_{s_{j}}$ on a countable alphabet $\mathbb{S}$, and let $(Y_{n,j})_{j\geq1}$ be the empirical frequencies of the distinct symbols $s_{j}$ in the sample. We consider the problem of estimating the $r$-order missing mass, a discrete functional of $P$ defined as $\theta_{r}(P;\mathbf{X}_{n})=\sum_{j\geq1}p^{r}_{j}I(Y_{n,j}=0)$. This is a generalization of the missing mass, whose estimation is a classical problem in statistics and the subject of numerous studies in both theory and methods. First, we introduce a nonparametric estimator of $\theta_{r}(P;\mathbf{X}_{n})$ and a corresponding non-asymptotic confidence interval through concentration properties of $\theta_{r}(P;\mathbf{X}_{n})$. Then, we investigate minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, which is the main contribution of our work. We show that minimax estimation is not feasible over the class of all discrete distributions on $\mathbb{S}$, and not even for distributions with regularly varying tails, which only guarantee that our estimator is consistent for $\theta_{r}(P;\mathbf{X}_{n})$. This leads us to introduce the stronger assumption of second-order regular variation for the tail behaviour of $P$, which is proved to be sufficient for minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, making the proposed estimator an optimal minimax estimator of $\theta_{r}(P;\mathbf{X}_{n})$. Our interest in the $r$-order missing mass arises from forensic statistics, where the estimation of the $2$-order missing mass appears in connection with the estimation of the likelihood ratio $T(P,\mathbf{X}_{n})=\theta_{1}(P;\mathbf{X}_{n})/\theta_{2}(P;\mathbf{X}_{n})$, known as the "fundamental problem of forensic mathematics". We present theoretical guarantees for nonparametric estimation of $T(P,\mathbf{X}_{n})$.
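To make the objects in this abstract concrete: since $E[N_r] = \sum_j \binom{n}{r} p_j^r (1-p_j)^{n-r}$, where $N_r$ counts symbols appearing exactly $r$ times, a natural count-based plug-in for $\theta_r$ is $N_r/\binom{n}{r}$. The sketch below uses that plug-in; it is an illustration of the quantities involved, not necessarily the exact estimator proposed in the paper:

```python
from collections import Counter
from math import comb

def theta_r_hat(sample, r):
    """Count-based plug-in for the r-order missing mass theta_r:
    N_r / C(n, r), where N_r is the number of symbols appearing
    exactly r times (an assumption for illustration; the paper's
    optimal estimator may differ)."""
    n = len(sample)
    n_r = sum(1 for count in Counter(sample).values() if count == r)
    return n_r / comb(n, r)

def rare_type_match_ratio(sample):
    """Plug-in estimate of T = theta_1 / theta_2, the likelihood ratio
    of the rare-type match problem; undefined when no symbol appears
    exactly twice."""
    t2 = theta_r_hat(sample, 2)
    if t2 == 0:
        raise ValueError("no doubleton symbols: theta_2 estimate is zero")
    return theta_r_hat(sample, 1) / t2
```

For $r=1$ this reduces to the Good-Turing estimator $N_1/n$, which is the sense in which $\theta_r$ generalizes the classical missing mass.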