
    Optimal testing for properties of distributions

    Given samples from an unknown discrete distribution p, is it possible to distinguish whether p belongs to some class of distributions C versus p being far from every distribution in C? This fundamental question has received tremendous attention in statistics, focusing primarily on asymptotic analysis, as well as in information theory and theoretical computer science, where the emphasis has been on small sample size and computational complexity. Nevertheless, even for basic properties of discrete distributions such as monotonicity, independence, log-concavity, unimodality, and monotone hazard rate, the optimal sample complexity is unknown. We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our approach is an algorithm which solves the following problem: Given samples from an unknown distribution p, and a known distribution q, are p and q close in χ²-distance, or far in total variation distance? The optimality of our testers is established by providing matching lower bounds, up to constant factors. Finally, a necessary building block for our testers and an important byproduct of our work are the first known computationally efficient proper learners for discrete log-concave and monotone hazard rate distributions.
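    The primitive described above, distinguishing whether p is χ²-close to a known q or far from q in total variation, is commonly approached with a chi-square-type statistic of the sample counts. The Python sketch below is a minimal illustration of that idea, not the paper's actual tester; the function names, the threshold form, and the constant threshold_const are assumptions introduced here.

```python
import numpy as np

def chi_square_statistic(samples, q):
    """Unbiased chi-square-type statistic of the empirical counts against a
    known distribution q: sum_i ((N_i - m*q_i)^2 - N_i) / (m*q_i)."""
    q = np.asarray(q, dtype=float)
    m = len(samples)
    counts = np.bincount(samples, minlength=len(q)).astype(float)
    return float(np.sum(((counts - m * q) ** 2 - counts) / (m * q)))

def identity_test(samples, q, eps, threshold_const=1.0):
    """Illustrative accept/reject rule: the statistic concentrates near 0 when
    p = q and grows roughly like m * chi^2(p, q) when p is far from q, so a
    threshold of order m * eps^2 separates the two cases. The constant is a
    placeholder, not a tuned value from the paper."""
    m = len(samples)
    z = chi_square_statistic(samples, q)
    return "accept" if z <= threshold_const * m * eps ** 2 else "reject"

# Toy usage: 5,000 samples that really do come from the uniform q over 100 symbols.
rng = np.random.default_rng(0)
q = np.full(100, 1 / 100)
samples = rng.integers(0, 100, size=5000)
print(identity_test(samples, q, eps=0.1))
```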

    Testing Properties of Multiple Distributions with Few Samples

    We propose a new setting for testing properties of distributions while receiving samples from several distributions, but few samples per distribution. Given samples from s distributions, p_1, p_2, …, p_s, we design testers for the following problems: (1) Uniformity Testing: testing whether all the p_i's are uniform or ε-far from being uniform in ℓ_1-distance; (2) Identity Testing: testing whether all the p_i's are equal to an explicitly given distribution q or ε-far from q in ℓ_1-distance; and (3) Closeness Testing: testing whether all the p_i's are equal to a distribution q which we have sample access to, or ε-far from q in ℓ_1-distance. By assuming an additional natural condition about the source distributions, we provide sample-optimal testers for all of these problems. Comment: ITCS 202
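    To give a concrete feel for pooling information across many sparsely sampled sources, the sketch below applies the classical collision-based uniformity statistic to a collection of sample sets, one per distribution. This is only an illustrative baseline, not the sample-optimal tester from the paper; the names, the aggregation rule, and the threshold are assumptions introduced here.

```python
import random
from collections import Counter

def pooled_collision_rate(sample_sets):
    """Classical collision statistic, pooled across sources: for each source,
    count colliding pairs among its own samples, then divide the total by the
    total number of pairs. This estimates an average of ||p_i||_2^2, which
    equals 1/n when every p_i is uniform over a domain of size n."""
    collisions, pairs = 0, 0
    for samples in sample_sets:
        m = len(samples)
        pairs += m * (m - 1) // 2
        collisions += sum(c * (c - 1) // 2 for c in Counter(samples).values())
    return collisions / pairs

def all_uniform_test(sample_sets, n, eps):
    """Accept when the pooled collision rate is close to the uniform baseline
    1/n. When every source is eps-far from uniform in l1-distance, each has
    ||p_i||_2^2 >= (1 + eps^2)/n, so farness pushes the pooled rate up; the
    midpoint threshold used here is a placeholder, not a tuned constant."""
    return "accept" if pooled_collision_rate(sample_sets) <= (1 + eps ** 2 / 2) / n else "reject"

# Toy usage: 100 sources with only 30 samples each, all truly uniform over 100
# symbols; the test should typically accept.
random.seed(0)
sample_sets = [[random.randrange(100) for _ in range(30)] for _ in range(100)]
print(all_uniform_test(sample_sets, n=100, eps=0.5))
```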

    Optimal Testing of Discrete Distributions with High Probability

    We study the problem of testing discrete distributions with a focus on the high probability regime. Specifically, given samples from one or more discrete distributions, a property P, and parameters 0 < ε, δ < 1, we want to distinguish with probability at least 1 − δ whether these distributions satisfy P or are ε-far from P in total variation distance. Most prior work in distribution testing studied the constant confidence case (corresponding to δ = Ω(1)), and provided sample-optimal testers for a range of properties. While one can always boost the confidence probability of any such tester by black-box amplification, this generic boosting method typically leads to sub-optimal sample bounds. Here we study the following broad question: for a given property P, can we characterize the sample complexity of testing P as a function of all relevant problem parameters, including the error probability δ? Prior to this work, uniformity testing was the only statistical task whose sample complexity had been characterized in this setting. As our main results, we provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors, as a function of all relevant parameters. We also show matching information-theoretic lower bounds on the sample complexity of these problems. Our techniques naturally extend to give optimal testers for related problems. To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal-sized samples.
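    The abstract's point about generic boosting can be made concrete with a short sketch: running any constant-confidence tester O(log(1/δ)) times on fresh samples and taking a majority vote drives the failure probability down to δ, but multiplies the total sample cost by that factor. The Python below is a hedged illustration of that folklore reduction, not the paper's algorithm; the constant 18 and the stand-in tester are assumptions introduced here.

```python
import math
import random

def amplify(run_tester, delta):
    """Black-box confidence amplification, the generic baseline mentioned in the
    abstract: repeat a constant-confidence tester (say, correct with probability
    at least 2/3) on independent sample batches and take a majority vote. A
    Chernoff bound shows O(log(1/delta)) repetitions suffice, but the sample
    cost is multiplied by the number of repetitions, which is the slack the
    paper's direct high-probability testers avoid. `run_tester` is a
    hypothetical zero-argument callable that draws a fresh batch of samples
    and returns "accept" or "reject"."""
    runs = max(1, math.ceil(18 * math.log(1 / delta)))  # illustrative constant
    accepts = sum(run_tester() == "accept" for _ in range(runs))
    return "accept" if 2 * accepts > runs else "reject"

# Toy usage: a stand-in tester that answers "accept" correctly 3/4 of the time.
random.seed(0)
noisy_tester = lambda: "accept" if random.random() < 0.75 else "reject"
print(amplify(noisy_tester, delta=1e-6))  # majority vote over ~249 runs
```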

    Testing Shape Restrictions of Discrete Distributions

    We study the question of testing structured properties (classes) of discrete distributions. Specifically, given sample access to an arbitrary distribution D over [n] and a property P, the goal is to distinguish between the case that D is in P and the case that ℓ_1(D, P) > ε. We develop a general algorithm for this question, which applies to a large range of "shape-constrained" properties, including monotone, log-concave, t-modal, piecewise-polynomial, and Poisson Binomial distributions. Moreover, for all cases considered, our algorithm has near-optimal sample complexity with regard to the domain size and is computationally efficient. For most of these classes, we provide the first non-trivial tester in the literature. In addition, we also describe a generic method to prove lower bounds for this problem, and use it to show that our upper bounds are nearly tight. Finally, we extend some of our techniques to tolerant testing, deriving nearly-tight upper and lower bounds for the corresponding questions.