
    Beta-trees: Multivariate histograms with confidence statements

    Multivariate histograms are difficult to construct due to the curse of dimensionality. Motivated by k-d trees in computer science, we show how to construct an efficient data-adaptive partition of Euclidean space that possesses the following two properties: with high confidence, the distribution from which the data are generated is close to uniform on each rectangle of the partition; and despite the data-dependent construction, we can give guaranteed finite-sample simultaneous confidence intervals for the probabilities (and hence for the average densities) of each rectangle in the partition. This partition automatically adapts to the sizes of the regions where the distribution is close to uniform. The methodology produces confidence intervals whose widths depend only on the probability content of the rectangles and not on the dimensionality of the space, thus avoiding the curse of dimensionality. Moreover, the widths essentially match the optimal widths in the univariate setting. The simultaneous validity of the confidence intervals allows this construction, which we call Beta-trees, to be used for various data-analytic purposes. We illustrate this by using Beta-trees for visualizing data and for multivariate mode-hunting.
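    The abstract describes the construction only at a high level. As a point of reference, the sketch below combines a k-d-tree-style recursive median split with a per-cell Beta-quantile (Clopper-Pearson) confidence interval for the probability content of each cell. This is an illustrative stand-in, not the paper's algorithm: the paper's intervals are simultaneous across all rectangles and its splitting rule is data-adaptive in its own way; the function names (build_partition, cell_probability_ci) and all constants here are hypothetical.

```python
# Illustrative sketch (NOT the paper's Beta-tree construction):
# k-d-tree-style median splits plus a per-cell Clopper-Pearson
# interval, which is a Beta-quantile interval for the probability
# that a new point lands in a cell containing k of n sample points.
import numpy as np
from scipy.stats import beta

def build_partition(points, depth=0, min_leaf=50):
    """Recursively split on the sample median of a cycling coordinate."""
    n, d = points.shape
    if n <= min_leaf:
        return [points]                      # leaf: keep the cell's points
    axis = depth % d                         # cycle through coordinates
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:    # degenerate split: stop
        return [points]
    return (build_partition(left, depth + 1, min_leaf)
            + build_partition(right, depth + 1, min_leaf))

def cell_probability_ci(k, n, alpha=0.05):
    """Clopper-Pearson (Beta-quantile) interval for a cell with k of n points.

    Per-cell only; the paper's intervals are simultaneously valid.
    """
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))               # toy 3-d sample
cells = build_partition(X)
n = len(X)
for cell in cells[:3]:
    print(len(cell), cell_probability_ci(len(cell), n))
```

    Note how the interval in cell_probability_ci depends only on k and n (the probability content of the cell), never on the dimension of the space, which mirrors the dimension-free width property the abstract highlights.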

    Optimal Testing of Discrete Distributions with High Probability

    We study the problem of testing discrete distributions with a focus on the high-probability regime. Specifically, given samples from one or more discrete distributions, a property P, and parameters 0 < ε, δ < 1, we want to distinguish, with probability at least 1 − δ, whether these distributions satisfy P or are ε-far from P in total variation distance. Most prior work in distribution testing studied the constant-confidence case (corresponding to δ = Ω(1)) and provided sample-optimal testers for a range of properties. While one can always boost the confidence probability of any such tester by black-box amplification, this generic boosting method typically leads to sub-optimal sample bounds. Here we study the following broad question: for a given property P, can we characterize the sample complexity of testing P as a function of all relevant problem parameters, including the error probability δ? Prior to this work, uniformity testing was the only statistical task whose sample complexity had been characterized in this setting. As our main results, we provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors, as a function of all relevant parameters. We also show matching information-theoretic lower bounds on the sample complexity of these problems. Our techniques naturally extend to give optimal testers for related problems. To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and for testing closeness with unequal-sized samples.
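    For reference, here is a minimal sketch of the generic black-box amplification the abstract contrasts with: repeat a constant-confidence tester on fresh samples and take a majority vote. A Chernoff bound drives the error below δ at a multiplicative O(log(1/δ)) cost in samples, which is exactly the overhead the paper's direct testers avoid. The function names and the toy base tester below are hypothetical illustrations, not from the paper.

```python
# Generic black-box confidence amplification (the approach the
# abstract calls typically sub-optimal): run a constant-confidence
# tester ~18*ln(1/delta) times on fresh samples and take a majority
# vote.  If each run errs with probability <= 1/3, a Chernoff bound
# gives overall error probability <= delta.
import math
import random

def amplify(base_tester, draw_samples, n_base, delta):
    """Majority vote over independent runs of a 2/3-confidence tester."""
    reps = max(1, math.ceil(18 * math.log(1 / delta)))  # Chernoff-driven count
    votes = sum(base_tester(draw_samples(n_base)) for _ in range(reps))
    return votes > reps / 2                  # True = accept, False = reject

# Toy base tester (purely illustrative, not sample-optimal): accept
# "uniform over {0,...,9}" unless some value's empirical frequency
# is more than twice the uniform expectation.
def toy_uniformity_tester(samples):
    counts = [samples.count(v) for v in set(samples)]
    return max(counts) <= 2 * len(samples) / 10

draw = lambda n: [random.randrange(10) for _ in range(n)]
print(amplify(toy_uniformity_tester, draw, n_base=200, delta=1e-6))
```

    The total sample cost here is reps * n_base = O(n_base * log(1/δ)); the paper's contribution is testers whose sample complexity has a strictly better joint dependence on ε and δ than this multiplicative log(1/δ) blow-up.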
