
    Two-sample Bayesian Nonparametric Hypothesis Testing

    In this article we describe Bayesian nonparametric procedures for two-sample hypothesis testing. Namely, given two sets of samples $\mathbf{y}^{(1)} \stackrel{iid}{\sim} F^{(1)}$ and $\mathbf{y}^{(2)} \stackrel{iid}{\sim} F^{(2)}$, with $F^{(1)}, F^{(2)}$ unknown, we wish to evaluate the evidence for the null hypothesis $H_0: F^{(1)} \equiv F^{(2)}$ versus the alternative $H_1: F^{(1)} \neq F^{(2)}$. Our method is based upon a nonparametric Pólya tree prior centered either subjectively or using an empirical procedure. We show that the Pólya tree prior leads to an analytic expression for the marginal likelihood under the two hypotheses and hence an explicit measure of the probability of the null, $\mathrm{Pr}(H_0 \mid \{\mathbf{y}^{(1)}, \mathbf{y}^{(2)}\})$.
    Comment: Published at http://dx.doi.org/10.1214/14-BA914 in Bayesian Analysis (http://projecteuclid.org/euclid.ba) by the International Society for Bayesian Analysis (http://bayesian.org/).
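    Because the marginal likelihood is analytic, the resulting Bayes factor is cheap to compute. The Python sketch below illustrates the idea under simplifying assumptions: data are mapped to [0,1] by a normal CDF fitted to the pooled sample (an empirical centering), the tree uses dyadic partitions truncated at a fixed depth, and level-j splits get the canonical Beta(c*j^2, c*j^2) prior. Function names, defaults, and the centering choice are illustrative, not the authors' reference implementation.

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import norm

def pt_log_marginal(u, depth=8, c=1.0):
    """Log marginal likelihood of data u in [0,1] under a truncated Polya
    tree with dyadic partitions and Beta(c*j^2, c*j^2) splits at level j.
    The base-measure density term is omitted; it cancels in the Bayes factor."""
    logml = 0.0
    for j in range(1, depth + 1):
        a = c * j ** 2                                   # concentration at level j
        bins = np.clip((u * 2 ** j).astype(int), 0, 2 ** j - 1)
        counts = np.bincount(bins, minlength=2 ** j)
        left, right = counts[0::2], counts[1::2]         # sibling cells of each split
        logml += np.sum(betaln(a + left, a + right) - betaln(a, a))
    return logml

def pt_two_sample(y1, y2, depth=8, c=1.0):
    """Posterior probability of H0: F1 = F2 under equal prior odds."""
    pooled = np.concatenate([y1, y2])
    mu, sd = pooled.mean(), pooled.std(ddof=1)           # empirical centering
    u1, u2 = norm.cdf((y1 - mu) / sd), norm.cdf((y2 - mu) / sd)
    log_bf01 = (pt_log_marginal(np.concatenate([u1, u2]), depth, c)
                - pt_log_marginal(u1, depth, c)
                - pt_log_marginal(u2, depth, c))
    return 1.0 / (1.0 + np.exp(-log_bf01))               # Pr(H0 | data)
```

    With equal prior odds, a returned probability near one favors a common distribution; values near zero favor the alternative.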

    Recursive Partitioning for Heterogeneous Causal Effects

    In this paper we study the problems of estimating heterogeneity in causal effects in experimental or observational studies and conducting inference about the magnitude of the differences in treatment effects across subsets of the population. In applications, our method provides a data-driven approach to determine which subpopulations have large or small treatment effects and to test hypotheses about the differences in these effects. For experiments, our method allows researchers to identify heterogeneity in treatment effects that was not specified in a pre-analysis plan, without concern about invalidating inference due to multiple testing. In most of the literature on supervised machine learning (e.g., regression trees, random forests, LASSO), the goal is to build a model of the relationship between a unit's attributes and an observed outcome. A prominent role in these methods is played by cross-validation, which compares predictions to actual outcomes in test samples in order to select the level of complexity of the model that provides the best predictive power. Our method is closely related, but it differs in that it is tailored for predicting causal effects of a treatment rather than a unit's outcome. The challenge is that the "ground truth" for a causal effect is not observed for any individual unit: we observe the unit with the treatment, or without the treatment, but not both at the same time. Thus, it is not obvious how to use cross-validation to determine whether a causal effect has been accurately predicted. We propose several novel cross-validation criteria for this problem and demonstrate through simulations the conditions under which they perform better than standard methods for the problem of causal effects. We then apply the method to a large-scale field experiment re-ranking results on a search engine.
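    One simple criterion of this kind uses a transformed outcome: in a randomized experiment with treatment probability p, the quantity Y* = Y(W - p)/(p(1 - p)) has conditional expectation equal to the treatment effect, so it can stand in for the missing ground truth on a held-out fold. The sketch below shows only this transformed-outcome variant, with illustrative names; the paper studies several criteria.

```python
import numpy as np

def transformed_outcome(y, w, p=0.5):
    """Y_star = y * (w - p) / (p * (1 - p)). In a randomized experiment with
    treatment probability p, E[Y_star | X] equals the conditional average
    treatment effect, so Y_star is an unbiased (if noisy) proxy for the
    unobservable unit-level causal effect."""
    return y * (w - p) / (p * (1.0 - p))

def causal_cv_mse(tau_hat, y, w, p=0.5):
    """Transformed-outcome cross-validation criterion: mean squared error of
    predicted effects tau_hat against Y_star on a held-out fold. Lower values
    indicate better effect predictions when comparing model complexities."""
    return np.mean((transformed_outcome(y, w, p) - tau_hat) ** 2)
```

    The proxy is noisy at the unit level, which is precisely why comparing candidate models on its held-out MSE, rather than on raw outcomes, is the relevant criterion here.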

    Consistent distribution-free K-sample and independence tests for univariate random variables

    A popular approach for testing if two univariate random variables are statistically independent consists of partitioning the sample space into bins and evaluating a test statistic on the binned data. The partition size matters, and the optimal partition size is data dependent. While for detecting simple relationships coarse partitions may be best, for detecting complex relationships a great gain in power can be achieved by considering finer partitions. We suggest novel consistent distribution-free tests that are based on summation or maximization aggregation of scores over all partitions of a fixed size. We show that our test statistics based on summation can serve as good estimators of the mutual information. Moreover, we suggest regularized tests that aggregate over all partition sizes, and prove those are consistent too. We provide polynomial-time algorithms, which are critical for computing the suggested test statistics efficiently. We show that the power of the regularized tests is excellent compared to existing tests, and that they are almost as powerful as the tests based on the optimal (yet unknown in practice) partition size, in simulations as well as on a real data example.
    Comment: arXiv admin note: substantial text overlap with arXiv:1308.155
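    To make the summation aggregation concrete, the Python sketch below enumerates all partitions of the ranked sample into m contiguous cells and sums a likelihood-ratio score over the cells. It is deliberately brute force, O(n^(m-1)); the paper's polynomial-time algorithms avoid this enumeration. All names and the score details here are illustrative.

```python
import numpy as np
from itertools import combinations

def sum_aggregated_score(x, labels, m=3):
    """Sum of likelihood-ratio scores over all partitions of the ranked
    sample into m contiguous cells (a brute-force sketch of a summation-
    aggregated K-sample statistic; not the paper's efficient algorithm)."""
    g = np.asarray(labels)[np.argsort(x)]             # group labels in rank order
    n = len(g)
    props = {k: np.mean(g == k) for k in np.unique(g)}
    total = 0.0
    for cuts in combinations(range(1, n), m - 1):     # interior cut points
        edges = (0,) + cuts + (n,)
        for lo, hi in zip(edges[:-1], edges[1:]):
            cell = g[lo:hi]
            for k, pk in props.items():
                o = np.count_nonzero(cell == k)       # observed count in cell
                e = (hi - lo) * pk                    # expected count under H0
                if o > 0:
                    total += o * np.log(o / e)        # likelihood-ratio term
    return total
```

    In practice the statistic would be calibrated by permutation, since ranks make the null distribution free of the underlying marginals.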

    Testing the existence of clustering in the extreme values

    This paper introduces an estimator for the extremal index as the ratio of the number of elements of two point processes defined by threshold sequences $u_n$, $v_n$ and a partition of the sequence into blocks of the same size. The first point process is defined by the block maxima that exceed $u_n$. The paper introduces a thinning of this point process, defined by a threshold $v_n$ with $v_n > u_n$, with the appealing property that under some mild conditions the ratio of the number of elements of the two point processes is a consistent estimator of the extremal index. The method supports a hypothesis test for the extremal index, and hence for testing the existence of clustering in the extreme values. Other advantages are that it allows some freedom in choosing $u_n$, and that it is not very sensitive to the choice of the partition. Finally, the stylized facts found in financial returns (clustering, skewness, heavy tails) are tested via the extremal index, in this case for the DAX returns.
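    A minimal sketch of the ratio idea follows, with fixed thresholds and a fixed block size standing in for the sequences $u_n$, $v_n$ analyzed in the paper; names and defaults are illustrative.

```python
import numpy as np

def extremal_index_ratio(x, block_size, u, v):
    """Ratio estimator sketch: split the series into blocks of equal size,
    count blocks whose maximum exceeds u, count those whose maximum exceeds
    the higher threshold v (the thinned point process), and return the
    ratio. The paper derives conditions on the sequences u_n, v_n under
    which this ratio is a consistent estimator of the extremal index."""
    if not v > u:
        raise ValueError("the thinning threshold v must exceed u")
    x = np.asarray(x, dtype=float)
    n_blocks = len(x) // block_size                    # drop a ragged tail block
    maxima = x[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    n_u = np.count_nonzero(maxima > u)                 # first point process
    n_v = np.count_nonzero(maxima > v)                 # thinned point process
    return n_v / n_u if n_u else np.nan
```

    An extremal index estimate well below one signals clustering of exceedances, which is the hypothesis the paper's test targets.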

    Local Exchangeability

    Exchangeability---in which the distribution of an infinite sequence is invariant to reorderings of its elements---implies the existence of a simple conditional independence structure that may be leveraged in the design of probabilistic models, efficient inference algorithms, and randomization-based testing procedures. In practice, however, this assumption is too strong an idealization; the distribution typically fails to be exactly invariant to permutations, and de Finetti's representation theorem does not apply. Thus there is a need for a distributional assumption that is both weak enough to hold in practice and strong enough to guarantee a useful underlying representation. We introduce a relaxed notion of local exchangeability---where swapping data associated with nearby covariates causes a bounded change in the distribution. We prove that locally exchangeable processes correspond to independent observations from an underlying measure-valued stochastic process. We thereby show that de Finetti's theorem is robust to perturbation and provide further justification for the Bayesian modelling approach. Using this probabilistic result, we develop three novel statistical procedures for (1) estimating the underlying process via local empirical measures, (2) testing via local randomization, and (3) estimating the canonical premetric of local exchangeability. These three procedures extend the applicability of previous exchangeability-based methods without sacrificing rigorous statistical guarantees. The paper concludes with examples of popular statistical models that exhibit local exchangeability.
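    As a flavor of procedure (2), here is a hedged Python sketch of a local randomization test: only observations whose covariates lie within a small distance of one another are eligible to be swapped, so the permutation distribution respects approximate local exchangeability. The neighbor-pairing scheme, the tolerance eps, and all names are illustrative simplifications rather than the paper's exact construction.

```python
import numpy as np

def local_randomization_pvalue(x, y, stat, eps=0.1, n_perm=999, seed=0):
    """Permutation p-value for a statistic stat(x, y), where only pairs of
    observations with covariates within eps of each other may be swapped.
    Under local exchangeability, swapping eps-close points perturbs the
    joint distribution by a bounded amount, which motivates the test."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    t_obs = stat(x, y)
    order = np.argsort(x)                   # pair neighbors in covariate order
    exceed = 0
    for _ in range(n_perm):
        y_perm = y.copy()
        for i in range(0, len(order) - 1, 2):
            a, b = order[i], order[i + 1]
            if abs(x[a] - x[b]) <= eps and rng.random() < 0.5:
                y_perm[a], y_perm[b] = y_perm[b], y_perm[a]
        exceed += stat(x, y_perm) >= t_obs
    return (1 + exceed) / (1 + n_perm)      # add-one permutation p-value
```

    Restricting swaps to eps-close covariates is the essential departure from an ordinary permutation test, which would implicitly assume full exchangeability.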