
    Bootstrapping the general linear hypothesis test

    We discuss the use of bootstrap methodology in hypothesis testing, focusing on the classical F-test for linear hypotheses in the linear model. A modification of the F-statistic that allows for resampling under the null hypothesis is proposed. This approach is specifically considered in the one-way analysis of variance model. A simulation study illustrating the behaviour of our proposal is presented.
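The resampling scheme described above can be sketched as follows. Centring each group at its own mean is one standard way to impose the null hypothesis of equal means before resampling; it is an assumption here, not necessarily the paper's exact modification of the F-statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_statistic(groups):
    """Classical one-way ANOVA F-statistic."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def bootstrap_f_test(groups, n_boot=1000):
    """Bootstrap p-value for the F-test, resampling centred residuals
    so that the null hypothesis (equal group means) holds."""
    f_obs = f_statistic(groups)
    residuals = [g - g.mean() for g in groups]  # impose H0 by centring
    hits = 0
    for _ in range(n_boot):
        boot = [rng.choice(r, size=len(r), replace=True) for r in residuals]
        if f_statistic(boot) >= f_obs:
            hits += 1
    return f_obs, hits / n_boot

# three groups with equal means vs. one group with a shifted mean
groups_null = [rng.normal(0.0, 1.0, 20) for _ in range(3)]
groups_diff = [rng.normal(m, 1.0, 20) for m in (0.0, 0.0, 3.0)]
f_obs, p_null = bootstrap_f_test(groups_null)
_, p_diff = bootstrap_f_test(groups_diff)
```

Because the bootstrap samples are drawn from mean-zero residuals, they satisfy the null by construction, so comparing the observed F-statistic against the bootstrap distribution yields a valid p-value without relying on the asymptotic F reference distribution.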

    Local Variation as a Statistical Hypothesis Test

    The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpectedly high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, which relies on censored estimation, presents state-of-the-art results while keeping the same computational complexity as the LV algorithm.
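The core LV merge rule (Felzenszwalb and Huttenlocher 2004) can be sketched on an arbitrary weighted graph; the image-specific grid construction and the probabilistic pLV variants discussed in the paper are omitted here.

```python
# Sketch of the local-variation (Felzenszwalb-Huttenlocher) merge rule.
# Edges are processed in increasing weight order; two components merge
# only if the connecting edge is no heavier than each component's
# internal variation plus a size-dependent threshold k / |C|.

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n  # max edge weight inside each component

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = w  # valid because edges arrive in sorted order

def local_variation(n_nodes, edges, k=1.0):
    """edges: list of (weight, u, v) tuples; k controls the preference
    for larger components via the threshold k / |C|."""
    ds = DisjointSet(n_nodes)
    for w, u, v in sorted(edges):
        a, b = ds.find(u), ds.find(v)
        if a == b:
            continue
        if (w <= ds.internal[a] + k / ds.size[a]
                and w <= ds.internal[b] + k / ds.size[b]):
            ds.union(a, b, w)
    return [ds.find(i) for i in range(n_nodes)]

# two tight clusters joined by one heavy edge stay separate
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.1, 3, 4), (0.2, 4, 5), (5.0, 2, 3)]
labels = local_variation(6, edges, k=1.0)
```

The pLV algorithms in the paper replace this deterministic threshold with a hypothesis-testing decision based on natural-image statistics, while keeping the same sorted-edge, union-find structure (and hence the same complexity).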

    A Simple Hypothesis Test for Heteroscedasticity

    Abstract: The scope of this paper is the presentation of a simple hypothesis test that distinguishes heteroscedastic data from homoscedastic i.i.d. Gaussian white noise. The main feature is a test statistic that is easy to apply and performs well in such a test. The power of the statistic is illustrated by examples in which it is applied to stock market data and to time series from deterministic diffusion, a chaotic time-series process. It turns out that in those cases the statistic rejects the random walk hypothesis with a high degree of confidence and is therefore highly reliable. Furthermore, it is discussed that in most cases the test may also serve as a test for independence and for heteroscedasticity in general. This is exemplified with independent, identically distributed random numbers.
    Keywords: Heteroscedasticity, Hypothesis Test, Independence, Random Walk
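The abstract does not state the test statistic itself; as one illustrative stand-in for the kind of test described, a simple two-half variance-ratio F-test separates the same two situations. The function name and design below are illustrative assumptions, not the paper's statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def variance_ratio_test(x):
    """Split the series in half and F-test the two sample variances.
    An illustrative substitute for the paper's (unstated) statistic."""
    n = len(x)
    a, b = x[: n // 2], x[n // 2:]
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    df1, df2 = len(a) - 1, len(b) - 1
    # two-sided p-value for the observed F ratio
    p = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
    return f, p

homo = rng.normal(0.0, 1.0, 400)                   # i.i.d. Gaussian noise
hetero = rng.normal(0.0, np.repeat([1.0, 4.0], 200))  # variance jump
_, p_homo = variance_ratio_test(homo)
_, p_hetero = variance_ratio_test(hetero)
```

For homoscedastic white noise the ratio hovers near one and the test retains the null; for the series whose standard deviation jumps mid-sample, the test rejects decisively.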

    Hypothesis test for normal mixture models: The EM approach

    Normal mixture distributions are arguably the most important mixture models, and also the most technically challenging. The likelihood function of the normal mixture model is unbounded based on a set of random samples, unless an artificial bound is placed on its component variance parameter. Moreover, the model is not strongly identifiable, so it is hard to differentiate between over-dispersion caused by the presence of a mixture and that caused by a large variance, and it has infinite Fisher information with respect to the mixing proportions. There has been extensive research on finite normal mixture models, but much of it addresses merely consistency of the point estimation or useful practical procedures, and many results require undesirable restrictions on the parameter space. We show that an EM-test for homogeneity is effective at overcoming many challenges in the context of finite normal mixtures. We find that the limiting distribution of the EM-test is a simple function of the $0.5\chi^2_0 + 0.5\chi^2_1$ and $\chi^2_1$ distributions when the mixing variances are equal but unknown, and of the $\chi^2_2$ distribution when the variances are unequal and unknown. Simulations show that the limiting distributions approximate the finite-sample distribution satisfactorily. Two genetic examples are used to illustrate the application of the EM-test.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS651 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
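The limiting distributions quoted above translate directly into p-value computations. The sketch below assumes the EM-test statistic has already been computed (the full EM-test procedure is in the paper) and only maps a given statistic value to the stated limiting laws; the $\chi^2_0$ component is a point mass at zero, so for a positive statistic only the $\chi^2_1$ half of the mixture contributes.

```python
from scipy import stats

def pvalue_mixture_half_chi2(t):
    """P-value under the 0.5*chi2_0 + 0.5*chi2_1 limiting law.
    chi2_0 is a point mass at zero, so only the chi2_1 half
    contributes tail probability when t > 0."""
    return 0.5 * stats.chi2.sf(t, df=1) if t > 0 else 1.0

def pvalue_chi2(t, df):
    """P-value for the plain chi2_1 or chi2_2 limiting cases."""
    return stats.chi2.sf(t, df)

# a statistic of 3.84 (the 5% point of chi2_1) gives roughly 0.025
# under the half-half mixture, i.e. half the chi2_1 tail probability
p_mix = pvalue_mixture_half_chi2(3.84)
p_eq = pvalue_chi2(3.84, df=1)
```

A practical consequence: using the plain $\chi^2_1$ reference when the mixture law applies would double the reported p-value, making the homogeneity test needlessly conservative.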

    A nonparametric hypothesis test via the Bootstrap resampling

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test uses nonparametric kernel regression to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test makes it possible to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
    Keywords: Hypothesis test; the bootstrap; nonparametric regression; omitted variables
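A minimal sketch of the idea, assuming a linear null model, a Nadaraya-Watson kernel estimate, and a mean-squared distance between the two fits; the paper's exact distance measure and models are not specified in the abstract, and its implementation is in Mathematica rather than Python.

```python
import numpy as np

rng = np.random.default_rng(2)

def nw_fit(x, y, grid, h=0.1):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def distance_stat(x, y, h=0.1):
    """Mean-squared distance between the linear null fit and the
    kernel fit (an assumed, illustrative distance measure)."""
    beta = np.polyfit(x, y, 1)
    linear = np.polyval(beta, x)
    kernel = nw_fit(x, y, x, h)
    return np.mean((kernel - linear) ** 2), linear

def bootstrap_pvalue(x, y, n_boot=200, h=0.1):
    """Resample centred residuals around the null (linear) fit, so the
    bootstrap world satisfies the null hypothesis by construction."""
    t_obs, linear = distance_stat(x, y, h)
    resid = y - linear
    resid -= resid.mean()
    hits = 0
    for _ in range(n_boot):
        y_star = linear + rng.choice(resid, size=len(resid), replace=True)
        t_star, _ = distance_stat(x, y_star, h)
        if t_star >= t_obs:
            hits += 1
    return hits / n_boot

x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 100)  # nonlinear truth
p = bootstrap_pvalue(x, y)
```

Because the resampling shuffles the residuals independently of x, any systematic departure of the kernel fit from the linear fit in the observed data drives the distance statistic far above its bootstrap distribution, and the test rejects the linear null.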