    The Likelihood Ratio Test and Full Bayesian Significance Test under small sample sizes for contingency tables

    Hypothesis testing in contingency tables is usually based on asymptotic results, thereby restricting its proper use to large samples. To study these tests in small samples, we consider the likelihood ratio test and define an exact index, the P-value, for the celebrated hypotheses of homogeneity, independence, and Hardy-Weinberg equilibrium. The aim is to understand the behavior of the asymptotic results of the frequentist Likelihood Ratio Test and the Bayesian FBST -- Full Bayesian Significance Test -- under small-sample scenarios. The proposed exact P-value is used as a benchmark against which the other indices are judged. We perform the analysis in different scenarios, considering different sample sizes and different table dimensions. The exact Fisher test for 2 x 2 tables, which drastically reduces the sample space, is also discussed. The main message of this paper is that all indices behave very similarly, so tests based on asymptotic results remain reliable in any circumstance, even with small sample sizes.
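    The abstract does not give formulas, but the frequentist likelihood ratio test it refers to is standard: for an observed table with counts O and expected counts E under independence, the statistic is G = 2 * sum(O * ln(O/E)), asymptotically chi-squared. A minimal sketch (the function name and the example table are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def lr_test_independence(table):
        """Likelihood ratio (G) test of independence for a two-way contingency table.

        Returns the G statistic and its asymptotic chi-squared p-value.
        """
        table = np.asarray(table, dtype=float)
        n = table.sum()
        # Expected counts under independence: (row total * column total) / n
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
        # G = 2 * sum(O * ln(O / E)), with 0 * ln(0) taken as 0
        mask = table > 0
        g = 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))
        dof = (table.shape[0] - 1) * (table.shape[1] - 1)
        return g, chi2.sf(g, dof)

    # Hypothetical 2 x 2 table of counts
    g, p = lr_test_independence([[12, 5], [7, 15]])
    ```

    The paper's point is precisely that this asymptotic chi-squared p-value tracks the exact small-sample indices closely.
    
    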

    Relational models for contingency tables

    The paper considers general multiplicative models for complete and incomplete contingency tables that generalize log-linear and several other models and are entirely coordinate free. Sufficient conditions of the existence of maximum likelihood estimates under these models are given, and it is shown that the usual equivalence between multinomial and Poisson likelihoods holds if and only if an overall effect is present in the model. If such an effect is not assumed, the model becomes a curved exponential family and a related mixed parameterization is given that relies on non-homogeneous odds ratios. Several examples are presented to illustrate the properties and use of such models

    Making Markov chains less lazy

    The mixing time of an ergodic, reversible Markov chain can be bounded in terms of the eigenvalues of the chain: specifically, the second-largest eigenvalue and the smallest eigenvalue. It has become standard to focus only on the second-largest eigenvalue, by making the Markov chain "lazy". (A lazy chain does nothing at each step with probability at least 1/2, and has only nonnegative eigenvalues.) An alternative approach to bounding the smallest eigenvalue was given by Diaconis and Stroock and by Diaconis and Saloff-Coste. We give examples to show that using this approach it can be quite easy to obtain a bound on the smallest eigenvalue of a combinatorial Markov chain which is several orders of magnitude below the best-known bound on the second-largest eigenvalue.
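    The "lazy" construction the abstract mentions is simply P_lazy = (I + P)/2: each eigenvalue lambda of P becomes (1 + lambda)/2, which is nonnegative, so only the second-largest eigenvalue remains relevant. A small sketch on an assumed example chain (the random walk on a 4-cycle, which is periodic and has eigenvalue -1):

    ```python
    import numpy as np

    # Transition matrix of the simple random walk on a 4-cycle.
    # Its eigenvalues are cos(pi * k / 2) for k = 0..3, i.e. 1, 0, -1, 0,
    # so the chain is periodic and the smallest eigenvalue is -1.
    P = np.array([
        [0.0, 0.5, 0.0, 0.5],
        [0.5, 0.0, 0.5, 0.0],
        [0.0, 0.5, 0.0, 0.5],
        [0.5, 0.0, 0.5, 0.0],
    ])

    # Lazy version: stay put with probability 1/2.
    # Each eigenvalue lambda maps to (1 + lambda) / 2, hence all nonnegative.
    P_lazy = 0.5 * (np.eye(4) + P)

    eig = np.sort(np.linalg.eigvalsh(P))            # [-1, 0, 0, 1]
    eig_lazy = np.sort(np.linalg.eigvalsh(P_lazy))  # [0, 0.5, 0.5, 1]
    ```

    The cost of laziness is that the lazy chain moves only half the time, roughly doubling the mixing time; the paper's alternative is to bound the smallest eigenvalue of the original chain directly.
    
    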