79 research outputs found

    Bayesian model comparison and model averaging for small-area estimation

    Full text link
    This paper considers small-area estimation with lung cancer mortality data, and discusses the choice of upper-level model for the variation over areas. Inference about the random effects for the areas may depend strongly on the choice of this model, but this choice is not a straightforward matter. We give a general methodology for both evaluating the data evidence for different models and averaging over plausible models to give robust area effect distributions. We reanalyze the data of Tsutakawa [Biometrics 41 (1985) 69--79] on lung cancer mortality rates in Missouri cities, and show the differences in conclusions about the city rates from this methodology. Comment: Published at http://dx.doi.org/10.1214/08-AOAS205 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
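
    As a rough sketch of the model-averaging idea described above (not the paper's actual Missouri analysis), the Python snippet below mixes posterior draws of one area's log mortality rate from several hypothetical upper-level models in proportion to posterior model probabilities computed from assumed log marginal likelihoods; all model names and numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical posterior draws of one area's log-rate under three candidate
    # upper-level models for the area effects (values are invented for illustration).
    draws = {
        "normal":   rng.normal(-6.90, 0.10, size=5000),
        "t4":       -6.90 + 0.10 * rng.standard_t(4, size=5000),
        "discrete": rng.choice([-7.05, -6.90, -6.75], p=[0.2, 0.6, 0.2], size=5000),
    }

    # Assumed log marginal likelihoods: the "data evidence" for each model.
    log_ml = {"normal": -412.3, "t4": -411.8, "discrete": -413.0}

    # Posterior model probabilities under equal prior model weights.
    lse = np.logaddexp.reduce(list(log_ml.values()))
    post_prob = {m: float(np.exp(v - lse)) for m, v in log_ml.items()}

    # Model-averaged posterior: resample each model's draws in proportion to its weight.
    n = 5000
    averaged = np.concatenate([
        rng.choice(draws[m], size=int(round(p * n))) for m, p in post_prob.items()
    ])

    print({m: round(p, 3) for m, p in post_prob.items()})
    print("model-averaged mean:", averaged.mean().round(3))
    print("95% interval:", np.percentile(averaged, [2.5, 97.5]).round(3))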

    A new Bayesian approach for determining the number of components in a finite mixture

    Get PDF
    This article evaluates a new Bayesian approach to determining the number of components in a finite mixture. We evaluate it through simulation studies of mixtures of normals and latent class mixtures of Bernoulli responses. For normal mixtures we use a “gold standard” set of population models based on a well-known “testbed” data set, the galaxy recession velocity data set of Roeder (1990). For Bernoulli latent class mixtures we consider models for psychiatric diagnosis (Berkhof, van Mechelen and Gelman 2003). The new approach is based on comparing models with different numbers of components through their posterior deviance distributions, based on non-informative or diffuse priors. Simulations show that even large numbers of closely spaced normal components can be identified with sufficiently large samples, while for latent classes with Bernoulli responses identification is more complex, though it again improves with increasing sample size.
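
    The core comparison step, looking at the posterior deviance distributions of competing mixtures, can be sketched as follows. The "posterior draws" are simulated stand-ins for MCMC output under diffuse priors and the toy data only mimic the flavour of the galaxy example, so nothing here reproduces the article's results.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    # Toy "velocity" data with three loose clumps (illustrative, not the galaxy data).
    y = rng.normal(loc=np.repeat([9.0, 20.0, 23.0], [7, 60, 15]), scale=1.0)

    def deviance(y, weights, means, sds):
        """Deviance (-2 * log-likelihood) of a normal mixture at one parameter draw."""
        dens = np.sum(weights * norm.pdf(y[:, None], means, sds), axis=1)
        return -2.0 * np.sum(np.log(dens))

    def fake_posterior_draws(k, n_draws=2000):
        """Stand-in for posterior draws of a k-component mixture (hypothetical)."""
        centers = np.linspace(y.min(), y.max(), k)
        for _ in range(n_draws):
            means = centers + rng.normal(0, 0.3, size=k)
            sds = np.abs(rng.normal(1.5, 0.2, size=k))
            w = rng.dirichlet(np.ones(k) * 5)
            yield w, means, sds

    # Posterior deviance distribution for each candidate number of components.
    dev = {k: np.array([deviance(y, w, m, s) for w, m, s in fake_posterior_draws(k)])
           for k in (2, 3, 4)}

    # Compare models by the probability that one has smaller deviance than another.
    for k in (3, 4):
        p = np.mean(rng.permutation(dev[k]) < rng.permutation(dev[k - 1]))
        print(f"P(deviance of {k}-component model < {k-1}-component model) = {p:.2f}")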

    Statistical modelling of a terrorist network

    Get PDF
    This paper investigates the group structure in a terrorist network through the latent class model and a Bayesian model comparison method for the number of latent classes. The analysis of the terrorist network is sensitive to the model specification. Under one model it clearly identifies a group containing the leaders and organisers, and the group structure suggests a hierarchy of leaders, trainers and “footsoldiers” who carry out the attacks.

    Comparing methods to estimate treatment effects on a continuous outcome in multicentre randomized controlled trials: A simulation study

    Get PDF
    Background: Multicentre randomized controlled trials (RCTs) routinely use randomization and analysis stratified by centre to control for differences between centres and to improve precision. No consensus has been reached on how best to analyze correlated continuous outcomes in such settings. Our objective was to investigate the properties of commonly used statistical models at various levels of clustering in the context of multicentre RCTs.
    Methods: Assuming no treatment by centre interaction, we compared six methods (ignoring centre effects, including centres as fixed effects, including centres as random effects, generalized estimating equations (GEE), and fixed- and random-effects centre-level analysis) for analyzing continuous outcomes in multicentre RCTs, using simulations over a wide spectrum of intraclass correlation (ICC) values and varying numbers of centres and centre sizes. The performance of the models was evaluated in terms of bias, precision, mean squared error of the point estimator of treatment effect, empirical coverage of the 95% confidence interval, and statistical power of the procedure.
    Results: While all methods yielded unbiased estimates of treatment effect, ignoring centres led to inflation of the standard error and loss of statistical power when within-centre correlation was present. The mixed-effects model was the most efficient and attained nominal 95% coverage and 90% power in almost all scenarios. The fixed-effects model was less precise when the number of centres was large and treatment allocation was subject to chance imbalance within centres. The GEE approach underestimated the standard error of the treatment effect when the number of centres was small. The two centre-level models led to more variable point estimates and relatively low interval coverage or statistical power, depending on whether or not heterogeneity of treatment contrasts was considered in the analysis.
    Conclusions: All six models produced unbiased estimates of treatment effect in the context of multicentre trials. Adjusting for centre as a random intercept led to the most efficient treatment effect estimation across all simulations under the normality assumption, when there was no treatment by centre interaction.
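
    As a small illustration of the kind of comparison run in these simulations (two of the six methods only), the sketch below simulates one multicentre trial with a chosen ICC and estimates the treatment effect while ignoring centres and with centre as a random intercept. The parameter values, the simple (unstratified) randomisation, and the use of statsmodels are assumptions for illustration, not the paper's design.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    n_centres, per_centre = 20, 30
    icc, total_var, true_effect = 0.10, 4.0, 1.0
    sigma_b = np.sqrt(icc * total_var)          # between-centre SD
    sigma_e = np.sqrt((1 - icc) * total_var)    # within-centre SD

    centre = np.repeat(np.arange(n_centres), per_centre)
    treat = rng.integers(0, 2, size=centre.size)            # simple randomisation
    b = rng.normal(0, sigma_b, size=n_centres)[centre]      # random centre intercepts
    y = 10.0 + true_effect * treat + b + rng.normal(0, sigma_e, size=centre.size)
    data = pd.DataFrame({"y": y, "treat": treat, "centre": centre})

    # Method 1: ignore centres entirely.
    ols = smf.ols("y ~ treat", data).fit()

    # Method 2: centre as a random intercept (mixed-effects model).
    mixed = smf.mixedlm("y ~ treat", data, groups=data["centre"]).fit()

    print("OLS:   effect %.3f, SE %.3f" % (ols.params["treat"], ols.bse["treat"]))
    print("Mixed: effect %.3f, SE %.3f" % (mixed.params["treat"], mixed.bse["treat"]))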

    Statistical inference: an integrated Bayesian/likelihood approach

    No full text
    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing. After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout.
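
    One way to picture the "Bayesian counterpart of a t-test" mentioned in the blurb is to simulate from the marginal posteriors of two normal means under flat (diffuse) priors and summarise the posterior of their difference. The data and the independent-groups setup below are invented for illustration and are not an example taken from the book.

    import numpy as np

    rng = np.random.default_rng(3)
    a = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9])
    b = np.array([5.8, 6.1, 5.5, 6.0, 5.7, 6.3])

    def posterior_mean_draws(x, n_draws=20000):
        """Draws from the marginal posterior of a normal mean under flat priors:
        mu | data = xbar + (s / sqrt(n)) * t_{n-1}."""
        n, xbar, s = len(x), x.mean(), x.std(ddof=1)
        return xbar + (s / np.sqrt(n)) * rng.standard_t(n - 1, size=n_draws)

    diff = posterior_mean_draws(b) - posterior_mean_draws(a)
    print("posterior P(mean_b > mean_a) =", np.mean(diff > 0))
    print("95% credible interval for the difference:",
          np.percentile(diff, [2.5, 97.5]).round(2))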

    Statistical modeling of the National Assessment of Educational Progress / Aitkin

    No full text
    xii, 159 pages; 26 cm