
    Reflections on statistical modelling: A conversation with Murray Aitkin

    A virtual interview with Murray Aitkin by Brian Francis and John Hinde, two of the original members of the Centre for Applied Statistics that Murray created at Lancaster University. The talk ranges over Murray's reflections on a career in statistical modelling and the many different collaborations across the world that have been such a significant part of it.

    Using the posterior distribution of deviance to measure evidence of association for rare susceptibility variants

    Aitkin recently proposed an integrated Bayesian/likelihood approach that he claims is general and simple. We have applied this method, which does not rely on informative prior probabilities or large-sample results, to investigate the evidence of association between disease and the 16 variants in the KDR gene provided by Genetic Analysis Workshop 17. Based on the likelihood of logistic regression models and considering noninformative uniform prior probabilities on the coefficients of the explanatory variables, we used a random walk Metropolis algorithm to simulate the distributions of deviance and deviance difference. The distribution of probability values and the distribution of the proportions of positive deviance differences showed different locations, but the direction of the shift depended on the genetic factor. For the variant with the highest minor allele frequency and for any rare variant, standard logistic regression showed a higher power than the novel approach. For the two variants with the strongest effects on Q1 under a type I error rate of 1%, the integrated approach showed a higher power than standard logistic regression. The advantages and limitations of the integrated Bayesian/likelihood approach should be investigated using additional regions and considering alternative regression models and collapsing methods.
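    As a rough illustration of the approach described above (not the authors' code), the sketch below uses a random-walk Metropolis sampler with flat priors on logistic regression coefficients to draw posterior deviances for a variant model and a null model, and reports the proportion of positive deviance differences; the data, sample size and effect size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(beta, X, y):
    """Bernoulli log-likelihood of a logistic regression."""
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

def posterior_deviance(X, y, n_iter=10_000, step=0.1):
    """Random-walk Metropolis draws of the deviance D = -2 * log-likelihood.
    With flat priors the acceptance ratio reduces to the likelihood ratio;
    the first half of the chain is discarded as burn-in."""
    beta = np.zeros(X.shape[1])
    ll = log_lik(beta, X, y)
    dev = np.empty(n_iter)
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        ll_prop = log_lik(prop, X, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            beta, ll = prop, ll_prop
        dev[i] = -2.0 * ll
    return dev[n_iter // 2:]

# Hypothetical data: y is disease status, x codes carriage of a rare variant.
n = 500
x = rng.binomial(1, 0.05, size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x))))

d_full = posterior_deviance(np.column_stack([np.ones(n), x]), y)  # variant model
d_null = posterior_deviance(np.ones((n, 1)), y)                   # intercept only

# Evidence of association: proportion of positive deviance differences
# (null-model deviance minus full-model deviance) across paired draws.
print(np.mean(d_null - d_full > 0))
```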

    From bioavailability science to regulation of organic chemicals

    The bioavailability of organic chemicals in soil and sediment is an important area of scientific investigation for environmental scientists, although this area of study remains only partially recognized by regulators and industries working in the environmental sector. Regulators have recently started to consider bioavailability within retrospective risk assessment frameworks for organic chemicals; by doing so, realistic decision-making with regard to polluted environments can be achieved, rather than relying on the traditional approach of using total-extractable concentrations. However, implementation remains difficult because scientific developments on bioavailability are not always translated into ready-to-use approaches for regulators. Similarly, bioavailability remains largely unexplored within prospective regulatory frameworks that address the approval and regulation of organic chemicals. This article discusses bioavailability concepts and methods, as well as possible pathways for the implementation of bioavailability into risk assessment and regulation; in addition, this article offers a simple, pragmatic and justifiable approach for use within retrospective and prospective risk assessment.

    Analysing cognitive test data: Distributions and non-parametric random effects

    An important assumption in many linear mixed models is that the conditional distribution of the response variable is normal. This assumption is violated when the models are fitted to an outcome variable that counts the number of correctly answered questions in a questionnaire. Examples include investigations of cognitive decline in which models are fitted to Mini Mental State Examination scores, the most widely used test to measure global cognition. Mini Mental State Examination scores take integer values in the 0–30 range, and their distribution has strong ceiling and floor effects. This article explores alternative distributions for the outcome variable in mixed models fitted to Mini Mental State Examination scores from a longitudinal study of ageing. Model fit improved when a beta-binomial distribution was chosen as the distribution for the response variable.
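    A minimal sketch of the distributional comparison mentioned above (it omits the random-effects part of the mixed model): fitting a beta-binomial versus a binomial distribution by maximum likelihood to hypothetical bounded scores out of 30 that show a ceiling effect.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
n_items = 30  # questions in the test, so scores lie in 0-30 as for the MMSE

# Hypothetical scores with a strong ceiling effect.
scores = rng.binomial(n_items, rng.beta(8, 2, size=400))

def neg_loglik_betabinom(params):
    # Optimise on the log scale so the shape parameters stay positive.
    a, b = np.exp(params)
    return -np.sum(stats.betabinom.logpmf(scores, n_items, a, b))

fit = optimize.minimize(neg_loglik_betabinom, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)

# Binomial fit for comparison: the MLE of p is the mean proportion correct.
p_hat = scores.mean() / n_items
ll_binom = np.sum(stats.binom.logpmf(scores, n_items, p_hat))
ll_betabinom = -fit.fun

print(f"beta-binomial a={a_hat:.2f}, b={b_hat:.2f}")
print(f"log-likelihood: binomial {ll_binom:.1f}, beta-binomial {ll_betabinom:.1f}")
```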

    Study protocol: SPARCLE – a multi-centre European study of the relationship of environment to participation and quality of life in children with cerebral palsy

    BACKGROUND: SPARCLE is a nine-centre European epidemiological research study examining the relationship of participation and quality of life to impairment and environment (physical, social and attitudinal) in 8–12 year old children with cerebral palsy. Concepts are adopted from the International Classification of Functioning, Disability and Health, which bridges the medical and social models of disability. METHODS/DESIGN: A cross-sectional study of children with cerebral palsy sampled from total population databases in nine European regions. Children were visited by research associates in each country who had been trained together. The main instruments used were KIDSCREEN, Life-H, the Strengths and Difficulties Questionnaire and the Parenting Stress Index. A measure of environment was developed within the study. All instruments were translated according to international guidelines. The potential for bias due to non-response and missing data will be examined. After an initial analysis using multivariate regression of how the data captured by each instrument relate to impairment and socio-economic characteristics, relationships between the latent traits captured by the instruments will then be analysed using structural equation modelling. DISCUSSION: This study is original in its methods: it directly engages children themselves, ensures those with learning or communication difficulty are not excluded, and studies in quantitative terms the crucial outcomes of participation and quality of life. Specification and publication of this protocol prior to analysis, which is not common in epidemiology but well established for randomised controlled trials and systematic reviews, should avoid the pitfalls of data dredging and post hoc analyses.

    Understanding Variation in Sets of N-of-1 Trials.

    A recent paper in this journal by Chen and Chen used computer simulations to examine a number of approaches to analysing sets of n-of-1 trials. We have examined such designs using a more theoretical approach based on the purpose of the analysis and the randomisation structure of the design. We show that different purposes require different analyses and that these in turn may produce quite different results. Our approach to incorporating the randomisation employed, when the purpose is to test a null hypothesis of strict equality of the treatments, makes use of Nelder's theory of general balance. However, where the purpose is to make inferences about the effects for individual patients, we show that a mixed model is needed. There are strong parallels to the difference between fixed- and random-effects meta-analyses, and these are discussed.
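    A minimal sketch, under assumed data and effect sizes rather than the paper's own analysis, of the kind of mixed model the abstract points to for inference about individual patients: each patient contributes several crossover cycles, and the treatment effect has both a fixed mean and a patient-level random slope.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_patients, n_cycles = 20, 4  # hypothetical: 20 patients, 4 crossover cycles each

rows = []
for pid in range(n_patients):
    baseline = rng.normal(10.0, 1.0)          # patient-specific level
    effect = rng.normal(0.5, 0.5)             # patient-specific treatment effect
    for cycle in range(n_cycles):
        for treat in (0, 1):                  # each cycle compares both treatments
            y = baseline + effect * treat + rng.normal(0.0, 1.0)
            rows.append({"patient": pid, "cycle": cycle, "treat": treat, "y": y})
data = pd.DataFrame(rows)

# Random intercept and random treatment slope per patient: estimates for an
# individual patient borrow strength from, and are shrunk towards, the mean
# effect across patients.
model = smf.mixedlm("y ~ treat", data, groups="patient", re_formula="~treat")
result = model.fit()
print(result.summary())
```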

    Comparing methods to estimate treatment effects on a continuous outcome in multicentre randomized controlled trials: A simulation study

    BACKGROUND: Multicentre randomized controlled trials (RCTs) routinely use randomization and analysis stratified by centre to control for differences between centres and to improve precision. No consensus has been reached on how best to analyse correlated continuous outcomes in such settings. Our objective was to investigate the properties of commonly used statistical models at various levels of clustering in the context of multicentre RCTs. METHODS: Assuming no treatment-by-centre interaction, we compared six methods (ignoring centre effects, including centres as fixed effects, including centres as random effects, generalized estimating equations (GEE), and fixed- and random-effects centre-level analyses) for analysing continuous outcomes in multicentre RCTs, using simulations over a wide spectrum of intraclass correlation (ICC) values and varying numbers of centres and centre sizes. The performance of the models was evaluated in terms of bias, precision, mean squared error of the point estimator of treatment effect, empirical coverage of the 95% confidence interval, and statistical power of the procedure. RESULTS: While all methods yielded unbiased estimates of treatment effect, ignoring centres led to inflation of the standard error and loss of statistical power when within-centre correlation was present. The mixed-effects model was the most efficient and attained nominal coverage of 95% and 90% power in almost all scenarios. The fixed-effects model was less precise when the number of centres was large and treatment allocation was subject to chance imbalance within centres. The GEE approach underestimated the standard error of the treatment effect when the number of centres was small. The two centre-level models led to more variable point estimates and relatively low interval coverage or statistical power, depending on whether or not heterogeneity of treatment contrasts was considered in the analysis. CONCLUSIONS: All six models produced unbiased estimates of treatment effect in the context of multicentre trials. Adjusting for centre as a random intercept led to the most efficient treatment effect estimation across all simulations under the normality assumption, when there was no treatment-by-centre interaction.
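    A minimal sketch of a single simulation replicate in this spirit (the effect size, ICC and centre sizes are illustrative, not those of the study): the same clustered dataset is analysed ignoring centres, with fixed centre effects, and with a random centre intercept, and the treatment estimates and standard errors are compared.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_centres, per_centre, effect, icc = 20, 30, 0.3, 0.1  # illustrative values
sd_centre = np.sqrt(icc)          # total outcome variance fixed at 1
sd_resid = np.sqrt(1.0 - icc)

rows = []
for c in range(n_centres):
    u = rng.normal(0.0, sd_centre)                        # centre effect
    treat = rng.permutation([0, 1] * (per_centre // 2))   # randomised within centre
    y = effect * treat + u + rng.normal(0.0, sd_resid, per_centre)
    rows.extend({"centre": c, "treat": t, "y": yi} for t, yi in zip(treat, y))
data = pd.DataFrame(rows)

ignore = smf.ols("y ~ treat", data).fit()                      # centres ignored
fixed = smf.ols("y ~ treat + C(centre)", data).fit()           # fixed centre effects
mixed = smf.mixedlm("y ~ treat", data, groups="centre").fit()  # random intercept

for name, res in [("ignored", ignore), ("fixed", fixed), ("mixed", mixed)]:
    print(f"{name:>7}: estimate={res.params['treat']:.3f}, se={res.bse['treat']:.3f}")
```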