
    Power, Reliability, and Heterogeneous Results


    Simulation Experiments as a Causal Problem

    Simulation methods are among the most ubiquitous methodological tools in statistical science. In particular, statisticians often use simulation to explore properties of statistical functionals in models for which developed statistical theory is insufficient, or to assess finite sample properties of theoretical results. We show that the design of simulation experiments can be viewed from the perspective of causal intervention on a data generating mechanism. We then demonstrate the use of causal tools and frameworks in this context. Our perspective is agnostic to the particular domain of the simulation experiment, which increases the potential impact of our proposed approach. In this paper, we consider two illustrative examples. First, we re-examine a predictive machine learning example from a popular textbook designed to assess the relationship between mean function complexity and the mean-squared error. Second, we discuss a traditional causal inference method problem, simulating the effect of unmeasured confounding on estimation, specifically to illustrate bias amplification. In both cases, applying causal principles and using graphical models with parameters and distributions as nodes in the spirit of influence diagrams can 1) make precise which estimand the simulation targets, 2) suggest modifications to better attain the simulation goals, and 3) provide scaffolding to discuss performance criteria for a particular simulation design. Comment: 19 pages, 17 figures. Under review at Statistical Science.
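    The bias-amplification phenomenon this abstract mentions can be reproduced in a few lines. This is a minimal sketch, not the paper's simulation: the linear data-generating mechanism, coefficients, and variable names below are illustrative assumptions. Z acts as an instrument (it affects the exposure X but not the outcome Y directly), and U is an unmeasured confounder; adjusting for Z amplifies the confounding bias from U.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    beta = 1.0  # true causal effect of X on Y

    # Hypothetical linear mechanism: Z -> X, U -> X, U -> Y (U unmeasured)
    Z = rng.normal(size=n)
    U = rng.normal(size=n)
    X = 1.5 * Z + 1.0 * U + rng.normal(size=n)
    Y = beta * X + 1.0 * U + rng.normal(size=n)

    def ols_coef(y, *cols):
        """Least-squares coefficient of the first regressor (intercept included)."""
        A = np.column_stack(cols + (np.ones(len(y)),))
        return np.linalg.lstsq(A, y, rcond=None)[0][0]

    bias_unadjusted = ols_coef(Y, X) - beta      # confounding bias from U alone
    bias_adjusted = ols_coef(Y, X, Z) - beta     # larger: conditioning on the instrument
    ```

    Conditioning on Z removes part of the "good" variation in X while leaving the confounded part from U untouched, so the residual confounding makes up a larger share of the remaining variation, which is the bias-amplification mechanism.
    
    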

    Ratio data: Understanding pitfalls and knowing when to standardise

    Ratios represent a single-value metric but consist of two component parts: a numerator variable and a denominator variable. Strictly speaking, a ratio is defined as: “the quantitative relation between two amounts showing the number of times one value contains or is contained by another”. When we discuss symmetry in sport science, we are generally comparing values of some metric between left and right sides or between agonist and antagonist muscles. The typical practice is to express the comparison as a ratio (differences are also a way of standardising under different assumptions), such as the injured limb having only 60% of the strength of the uninjured limb. Conceptually, though, we are using the ratio as one way to standardise the value of one variable with respect to another. Despite their common use, the interpretation of ratio standardisation, whether for symmetry or other reasons, often presents challenges, some of which are not always obvious to practitioners. Typically, when monitoring a change in ratios, if an intervention affects both the numerator and denominator, there will likely be challenges in interpreting the ratio appropriately. Therefore, the aim of this editorial is to use some examples to highlight when using this form of standardisation may be helpful, and when using it can lead to misinterpretations.
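    The pitfall described above can be shown with a toy calculation; the limb-strength values below are hypothetical, not from the editorial. When an intervention improves both the numerator and the denominator proportionally, the symmetry ratio is unchanged and hides a genuine improvement in the injured limb.

    ```python
    # Hypothetical limb-strength values (arbitrary units) before and after training
    injured_pre, uninjured_pre = 60.0, 100.0
    injured_post, uninjured_post = 90.0, 150.0

    ratio_pre = injured_pre / uninjured_pre      # symmetry ratio before
    ratio_post = injured_post / uninjured_post   # symmetry ratio after

    # Both limbs improved by 50%, yet the ratio is identical before and after:
    # monitoring the ratio alone would miss a 30-unit gain in the injured limb.
    ```
    
    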

    Data-driven methods distort optimal cutoffs and accuracy estimates of depression screening tools: a simulation study using individual participant data

    To evaluate, across multiple sample sizes, the degree to which data-driven methods result in (1) optimal cutoffs different from the population optimal cutoff and (2) bias in accuracy estimates. NIMH - National Institute of Mental Health (13/00
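    The optimism this study evaluates can be sketched in a small simulation. The score distributions, sample sizes, cutoff grid, and number of replications below are assumptions for illustration, not the study's individual participant data: in each small sample, the cutoff maximising the in-sample Youden index is selected, and its in-sample accuracy is compared with its accuracy in a large "population" sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def youden(cases, controls, c):
        """Sensitivity + specificity - 1 at cutoff c (score >= c flags a case)."""
        return (cases >= c).mean() + (controls < c).mean() - 1

    def best_cutoff(cases, controls, grid):
        """Data-driven cutoff: maximise the Youden index in the given sample."""
        return grid[int(np.argmax([youden(cases, controls, c) for c in grid]))]

    grid = np.arange(5, 16)
    # Large "population" samples (hypothetical depression-score distributions)
    pop_cases = rng.normal(12, 3, 100_000)
    pop_controls = rng.normal(7, 3, 100_000)

    optimism = []
    for _ in range(200):
        cases = rng.normal(12, 3, 30)        # small validation study
        controls = rng.normal(7, 3, 30)
        c = best_cutoff(cases, controls, grid)
        optimism.append(youden(cases, controls, c) - youden(pop_cases, pop_controls, c))

    mean_optimism = float(np.mean(optimism))  # > 0: in-sample accuracy is inflated
    ```

    Because the cutoff is chosen to maximise accuracy in the same small sample used to estimate it, the in-sample estimate is systematically optimistic, and the selected cutoff drifts away from the population optimum as sample size shrinks.
    
    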

    Reducing bias through directed acyclic graphs

    Background: The objective of most biomedical research is to determine an unbiased estimate of effect for an exposure on an outcome, i.e. to make causal inferences about the exposure. Recent developments in epidemiology have shown that traditional methods of identifying confounding and adjusting for confounding may be inadequate. Discussion: The traditional methods of adjusting for "potential confounders" may introduce conditional associations and bias rather than minimize it. Although previously published articles have discussed the role of the causal directed acyclic graph (DAG) approach with respect to confounding, many clinical problems require complicated DAGs, and therefore investigators may continue to use traditional practices because they do not have the tools necessary to properly use the DAG approach. The purpose of this manuscript is to demonstrate a simple 6-step approach to the use of DAGs, and also to explain why the method works from a conceptual point of view. Summary: Using the simple 6-step DAG approach to confounding and selection bias discussed here is likely to reduce the degree of bias for the effect estimate in the chosen statistical model.
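    The abstract's point that adjusting for a "potential confounder" can introduce rather than remove bias is easy to demonstrate with a collider. This is a sketch under an assumed linear DAG (not the manuscript's example): C is a common effect of X and Y, so conditioning on it opens a spurious path and distorts the estimated effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000

    # Assumed DAG: X -> Y (true effect 1.0), and X -> C <- Y (C is a collider)
    X = rng.normal(size=n)
    Y = 1.0 * X + rng.normal(size=n)
    C = X + Y + rng.normal(size=n)  # a common effect, not a confounder

    def ols_coef(y, *cols):
        """Least-squares coefficient of the first regressor (intercept included)."""
        A = np.column_stack(cols + (np.ones(len(y)),))
        return np.linalg.lstsq(A, y, rcond=None)[0][0]

    crude = ols_coef(Y, X)        # close to the true effect 1.0
    adjusted = ols_coef(Y, X, C)  # badly biased by conditioning on the collider
    ```

    Here the "adjusted" estimate is driven toward zero even though the crude estimate was already unbiased, which is exactly why adjustment decisions should be read off a DAG rather than made by reflex.
    
    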