444 research outputs found

    Assessing Exposure-Response Trends Using the Disease Risk Score

    Standardization by a disease risk score (DRS) may be preferable to weighting on the exposure propensity score if the exposure is difficult to model (1), relatively novel (i.e., newly emerging or rapidly evolving), or extremely rare (2, 3). For exposures with more than two levels, methods are lacking for a DRS-based approach. We present an approach to estimate trends in standardized risk ratios (RRs) based on a regression model that uses a DRS.
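
    As a rough illustration of the idea, the sketch below simulates a three-level exposure, fits a DRS in the reference group, and computes SMR-type standardized RRs at each exposure level. The data-generating model and variable names are illustrative assumptions, not the authors' code.

```python
# A minimal sketch, assuming simulated data: DRS-based standardized risk
# ratios for a three-level exposure, standardized to each exposed group.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000
c1, c2 = rng.normal(size=n), rng.binomial(1, 0.4, n)

# Three-level exposure whose distribution depends on the confounders
logits = np.column_stack([np.zeros(n), 0.5 * c1, 0.5 * c1 + 0.5 * c2])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
x = (rng.random(n)[:, None] > probs.cumsum(axis=1)).sum(axis=1)

risk = 1 / (1 + np.exp(-(-2.5 + 0.3 * x + 0.6 * c1 + 0.4 * c2)))
df = pd.DataFrame({"y": rng.binomial(1, risk), "x": x, "c1": c1, "c2": c2})

# 1. Fit the DRS in the reference (lowest-exposure) group only
ref = df[df.x == 0]
drs_fit = sm.GLM(ref.y, sm.add_constant(ref[["c1", "c2"]]),
                 family=sm.families.Binomial()).fit()
df["drs"] = drs_fit.predict(sm.add_constant(df[["c1", "c2"]]))

# 2. Standardized RR at each level: observed risk in that level's group
#    divided by its expected risk under reference exposure (mean DRS)
for level in (1, 2):
    grp = df[df.x == level]
    print(level, grp.y.mean() / grp.drs.mean())  # trend in standardized RRs
```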

    Measurement Error and Environmental Epidemiology: a Policy Perspective

    PURPOSE OF REVIEW: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. RECENT FINDINGS: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure-response function, and a policy perspective, in which the investigator directly estimates the population impact of a proposed intervention. Under a policy perspective, the analyst must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology.

    You are smarter than you think: (super) machine learning in context

    We discuss an article on super learning by Naimi and Balzer in the current issue of this journal in the context of machine learning. We give a brief example that emphasizes the need for human intelligence in the rapidly evolving field of machine learning.

    The Epidemiologic toolbox: Identifying, honing, and using the right tools for the job

    There has been much debate about the relative emphasis of the field of epidemiology on causal inference. We believe this debate gives short shrift to the breadth of the field. Epidemiologists answer myriad questions that are not causal and hypothesize about and investigate causal relationships without estimating causal effects. Descriptive studies face significant and often overlooked inferential and interpretational challenges; we briefly articulate some of them and argue that a more detailed treatment of biases that affect single-sample estimation problems would benefit all types of epidemiologic studies. Lumping all questions about causality together creates ambiguity about the utility of different conceptual models and causal frameworks; 2 distinct types of causal questions include 1) hypothesis generation and theorization about causal structures and 2) hypothesis-driven causal effect estimation. The potential outcomes framework and causal graph theory help efficiently and reliably guide epidemiologic studies designed to estimate a causal effect to best leverage prior data, avoid cognitive fallacies, minimize biases, and understand heterogeneity in treatment effects. Appropriate matching of theoretical frameworks to research questions can increase the rigor of epidemiologic research and increase the utility of such research to improve public health.

    Amplification of Bias Due to Exposure Measurement Error

    Observational epidemiologic studies typically face challenges of exposure measurement error and confounding. Consider an observational study of the association between a continuous exposure and an outcome, where the exposure variable of primary interest suffers from classical measurement error (i.e., the measured exposures are distributed around the true exposure with independent error). In the absence of exposure measurement error, it is widely recognized that one should control for confounders of the association of interest to obtain an unbiased estimate of the effect of that exposure on the outcome of interest. However, here we show that, in the presence of classical exposure measurement error, the net bias in an estimate of the association of interest may increase upon adjustment for confounders. We offer an analytical expression for calculating the change in net bias in an estimate of the association of interest upon adjustment for a confounder in the presence of classical exposure measurement error, and we illustrate this problem using simulations.
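
    The amplification the authors describe can be reproduced in a few lines. The simulation below is a hedged sketch with illustrative parameter values (not the paper's code) chosen so that adjusting for the confounder moves the estimate further from the truth.

```python
# A hedged simulation of bias amplification under classical measurement
# error; all data-generating values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta, gamma, delta = 1.0, 1.0, 0.5   # true effect and confounding paths

c = rng.normal(size=n)               # confounder of exposure and outcome
x = gamma * c + rng.normal(size=n)   # true exposure
w = x + rng.normal(size=n)           # classically mismeasured exposure
y = beta * x + delta * c + rng.normal(size=n)

def slope_on_w(design):
    """OLS coefficient on w (column 1 of the design matrix)."""
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

ones = np.ones(n)
crude = slope_on_w(np.column_stack([ones, w]))
adjusted = slope_on_w(np.column_stack([ones, w, c]))
print(f"true {beta:.2f}  crude {crude:.3f}  adjusted {adjusted:.3f}")
# Expect roughly crude ~0.83 and adjusted ~0.50: conditioning on c shrinks
# Var(X | C) relative to the error variance, amplifying attenuation.
```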

    Invited Commentary: Causal Inference Across Space and Time - Quixotic Quest, Worthy Goal, or Both?

    The g-formula and agent-based models (ABMs) are 2 approaches used to estimate causal effects. In the current issue of the Journal, Murray et al. (Am J Epidemiol. 2017;186(2):131-142) compare the performance of the g-formula and ABMs to estimate causal effects in 3 target populations. In their thoughtful paper, the authors outline several reasons that a causal effect estimated using an ABM may be biased when parameterized from at least 1 source external to the target population. The authors have addressed an important issue in epidemiology: Often causal effect estimates are needed to inform public health decisions in settings without complete data. Because public health decisions are urgent, epidemiologists are frequently called upon to estimate a causal effect from existing data in a separate population rather than perform new data collection activities. The assumptions needed to transport causal effects to a specific target population must be carefully stated and assessed, just as one would explicitly state and analyze the assumptions required to draw internally valid causal inference in a specific study sample. Considering external validity in important target populations increases the impact of epidemiologic studies.

    Thirteen Questions about Using Machine Learning in Causal Research (You Won't Believe the Answer to Number 10!)

    Machine learning is gaining prominence in the health sciences, where much of its use has focused on data-driven prediction. However, machine learning can also be embedded within causal analyses, potentially reducing biases arising from model misspecification. Using a question-and-answer format, we provide an introduction and orientation for epidemiologists interested in using machine learning but concerned about potential bias or loss of rigor due to use of "black box" models. We conclude with sample software code that may lower the barrier to entry to using these techniques.
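
    As a separate, hedged sketch (not the paper's sample code) of one way machine learning can be embedded in a causal analysis, the example below uses gradient boosting for the nuisance models inside a cross-fitted augmented inverse-probability-weighted (AIPW) estimator of a risk difference. The simulated data and all modeling choices are illustrative assumptions.

```python
# Cross-fitted AIPW (doubly robust) estimation with ML nuisance models;
# a sketch under assumed simulated data, not a definitive implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
n = 5_000
c = rng.normal(size=(n, 3))                                  # confounders
a = rng.binomial(1, 1 / (1 + np.exp(-c[:, 0])))              # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a + c[:, 1]))))  # outcome

psi = np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(c):
    # Nuisance models are fit on training folds only (cross-fitting)
    g = GradientBoostingClassifier().fit(c[train], a[train])
    q = GradientBoostingClassifier().fit(
        np.column_stack([a[train], c[train]]), y[train])
    pi = np.clip(g.predict_proba(c[test])[:, 1], 0.01, 0.99)
    q1 = q.predict_proba(np.column_stack([np.ones(len(test)), c[test]]))[:, 1]
    q0 = q.predict_proba(np.column_stack([np.zeros(len(test)), c[test]]))[:, 1]
    at, yt = a[test], y[test]
    # AIPW influence-function contributions to the risk difference
    psi[test] = at * (yt - q1) / pi - (1 - at) * (yt - q0) / (1 - pi) + q1 - q0

rd, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"risk difference {rd:.3f} (SE {se:.3f})")
```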

    Marginal Structural Models for Risk or Prevalence Ratios for a Point Exposure Using a Disease Risk Score

    The disease risk score is a summary score that can be used to control for confounding with a potentially large set of covariates. While less widely used than the exposure propensity score, the disease risk score approach might be useful for novel or unusual exposures, when treatment indications or exposure patterns are rapidly changing, or when more is known about how covariates cause disease than about factors influencing propensity for the exposure of interest. Focusing on the simple case of a binary point exposure, we describe a marginal structural model for estimation of risk (or prevalence) ratios. The proposed model incorporates the disease risk score as an offset in a regression model, and it yields an estimate of a standardized risk ratio in which the target population is the exposed group. Simulations are used to illustrate the approach, and an empirical example is provided. Confounder control based on the proposed method might be a useful alternative, or complement, to approaches based on the exposure propensity score.
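
    A minimal sketch of the offset idea, assuming simulated data: fit the DRS among the unexposed with a log link, then fit an intercept-only log-link model among the exposed with log(DRS) as the offset. The exponentiated intercept is the risk ratio standardized to the exposed group. Names and models below are illustrative, not the authors' code.

```python
# DRS-as-offset sketch for a binary point exposure (assumed data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 30_000
c = rng.normal(size=n)
x = rng.binomial(1, 1 / (1 + np.exp(-c)))                  # exposure
risk = np.minimum(0.05 * np.exp(0.4 * x + 0.3 * c), 1.0)   # true RR = e^0.4
df = pd.DataFrame({"y": rng.binomial(1, risk), "x": x, "c": c})

# 1. Fit the DRS among the unexposed on the risk scale (log link)
unexp = df[df.x == 0]
drs_fit = sm.GLM(unexp.y, sm.add_constant(unexp[["c"]]),
                 family=sm.families.Poisson()).fit()
df["drs"] = drs_fit.predict(sm.add_constant(df[["c"]]))

# 2. Intercept-only log-link model among the exposed, offset log(DRS);
#    exp(intercept) = sum(y) / sum(drs), the SMR-type standardized RR
exposed = df[df.x == 1]
msm = sm.GLM(exposed["y"].to_numpy(), np.ones(len(exposed)),
             offset=np.log(exposed["drs"].to_numpy()),
             family=sm.families.Poisson()).fit()
print(np.exp(msm.params[0]))  # close to exp(0.4) ~ 1.49
```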

    Reducing Bias Due to Exposure Measurement Error Using Disease Risk Scores

    Suppose that an investigator wants to estimate an association between a continuous exposure variable and an outcome, adjusting for a set of confounders. If the exposure variable suffers classical measurement error, in which the measured exposures are distributed with independent error around the true exposure, then an estimate of the covariate-adjusted exposure-outcome association may be biased. We propose an approach to estimate a marginal exposure-outcome association in the setting of classical exposure measurement error, using a disease risk score-based approach to standardization to the exposed sample. First, we show that the proposed marginal estimate of the exposure-outcome association will suffer less bias due to classical measurement error than the covariate-conditional estimate of association when the covariates are predictors of exposure. Second, we show that if an exposure validation study is available with which to assess exposure measurement error, then the proposed marginal estimate of the exposure-outcome association can be corrected for measurement error more efficiently than the covariate-conditional estimate of association. We illustrate both of these points using simulations and an empirical example using data from the Orinda Longitudinal Study of Myopia (California, 1989-2001).
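
    The intuition for the first point can be shown with reliability-ratio arithmetic alone; the variance values below are illustrative assumptions, not figures from the paper.

```python
# Under classical error, a regression estimate attenuates by the
# reliability ratio: exposure variance left after any conditioning,
# divided by that variance plus the measurement-error variance.
var_x_given_c = 1.0   # residual exposure variance given the covariates
var_explained = 1.0   # exposure variance explained by the covariates
var_u = 1.0           # classical measurement-error variance

lam_conditional = var_x_given_c / (var_x_given_c + var_u)
lam_marginal = ((var_x_given_c + var_explained)
                / (var_x_given_c + var_explained + var_u))
print(lam_conditional, lam_marginal)  # 0.50 vs ~0.67
# The marginal (DRS-standardized) contrast attenuates less because the
# covariates' contribution to exposure variance still counts toward its
# reliability, which is why a validation-study correction inflates its
# variance less.
```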

    A Warning About Using Predicted Values to Estimate Descriptive Measures

    In a recent article in the Journal, Ogburn et al. highlighted the issues with using predicted values when estimating associations or effects. While the authors cautioned against using predicted values to estimate associations or effects, they noted that predictions can be useful for descriptive purposes. In this work, we highlight the issues with using individual-level predicted values to estimate population-level descriptive parameters.
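
    A small simulation makes the pitfall concrete: even predictions from a correctly specified model are less dispersed than the outcomes they predict, so plugging them into descriptive summaries misstates the population distribution. The data below are an illustrative assumption, not the authors' example.

```python
# Predictions equal the conditional mean, so they omit residual variation
# and understate population-level dispersion and tail proportions.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
c = rng.normal(size=n)
y = 2 * c + rng.normal(size=n)   # outcome: Var(Y) = 5
y_hat = 2 * c                    # the true conditional mean E[Y | C]

print(np.var(y), np.var(y_hat))            # ~5.0 vs ~4.0
print((y > 3).mean(), (y_hat > 3).mean())  # tail proportion understated
```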