7 research outputs found

    Invited Commentary: Treatment Drop-in – Making the Case for Causal Prediction.

    Clinical prediction models (CPMs) are often used to guide treatment initiation, with individuals at high risk offered treatment. This implicitly assumes that the probability quoted from a CPM represents the risk to an individual of an adverse outcome in the absence of treatment. However, for a CPM to correctly target this estimand requires careful causal thinking. One problem that needs to be overcome is treatment drop-in, where individuals in the development data commence treatment after the time of prediction but before the outcome occurs. In this issue of the Journal, Xu et al. (Am J Epidemiol. 2021;190(10):2000-2014) use causal estimates from external data sources, such as clinical trials, to adjust CPMs for treatment drop-in. This represents a pragmatic and promising approach to address this issue, and it illustrates the value of utilizing causal inference in prediction. Building causality into the prediction pipeline can also bring other benefits. These include the ability to make and compare hypothetical predictions under different interventions, to make CPMs more explainable and transparent, and to improve model generalizability. Enriching CPMs with causal inference therefore has the potential to add considerable value to the role of prediction in healthcare.
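The kind of adjustment described above can be illustrated with a minimal sketch. Assuming proportional hazards and a trial-derived hazard ratio (both are illustrative assumptions here; the function name and the numbers are hypothetical, not taken from Xu et al.), a risk predicted for a treated individual can be mapped back to the risk expected without treatment:

```python
def risk_off_treatment(risk_on_treatment: float, hazard_ratio: float) -> float:
    """Convert a predicted risk that reflects treatment into the risk expected
    without treatment, assuming proportional hazards.

    Under proportional hazards, S_treated = S_untreated ** HR, so
    S_untreated = S_treated ** (1 / HR) and
    risk_untreated = 1 - (1 - risk_treated) ** (1 / HR).
    """
    surv_treated = 1.0 - risk_on_treatment
    surv_untreated = surv_treated ** (1.0 / hazard_ratio)
    return 1.0 - surv_untreated

# Example: a predicted risk of 0.12 for a treated patient, with a
# trial-derived hazard ratio of 0.75 (treatment lowers the hazard),
# corresponds to a higher untreated risk of about 0.157.
print(round(risk_off_treatment(0.12, 0.75), 3))
```

Note that when the hazard ratio is 1 (no treatment effect), the function returns the input risk unchanged, as it should.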

    ICMHI 2021 (2021)

    Causal artificial intelligence aims at developing bias-robust models that can be used to intervene on, rather than just be predictive of, risks or outcomes. However, learning interventional models from observational data, including electronic health records (EHR), is challenging due to inherent biases, e.g., protopathic bias, confounding, and collider bias. When estimating the effects of treatment interventions, classical approaches like propensity score matching are often used, but they pose limitations with large feature sets, nonlinear/nonparallel treatment group assignments, and collider bias. In this work, we used data from a large EHR consortium (OneFlorida) and evaluated causal statistical/machine learning methods for determining the effect of statin treatment on the risk of Alzheimer's disease, a debated clinical research question. We introduced a combination of directed acyclic graph (DAG) learning and comparison with an expert-designed DAG, with calculation of the generalized adjustment criterion (GAC), to find an optimal set of covariates for estimation of treatment effects, ameliorating collider bias. The DAG/GAC approach was assessed together with traditional propensity score matching, inverse probability weighting, virtual-twin/counterfactual random forests, and deep counterfactual networks. We showed large heterogeneity in effect estimates upon different model configurations. Our results did not exclude a protective effect of statins, where the DAG/GAC point estimate aligned with the maximum credibility estimate, although the 95% credibility interval included a null effect, warranting further studies and replication.
    Funding: U18 DP006512 (NCCDPHP CDC HHS), R21 CA245858 (NCI NIH HHS), UL1 TR001427 (NCATS NIH HHS), R01 CA246418 (NCI NIH HHS), U18DP006512 (ACL HHS), R21 AG068717 (NIA NIH HHS); United States.
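One of the estimators listed above, inverse probability weighting, can be sketched in a few lines. The data, coefficients, and effect size below are entirely synthetic and hypothetical (this is not the OneFlorida analysis): a propensity model is fit for treatment given a confounder, and the weighted difference in outcome means estimates the average treatment effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic confounder, confounded treatment, and binary outcome
# (all hypothetical): older patients are more likely to be treated,
# and treatment lowers the outcome log-odds by 0.5.
age = rng.normal(70, 8, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(age - 70) / 8)), n)
logit = -2.0 + 0.05 * (age - 70) - 0.5 * treated
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)), n)

# 1. Fit a propensity model for treatment given the confounder.
ps = LogisticRegression().fit(age.reshape(-1, 1), treated)
ps = ps.predict_proba(age.reshape(-1, 1))[:, 1]

# 2. Inverse-probability weights: 1/ps for treated, 1/(1-ps) for controls.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# 3. Weighted difference in outcome means estimates the average
#    treatment effect (negative here, since treatment is protective).
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"IPW ATE estimate: {ate:.3f}")
```

The weighting creates a pseudo-population in which treatment is independent of the confounder, which is why a simple weighted mean difference recovers the causal contrast; the heavy-feature-set and collider limitations the abstract mentions are exactly what the DAG/GAC covariate selection is meant to address before a step like this.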

    How to develop, externally validate, and update multinomial prediction models

    Multinomial prediction models (MPMs) have a range of potential applications across healthcare where the primary outcome of interest has multiple nominal or ordinal categories. However, the application of MPMs is scarce, which may be due to the added methodological complexities that they bring. This article provides a guide on how to develop, externally validate, and update MPMs. Using a previously developed and validated MPM for treatment outcomes in rheumatoid arthritis as an example, we outline guidance and recommendations for producing a clinical prediction model using multinomial logistic regression. This article is intended to supplement existing general guidance on prediction model research. The guide is split into three parts: 1) outcome definition and variable selection, 2) model development, and 3) model evaluation (including performance assessment, internal and external validation, and model recalibration). We outline how to evaluate and interpret the predictive performance of MPMs. R code is provided. We recommend the application of MPMs in clinical settings where the prediction of a nominal polytomous outcome is of interest. Future methodological research could focus on MPM-specific considerations for variable selection and sample size criteria for external validation.
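The develop-then-validate workflow the guide describes can be sketched minimally. Everything below is synthetic and illustrative: the three outcome categories, the predictors, and their coefficients are hypothetical, and scikit-learn's multinomial logistic regression stands in for the R code the article itself provides.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(42)
n = 1000

# Hypothetical predictors and a three-category outcome (e.g. remission /
# partial response / no response), simulated from a known multinomial model.
X = rng.normal(size=(n, 3))
logits = X @ np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
logits = np.column_stack([np.zeros(n), logits])   # category 0 is the reference
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])

# Development: fit a multinomial logistic regression on a development split
# (with multi-class labels, LogisticRegression fits a multinomial model).
X_dev, y_dev, X_val, y_val = X[:700], y[:700], X[700:], y[700:]
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Evaluation on the held-out split: overall fit via the multinomial
# log-loss, plus per-category calibration-in-the-large (mean predicted
# probability vs observed frequency).
pred = model.predict_proba(X_val)
print("log-loss:", round(log_loss(y_val, pred), 3))
for k in range(3):
    print(f"category {k}: observed {np.mean(y_val == k):.3f}, "
          f"mean predicted {pred[:, k].mean():.3f}")
```

In a real external validation the held-out split would be replaced by an independent cohort, and, as the article discusses, evaluation would extend beyond overall fit to category-specific discrimination and recalibration.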

    Prediction or causality? A scoping review of their conflation within current observational research.

    Etiological research aims to uncover causal effects, whilst prediction research aims to forecast an outcome with the best accuracy. Causal and prediction research usually require different methods, and yet their findings may get conflated when reported and interpreted. The aim of the current study is to quantify the frequency of conflation between etiological and prediction research, to discuss common underlying mistakes and provide recommendations on how to avoid these. Observational cohort studies published in January 2018 in the top-ranked journals of six distinct medical fields (Cardiology, Clinical Epidemiology, Clinical Neurology, General and Internal Medicine, Nephrology and Surgery) were included for the current scoping review. Data on conflation was extracted through signaling questions. In total, 180 studies were included. Overall, 26% (n = 46) contained conflation between etiology and prediction. The frequency of conflation varied across medical field and journal impact factor. Of the causal studies, 22% were conflated, mainly due to the selection of covariates based on their ability to predict without taking the causal structure into account. Within prediction studies, 38% were conflated; the most frequent reason was a causal interpretation of covariates included in a prediction model. Conflation of etiology and prediction is a common methodological error in observational medical research and more frequent in prediction studies. As this may lead to biased estimations and erroneous conclusions, researchers must be careful when designing, interpreting and disseminating their research to ensure this conflation is avoided.
