The impact of epidemiologic methods on findings in studies of causal effects and prediction modelling
Over the last decades, and increasingly in recent years, epidemiological methods have been refined, making it challenging to keep abreast of all methodological developments. The choice of data analytical method directly influences the interpretation and clinical meaning of the results of an analysis, yet it is undesirable that technical considerations define the subject of the investigation. A deeper understanding of the impact that data analytical decisions can have on the interpretation of numerical results would help researchers apply analytical tools that are both suitable and appropriate for answering clinical questions. The aim of this thesis was to investigate the impact of choices regarding the design and statistical analysis of a study on the meaning of its numerical results in two sets of case studies: research into causal effects (Part I) and prediction research (Part II). The thesis concludes with a discussion of the role and importance of clinical research questions and estimands. Clearly defining a clinically relevant estimand ensures that data analytical decisions yield meaningful results. Making targeted research questions central to quantitative clinical research can reduce fallacious confidence in (complex) methods and can add to the intelligibility of findings.
Exploratory analyses in aetiologic research and considerations for assessment of credibility: mini-review of literature
OBJECTIVE: To provide considerations for reporting and interpretation that can improve assessment of the credibility of exploratory analyses in aetiologic research. DESIGN: Mini-review of the literature and account of exploratory research principles. SETTING: This study focuses on a particular type of causal research, namely aetiologic studies, which investigate the causal effect of one or multiple risk factors on a particular health outcome or disease. The mini-review included aetiologic research articles published in four epidemiology journals in the first issue of 2021: American Journal of Epidemiology, Epidemiology, European Journal of Epidemiology, and International Journal of Epidemiology, specifically focusing on observational studies of causal risk factors of diseases. MAIN OUTCOME MEASURES: Number of exposure-outcome associations reported, grouped by type of analysis (main, sensitivity, and additional). RESULTS: The journal articles reported many exposure-outcome associations: a mean number of 33 (range 1-120) exposure-outcome associations for the primary analysis, 30 (0-336) for sensitivity analyses, and 163 (0-1467) for additional analyses. Six considerations were discussed that are important in assessing the credibility of exploratory analyses: research problem, protocol, statistical criteria, interpretation of findings, completeness of reporting, and effect of exploratory findings on future causal research. CONCLUSIONS: Based on this mini-review, exploratory analyses in aetiologic research were not always reported properly. Six considerations for reporting of exploratory analyses in aetiologic research were provided to stimulate a discussion about their preferred handling and reporting. Researchers should take responsibility for the results of exploratory analyses by clearly reporting their exploratory nature and specifying which findings should be investigated in future research and how.
A comparison of full model specification and backward elimination of potential confounders when estimating marginal and conditional causal effects on binary outcomes from observational data
A common view in epidemiology is that automated confounder selection methods, such as backward elimination, should be avoided because they can lead to biased effect estimates and underestimation of their variance. Nevertheless, backward elimination remains regularly applied. We investigated whether, and under which conditions, causal effect estimation in observational studies can be improved by applying backward elimination to a prespecified set of potential confounders. An expression was derived that quantifies how variable omission relates to the bias and variance of effect estimators. Additionally, 3960 scenarios were defined and investigated by simulations comparing the bias and mean squared error (MSE) of the conditional log odds ratio, log(cOR), and the marginal log risk ratio, log(mRR), between full models including all prespecified covariates and backward elimination of these covariates. Applying backward elimination resulted in a mean bias of 0.03 for log(cOR) and 0.02 for log(mRR), compared with 0.56 and 0.52, respectively, for a model without any covariate adjustment, and no bias for the full model. In fewer than 3% of the scenarios considered, the MSE of log(cOR) or log(mRR) was slightly lower (at most 3%) when backward elimination was used compared with the full model. When an initial set of potential confounders can be specified based on background knowledge, backward elimination adds minimal value. We advise against its use and, where it is used, recommend providing ample arguments supporting that choice.
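The bias from omitting a confounder, which the derived expression quantifies, can be illustrated with a minimal simulation. This is only a sketch in a simplified linear setting (not the authors' binary-outcome setup): a confounder `z` influences both exposure `x` and outcome `y`, and the unadjusted slope of `y` on `x` overstates the true effect of 1.0, while adjusting for `z` (here via residualization, i.e. the Frisch-Waugh approach) recovers it:

```python
import random

random.seed(1)
n = 20000

# True model: z confounds the x -> y relation; the causal effect of x on y is 1.0
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + random.gauss(0, 1) for zi in z]                 # exposure depends on z
y = [1.0 * xi + 1.5 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

# Unadjusted slope of y on x (confounder z omitted): biased upward
unadjusted = cov(x, y) / cov(x, x)

# Adjusted slope: residualize x and y on z, then regress residual on residual
bxz = cov(x, z) / cov(z, z)
byz = cov(y, z) / cov(z, z)
rx = [xi - bxz * zi for xi, zi in zip(x, z)]
ry = [yi - byz * zi for yi, zi in zip(y, z)]
adjusted = cov(rx, ry) / cov(rx, rx)
```

With these parameters the unadjusted slope converges to 1.0 + 1.5 * cov(x, z) / var(x) ≈ 1.73, while the adjusted slope stays close to the true effect of 1.0, illustrating why omitting a genuine confounder biases the effect estimate.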
New-user and prevalent-user designs and the definition of study time origin in pharmacoepidemiology: a review of reporting practices
Background: Guidance reports for observational comparative effectiveness and drug safety research recommend implementing a new-user design whenever possible, since it reduces the risk of selection bias in exposure effect estimation compared to a prevalent-user design. The uptake of this guidance has not been studied extensively. Methods: We reviewed 89 observational effectiveness and safety cohort studies published in six pharmacoepidemiological journals in 2018 and 2019. We developed an extraction tool to assess how frequently new-user and prevalent-user designs were reported to be implemented. For studies that implemented a new-user design in both treatment arms, we extracted information about the extent to which the moment of meeting eligibility criteria, treatment initiation, and start of follow-up were reported to be aligned. Results: Of the 89 studies included, 40% reported implementing a new-user design for both the study exposure arm and the comparator arm, while 13% reported implementing a prevalent-user design in both arms. The moment of meeting eligibility criteria, treatment initiation, and start of follow-up were reported to be aligned in both treatment arms in 53% of studies that reported implementing a new-user design. We provided examples of studies that minimized the risk of introducing bias due to unclear definition of time origin in unexposed participants, immortal time, or a time lag. Conclusions: Almost half of the included studies reported implementing a new-user design. Implications of misalignment of study design origin were difficult to assess because it would require explicit reporting of the target estimand in original studies. We recommend that the choice for a particular study time origin is explicitly motivated to enable assessment of validity of the study.
Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal
OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. 
Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated with a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper, please consider adding the update number and date of access for clarity.
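The C index ranges reported above measure a model's discrimination: the probability that a randomly chosen case with the event receives a higher predicted risk than a randomly chosen case without it. As a minimal illustration (a sketch for binary outcomes, not the review's own computation), Harrell's C can be computed as the fraction of event/non-event pairs ranked correctly, with ties counting half:

```python
def c_index(risk_scores, outcomes):
    """Harrell's C for binary outcomes: the proportion of (event, non-event)
    pairs in which the event case has the higher predicted risk; tied
    predictions count as 0.5."""
    concordant = ties = pairs = 0
    for i in range(len(outcomes)):
        for j in range(len(outcomes)):
            if outcomes[i] == 1 and outcomes[j] == 0:
                pairs += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

# Example: one of four event/non-event pairs is ranked incorrectly
print(c_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the 0.54 lower bound among prognostic models above is barely better than chance.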
Impact of predictor measurement heterogeneity across settings on the performance of prediction models: A measurement error perspective