85 research outputs found

    Impact of provision of cardiovascular disease risk estimates to healthcare professionals and patients: a systematic review.

    OBJECTIVE: To systematically review whether the provision of information on cardiovascular disease (CVD) risk to healthcare professionals and patients impacts their decision-making, behaviour and ultimately patient health. DESIGN: A systematic review. DATA SOURCES: An electronic literature search of MEDLINE and PubMed from 01/01/2004 to 01/06/2013 with no language restriction, and manual screening of the reference lists of systematic reviews on similar topics and of all included papers. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: (1) Primary research published in a peer-reviewed journal; (2) inclusion of participants with no history of CVD; (3) an intervention strategy consisting of provision of a CVD risk model estimate to either professionals or patients; and (4) the only difference between the intervention group and control group (or the only intervention in the case of before-after studies) being the provision of a CVD risk model estimate. RESULTS: After duplicates were removed, the initial electronic search identified 9671 papers. We screened 196 papers at title and abstract level and included 17 studies. The heterogeneity of the studies limited the analysis, but together they showed that providing risk information to patients improved the accuracy of risk perception without decreasing quality of life or increasing anxiety, but had little effect on lifestyle. Providing risk information to physicians increased prescribing of lipid-lowering and blood pressure medication, with the greatest effects in those with CVD risk >20% (relative risk for change in prescribing 2.13 (1.02 to 4.63) and 2.38 (1.11 to 5.10), respectively). Overall, there was a trend towards reductions in cholesterol and blood pressure and a statistically significant reduction in modelled CVD risk (-0.39% (-0.71 to -0.07)) after, on average, 12 months. CONCLUSIONS: There is evidence that providing CVD risk model estimates to professionals and patients improves perceived CVD risk and medical prescribing, with little evidence of harm to psychological well-being. BS was supported by the European Commission Framework 7, EPIC-CVD: Individualised CVD risk assessment: tailoring targeted and cost-effective approaches to Europe's diverse populations, Grant agreement no: 279233. JUS was supported by a National Institute of Health Research (NIHR) Clinical Lectureship. This is the final version of the article. It first appeared from BMJ via http://dx.doi.org/10.1136/bmjopen-2015-00871
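    The pooled estimates quoted above come from standard meta-analytic combination of per-study effects. As a minimal sketch of how fixed-effect inverse-variance pooling works, the snippet below recovers standard errors from symmetric 95% confidence intervals and combines them; the input numbers are illustrative placeholders, not this review's data or code.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling, the standard way a
# systematic review combines per-study effect estimates. All inputs below are
# hypothetical placeholders for illustration only.
import numpy as np

def pool_fixed_effect(estimates, ci_low, ci_high):
    """Pool effect estimates given symmetric 95% CIs; return pooled estimate and CI."""
    est = np.asarray(estimates, dtype=float)
    se = (np.asarray(ci_high) - np.asarray(ci_low)) / (2 * 1.96)  # SE from CI width
    w = 1.0 / se**2                                  # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical per-study changes in modelled CVD risk (percentage points)
print(pool_fixed_effect([-0.5, -0.2, -0.4], [-1.0, -0.7, -0.9], [0.0, 0.3, 0.1]))
```

    For ratio measures such as relative risks, the same pooling would be applied on the log scale before back-transforming.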

    Developing and validating risk prediction models in an individual participant data meta-analysis

    BACKGROUND: Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. METHODS: A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. RESULTS: The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. CONCLUSIONS: An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction.
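    The 'internal-external cross-validation' idea is simple to implement: fit on all but one study, validate on the held-out study, and rotate. The sketch below illustrates it with scikit-learn on simulated stand-in data (the pooled IPD table with a study label is an assumption, not data from the reviewed articles).

```python
# Minimal sketch of internal-external cross-validation (IECV): develop the model
# on all but one study, validate on the held-out study, rotate. Data here are
# simulated stand-ins for a pooled IPD dataset with a 'study' label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_studies, n_per = 6, 300
study = np.repeat(np.arange(n_studies), n_per)
X = rng.normal(size=(study.size, 3))
# Study-specific intercepts induce between-study heterogeneity in baseline risk
alpha = rng.normal(0, 0.5, n_studies)[study]
p = 1 / (1 + np.exp(-(alpha + X @ np.array([0.8, -0.5, 0.3]))))
y = rng.binomial(1, p)

for held_out in range(n_studies):
    train, test = study != held_out, study == held_out
    model = LogisticRegression().fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"held-out study {held_out}: c-index = {auc:.3f}")
```

    Each rotation yields an honest estimate of performance in a population not used for development, which is what distinguishes IECV from a single internal validation.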

    Prediction models for clustered data: comparison of a random intercept and standard regression model

    BACKGROUND: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centres on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates differ from standard logistic regression parameter estimates. Here, we compared random effect and standard logistic regression models on their ability to provide accurate predictions. METHODS: Using an empirical study of 1642 surgical patients at risk of postoperative nausea and vomiting, treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. RESULTS: The model developed with random effect analysis showed better discrimination than the standard approach when the cluster effects were used for risk prediction (standard c-index 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration in external subjects was only adequate if the performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard model, whereas calibration measures adapted to the clustered data structure showed good calibration for the random intercept model. CONCLUSION: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
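    To make the comparison concrete, the sketch below simulates clustered binary outcomes at a chosen ICC, then fits a standard (pooled) logistic regression and a random-intercept logistic regression. It uses statsmodels' variational Bayes mixed GLM as one available random-intercept fitter; the data are simulated, not the paper's anesthesiology cohort, and the c-index here uses the fixed effects only (what a patient from a new cluster would receive).

```python
# Minimal sketch: standard vs. random-intercept logistic regression on simulated
# clustered data. Assumptions: statsmodels' BinomialBayesMixedGLM as the
# random-intercept fitter; illustrative sample sizes and effect sizes.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_clusters, n_per = 19, 80
icc = 0.15                                   # target intra-class correlation
tau2 = icc / (1 - icc) * (np.pi**2 / 3)      # random-intercept variance, logit scale
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0, np.sqrt(tau2), n_clusters)[cluster]   # cluster effects
x = rng.normal(size=cluster.size)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + u)))
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

# Standard (pooled) logistic regression, ignoring the clustering
pooled = sm.Logit(df["y"], sm.add_constant(df[["x"]])).fit(disp=0)

# Random-intercept logistic regression via variational Bayes
mixed = BinomialBayesMixedGLM.from_formula(
    "y ~ x", {"cluster": "0 + C(cluster)"}, df).fit_vb()

# c-index from the fixed effects only (a new cluster has no estimated intercept)
lp_pooled = pooled.predict(sm.add_constant(df[["x"]]))
lp_fixed = mixed.fe_mean[0] + mixed.fe_mean[1] * df["x"]
print("pooled c-index:", roc_auc_score(y, lp_pooled))
print("mixed  c-index (fixed part only):", roc_auc_score(y, lp_fixed))
```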

    Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist

    The increasing availability of large combined datasets (or big data), such as those from electronic health records and from individual participant data meta-analyses, provides new opportunities and challenges for researchers developing and validating (including updating) prediction models. These datasets typically include individuals from multiple clusters (such as multiple centres, geographical locations, or different studies). Accounting for clustering is important to avoid misleading conclusions, and it enables researchers to explore heterogeneity in prediction model performance across centres, regions, or countries, to better tailor or match models to these different clusters, and thus to develop prediction models that are more generalisable. However, this requires prediction model researchers to adopt more specific design, analysis, and reporting methods than those used in standard prediction model studies without substantial inherent clustering. Prediction model studies based on clustered data therefore need to be reported differently, so that readers can appraise the study methods and findings, further increasing the use and implementation of prediction models developed or validated on clustered datasets.
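    One common way to quantify the heterogeneity in performance across clusters that the checklist asks authors to report is a random-effects meta-analysis of cluster-specific performance estimates. The sketch below applies the DerSimonian-Laird estimator to illustrative placeholder c-indexes; it is one possible approach, not a method prescribed by TRIPOD-Cluster itself.

```python
# Minimal sketch: summarising between-cluster heterogeneity in model performance
# with a DerSimonian-Laird random-effects meta-analysis of per-cluster c-indexes.
# The estimates and standard errors below are hypothetical placeholders.
import numpy as np

c_index = np.array([0.72, 0.68, 0.75, 0.70, 0.66])   # per-cluster estimates
se = np.array([0.02, 0.03, 0.025, 0.02, 0.03])       # their standard errors

w = 1 / se**2
mu_fe = np.sum(w * c_index) / np.sum(w)               # fixed-effect mean
q = np.sum(w * (c_index - mu_fe) ** 2)                # Cochran's Q
df = len(c_index) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2

w_re = 1 / (se**2 + tau2)                             # random-effects weights
mu_re = np.sum(w_re * c_index) / np.sum(w_re)
print(f"pooled c-index {mu_re:.3f}, between-cluster tau^2 {tau2:.4f}")
```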

    Transparent reporting of multivariable prediction models developed or validated using clustered data (TRIPOD-Cluster): explanation and elaboration

    The TRIPOD-Cluster (transparent reporting of multivariable prediction models developed or validated using clustered data) statement comprises a 19 item checklist, which aims to improve the reporting of studies developing or validating a prediction model in clustered data, such as individual participant data meta-analyses (clustering by study) and electronic health records (clustering by practice or hospital). This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD-Cluster statement is explained in detail and accompanied by published examples of good reporting. The document also serves as a reference of factors to consider when designing, conducting, and analysing prediction model development or validation studies in clustered data. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, authors are recommended to include a completed checklist in their submission.

    Measurement error is often neglected in medical literature: a systematic review.

    OBJECTIVES: In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of the general medicine and epidemiology literature. STUDY DESIGN AND SETTING: Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. RESULTS: Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. Of these 247, 83% did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of the 247) used methods to investigate or correct for measurement error. CONCLUSIONS: It is consequently difficult for readers to judge the robustness of the presented results to measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness of the possible impact of covariate measurement error. In addition, guidance on the use of measurement error correction methods is necessary.
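    One widely used correction method of the kind the review counts is regression calibration: replace the error-prone measurement by its expected true value before fitting the outcome model. The sketch below simulates classical measurement error and shows the naive attenuation and its correction; the reliability ratio is computed from the simulated truth here, whereas in practice it would come from replicate or validation measurements.

```python
# Minimal sketch of regression calibration for classical covariate measurement
# error. Everything here is simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(0, 1, n)                 # true exposure (unobserved in practice)
w = x + rng.normal(0, 0.8, n)           # error-prone measurement of x
y = 1.0 * x + rng.normal(0, 1, n)       # outcome depends on the true exposure

naive = np.polyfit(w, y, 1)[0]          # slope attenuated towards zero

# Regression calibration: replace w by E[x | w]; with classical error this is a
# linear shrinkage of w by the reliability ratio. In practice the ratio must be
# estimated from replicates or a validation substudy, not from the truth.
lam = np.var(x) / np.var(w)             # reliability ratio (uses truth only for the demo)
x_hat = w.mean() + lam * (w - w.mean())
corrected = np.polyfit(x_hat, y, 1)[0]

print(f"true slope 1.0 | naive {naive:.2f} | corrected {corrected:.2f}")
```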

    Real-time imputation of missing predictor values in clinical practice

    Use of prediction models is widely recommended by clinical guidelines, but usually requires complete information on all predictors, which is not always available in daily practice. We describe two methods for real-time handling of missing predictor values when using prediction models in practice. We compare the widely used method of mean imputation (M-imp) to a method that personalizes the imputations by taking advantage of the observed patient characteristics. These characteristics may include both prediction model variables and other characteristics (auxiliary variables). The method was implemented using imputation from a joint multivariate normal model of the patient characteristics (joint modeling imputation; JMI). Data from two different cardiovascular cohorts with cardiovascular predictors and outcome were used to evaluate the real-time imputation methods. We quantified the prediction model's overall performance (mean squared error (MSE) of the linear predictor), discrimination (c-index), calibration (intercept and slope), and net benefit (decision curve analysis). Compared with mean imputation, JMI substantially improved the MSE (0.10 vs. 0.13), c-index (0.70 vs. 0.68), and calibration (calibration-in-the-large: 0.04 vs. 0.06; calibration slope: 1.01 vs. 0.92), especially when auxiliary variables were incorporated. When the imputation method was based on an external cohort, calibration deteriorated, but discrimination remained similar. We recommend JMI with auxiliary variables for real-time imputation of missing values, and recommend updating imputation models when implementing them in new settings or (sub)populations. Comment: 17 pages, 6 figures; to be published in European Heart Journal - Digital Health; accepted for the MEMTAB 2020 conference.
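    Under a joint multivariate normal model, the imputation a new patient receives has a closed form: the conditional mean of the missing predictors given the observed ones. The sketch below implements that formula with an illustrative mean vector and covariance matrix (the variable names and numbers are hypothetical, not the paper's cohorts or software).

```python
# Minimal sketch of imputation from a joint multivariate normal model: fill a
# missing predictor with its conditional mean given the observed predictors,
#   E[x_m | x_o] = mu_m + S_mo @ S_oo^{-1} @ (x_o - mu_o).
# mu and S would be estimated on the development cohort; here they are made up.
import numpy as np

mu = np.array([120.0, 5.2, 55.0])                 # e.g. SBP, cholesterol, age (illustrative)
S = np.array([[225.0, 3.0, 45.0],
              [3.0, 1.0, 2.0],
              [45.0, 2.0, 100.0]])                # illustrative covariance matrix

def jmi_impute(x, mu, S):
    """Fill NaNs in x with their conditional mean given the observed entries."""
    m = np.isnan(x)                               # mask of missing entries
    if not m.any():
        return x
    o = ~m
    cond = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(S[np.ix_(o, o)], x[o] - mu[o])
    out = x.copy()
    out[m] = cond
    return out

patient = np.array([np.nan, 6.0, 70.0])           # SBP missing at prediction time
print(jmi_impute(patient, mu, S))                 # mean imputation would give 120 regardless
```

    The personalisation is visible in the last line: unlike mean imputation, the imputed value shifts with the patient's observed cholesterol and age through the covariance structure.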

    Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)

    Most clinical specialties have a plethora of studies that develop or validate one or more prediction models, for example, to inform diagnosis or prognosis. Having many prediction model studies in a particular clinical field motivates the need for systematic reviews and meta-analyses, to evaluate and summarise the overall evidence available from prediction model studies, in particular about the predictive performance of existing models. Such reviews are fast emerging, and should be reported completely, transparently, and accurately. To help ensure this type of reporting, this article describes a new reporting guideline for systematic reviews and meta-analyses of prediction model research.