26 research outputs found

    Guide to presenting clinical prediction models for use in clinical settings

    Clinical prediction models estimate the risk of existing disease or a future outcome for an individual, conditional on the values of multiple predictors such as age, sex, and biomarkers. In this article, Bonnett and colleagues provide a guide to presenting clinical prediction models so that they can be implemented in practice, if appropriate. They describe how to create four presentation formats and discuss the advantages and disadvantages of each. A key message is the need for stakeholder engagement to determine the best presentation option in relation to the clinical context of use and the intended users
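
    Whatever presentation format is chosen, it surfaces the same underlying calculation: mapping an individual's predictor values to a probability. A minimal sketch of that calculation for a logistic regression model, with a purely illustrative intercept and coefficients (not taken from the article):

        import math

        # Hypothetical logistic model: the intercept and coefficients below are
        # illustrative only, not from Bonnett and colleagues' article.
        INTERCEPT = -5.2
        COEFFS = {"age": 0.04, "sex_male": 0.30, "biomarker": 0.85}

        def predicted_risk(age: float, sex_male: int, biomarker: float) -> float:
            """Return an individual's predicted risk from the logistic model."""
            lp = (INTERCEPT
                  + COEFFS["age"] * age
                  + COEFFS["sex_male"] * sex_male
                  + COEFFS["biomarker"] * biomarker)
            return 1.0 / (1.0 + math.exp(-lp))

        # e.g. a 65 year old man with a biomarker value of 1.4
        print(f"predicted risk: {predicted_risk(65, 1, 1.4):.1%}")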

    Comparison and reproducibility of standard and high temporal resolution myocardial tissue tagging in patients with severe aortic stenosis

    Objectives The aim of this study was to compare and assess the reproducibility of left ventricular (LV) circumferential peak systolic strain (PeakEcc) and strain rate (SR) measurements using standard and high temporal resolution myocardial tissue tagging in patients with severe aortic stenosis (AS). Background Myocardial tissue tagging with cardiac magnetic resonance (CMR) can be used to quantify strain and SR; however, there are limited data on the reproducibility of these measurements. Diastolic SR may be of particular interest as it may be the most sensitive marker of diastolic dysfunction, which often occurs early in the course of disease. Methods Eight patients with isolated severe AS without obstructive coronary artery disease were prospectively enrolled. They underwent CMR in a 1.5 T scanner (Siemens Avanto) on two separate occasions, a median of 12 days apart. Complementary tagged (CSPAMM) images were acquired with both a single breath-hold (SBH: temporal resolution 42 ms) and a multiple brief expiration breath-hold (MBH: high temporal resolution 17 ms) sequence. Mid-wall PeakEcc was measured in the LV at mid-ventricular level with HARP Version 2.7 (Diagnosoft, USA). SR was calculated from the strain data as SR = (Ecc2 − Ecc1)/(Time2 − Time1). PeakEcc and peak systolic and diastolic SR were read from curves of strain and SR against time. The MBH SR curves were filtered with a moving average (MA) to reduce noise sensitivity; results from sample widths of three and five were examined. Differences between SBH and MBH were assessed using the Wilcoxon signed-rank test, as not all measures were normally distributed. Reproducibility assessments were carried out for all techniques. Results PeakEcc was significantly higher with MBH than with SBH, but reproducibility was slightly worse. Results are summarised in Table 1. Systolic SR was approximately equal with all techniques, although MBH using an MA of five led to a borderline significant reduction. Diastolic SR was higher when measured with MBH, although the difference was significant only with an MA of three. Systolic and diastolic SR measures were more reproducible with MBH than with SBH, except for diastolic SR using an MA of three, which was substantially worse. Strain and SR curves for the same patient are shown in Figure 1
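
    A minimal numeric sketch of the SR calculation and moving-average filtering described above, using made-up strain values rather than patient data:

        import numpy as np

        # Made-up mid-wall circumferential strain (Ecc, dimensionless) sampled at
        # the MBH temporal resolution of 17 ms; a crude systole/diastole shape,
        # not patient data.
        time_ms = np.arange(0, 340, 17)
        ecc = -0.20 * np.sin(np.pi * time_ms / 340)

        # SR = (Ecc2 - Ecc1) / (Time2 - Time1), converted to units of 1/s.
        sr = np.diff(ecc) / (np.diff(time_ms) / 1000.0)

        def moving_average(x, width):
            """Filter a noisy SR curve with a moving average of the given sample width."""
            return np.convolve(x, np.ones(width) / width, mode="valid")

        sr_ma3 = moving_average(sr, 3)  # sample width of three, as examined in the study
        sr_ma5 = moving_average(sr, 5)  # sample width of five

        # Peak systolic SR is the most negative value (circumferential shortening);
        # peak diastolic SR is the most positive.
        print(f"peak systolic SR {sr_ma3.min():.2f} 1/s, peak diastolic SR {sr_ma3.max():.2f} 1/s")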

    Predictors of outcome in sciatica patients following an epidural steroid injection: the POiSE prospective observational cohort study protocol

    INTRODUCTION: Sciatica can be very painful and, in most cases, is due to pressure on a spinal nerve root from a disc herniation with associated inflammation. For some patients the pain persists, and one management option is a spinal epidural steroid injection (ESI). The aim of an ESI is to relieve leg pain, improve function and reduce the need for surgery. ESIs work well in some patients but not in others, and these patient subgroups cannot currently be identified. This study aims to identify factors, including patient characteristics, clinical examination and imaging findings, that help in predicting who does well and who does not after an ESI. The overall objective is to develop a prognostic model to support individualised patient and clinical decision-making regarding ESI. METHODS: POiSE is a prospective cohort study of 439 patients with sciatica referred by their clinician for an ESI. Participants will receive weekly text messages until 12 weeks after their ESI, and then again at 24 weeks, to collect data on leg pain severity. Questionnaires will be sent to participants at baseline and at 6, 12 and 24 weeks after their ESI to collect data on pain, disability, recovery and additional interventions. The prognosis for the cohort will be described. The primary outcome measure for the prognostic model is leg pain at 6 weeks. Prognostic models will also be developed for the secondary outcomes of disability and recovery at 6 weeks and additional interventions at 24 weeks following ESI. Statistical analyses will include multivariable linear and logistic regression with mixed effects models. ETHICS AND DISSEMINATION: The POiSE study has received ethical approval (South Central Berkshire B Research Ethics Committee 21/SC/0257). Dissemination will be guided by our patient and public engagement group and will include scientific publications, conference presentations and social media.
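
    A minimal sketch of the kind of mixed effects analysis the protocol describes, fitted with statsmodels on synthetic data; the column names and data-generating choices are assumptions for illustration, not the POiSE study's variables:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the analysis dataset, one row per participant;
        # column names are placeholders, not the POiSE variables.
        n = 439
        df = pd.DataFrame({
            "site": rng.integers(0, 8, size=n),
            "age": rng.normal(50, 12, size=n),
            "baseline_leg_pain": rng.uniform(3, 10, size=n),
        })
        df["leg_pain_6wk"] = (0.5 * df["baseline_leg_pain"]
                              + rng.normal(0, 1.5, size=n)).clip(0, 10)

        # Multivariable linear regression for the primary outcome (leg pain at
        # 6 weeks) with a random intercept per recruiting site: a mixed effects model.
        model = smf.mixedlm("leg_pain_6wk ~ age + baseline_leg_pain",
                            data=df, groups=df["site"])
        print(model.fit().summary())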

    Evaluation of clinical prediction models (part 2): how to undertake an external validation study

    External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high-quality dataset to evaluating a model’s predictive performance and clinical usefulness
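
    A minimal sketch of the two performance checks at the heart of an external validation study, discrimination (C-statistic) and calibration (calibration slope), run on synthetic data standing in for a real external dataset:

        import numpy as np
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Synthetic stand-in for an external dataset: the existing model's
        # predicted risks and the observed binary outcomes.
        pred_risk = rng.uniform(0.01, 0.60, size=1000)
        outcome = rng.binomial(1, pred_risk)  # perfectly calibrated by construction

        # Discrimination: the C-statistic equals the area under the ROC curve.
        c_stat = roc_auc_score(outcome, pred_risk)

        # Calibration slope: regress outcomes on the linear predictor (log-odds
        # of the predicted risk); a slope near 1 indicates good calibration.
        lp = np.log(pred_risk / (1 - pred_risk))
        fit = sm.Logit(outcome, sm.add_constant(lp)).fit(disp=0)
        print(f"C-statistic {c_stat:.2f}, calibration slope {fit.params[1]:.2f}")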

    Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist

    The increasing availability of large combined datasets (or big data), such as those from electronic health records and from individual participant data meta-analyses, provides new opportunities and challenges for researchers developing and validating (including updating) prediction models. These datasets typically include individuals from multiple clusters (such as multiple centres, geographical locations, or different studies). Accounting for clustering is important to avoid misleading conclusions; it enables researchers to explore heterogeneity in prediction model performance across centres, regions, or countries, to better tailor models to these different clusters, and thus to develop prediction models that are more generalisable. However, this requires prediction model researchers to adopt more specific design, analysis, and reporting methods than standard prediction model studies without substantial inherent clustering. Prediction model studies based on clustered data therefore need to be reported differently, so that readers can appraise the study methods and findings, further supporting the use and implementation of prediction models developed or validated on clustered datasets
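
    As a sketch of why clustering matters when evaluating performance, the following estimates the C-statistic separately in each centre rather than pooling all individuals; the data are synthetic and the centre structure is an assumption for illustration:

        import numpy as np
        import pandas as pd
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)

        # Synthetic clustered validation data: individuals nested within centres.
        n = 3000
        df = pd.DataFrame({
            "centre": rng.integers(0, 6, size=n),
            "pred_risk": rng.uniform(0.05, 0.50, size=n),
        })
        df["outcome"] = rng.binomial(1, df["pred_risk"])

        # One C-statistic per centre reveals heterogeneity in performance that a
        # single pooled estimate would hide.
        for centre, g in df.groupby("centre"):
            print(f"centre {centre}: C-statistic {roc_auc_score(g['outcome'], g['pred_risk']):.2f}")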

    Transparent reporting of multivariable prediction models developed or validated using clustered data (TRIPOD-Cluster): explanation and elaboration

    The TRIPOD-Cluster (transparent reporting of multivariable prediction models developed or validated using clustered data) statement comprises a 19 item checklist, which aims to improve the reporting of studies developing or validating a prediction model in clustered data, such as individual participant data meta-analyses (clustering by study) and electronic health records (clustering by practice or hospital). This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD-Cluster statement is explained in detail and accompanied by published examples of good reporting. The document also serves as a reference of factors to consider when designing, conducting, and analysing prediction model development or validation studies in clustered data. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, authors are recommended to include a completed checklist in their submission

    Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)

    Most clinical specialties have a plethora of studies that develop or validate one or more prediction models, for example, to inform diagnosis or prognosis. Having many prediction model studies in a particular clinical field motivates the need for systematic reviews and meta-analyses, to evaluate and summarise the overall evidence available from prediction model studies, in particular about the predictive performance of existing models. Such reviews are fast emerging, and should be reported completely, transparently, and accurately. To help ensure this type of reporting, this article describes a new reporting guideline for systematic reviews and meta-analyses of prediction model research

    Musculoskeletal Health and Work: Development and Internal–External Cross-Validation of a Model to Predict Risk of Work Absence and Presenteeism in People Seeking Primary Healthcare

    Purpose To develop and validate prediction models for the risk of future work absence and level of presenteeism in adults seeking primary healthcare with musculoskeletal disorders (MSD). Methods Six studies from the West Midlands and North West regions of England, recruiting adults consulting primary care with MSD, were included for model development and internal–external cross-validation (IECV). The primary outcome was any work absence within 6 months of the consultation. Secondary outcomes included 6-month presenteeism and 12-month work absence. Ten candidate predictors were included: age; sex; multisite pain; baseline pain score; pain duration; job type; anxiety/depression; comorbidities; absence in the previous 6 months; and baseline presenteeism. Results For the 6-month absence model, 2179 participants (215 absences) were available across five studies. Calibration was promising, although it varied across studies, with a pooled calibration slope of 0.93 (95% CI: 0.41–1.46) on IECV. On average, the model discriminated well between those with and without work absence within 6 months (IECV-pooled C-statistic 0.76, 95% CI: 0.66–0.86). The 6-month presenteeism model, while well calibrated on average, showed some individual-level variation in predictive accuracy, and the 12-month absence model was poorly calibrated owing to the small sample size available for model development. Conclusions The developed models predict 6-month work absence and presenteeism with reasonable accuracy, on average, in adults consulting with MSD. The model to predict 12-month absence was poorly calibrated and is not yet ready for use in practice. This information may support shared decision-making and the targeting of occupational health interventions at those with a higher risk of absence or presenteeism in the 6 months following consultation. Further external validation is needed before the models’ use can be recommended or their impact on patients can be fully assessed
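
    A minimal sketch of the IECV procedure on synthetic multi-study data: each study is held out in turn, the model is developed on the remaining studies, and performance is checked in the held-out study. The predictors here are placeholders, not the paper's ten candidate predictors:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        # Synthetic stand-in for a five-study development dataset.
        n = 2000
        df = pd.DataFrame({
            "study": rng.integers(0, 5, size=n),
            "age": rng.normal(50, 10, size=n),
            "pain_score": rng.uniform(0, 10, size=n),
        })
        logit = -4 + 0.03 * df["age"] + 0.25 * df["pain_score"]
        df["absence"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        # Internal-external cross-validation: hold out each study in turn,
        # develop on the rest, and validate on the held-out study.
        predictors = ["age", "pain_score"]
        for held_out in sorted(df["study"].unique()):
            train, test = df[df["study"] != held_out], df[df["study"] == held_out]
            model = LogisticRegression().fit(train[predictors], train["absence"])
            auc = roc_auc_score(test["absence"], model.predict_proba(test[predictors])[:, 1])
            print(f"held-out study {held_out}: C-statistic {auc:.2f}")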

    Predicting the risk of acute kidney injury in primary care: derivation and validation of STRATIFY-AKI

    BACKGROUND: Antihypertensives reduce the risk of cardiovascular disease but are also associated with harms, including acute kidney injury (AKI). Few data exist to guide clinical decision making regarding these risks. AIM: To develop a prediction model estimating the risk of AKI in people potentially indicated for antihypertensive treatment. DESIGN AND SETTING: Observational cohort study using routine primary care data from the Clinical Practice Research Datalink (CPRD) in England. METHOD: People aged ≥40 years with at least one blood pressure measurement between 130 mmHg and 179 mmHg were included. Outcomes were admission to hospital or death with AKI within 1, 5, and 10 years. The model was derived with data from CPRD GOLD (n = 1 772 618), using a Fine-Gray competing risks approach, with subsequent recalibration using pseudo-values. External validation used data from CPRD Aurum (n = 3 805 322). RESULTS: The mean age of participants was 59.4 years and 52% were female. The final model consisted of 27 predictors and showed good discrimination at 1, 5, and 10 years (C-statistic for 10-year risk 0.821, 95% confidence interval [CI] = 0.818 to 0.823). There was some overprediction at the highest predicted probabilities (ratio of observed to expected event probability for 10-year risk 0.633, 95% CI = 0.621 to 0.645), affecting patients with the highest risk. Most patients (>95%) had a low 1- to 5-year risk of AKI, and at 10 years only 0.1% of the population had a high AKI risk and low CVD risk. CONCLUSION: This clinical prediction model enables GPs to accurately identify patients at high risk of AKI, which will aid treatment decisions. As the vast majority of patients were at low risk, such a model may provide useful reassurance that most antihypertensive treatment is safe and appropriate, while flagging the few for whom this is not the case
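
    A minimal sketch of the observed/expected (O/E) calibration check reported above, on synthetic data; for simplicity it ignores censoring and competing risks, whereas the study itself used a Fine-Gray model with pseudo-value recalibration:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic stand-in for 10-year predicted AKI risks and observed events;
        # mild overprediction is built in for illustration.
        pred_risk = rng.beta(1, 20, size=50_000)  # mostly low risks
        observed = rng.binomial(1, 0.8 * pred_risk)

        # Calibration in the large as an O/E ratio: values below 1 indicate
        # overprediction.
        print(f"overall O/E ratio: {observed.mean() / pred_risk.mean():.2f}")

        # O/E within tenths of predicted risk locates where any miscalibration
        # occurs (the study found overprediction at the highest probabilities).
        edges = np.quantile(pred_risk, np.linspace(0, 1, 11))
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (pred_risk >= lo) & (pred_risk < hi)
            oe = observed[mask].mean() / pred_risk[mask].mean()
            print(f"risk {lo:.3f}-{hi:.3f}: O/E {oe:.2f}")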

    Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed

    Objectives: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets is assessed in IPD meta-analyses (IPDMAs). We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies, and provide recommendations for improvement. Study Design and Setting: We searched PubMed (January 2018–May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of the included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs. Results: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared with 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, but no tool was used consistently among prediction model IPDMAs. Of the IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, the IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (PROBAST; for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD. Conclusion: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will help to address this