    Development and external validation of the eFalls tool: a multivariable prediction model for the risk of ED attendance or hospitalisation with a fall or fracture in older adults

    Background: Falls are common in older adults and can devastate personal independence through injury such as fracture and fear of future falls. Methods to identify people for falls prevention interventions are currently limited, with high risks of bias in published prediction models. We have developed and externally validated the eFalls prediction model using routinely collected primary care electronic health records (EHR) to predict risk of emergency department attendance/hospitalisation with fall or fracture within 1 year. Methods: Data comprised two independent, retrospective cohorts of adults aged ≥65 years: the population of Wales, from the Secure Anonymised Information Linkage Databank (model development); the population of Bradford and Airedale, England, from Connected Bradford (external validation). Predictors included electronic frailty index components, supplemented with variables informed by literature reviews and clinical expertise. Fall/fracture risk was modelled using multivariable logistic regression with a Least Absolute Shrinkage and Selection Operator (LASSO) penalty. Predictive performance was assessed through calibration, discrimination and clinical utility. Apparent, internal–external cross-validation and external validation performance were assessed across general practices and in clinically relevant subgroups. Results: The model’s discrimination performance (c-statistic) was 0.72 (95% confidence interval, CI: 0.68 to 0.76) on internal–external cross-validation and 0.82 (95% CI: 0.80 to 0.83) on external validation. Calibration was variable across practices, with some over-prediction in the validation population (calibration-in-the-large, −0.87; 95% CI: −0.96 to −0.78). Clinical utility on external validation was improved after recalibration. Conclusion: The eFalls prediction model shows good performance and could support proactive stratification for falls prevention services if appropriately embedded into primary care EHR systems.
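    The modelling approach described in this abstract can be sketched briefly in code. The snippet below is a minimal illustration under assumed names (ehr_cohort.csv, a fall_or_fracture outcome column, a simple hold-out split), not the eFalls implementation: it fits a LASSO-penalised logistic regression and checks discrimination on held-out data.

```python
# Minimal sketch of LASSO-penalised logistic regression for 1-year fall/fracture
# risk, as described in the abstract above. Column names, the outcome label and
# the hold-out split are illustrative assumptions, not the eFalls code.
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ehr_cohort.csv")                 # hypothetical EHR extract (numeric predictors)
X = df.drop(columns=["fall_or_fracture"])          # frailty-index components + supplementary predictors
y = df["fall_or_fracture"]                         # 1 = ED attendance/hospitalisation within 1 year

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

# LASSO (L1) penalty, with the penalty strength tuned by 10-fold cross-validation
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=10,
                             scoring="neg_log_loss", max_iter=1000)
model.fit(X_dev, y_dev)

# Discrimination (c-statistic) on the held-out portion
p = model.predict_proba(X_val)[:, 1]
print("c-statistic:", roc_auc_score(y_val, p))
```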

    Clinical prediction models and the multiverse of madness

    Background: Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) to inform individual diagnosis and prognosis in healthcare. However, most are not reliable for use in clinical practice. Main body: We discuss how the creation of a prediction model (e.g. using regression or machine learning methods) is dependent on the sample and size of data used to develop it: were a different sample of the same size used from the same overarching population, the developed model could be very different even when the same model development methods are used. In other words, for each model created, there exists a multiverse of other potential models for that sample size and, crucially, an individual’s predicted value (e.g. estimated risk) may vary greatly across this multiverse. The more an individual’s prediction varies across the multiverse, the greater the instability. We show how small development datasets lead to more varied models in the multiverse, often with vastly unstable individual predictions, and explain how this can be exposed by using bootstrapping and presenting instability plots. We recommend healthcare researchers seek to use large model development datasets to reduce instability concerns. This is especially important to ensure reliability across subgroups and improve model fairness in practice. Conclusions: Instability is concerning as an individual’s predicted value is used to guide their counselling, resource prioritisation, and clinical decision making. If different samples lead to different models with very different predictions for the same individual, then this should cast doubt on using a particular model for that individual. Therefore, visualising, quantifying and reporting the instability in individual-level predictions is essential when proposing a new model.
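    The bootstrap procedure described above for exposing instability can be outlined as follows; the dataset and column names are placeholders, not the authors' code. The idea is to refit the same model specification on bootstrap resamples and plot each individual's bootstrap predictions against the original model's prediction, so wide vertical scatter flags unstable estimated risks.

```python
# Sketch of a bootstrap instability check: refit the model on resamples of the
# development data and compare individual predictions. Names are placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("development_data.csv")               # hypothetical development sample
X, y = df.drop(columns=["outcome"]), df["outcome"]

original = LogisticRegression(max_iter=1000).fit(X, y)
p_original = original.predict_proba(X)[:, 1]            # predictions from the developed model

B = 200                                                 # number of bootstrap models
boot_preds = np.empty((B, len(df)))
rng = np.random.default_rng(42)
for b in range(B):
    idx = rng.integers(0, len(df), len(df))             # resample rows with replacement
    m = LogisticRegression(max_iter=1000).fit(X.iloc[idx], y.iloc[idx])
    boot_preds[b] = m.predict_proba(X)[:, 1]            # predictions for the original individuals

# Instability plot: bootstrap predictions against the original model's predictions;
# wide vertical scatter means an individual's estimated risk is unstable
plt.scatter(np.tile(p_original, B), boot_preds.ravel(), s=2, alpha=0.1)
plt.plot([0, 1], [0, 1], color="black")
plt.xlabel("Estimated risk from original model")
plt.ylabel("Estimated risk from bootstrap models")
plt.show()
```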

    Evaluation of clinical prediction models (part 2): how to undertake an external validation study

    External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high-quality dataset to evaluating a model’s predictive performance and clinical usefulness.
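    As a rough illustration of the headline measures such a study reports, the sketch below assumes you already hold each validation participant's predicted risk from the existing (unchanged) model; the file names and the 10% decision threshold are placeholders.

```python
# Illustrative external-validation measures: discrimination, calibration and net
# benefit, computed from the existing model's predicted risks. Names are assumed.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

y = np.loadtxt("observed_outcomes.txt")        # 0/1 outcomes in the validation cohort
p = np.loadtxt("predicted_risks.txt")          # risks from the existing model, unchanged
lp = np.log(p / (1 - p))                       # linear predictor (log-odds)

# Discrimination: c-statistic
print("c-statistic:", roc_auc_score(y, p))

# Calibration slope: regress outcomes on the linear predictor (ideal value 1)
slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
print("calibration slope:", slope_fit.params[1])

# Calibration-in-the-large: intercept with the linear predictor as an offset (ideal 0)
citl_fit = sm.GLM(y, np.ones((len(y), 1)), offset=lp, family=sm.families.Binomial()).fit()
print("calibration-in-the-large:", citl_fit.params[0])

# Clinical utility: net benefit at a chosen risk threshold (here 10%)
t = 0.10
tp = np.mean((p >= t) & (y == 1))              # true-positive proportion
fp = np.mean((p >= t) & (y == 0))              # false-positive proportion
print("net benefit at 10%:", tp - fp * t / (1 - t))
```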

    External validation of clinical prediction models: simulation-based sample size calculations were more reliable than rules-of-thumb

    INTRODUCTION: Sample size "rules-of-thumb" for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise and not specific to the model or validation setting. We investigate factors affecting the precision of model performance estimates upon external validation and propose a more tailored sample size approach. METHODS: Simulation of logistic regression prediction models to investigate factors associated with precision of performance estimates, followed by explanation and illustration of a simulation-based approach to calculate the minimum sample size required to precisely estimate a model's calibration, discrimination and clinical utility. RESULTS: Precision is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration. The simulation-based calculation accounts for the LP distribution and (mis)calibration in the validation sample. Application identifies 2430 required participants (531 events) for external validation of a deep vein thrombosis diagnostic model. CONCLUSION: Where researchers can anticipate the distribution of the model's LP (e.g. based on the development sample or a pilot study), a simulation-based approach for calculating the sample size for external validation offers more flexibility and reliability than rules-of-thumb.
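    A stripped-down version of this simulation idea is sketched below: assume a distribution for the model's linear predictor, simulate validation samples of candidate sizes, and inspect how tightly the c-statistic would be estimated. The LP mean and standard deviation, the candidate sizes, and the focus on the c-statistic alone are placeholder assumptions; the approach described in the abstract also covers calibration and clinical utility.

```python
# Simplified simulation-based sample size check: simulate validation samples from
# an assumed linear predictor (LP) distribution and summarise the sampling spread
# of the c-statistic at each candidate size. All numbers are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
lp_mean, lp_sd = -2.0, 1.0            # anticipated LP distribution (e.g. from the development data)
n_sims = 500

for n in (500, 1000, 2000, 4000):     # candidate validation sample sizes
    c_stats, events = [], []
    for _ in range(n_sims):
        lp = rng.normal(lp_mean, lp_sd, n)     # simulate linear predictors
        prob = 1 / (1 + np.exp(-lp))           # implied risks, assuming perfect calibration
        y = rng.binomial(1, prob)              # simulate outcomes
        events.append(y.sum())
        if 0 < y.sum() < n:                    # c-statistic needs both classes present
            c_stats.append(roc_auc_score(y, lp))
    lo, hi = np.percentile(c_stats, [2.5, 97.5])
    print(f"n={n}: mean events={np.mean(events):.0f}, c-statistic spread {lo:.3f} to {hi:.3f}")
# choose the smallest n whose spread (a proxy for expected confidence interval width)
# is acceptably narrow for the intended use
```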

    Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study

    An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose avoiding rules of thumb by tailoring calculations to the model and setting at hand.

    Development and External Validation of Individualized Prediction Models for Pain Intensity Outcomes in Patients With Neck Pain, Low Back Pain, or Both in Primary Care Settings

    OBJECTIVE: The purpose of this study was to develop and externally validate multivariable prediction models for future pain intensity outcomes to inform targeted interventions for patients with neck or low back pain in primary care settings. METHODS: Model development data were obtained from a group of 679 adults with neck or low back pain who consulted a participating United Kingdom general practice. Predictors included self-report items regarding pain severity and impact from the STarT MSK Tool. Pain intensity at 2 and 6 months was modeled separately for continuous and dichotomized outcomes using linear and logistic regression, respectively. External validation of all models was conducted in a separate group of 586 patients recruited from a similar population, with patients' predictor information collected both at the point of consultation and 2 to 4 weeks later using self-report questionnaires. Calibration and discrimination of the models were assessed separately using STarT MSK Tool data from both time points to assess differences in predictive performance. RESULTS: Pain intensity and patients reporting that their condition would last a long time contributed most to predictions of future pain intensity, conditional on other variables. On external validation, models were reasonably well calibrated on average when using tool measurements taken 2 to 4 weeks after consultation (calibration slope = 0.848 [95% CI = 0.767 to 0.928] for 2-month pain intensity score), but performance was poor using point-of-consultation tool data (calibration slope for 2-month pain intensity score of 0.650 [95% CI = 0.549 to 0.750]). CONCLUSION: Model predictive accuracy was good when predictors were measured 2 to 4 weeks after primary care consultation, but poor when measured at the point of consultation. Future research will explore whether additional, nonmodifiable predictors improve point-of-consultation predictive performance. IMPACT: External validation demonstrated that these individualized prediction models were not sufficiently accurate to recommend their use in clinical practice. Further research is required to improve performance through inclusion of additional nonmodifiable risk factors.
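    The calibration slope quoted above for the continuous outcome can be illustrated as follows: regress observed pain scores on the model's predicted scores in the validation sample. The file and column names below are placeholders, not the study's data.

```python
# Illustrative calibration slope for a continuous outcome model: observed 2-month
# pain intensity regressed on the model's predictions. Names are placeholders.
import pandas as pd
import statsmodels.api as sm

val = pd.read_csv("validation_sample.csv")          # hypothetical external validation data
observed = val["pain_intensity_2m"]                 # observed pain intensity at 2 months
predicted = val["predicted_pain_2m"]                # predictions from the development model

fit = sm.OLS(observed, sm.add_constant(predicted)).fit()
intercept, slope = fit.params
print(f"calibration intercept: {intercept:.3f}")    # ideal value 0
print(f"calibration slope: {slope:.3f}")            # ideal value 1; <1 suggests overfitting
lo, hi = fit.conf_int().loc["predicted_pain_2m"]
print(f"95% CI for slope: {lo:.3f} to {hi:.3f}")
```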

    Evaluation of clinical prediction models (part 1): from development to external validation

    Evaluating the performance of a clinical prediction model is crucial to establish its predictive accuracy in the populations and settings intended for use. In this article, the first in a three-part series, Collins and colleagues describe the importance of a meaningful evaluation using internal, internal-external, and external validation, as well as exploring heterogeneity, fairness, and generalisability in model performance.
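    Internal-external cross-validation, one of the approaches mentioned, can be sketched as a leave-one-cluster-out loop: hold out one centre (e.g. a general practice or region) at a time, develop the model on the remaining centres, and evaluate it on the held-out centre. The dataset and column names below are assumptions.

```python
# Minimal leave-one-centre-out sketch of internal-external cross-validation.
# Dataset, column names and the plain logistic model are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("multicentre_cohort.csv")          # hypothetical multi-centre dataset
predictors = [c for c in df.columns if c not in ("outcome", "centre")]

for centre in df["centre"].unique():
    dev = df[df["centre"] != centre]                # develop on all other centres
    val = df[df["centre"] == centre]                # validate on the held-out centre
    model = LogisticRegression(max_iter=1000).fit(dev[predictors], dev["outcome"])
    p = model.predict_proba(val[predictors])[:, 1]
    print(f"centre {centre}: c-statistic {roc_auc_score(val['outcome'], p):.3f}")
# the per-centre estimates can then be summarised (e.g. meta-analysed) to examine
# heterogeneity and generalisability across settings
```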
