3,357 research outputs found

    Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis, including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
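    The core calculation behind a decision curve is the net benefit at each probability threshold, and the overfit correction described above averages out-of-fold net benefit over repeated 10-fold cross-validation. Below is a minimal Python sketch of that idea on simulated data; the logistic model, threshold grid and helper names (net_benefit, cross_validated_decision_curve) are illustrative assumptions, not the authors' published software.

```python
# Minimal sketch: decision curve with repeated 10-fold cross-validation
# as an overfit correction, on hypothetical simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold

def net_benefit(y_true, p_hat, threshold):
    """Net benefit = TP/n - FP/n * pt/(1 - pt) at threshold pt."""
    treat = p_hat >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def cross_validated_decision_curve(X, y, thresholds, n_repeats=25, seed=0):
    """Average out-of-fold net benefit over repeated 10-fold CV."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=n_repeats, random_state=seed)
    p_oof = np.zeros((n_repeats, len(y)))
    for i, (train, test) in enumerate(cv.split(X, y)):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        p_oof[i // 10, test] = model.predict_proba(X[test])[:, 1]
    # Net benefit per repeat and threshold, then averaged across repeats
    nb = np.array([[net_benefit(y, p, t) for t in thresholds] for p in p_oof])
    return nb.mean(axis=0)

# Example usage with simulated data
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.0, 0.5, 0.0]))))
thresholds = np.linspace(0.05, 0.5, 10)
nb_model = cross_validated_decision_curve(X, y, thresholds)
nb_treat_all = [net_benefit(y, np.ones(len(y)), t) for t in thresholds]  # "treat all" reference
```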

    Net benefit approaches to the evaluation of prediction models, molecular markers, and diagnostic tests

    Many decisions in medicine involve trade-offs, such as between diagnosing patients with disease versus unnecessary additional testing for those who are healthy. Net benefit is an increasingly reported decision analytic measure that puts benefits and harms on the same scale. This is achieved by specifying an exchange rate, a clinical judgment of the relative value of benefits (such as detecting a cancer) and harms (such as unnecessary biopsy) associated with models, markers, and tests. The exchange rate can be derived by asking simple questions, such as the maximum number of patients a doctor would recommend for biopsy to find one cancer. As the answers to these sorts of questions are subjective, it is possible to plot net benefit for a range of reasonable exchange rates in a "decision curve." For clinical prediction models, the exchange rate is related to the probability threshold to determine whether a patient is classified as being positive or negative for a disease. Net benefit is useful for determining whether basing clinical decisions on a model, marker, or test would do more good than harm. This is in contrast to traditional measures such as sensitivity, specificity, or area under the curve, which are statistical abstractions not directly informative about clinical value. Recent years have seen an increase in practical applications of net benefit analysis to research data. This is a welcome development, since decision analytic techniques are of particular value when the purpose of a model, marker, or test is to help doctors make better clinical decisions.
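    As a concrete illustration of the exchange-rate idea described above, a clinician willing to perform at most 10 biopsies to find one cancer is implicitly using a probability threshold of 1/10, and net benefit at that threshold is the true-positive rate minus the false-positive rate weighted by the threshold odds. The short Python sketch below works through this arithmetic with hypothetical counts; the numbers are invented for illustration only.

```python
# Minimal sketch of the net benefit calculation described above; the counts
# below are hypothetical, purely for illustration.
def net_benefit(tp, fp, n, threshold):
    """Net benefit = TP/n - FP/n * (pt / (1 - pt)), where pt is the probability threshold."""
    return tp / n - fp / n * threshold / (1 - threshold)

# "At most 10 biopsies per cancer found" corresponds to a threshold of 1/10 = 0.10,
# i.e. an exchange rate of 9 unnecessary biopsies per cancer detected.
threshold = 1 / 10

# Hypothetical cohort of 1000 patients: the marker recommends biopsy for 300,
# of whom 80 have cancer (true positives) and 220 do not (false positives).
print(net_benefit(tp=80, fp=220, n=1000, threshold=threshold))  # ~0.0556
```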

    Chapter 12: Systematic Review of Prognostic Tests

    A number of new biological markers are being studied as predictors of disease or adverse medical events among those who already have a disease. Systematic reviews of this growing literature can help determine whether the available evidence supports use of a new biomarker as a prognostic test that can more accurately place patients into different prognostic groups to improve treatment decisions and the accuracy of outcome predictions. Exemplary reviews of prognostic tests are not widely available, and the methods used to review diagnostic tests do not necessarily address the most important questions about prognostic tests that are used to predict the time-dependent likelihood of future patient outcomes. We provide suggestions for those interested in conducting systematic reviews of a prognostic test. The proposed use of the prognostic test should serve as the framework for a systematic review and to help define the key questions. The outcome probabilities or level of risk and other characteristics of prognostic groups are the most salient statistics for review and perhaps meta-analysis. Reclassification tables can help determine how a prognostic test affects the classification of patients into different prognostic groups, and hence their treatment. Review of studies of the association between a potential prognostic test and patient outcomes would have little impact other than to determine whether further development as a prognostic test might be warranted.
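    A reclassification table of the kind mentioned above is simply a cross-tabulation of the risk groups patients fall into with and without the new prognostic test. The sketch below shows one way such a table might be built; the risk cut-offs, group labels and simulated predictions are assumptions for illustration, not values taken from the chapter.

```python
# Minimal sketch of a reclassification table: risk groups assigned by the
# baseline model versus the model including the new prognostic test.
import numpy as np
import pandas as pd

def risk_group(p, cuts=(0.05, 0.20)):
    """Assign low / intermediate / high risk groups from predicted probabilities."""
    labels = np.array(["low", "intermediate", "high"])
    return labels[np.digitize(p, cuts)]

rng = np.random.default_rng(0)
p_base = rng.uniform(0, 0.4, size=200)                     # hypothetical baseline predictions
p_new = np.clip(p_base + rng.normal(0, 0.05, 200), 0, 1)   # hypothetical predictions with new test

table = pd.crosstab(risk_group(p_base), risk_group(p_new),
                    rownames=["baseline model"], colnames=["baseline + new test"])
print(table)
```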

    External validation, update and development of prediction models for pre-eclampsia using an Individual Participant Data (IPD) meta-analysis: the International Prediction of Pregnancy Complication Network (IPPIC pre-eclampsia) protocol.

    Background: Pre-eclampsia, a condition with raised blood pressure and proteinuria, is associated with an increased risk of maternal and offspring mortality and morbidity. Early identification of mothers at risk is needed to target management. Methods/design: We aim to systematically review the existing literature to identify prediction models for pre-eclampsia. We have established the International Prediction of Pregnancy Complication Network (IPPIC), made up of 72 researchers from 21 countries who have carried out relevant primary studies or have access to existing registry databases, and collectively possess data from more than two million patients. We will use the individual participant data (IPD) from these studies to externally validate the existing prediction models and summarise model performance across studies using random-effects meta-analysis for any, late (after 34 weeks) and early (before 34 weeks) onset pre-eclampsia. If none of the models perform well, we will recalibrate (update), or develop and validate, new prediction models using the IPD. We will assess the differential accuracy of the models in various settings and subgroups according to risk status. We will also validate or develop prediction models based on clinical characteristics only; clinical and biochemical markers; clinical and ultrasound parameters; and clinical, biochemical and ultrasound tests. Discussion: Numerous systematic reviews with aggregate data meta-analysis have evaluated various risk factors separately or in combination for predicting pre-eclampsia, but these are affected by many limitations. Our large-scale collaborative IPD approach encourages consensus towards well-developed and validated prognostic models, rather than a number of competing non-validated ones. The large sample size from our IPD will also allow development and validation of a multivariable prediction model for the relatively rare outcome of early onset pre-eclampsia. Trial registration: The project was registered on Prospero on 27 November 2015 with ID CRD42015029349.
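    The protocol's plan to summarise model performance across cohorts with random-effects meta-analysis could, for a discrimination measure such as a logit-transformed C-statistic, look roughly like the DerSimonian-Laird pooling sketched below; the study estimates, standard errors and function name are hypothetical, and the IPPIC analysis itself may use different (e.g. likelihood-based) estimators.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of a performance
# measure (e.g. logit C-statistic) across validation cohorts; inputs are hypothetical.
import numpy as np

def random_effects_pool(estimates, std_errors):
    """Return pooled estimate, its standard error, and tau^2 (between-study variance)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2
    w = 1 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # DerSimonian-Laird estimate
    w_re = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    return pooled, np.sqrt(1 / np.sum(w_re)), tau2

# Hypothetical logit-transformed C-statistics from five validation cohorts
logit_c = [0.85, 1.10, 0.95, 1.30, 0.70]
se = [0.10, 0.15, 0.12, 0.20, 0.18]
pooled, pooled_se, tau2 = random_effects_pool(logit_c, se)
print(pooled, pooled_se, tau2)
```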

    Evaluating classification accuracy for modern learning approaches

    Full text (peer reviewed):
    https://deepblue.lib.umich.edu/bitstream/2027.42/149333/1/sim8103_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/149333/2/sim8103.pd

    Testing for improvement in prediction model performance

    New methodology has been proposed in recent years for evaluating the improvement in prediction performance gained by adding a new predictor, Y, to a risk model containing a set of baseline predictors, X, for a binary outcome D. We prove theoretically that null hypotheses concerning no improvement in performance are equivalent to the simple null hypothesis that the coefficient for Y is zero in the risk model, P(D = 1 | X, Y). Therefore, testing for improvement in prediction performance is redundant if Y has already been shown to be a risk factor. We investigate properties of tests through simulation studies, focusing on the change in the area under the ROC curve (AUC). An unexpected finding is that standard testing procedures that do not adjust for variability in estimated regression coefficients are extremely conservative. This may explain why the AUC is widely considered insensitive to improvements in prediction performance and suggests that the problem of insensitivity has to do with use of invalid procedures for inference rather than with the measure itself. To avoid redundant testing and use of potentially problematic methods for inference, we recommend that hypothesis testing for no improvement be limited to evaluation of Y as a risk factor, for which methods are well developed and widely available. Analyses of measures of prediction performance should focus on estimation rather than on testing.
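    The recommendation above amounts to testing the new predictor Y as a risk factor (for example with a likelihood ratio test for its coefficient) while reporting the change in AUC as an estimate rather than as a hypothesis test. The simulated Python sketch below illustrates that workflow; the data-generating values and sample size are arbitrary assumptions.

```python
# Minimal sketch: test Y via its coefficient in the risk model (likelihood ratio
# test), and report the change in apparent AUC as an estimate; data are simulated.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)                       # baseline predictor
Y = rng.normal(size=n)                       # candidate new predictor
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * X + 0.3 * Y)))
D = rng.binomial(1, p)                       # binary outcome

base = sm.Logit(D, sm.add_constant(X)).fit(disp=0)
full = sm.Logit(D, sm.add_constant(np.column_stack([X, Y]))).fit(disp=0)

# Likelihood ratio test for the coefficient of Y (1 degree of freedom)
lr = 2 * (full.llf - base.llf)
p_value = stats.chi2.sf(lr, df=1)

# Change in apparent AUC, reported as an estimate rather than tested
delta_auc = roc_auc_score(D, full.predict()) - roc_auc_score(D, base.predict())
print(p_value, delta_auc)
```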

    Using the weighted area under the net benefit curve for decision curve analysis

    Supplementary Material.docx. Includes the Appendix and two supplementary figures referred to in the manuscript. (DOCX 327 kb)
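    Although the abstract is not shown here, the title suggests summarising a decision curve by integrating net benefit over a range of thresholds with some weight function. The sketch below is one plausible reading of that idea, using a simple trapezoidal rule and a user-supplied weight; the weighting scheme, threshold range and simulated data are assumptions and may differ from the method actually proposed in the paper.

```python
# Minimal sketch of a weighted area under a net benefit curve, assuming trapezoidal
# integration of weight(t) * net_benefit(t) over a threshold range; the weight
# function and data below are hypothetical.
import numpy as np

def net_benefit(y_true, p_hat, threshold):
    """Net benefit = TP/n - FP/n * pt/(1 - pt) at threshold pt."""
    treat = p_hat >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def trapz(y_vals, x_vals):
    """Simple trapezoidal rule (kept explicit to avoid NumPy version differences)."""
    x, y = np.asarray(x_vals), np.asarray(y_vals)
    return np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2)

def weighted_area(y_true, p_hat, thresholds, weights=None):
    """Integral of weight(t) * net_benefit(t), with weights normalised to integrate to 1."""
    nb = np.array([net_benefit(y_true, p_hat, t) for t in thresholds])
    w = np.ones_like(nb) if weights is None else np.asarray(weights, dtype=float)
    w = w / trapz(w, thresholds)
    return trapz(w * nb, thresholds)

# Hypothetical example: uniform weights over thresholds 0.05-0.50
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=500)
p_hat = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)
print(weighted_area(y, p_hat, np.linspace(0.05, 0.50, 46)))
```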