
    Development and external validation of a clinical prediction model for predicting quality of recovery up to 1 week after surgery

    The Quality of Recovery-40 (QoR-40) score is increasingly used to assess recovery in patients undergoing surgery. However, a prediction model estimating quality of recovery is lacking. The aim of the present study was to develop and externally validate a clinical prediction model that predicts quality of recovery up to one week after surgery. The modelling procedure consisted of two models of increasing complexity (a basic and a full model). To assess the internal validity of the developed model, bootstrapping (1000 repetitions) was applied. At external validation, model performance was evaluated with measures of overall performance (explained variance, R²) and calibration (calibration plot and slope). The full model consisted of age, sex, previous surgery, BMI, ASA classification, duration of surgery, HADS score and preoperative QoR-40 score. At model development, the R² of the full model was 0.24. At external validation, the R² dropped, as expected. The calibration analysis showed that the QoR-40 predictions provided by the developed prediction models are reliable. The presented models can serve as a starting point for future updating studies; once their predictive performance is improved, they could be implemented clinically.
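    As a rough illustration of the validation measures named above, the minimal Python sketch below computes explained variance (R²) and a calibration slope from simulated arrays of predicted and observed QoR-40 scores; it is not the authors' code, and the data are hypothetical.

```python
# Minimal sketch with simulated data (not the study's data or code):
# overall performance (R^2) and calibration slope for a continuous-outcome
# prediction model such as one predicting QoR-40 scores.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
predicted = rng.normal(170, 10, size=200)            # hypothetical model predictions
observed = predicted + rng.normal(0, 15, size=200)   # hypothetical observed QoR-40 scores

# Overall performance: proportion of outcome variance explained by the predictions.
r2 = r2_score(observed, predicted)

# Calibration slope: regression of observed on predicted values;
# a slope close to 1 suggests predictions are neither too extreme nor too moderate.
slope = LinearRegression().fit(predicted.reshape(-1, 1), observed).coef_[0]

print(f"R^2 = {r2:.2f}, calibration slope = {slope:.2f}")
```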

    Is My Clinical Prediction Model Clinically Useful?: A Primer on Decision Curve Analysis

    Decision curve analysis is an increasingly popular method to assess the impact of a prediction model on medical decision making. The analysis provides a graphical summary, and a basic understanding of a decision curve is needed to interpret it. This short introduction addresses the common features of a decision curve. Furthermore, using a glioblastoma patient data set provided by the Machine Intelligence in Clinical Neuroscience Lab of the Department of Neurosurgery and Clinical Neuroscience Center, University Hospital Zurich, a decision curve is plotted for two prediction models. The corresponding R code is provided.
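    To make the quantity behind a decision curve concrete, the sketch below computes net benefit across threshold probabilities on simulated data. The paper provides R code; this is a hypothetical Python illustration of the general idea, not the authors' implementation.

```python
# Minimal sketch with simulated data (not the paper's glioblastoma data or R code):
# net benefit of a binary prediction model across threshold probabilities,
# the quantity plotted in a decision curve.
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    n = len(y_true)
    nb = []
    for t in thresholds:
        treat = y_prob >= t                      # patients classified as high risk
        tp = np.sum(treat & (y_true == 1))       # true positives
        fp = np.sum(treat & (y_true == 0))       # false positives
        nb.append(tp / n - fp / n * t / (1 - t))
    return np.array(nb)

# Hypothetical outcome labels and predicted probabilities.
rng = np.random.default_rng(1)
y_prob = rng.uniform(0, 1, 300)
y_true = rng.binomial(1, y_prob)

thresholds = np.linspace(0.05, 0.95, 19)
model_nb = net_benefit(y_true, y_prob, thresholds)
treat_all_nb = net_benefit(y_true, np.ones_like(y_prob), thresholds)
# A decision curve plots model_nb, treat_all_nb, and zero ("treat none")
# against the threshold probabilities.
```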

    Updating Clinical Prediction Models: An Illustrative Case Study

    The performance of clinical prediction models tends to deteriorate over time. Researchers often develop a new prediction model if an existing model performs poorly at external validation. Model updating is an efficient and promising alternative to the de novo development of clinical prediction models and is recommended by the TRIPOD guidelines. To illustrate several model updating techniques, a case study is provided on the development and updating of a clinical prediction model assessing postoperative anxiety, using data from two double-blind, placebo-controlled randomized controlled trials with a very similar methodological framework. Note that the developed and updated models are for didactic purposes only. This paper discusses common considerations and caveats that researchers should be aware of when planning or applying the updating of a prediction model.
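    The sketch below illustrates two commonly described updating methods, an intercept update and a logistic recalibration of an existing model's linear predictor, on simulated data; it is a generic, hypothetical example, not the case-study code or the trial data.

```python
# Minimal sketch with simulated data (not the case-study data or code):
# two common updating methods for a logistic prediction model, applied to
# the linear predictor (lp) of an existing model in a new data set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
lp = rng.normal(0, 1.5, 400)                                # linear predictor from the old model
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.7 * lp))))    # simulated outcomes in the new data

# Method 1: intercept update -- re-estimate only the intercept,
# keeping the original linear predictor fixed as an offset.
intercept_update = sm.GLM(y, np.ones((len(y), 1)),
                          family=sm.families.Binomial(), offset=lp).fit()

# Method 2: logistic recalibration -- re-estimate both the intercept
# and the slope of the original linear predictor.
recalibration = sm.GLM(y, sm.add_constant(lp),
                       family=sm.families.Binomial()).fit()

print(intercept_update.params, recalibration.params)
```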

    Application of clinical prediction modeling in pediatric neurosurgery: a case study

    There has been increasing interest in articles reporting on clinical prediction models in pediatric neurosurgery. Clinical prediction models are mathematical equations that combine patient-related risk factors to estimate an individual's risk of an outcome. If used sensibly, these evidence-based tools may help pediatric neurosurgeons in medical decision-making. Furthermore, they may help to communicate the anticipated course of a disease to children and their parents and thereby facilitate shared decision-making. A basic understanding of this methodology is essential when developing or applying a prediction model. This paper addresses the methodology tailored to pediatric neurosurgery and illustrates it with a case study based on original pediatric data from our institution. The developed model has, however, not been externally validated and its clinical impact has not been assessed; therefore, it cannot be recommended for clinical use in its current form.
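    To show what "a mathematical equation that combines risk factors" can look like in practice, the sketch below fits a simple logistic regression to simulated predictors and converts one new patient's risk factors into a predicted probability; the variables and data are hypothetical and unrelated to the institutional data used in the paper.

```python
# Minimal sketch with simulated data (not the institutional data set):
# a clinical prediction model as a regression equation that turns a
# patient's risk factors into an individual outcome probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
age = rng.integers(0, 18, 500)                      # hypothetical predictor: age in years
sex = rng.binomial(1, 0.5, 500)                     # hypothetical predictor: sex
logit = -2.0 + 0.1 * age + 0.5 * sex                # assumed "true" relationship
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # simulated binary outcome

X = np.column_stack([age, sex])
model = LogisticRegression().fit(X, y)

# Individual risk = 1 / (1 + exp(-(intercept + coefficients . x)))
new_patient = np.array([[10, 1]])                   # e.g. a 10-year-old with sex = 1
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted risk for this patient: {risk:.2f}")
```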

    Prognostic potentialities of the final predictor variables distinguished by outcome variables in a day-case surgery population.


    Timeline of the study.

    No full text
    T0 = baseline assessment on the day of surgery (self-reported questionnaire); T1 = seventh postoperative day (self-reported questionnaire); STAI, State-Trait Anxiety Inventory; MFI, Multidimensional Fatigue Inventory; HADS, Hospital Anxiety and Depression Scale; STAS, State-Trait Anger Scale.