
    Simulation-based power calculations for planning a two-stage individual participant data meta-analysis

    BACKGROUND Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. METHODS The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. RESULTS In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis were appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. CONCLUSIONS Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
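
    The four steps translate directly into a short simulation loop. Below is a minimal sketch in Python of how such a power calculation could be coded for a treatment-BMI interaction on a continuous outcome; the trial sizes, control-group mean, residual standard deviation and interaction value are illustrative assumptions, not the parameters used in the actual project.

```python
# Sketch of the four-step simulation-based power calculation for a two-stage
# IPD meta-analysis of a treatment-covariate (treatment-BMI) interaction.
# All parameter values are illustrative assumptions.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2024)

def simulate_trial(n, interaction, sd_resid=4.0):
    """Step (iii): simulate one randomised trial (weight gain in kg)."""
    bmi = rng.normal(28, 5, n)        # baseline BMI (assumed distribution)
    treat = rng.integers(0, 2, n)     # 1:1 randomisation
    y = (12 - 1.0 * treat             # control mean 12 kg, main effect -1 kg
         + interaction * treat * (bmi - 28)
         + rng.normal(0, sd_resid, n))
    return bmi, treat, y

def interaction_estimate(bmi, treat, y):
    """First stage: OLS fit of y ~ treat + bmi + treat:bmi; return estimate and SE."""
    X = np.column_stack([np.ones_like(y), treat, bmi, treat * (bmi - bmi.mean())])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[3], np.sqrt(cov[3, 3])

def power(n_trials=14, n_per_trial=85, interaction=-0.1, n_sim=1000, alpha=0.05):
    """Steps (iii)-(iv): repeat the two-stage analysis and count significant results."""
    hits = 0
    for _ in range(n_sim):
        ests, variances = [], []
        for _ in range(n_trials):
            b, se = interaction_estimate(*simulate_trial(n_per_trial, interaction))
            ests.append(b)
            variances.append(se ** 2)
        w = 1 / np.asarray(variances)             # second stage: fixed-effect
        pooled = np.sum(w * ests) / np.sum(w)     # inverse-variance pooling
        z = pooled * sqrt(np.sum(w))
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        hits += p_value < alpha
    return hits / n_sim

print(power())   # estimated power to detect a -0.1 kg per BMI-unit interaction
```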

    Minimum sample size for external validation of a clinical prediction model with a continuous outcome.

    Clinical prediction models provide individualized outcome predictions to inform patient counseling and clinical decision making. External validation is the process of examining a prediction model's performance in data independent of that used for model development. Current external validation studies often suffer from small sample sizes, and consequently imprecise estimates of a model's predictive performance. To address this, we propose how to determine the minimum sample size needed for external validation of a clinical prediction model with a continuous outcome. Four criteria are proposed that target precise estimates of (i) R² (the proportion of variance explained), (ii) calibration-in-the-large (agreement between predicted and observed outcome values on average), (iii) calibration slope (agreement between predicted and observed values across the range of predicted values), and (iv) the variance of observed outcome values. Closed-form sample size solutions are derived for each criterion, which require the user to specify anticipated values of the model's performance (in particular R²) and the outcome variance in the external validation dataset. A sensible starting point is to base values on those for the model development study, as obtained from the publication or study authors. The largest sample size required to meet all four criteria is the recommended minimum sample size needed in the external validation dataset. The calculations can also be applied to estimate expected precision when an existing dataset with a fixed sample size is available, to help gauge if it is adequate. We illustrate the proposed methods on a case-study predicting fat-free mass in children.
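
    As a rough indication of how such a criterion can be evaluated, the sketch below targets a precise estimate of R² using the large-sample approximation var(R̂²) ≈ 4R²(1 - R²)²/n and a chosen confidence interval width. This is an assumption made for illustration; it is not necessarily the exact closed-form solution derived in the paper, and the recommended minimum sample size is the largest across all four criteria.

```python
# Illustrative sketch: sample size for precisely estimating R-squared at
# external validation, using the large-sample approximation
# var(R^2_hat) ~= 4 * R^2 * (1 - R^2)^2 / n and a target 95% CI width.
import math

def n_for_r2_precision(r2_anticipated, ci_width=0.2, z=1.96):
    """Smallest n giving an approximate 95% CI for R^2 no wider than ci_width."""
    var_unit = 4 * r2_anticipated * (1 - r2_anticipated) ** 2  # n * var(R^2_hat)
    # CI width ~= 2 * z * sqrt(var_unit / n)  =>  solve for n
    n = var_unit * (2 * z / ci_width) ** 2
    return math.ceil(n)

# Example: anticipated R^2 of 0.5, e.g. taken from the development study
print(n_for_r2_precision(0.5, ci_width=0.2))   # -> 193 under these assumed values
```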

    Calculating the sample size required for developing a clinical prediction model.

    Clinical prediction models aim to predict outcomes in individuals, to inform diagnosis or prognosis in healthcare. Hundreds of prediction models are published in the medical literature each year, yet many are developed using a dataset that is too small in terms of the total number of participants or outcome events. This leads to inaccurate predictions and consequently incorrect healthcare decisions for some individuals. In this article, the authors provide guidance on how to calculate the sample size required to develop a clinical prediction model.
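
    One widely cited criterion from this guidance limits overfitting by targeting an expected uniform shrinkage factor of at least 0.9. The sketch below assumes the shrinkage-based form n = p / ((S - 1) ln(1 - R²_CS / S)); the article sets out several criteria that should all be met, so treat this as an illustrative sketch rather than the complete calculation.

```python
# Sketch of a sample-size calculation for model development based on a
# shrinkage criterion: choose n so the expected uniform shrinkage factor S
# is at least 0.9. Shown as an assumed form for illustration only.
import math

def n_for_shrinkage(n_params, r2_cs, shrinkage=0.9):
    """Minimum n for `n_params` candidate predictor parameters and an
    anticipated Cox-Snell R-squared of `r2_cs`."""
    return math.ceil(n_params / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

# Example: 20 candidate parameters, anticipated Cox-Snell R^2 of 0.2
print(n_for_shrinkage(20, 0.2))   # -> 796 participants under these assumptions
```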

    Resisting and conforming to the ‘lesbian look’: the importance of appearance norms for lesbian and bisexual women

    Appearance is one way in which lesbian and bisexual identities and affiliation to lesbian, gay, bisexual (LGB) subculture can be demonstrated. ‘Butch’ and ‘androgynous’ styles have been used by lesbian women to communicate a non-heterosexual identity. However, some LGB appearance researchers have argued that there has been a mainstreaming and diversification of lesbian style in the last couple of decades, which has resulted in less distinction between lesbian and straight looks. This research draws on the Social Identity approach to explore contemporary style in lesbian and bisexual communities. Fifteen lesbian and bisexual women took part in semi-structured interviews, which were analysed using thematic analysis. Although some participants reported a diversification of lesbian style, most used the term ‘butch’ to describe lesbian style, and a ‘boyish’ look was viewed as the most common contemporary lesbian style. By contrast, most participants could not identify distinct bisexual appearance norms. The data provide evidence of conflicting desires (and expectations) to visibly project social identity by conforming to specific lesbian styles, and to be an authentic, unique individual by resisting these subcultural styles.

    Predictors of outcome in sciatica patients following an epidural steroid injection: the POiSE prospective observational cohort study protocol

    INTRODUCTION: Sciatica can be very painful and, in most cases, is due to pressure on a spinal nerve root from a disc herniation with associated inflammation. For some patients, the pain persists, and one management option is a spinal epidural steroid injection (ESI). The aim of an ESI is to relieve leg pain, improve function and reduce the need for surgery. ESIs work well in some patients but not in others, and we currently cannot identify these patient subgroups. This study aims to identify factors, including patient characteristics, clinical examination and imaging findings, that help in predicting who does well and who does not after an ESI. The overall objective is to develop a prognostic model to support individualised patient and clinical decision-making regarding ESI. METHODS: POiSE is a prospective cohort study of 439 patients with sciatica referred by their clinician for an ESI. Participants will receive weekly text messages until 12 weeks following their ESI and then again at 24 weeks following their ESI to collect data on leg pain severity. Questionnaires will be sent to participants at baseline, 6, 12 and 24 weeks after their ESI to collect data on pain, disability, recovery and additional interventions. The prognosis for the cohort will be described. The primary outcome measure for the prognostic model is leg pain at 6 weeks. Prognostic models will also be developed for secondary outcomes of disability and recovery at 6 weeks and additional interventions at 24 weeks following ESI. Statistical analyses will include multivariable linear and logistic regression with mixed effects models. ETHICS AND DISSEMINATION: The POiSE study has received ethical approval (South Central Berkshire B Research Ethics Committee 21/SC/0257). Dissemination will be guided by our patient and public engagement group and will include scientific publications, conference presentations and social media.
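
    A minimal sketch of the kind of multivariable mixed-effects linear model planned for the primary outcome is shown below, with a random intercept for recruitment site. The dataset, column names, predictors and grouping variable are hypothetical placeholders; the protocol's final model specification may differ.

```python
# Sketch of a multivariable mixed-effects linear model for leg pain at 6 weeks,
# with a random intercept for recruitment site. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("poise_followup.csv")   # hypothetical analysis dataset

model = smf.mixedlm(
    "leg_pain_6wk ~ age + sex + baseline_leg_pain + symptom_duration",
    data=df,
    groups=df["site"],        # clustering of patients within sites
)
result = model.fit()
print(result.summary())
```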

    Guide to presenting clinical prediction models for use in clinical settings

    Clinical prediction models estimate the risk of existing disease or future outcome for an individual, conditional on the values of multiple predictors such as age, sex, and biomarkers. In this article, Bonnett and colleagues provide a guide to presenting clinical prediction models so that they can be implemented in practice, if appropriate. They describe how to create four presentation formats and discuss the advantages and disadvantages of each format. A key message is the need for stakeholder engagement to determine the best presentation option in relation to the clinical context of use and the intended users.
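
    One presentation option in this spirit is a simple calculator that wraps the model's regression equation, for example within a website or app. The sketch below shows the general idea for a logistic regression model; the predictors and coefficients are invented purely for illustration and do not correspond to any model in the article.

```python
# Sketch of a calculator-style presentation of a logistic prediction model:
# the regression equation is wrapped in a function returning predicted risk.
# Predictors and coefficients are hypothetical.
import math

def predicted_risk(age_years, sex_male, biomarker):
    """Return the predicted probability of the outcome for one individual."""
    # Linear predictor: intercept + coefficient * predictor value (made-up values)
    lp = -5.2 + 0.04 * age_years + 0.35 * sex_male + 0.8 * math.log(biomarker)
    return 1 / (1 + math.exp(-lp))          # inverse-logit transformation

print(f"Predicted risk: {predicted_risk(65, 1, 2.4):.1%}")
```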

    Evaluation of clinical prediction models (part 2):how to undertake an external validation study

    External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high quality dataset to evaluating a model’s predictive performance and clinical usefulness.
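
    The headline performance measures examined in such a study are typically calibration (calibration-in-the-large and calibration slope), discrimination (the c-statistic) and clinical usefulness (net benefit). The sketch below illustrates how these could be computed for a binary-outcome model on a validation dataset; the predicted risks and observed outcomes are simulated placeholders.

```python
# Sketch of core external-validation performance measures for a binary outcome:
# calibration-in-the-large, calibration slope, c-statistic and net benefit.
# `p` (predicted risks) and `y` (observed outcomes) are placeholders here.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
p = rng.uniform(0.05, 0.6, 1000)                  # predicted risks (placeholder)
y = rng.binomial(1, p)                            # observed outcomes (placeholder)

lp = np.log(p / (1 - p))                          # linear predictor (logit of risk)

# Calibration slope: logistic regression of the outcome on the linear predictor
slope_fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=False)
# Calibration-in-the-large: intercept with the linear predictor as an offset
citl_fit = sm.Logit(y, np.ones_like(lp), offset=lp).fit(disp=False)

c_stat = roc_auc_score(y, p)                      # discrimination

def net_benefit(y, p, threshold):
    """Net benefit of the policy 'treat if predicted risk >= threshold'."""
    treat = p >= threshold
    tp = np.sum(treat & (y == 1)) / len(y)
    fp = np.sum(treat & (y == 0)) / len(y)
    return tp - fp * threshold / (1 - threshold)

print(f"calibration slope        {slope_fit.params[1]:.2f}")
print(f"calibration-in-the-large {citl_fit.params[0]:.2f}")
print(f"c-statistic              {c_stat:.2f}")
print(f"net benefit @ 0.20       {net_benefit(y, p, 0.20):.3f}")
```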

    External validation of clinical prediction models: simulation-based sample size calculations were more reliable than rules-of-thumb

    INTRODUCTION: Sample size "rules-of-thumb" for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise, and not specific to the model or validation setting. We investigate factors affecting precision of model performance estimates upon external validation, and propose a more tailored sample size approach. METHODS: Simulation of logistic regression prediction models to investigate factors associated with precision of performance estimates, followed by explanation and illustration of a simulation-based approach to calculate the minimum sample size required to precisely estimate a model's calibration, discrimination and clinical utility. RESULTS: Precision is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration. The simulation-based calculation accounts for the LP distribution and (mis)calibration in the validation sample. Application identifies 2430 required participants (531 events) for external validation of a deep vein thrombosis diagnostic model. CONCLUSION: Where researchers can anticipate the distribution of the model's LP (e.g. based on the development sample, or a pilot study), a simulation-based approach for calculating the sample size for external validation offers more flexibility and reliability than rules-of-thumb.
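
    The core of the simulation-based approach can be sketched as follows: assume a distribution for the model's linear predictor, simulate validation datasets of a candidate size, and record how precisely a performance measure (here the calibration slope) is estimated. The LP distribution, precision target and candidate sample sizes below are illustrative assumptions, not those from the deep vein thrombosis example.

```python
# Compressed sketch of the simulation-based sample size idea for external
# validation of a logistic prediction model: simulate validation datasets from
# an anticipated linear predictor (LP) distribution and track the precision
# (95% CI width) of the calibration slope at each candidate sample size.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

def ci_width_calibration_slope(n, lp_mean=-2.0, lp_sd=1.0, n_sim=500):
    """Median 95% CI width for the calibration slope at sample size n."""
    widths = []
    for _ in range(n_sim):
        lp = rng.normal(lp_mean, lp_sd, n)           # anticipated LP distribution
        y = rng.binomial(1, 1 / (1 + np.exp(-lp)))   # outcomes assuming calibration
        fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=False)
        lo, hi = fit.conf_int()[1]                   # CI for the slope
        widths.append(hi - lo)
    return float(np.median(widths))

# Increase n until the calibration slope is estimated precisely enough,
# e.g. a 95% CI width of at most 0.4 (an arbitrary target for illustration).
for n in (500, 1000, 2000, 4000):
    print(n, round(ci_width_calibration_slope(n), 2))
```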