Simulation-based power calculations for planning a two-stage individual participant data meta-analysis
BACKGROUND
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions.
METHODS
The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value.
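As a hedged illustration of steps (i)-(iv), the Python sketch below simulates trials from a simple data-generating model, fits the first-stage regression within each trial, pools the interaction estimates with a fixed-effect second stage, and counts significant results. All parameter values (trial sizes, effect sizes, standard deviations) are hypothetical placeholders, not estimates from any real project.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def ipd_power(n_per_trial, n_sims=1000, alpha=0.05, interaction=-0.1,
              treat_effect=-2.0, control_mean=12.0, bmi_sd=5.0, resid_sd=4.0):
    """Power of a two-stage IPD meta-analysis to detect a treatment-covariate
    interaction on a continuous outcome (all parameter values are placeholders)."""
    hits = 0
    for _ in range(n_sims):
        ests, variances = [], []
        for n in n_per_trial:
            x = rng.normal(0.0, bmi_sd, n)            # covariate, centred within trial
            t = rng.integers(0, 2, n).astype(float)   # 1:1 randomisation
            y = (control_mean + treat_effect * t + 0.2 * x
                 + interaction * t * x + rng.normal(0.0, resid_sd, n))
            X = np.column_stack([np.ones(n), t, x, t * x])
            coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)  # first stage: OLS per trial
            sigma2 = rss[0] / (n - X.shape[1])
            cov = sigma2 * np.linalg.inv(X.T @ X)
            ests.append(coef[3]); variances.append(cov[3, 3])
        w = 1.0 / np.asarray(variances)               # second stage: fixed-effect pooling
        pooled = np.sum(w * np.asarray(ests)) / np.sum(w)
        p = 2 * stats.norm.sf(abs(pooled) * np.sqrt(np.sum(w)))
        hits += p < alpha
    return hits / n_sims

# e.g. 14 trials of ~85 patients each; interaction = -0.1 kg per BMI unit,
# i.e. 1 kg greater reduction in weight gain per 10-unit increase in BMI
print(ipd_power(n_per_trial=[85] * 14))
```

A random-effects second stage, or pre-specified adjustment for prognostic factors, can be explored in the same way by changing the data-generating model and the pooling step.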
RESULTS
In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies the intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has <60% power to detect a reduction of 1 kg in weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis were appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, a loss equivalent to discarding the IPD from ten trials at the outset.
CONCLUSIONS
Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
Minimum sample size for external validation of a clinical prediction model with a continuous outcome.
Clinical prediction models provide individualized outcome predictions to inform patient counseling and clinical decision making. External validation is the process of examining a prediction model's performance in data independent of that used for model development. Current external validation studies often suffer from small sample sizes, and consequently imprecise estimates of a model's predictive performance. To address this, we propose how to determine the minimum sample size needed for external validation of a clinical prediction model with a continuous outcome. Four criteria are proposed that target precise estimates of (i) R² (the proportion of variance explained), (ii) calibration-in-the-large (agreement between predicted and observed outcome values on average), (iii) the calibration slope (agreement between predicted and observed values across the range of predicted values), and (iv) the variance of observed outcome values. Closed-form sample size solutions are derived for each criterion, which require the user to specify anticipated values of the model's performance (in particular R²) and the outcome variance in the external validation dataset. A sensible starting point is to base values on those for the model development study, as obtained from the publication or study authors. The largest sample size required to meet all four criteria is the recommended minimum sample size needed in the external validation dataset. The calculations can also be applied to estimate expected precision when an existing dataset with a fixed sample size is available, to help gauge whether it is adequate. We illustrate the proposed methods on a case study predicting fat-free mass in children.
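The paper's closed-form solutions should be used in practice; purely to illustrate the "largest sample size across criteria" logic, the sketch below substitutes generic large-sample variance approximations (e.g. var(R̂²) ≈ 4R²(1−R²)²/n and var(slope) ≈ (1−R²)/(nR²)) for the authors' exact formulas, with invented anticipated values.

```python
import math

Z = 1.96  # 95% confidence

def n_r2(r2, ci_width=0.1):
    # criterion (i): precise R2; large-sample var(R2_hat) ~ 4*R2*(1-R2)^2 / n
    return math.ceil(4 * r2 * (1 - r2) ** 2 / (ci_width / (2 * Z)) ** 2)

def n_citl(r2, var_y, ci_width=0.5):
    # criterion (ii): calibration-in-the-large; SE ~ sqrt(residual variance / n),
    # taking residual variance as var_y * (1 - R2)
    return math.ceil(var_y * (1 - r2) / (ci_width / (2 * Z)) ** 2)

def n_slope(r2, ci_width=0.2):
    # criterion (iii): calibration slope; var(slope_hat) ~ (1 - R2) / (n * R2)
    return math.ceil((1 - r2) / (r2 * (ci_width / (2 * Z)) ** 2))

def n_var_y(rel_se=0.1):
    # criterion (iv): variance of observed outcomes; SE(var_hat)/var ~ sqrt(2/(n-1))
    return math.ceil(1 + 2 / rel_se ** 2)

r2, var_y = 0.6, 100.0   # anticipated values, e.g. taken from the development study
n_min = max(n_r2(r2), n_citl(r2, var_y), n_slope(r2), n_var_y())
print(f"recommended minimum n = {n_min}")  # the largest criterion governs
```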
Calculating the sample size required for developing a clinical prediction model.
Clinical prediction models aim to predict outcomes in individuals, to inform diagnosis or prognosis in healthcare. Hundreds of prediction models are published in the medical literature each year, yet many are developed using a dataset that is too small in terms of the total number of participants or outcome events. This leads to inaccurate predictions and consequently incorrect healthcare decisions for some individuals. In this article, the authors provide guidance on how to calculate the sample size required to develop a clinical prediction model.
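As a minimal sketch of one such criterion (not the full guidance), the code below implements a shrinkage-based sample size formula of the form n = p / ((S − 1) ln(1 − R²/S)), targeting expected shrinkage S of at least 0.9; the anticipated R² and number of candidate parameters are placeholders, and in practice dedicated software such as the pmsampsize package covers the complete set of criteria.

```python
import math

def n_shrinkage(p, r2_adj, shrinkage=0.9):
    """Sample size so that the expected uniform shrinkage of predictor effects
    is at least `shrinkage` (0.9 = at most 10% overfitting).
    p: number of candidate predictor parameters.
    r2_adj: anticipated (adjusted) model R-squared."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_adj / shrinkage)))

# e.g. 20 candidate parameters and an anticipated R2 of 0.25 (placeholder values)
print(n_shrinkage(p=20, r2_adj=0.25))  # -> 615 participants
```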
Resisting and conforming to the ‘lesbian look’: the importance of appearance norms for lesbian and bisexual women
Appearance is one way in which lesbian and bisexual identities and affiliation to lesbian, gay, and bisexual (LGB) subculture can be demonstrated. ‘Butch’ and ‘androgynous’ styles have been used by lesbian women to communicate a non-heterosexual identity. However, some LGB appearance researchers have argued that there has been a mainstreaming and diversification of lesbian style over the last couple of decades, which has resulted in less distinction between lesbian and straight looks. This research draws on the Social Identity approach to explore contemporary style in lesbian and bisexual communities. Fifteen lesbian and bisexual women took part in semi-structured interviews, which were analysed using thematic analysis. Although some participants reported a diversification of lesbian style, most used the term ‘butch’ to describe lesbian style, and a ‘boyish’ look was viewed as the most common contemporary lesbian style. By contrast, most participants could not identify distinct bisexual appearance norms. The data provide evidence of conflicting desires (and expectations) to visibly project social identity by conforming to specific lesbian styles, and to be an authentic, unique individual by resisting these subcultural styles.
Predictors of outcome in sciatica patients following an epidural steroid injection: the POiSE prospective observational cohort study protocol
INTRODUCTION: Sciatica can be very painful and, in most cases, is due to pressure on a spinal nerve root from a disc herniation with associated inflammation. For some patients, the pain persists, and one management option is a spinal epidural steroid injection (ESI). The aim of an ESI is to relieve leg pain, improve function and reduce the need for surgery. ESIs work well in some patients but not in others, yet these patient subgroups cannot currently be identified. This study aims to identify factors, including patient characteristics, clinical examination and imaging findings, that help in predicting who does well and who does not after an ESI. The overall objective is to develop a prognostic model to support individualised patient and clinical decision-making regarding ESI. METHODS: POiSE is a prospective cohort study of 439 patients with sciatica referred by their clinician for an ESI. Participants will receive weekly text messages until 12 weeks after their ESI, and then again at 24 weeks, to collect data on leg pain severity. Questionnaires will be sent to participants at baseline and at 6, 12 and 24 weeks after their ESI to collect data on pain, disability, recovery and additional interventions. The prognosis for the cohort will be described. The primary outcome measure for the prognostic model is leg pain at 6 weeks. Prognostic models will also be developed for the secondary outcomes of disability and recovery at 6 weeks and additional interventions at 24 weeks following ESI. Statistical analyses will include multivariable linear and logistic regression with mixed-effects models. ETHICS AND DISSEMINATION: The POiSE study has received ethical approval (South Central Berkshire B Research Ethics Committee 21/SC/0257). Dissemination will be guided by our patient and public engagement group and will include scientific publications, conference presentations and social media.
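To illustrate the kind of mixed-effects analysis the protocol describes (this is not the POiSE analysis plan itself), here is a minimal Python sketch using statsmodels on synthetic long-format data; all variable names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the cohort data: one row per participant per
# follow-up week, in long format. Variable names are hypothetical, not POiSE's.
rng = np.random.default_rng(1)
n, weeks = 200, [0, 6, 12, 24]
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), len(weeks)),
    "week": np.tile(weeks, n),
    "age": np.repeat(rng.normal(50, 12, n), len(weeks)),
    "baseline_pain": np.repeat(rng.uniform(3, 9, n), len(weeks)),
})
df["leg_pain"] = (0.6 * df["baseline_pain"] - 0.05 * df["week"]
                  + np.repeat(rng.normal(0, 1, n), len(weeks))   # patient-level effect
                  + rng.normal(0, 1, len(df)))                   # measurement noise

# Linear mixed model: repeated leg-pain scores clustered within participants,
# candidate prognostic factors as fixed effects, random intercept per patient.
fit = smf.mixedlm("leg_pain ~ week + age + baseline_pain",
                  data=df, groups=df["id"]).fit()
print(fit.summary())
```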
Evaluation of clinical prediction models (part 2): how to undertake an external validation study
External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high-quality dataset to evaluating a model's predictive performance and clinical usefulness.
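As a flavour of the quantitative steps involved, the sketch below computes three standard validation measures for a risk model with a binary outcome (calibration-in-the-large, calibration slope, and the c-statistic) on synthetic data; the article itself covers the full set of performance and clinical-usefulness measures.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for validation data: predicted risks from an existing
# model and observed binary outcomes in an independent dataset.
rng = np.random.default_rng(7)
lp_true = rng.normal(-1.0, 1.2, 2000)                 # true linear predictor
observed = rng.binomial(1, 1 / (1 + np.exp(-lp_true)))
pred_risk = 1 / (1 + np.exp(-0.8 * lp_true))          # a miscalibrated model
lp = np.log(pred_risk / (1 - pred_risk))              # logit of predicted risk

# Calibration-in-the-large: intercept of a logistic model with lp as offset.
citl = sm.GLM(observed, np.ones((len(lp), 1)),
              family=sm.families.Binomial(), offset=lp).fit().params[0]

# Calibration slope: coefficient of lp when the model is refitted here.
slope = sm.GLM(observed, sm.add_constant(lp),
               family=sm.families.Binomial()).fit().params[1]

# Discrimination: c-statistic (area under the ROC curve).
c_stat = roc_auc_score(observed, pred_risk)
print(f"CITL={citl:.2f}  slope={slope:.2f}  c-statistic={c_stat:.2f}")
```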
Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist
The increasing availability of large combined datasets (or big data), such as those from electronic health records and from individual participant data meta-analyses, provides new opportunities and challenges for researchers developing and validating (including updating) prediction models. These datasets typically include individuals from multiple clusters (such as multiple centres, geographical locations, or different studies). Accounting for clustering is important to avoid misleading conclusions, and it enables researchers to explore heterogeneity in prediction model performance across centres, regions, or countries, to better tailor or match models to these different clusters, and thus to develop prediction models that are more generalisable. However, this requires prediction model researchers to adopt more specific design, analysis, and reporting methods than standard prediction model studies without any inherent substantial clustering. Prediction model studies based on clustered data therefore need to be reported differently, so that readers can appraise the study methods and findings, thereby increasing the use and implementation of prediction models developed or validated using clustered datasets.
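One common way to examine such heterogeneity is to estimate a performance measure separately in each cluster and then pool the cluster-specific estimates with a random-effects meta-analysis. Below is a minimal DerSimonian-Laird sketch; the cluster-specific calibration slopes and variances are invented for illustration.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of cluster-specific performance estimates
    (e.g. per-study calibration slopes), DerSimonian-Laird heterogeneity."""
    est, v = np.asarray(estimates), np.asarray(variances)
    w = 1.0 / v
    fixed = np.sum(w * est) / np.sum(w)                       # fixed-effect pooled value
    q = np.sum(w * (est - fixed) ** 2)                        # Cochran's Q
    tau2 = max(0.0, (q - (len(est) - 1))
               / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))    # between-cluster variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2

# e.g. calibration slopes estimated in five clusters (hypothetical values)
print(dersimonian_laird([0.81, 0.95, 1.10, 0.88, 1.02],
                        [0.010, 0.015, 0.020, 0.012, 0.018]))
```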
Transparent reporting of multivariable prediction models developed or validated using clustered data (TRIPOD-Cluster): explanation and elaboration
The TRIPOD-Cluster (transparent reporting of multivariable prediction models developed or validated using clustered data) statement comprises a 19 item checklist, which aims to improve the reporting of studies developing or validating a prediction model in clustered data, such as individual participant data meta-analyses (clustering by study) and electronic health records (clustering by practice or hospital). This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD-Cluster statement is explained in detail and accompanied by published examples of good reporting. The document also serves as a reference for factors to consider when designing, conducting, and analysing prediction model development or validation studies in clustered data. To aid the editorial process and to help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, authors are recommended to include a completed checklist in their submission.
Guide to presenting clinical prediction models for use in clinical settings
Clinical prediction models estimate the risk of existing disease or a future outcome for an individual, conditional on the values of multiple predictors such as age, sex, and biomarkers. In this article, Bonnett and colleagues provide a guide to presenting clinical prediction models so that they can be implemented in practice, if appropriate. They describe how to create four presentation formats and discuss the advantages and disadvantages of each. A key message is the need for stakeholder engagement to determine the best presentation option in relation to the clinical context of use and the intended users.