
    Reliability of Hallux Rigidus Radiographic Grading System

    Introduction. The purpose of this study was to determine the inter- and intra-observer reliability of a clinical radiographic grading scale for hallux rigidus. Methods. A total of 80 patients were retrospectively selected from the patient population of two foot and ankle orthopaedic surgeons. Each corresponding series of radiographic images (weight-bearing anteroposterior, weight-bearing lateral, and oblique views of the foot) was randomized and evaluated. The images were then re-randomized and re-numbered. Four orthopaedic foot and ankle surgeons graded each patient, and each rater reclassified the re-randomized images three weeks later. Results. Sixty-one of the 80 patients (76%) were included in the study. For intra-observer reliability, most raters showed “excellent” agreement; one rater showed only “substantial” agreement. For inter-observer reliability, only 14 of 61 cases (23%) showed total agreement across the eight readings from the four surgeons, and 11 of those 14 cases (79%) were grade 3 hallux rigidus. One rater tended to assign higher grades, resulting in poorer agreement; with this rater excluded, the classification demonstrated “substantial” agreement. Conclusion. The hallux rigidus radiographic grading system should be used with caution: although intra-observer agreement is “excellent”, inter-observer reliability is only “moderate” to “substantial”.
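    The abstract does not state which agreement statistic underlies the “excellent”/“substantial” labels; Cohen's kappa (often a weighted variant for ordinal grades) is the usual choice for this kind of rating study. As an illustration only, a minimal unweighted kappa on hypothetical gradings:

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Unweighted Cohen's kappa: observed agreement between two sets of
    gradings, corrected for the agreement expected by chance alone."""
    assert len(grades_a) == len(grades_b)
    n = len(grades_a)
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    # Chance agreement assumes the two gradings are independent
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    expected = sum(freq_a[g] * freq_b.get(g, 0) for g in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical hallux rigidus grades (0-3) from one rater's two readings
# of the same re-randomized films -- not data from the study
reading_1 = [3, 2, 3, 1, 0, 3, 2, 2, 3, 1]
reading_2 = [3, 2, 3, 1, 1, 3, 2, 3, 3, 1]
kappa = cohens_kappa(reading_1, reading_2)  # intra-observer agreement
```

    Conventional benchmarks then map the kappa value to labels such as “substantial” (roughly 0.61–0.80) or “excellent”/“almost perfect” (above 0.80).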

    Eastern Europe’s “Transitional Industry”? : Deconstructing the Early Streletskian

    Acknowledgements. We are very grateful to many friends and colleagues for discussions and assistance, including Yuri Demindenko, Evgeny Giria, Brad Gravina, Anton Lada, Sergei Lisitsyn and Alexander Otcherednoy. Needless to say, they may or may not agree with our conclusions. We also thank Jesse Davies and Craig Williams for help with the illustrations and figures. Ekaterina Petrova kindly helped with identifying some of the sampled bones. We thank the staff of the Oxford Radiocarbon Accelerator Unit at the University of Oxford for their support with the chemical preparation and measurement of the samples. We are also grateful to the three anonymous reviewers for their thoughtful and constructive comments, which helped improve the paper. This paper is a contribution to Leverhulme Trust project RPG-2012-800. The research leading to some of our radiocarbon results received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013); ERC grant 324139 “PalaeoChron” awarded to Professor Tom Higham. AB and AS acknowledge Russian Science Foundation grant number 20-78-10151 and Russian Foundation for Basic Research grant numbers 18-39-20009 and 20-09-00233 for support of their work. We also acknowledge the participation of IHMC RAS (state assignment 0184-2019-0001) and ZIN RAS (state assignment АААА-А19-119032590102-7).

    Clinical prediction models and the multiverse of madness

    Background Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) that inform individual diagnosis and prognosis in healthcare. However, most are not reliable enough for use in clinical practice. Main body We discuss how the creation of a prediction model (e.g. using regression or machine learning methods) depends on the particular sample, and the size of the sample, used to develop it: had a different sample of the same size been drawn from the same overarching population, the developed model could be very different even with the same model development methods. In other words, for each model created there exists a multiverse of other potential models for that sample size, and, crucially, an individual’s predicted value (e.g. estimated risk) may vary greatly across this multiverse. The more an individual’s prediction varies across the multiverse, the greater the instability. We show how small development datasets lead to more divergent models in the multiverse, often with vastly unstable individual predictions, and explain how this can be exposed by bootstrapping the development dataset and presenting instability plots. We recommend that healthcare researchers use large model development datasets to reduce instability concerns. This is especially important for ensuring reliability across subgroups and improving model fairness in practice. Conclusions Instability is concerning because an individual’s predicted value is used to guide their counselling, resource prioritisation, and clinical decision making. If different samples lead to different models with very different predictions for the same individual, this should cast doubt on using any particular model for that individual. Therefore, visualising, quantifying and reporting the instability in individual-level predictions is essential when proposing a new model.
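    The bootstrapping idea described above can be sketched as follows. Everything here is hypothetical rather than taken from the paper: a one-predictor logistic model fitted by plain gradient ascent on simulated data, refitted on bootstrap resamples to show how one patient's estimated risk varies across the resulting “multiverse” of models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.5, iters=500):
    """Logistic regression by plain gradient ascent (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_risk(w, x_new):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x_new)))

# Simulated small development dataset: one predictor, known true model
n = 200
x = rng.normal(size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(-1.0 + x)))).astype(float)

# Refit the model on bootstrap resamples and track one patient's risk
x_patient = 0.5
risks = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)          # resample with replacement
    w = fit_logistic(x[idx].reshape(-1, 1), y[idx])
    risks.append(predict_risk(w, x_patient))

# A wide 5th-95th percentile interval signals unstable predictions;
# plotting `risks` per patient gives an instability plot
lo, hi = np.percentile(risks, [5, 95])
```

    Repeating this for every individual in the dataset, and plotting each bootstrap prediction against the original model's prediction, yields the kind of instability plot the authors recommend.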

    Cohort Multiple Randomised Controlled Trials (cmRCT) design: efficient but biased? A simulation study to evaluate the feasibility of the Cluster cmRCT design

    Background The Cohort Multiple Randomised Controlled Trial (cmRCT) is a newly proposed pragmatic trial design, and several cmRCTs have recently been initiated. This study addresses the unresolved question of whether differential refusal in the intervention arm leads to bias or loss of statistical power, and how to deal with it. Methods We conducted simulations evaluating a hypothetical cluster cmRCT in patients at risk of cardiovascular disease (CVD). To deal with refusal, we compared the analysis methods intention to treat (ITT) and per protocol (PP) and two instrumental variable (IV) methods, two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI), with respect to their bias and power. We varied the correlation between the probability of refusing treatment and the probability of experiencing the outcome to create different scenarios. Results We found ITT to be biased in all scenarios, PP the most biased when the correlation is strong, and 2SRI the least biased on average. Trials suffer a drop in power unless the refusal rate is factored into the power calculation. Conclusions The ITT effect in routine practice is likely to lie somewhere between the ITT and IV estimates from the trial, which differ substantially depending on refusal rates. More research is needed on how refusal rates of experimental interventions correlate with refusal rates in routine practice, to help answer the question of which analysis is more relevant. We also recommend updating the required sample size during the trial as more information about the refusal rate is gained.
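    As a rough illustration of why an IV analysis helps when refusal is correlated with the outcome, here is a sketch of two-stage residual inclusion (2SRI) in a deliberately simplified linear setting with simulated data. The data-generating model and variable names are invented for illustration and are not the study's simulation; in the linear case shown, 2SRI coincides with classical two-stage least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.integers(0, 2, size=n)                       # randomised offer (instrument)
u = rng.normal(size=n)                               # unmeasured confounder
accept = rng.random(n) < 1.0 / (1.0 + np.exp(u))     # refusal depends on u
d = ((z == 1) & accept).astype(float)                # treatment actually received
y = 1.0 * d + u + rng.normal(size=n)                 # true treatment effect = 1.0

def ols(cols, y):
    """OLS with intercept; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Per-protocol-style contrast of treated vs untreated: confounded by u,
# because acceptors differ systematically from refusers
naive = ols([d], y)[1]

# 2SRI: stage 1 regresses treatment received on the instrument;
# stage 2 adds the stage-1 residual to the outcome model alongside receipt
stage1 = ols([z.astype(float)], d)
resid = d - (stage1[0] + stage1[1] * z)
effect_2sri = ols([d, resid], y)[1]    # close to the true effect of 1.0
```

    The naive estimate is pulled away from 1.0 by the confounder, while the 2SRI estimate recovers it, mirroring the abstract's finding that 2SRI is the least biased on average when refusal correlates with outcome risk.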

    Minimum sample size for developing a multivariable prediction model using multinomial logistic regression

    Aims Multinomial logistic regression models allow one to predict the risk of a categorical outcome with more than two categories. When developing such a model, researchers should ensure the number of participants (n) is appropriate relative to the number of events (Ek) and the number of predictor parameters (pk) for each category k. We propose three criteria to determine the minimum n required, in light of existing criteria developed for binary outcomes. Proposed criteria The first criterion aims to minimise model overfitting. The second aims to minimise the difference between the observed and adjusted Nagelkerke R2. The third aims to ensure the overall risk is estimated precisely. For criterion (i), we show the sample size must be based on the anticipated Cox-Snell R2 of the distinct ‘one-to-one’ logistic regression models corresponding to the sub-models of the multinomial logistic regression, rather than on the overall Cox-Snell R2 of the multinomial model. Evaluation of criteria We tested the performance of criterion (i) through a simulation study and found that it resulted in the desired level of overfitting. Criteria (ii) and (iii) are natural extensions of previously proposed criteria for binary outcomes and did not require evaluation through simulation. Summary We illustrate how to implement the sample size criteria through a worked example considering the development of a multinomial risk prediction model for tumour type when presented with an ovarian mass. Code is provided for the simulation and worked example. We will embed our proposed criteria within the pmsampsize R library and Stata modules.
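    Criterion (i) builds on the established binary-outcome formula: the smallest n giving an expected uniform shrinkage of at least 0.9 for p parameters and an anticipated Cox-Snell R2. A sketch of applying that binary formula per ‘one-to-one’ sub-model, with invented R2 values; note the full multinomial criteria also involve per-category event counts and proportions not reproduced here:

```python
import math

def min_n_binary(p, r2_cs, shrinkage=0.9):
    """Minimum sample size for one binary logistic (sub)model: the
    smallest n with expected uniform shrinkage >= `shrinkage`, given p
    predictor parameters and an anticipated Cox-Snell R2."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

# Hypothetical 3-category outcome: anticipated Cox-Snell R2 for each
# 'one-to-one' logistic sub-model, each with 5 predictor parameters
r2_by_submodel = {"1v2": 0.10, "1v3": 0.15, "2v3": 0.05}
n_by_submodel = {pair: min_n_binary(5, r2)
                 for pair, r2 in r2_by_submodel.items()}
n_required = max(n_by_submodel.values())  # the most demanding pair binds
```

    Note how the sub-model with the weakest anticipated R2 (here “2v3”) demands the largest n, which is why the criterion works sub-model by sub-model rather than from the overall multinomial R2.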

    The Strayed Reveller, No. 2

    The second issue of The Strayed Reveller. https://scholarworks.sfasu.edu/reveller/1001/thumbnail.jp

    Impact of sample size on the stability of risk scores from clinical prediction models: a case study in cardiovascular disease

    From Springer Nature via Jisc Publications Router. History: received 2020-02-25, accepted 2020-08-12, registration 2020-08-13, pub-electronic 2020-09-09, online 2020-09-09, collection 2020-12. Publication status: Published. Funder: Medical Research Council; doi: http://dx.doi.org/10.13039/501100000265; Grant(s): MR/N013751/1. Abstract: Background: Stability of risk estimates from prediction models may be highly dependent on the sample size of the dataset available for model derivation. In this paper, we evaluate the stability of cardiovascular disease risk scores for individual patients when different sample sizes are used for model derivation; these include sample sizes similar to those used for the models recommended in national guidelines, and sizes based on a recently published sample size formula for prediction models. Methods: We mimicked the process of sampling N patients from a population to develop a risk prediction model by sampling patients from the Clinical Practice Research Datalink. A cardiovascular disease risk prediction model was developed on each sample and used to generate risk scores for an independent cohort of patients. This process was repeated 1000 times, giving a distribution of risks for each patient. N = 100,000, 50,000, 10,000, Nmin (derived from the sample size formula) and Nepv10 (meeting the 10-events-per-predictor rule) were considered. The 5th–95th percentile range of risks across these models was used to evaluate instability. Patients were grouped by the risk derived from a model developed on the entire population (population-derived risk) to summarise results. Results: For a sample size of 100,000, the median 5th–95th percentile range of risks for patients across the 1000 models was 0.77%, 1.60%, 2.42% and 3.22% for patients with population-derived risks of 4–5%, 9–10%, 14–15% and 19–20% respectively; for N = 10,000 it was 2.49%, 5.23%, 7.92% and 10.59%, and for the formula-derived sample size it was 6.79%, 14.41%, 21.89% and 29.21%. Restricting the analysis to models with high discrimination, good calibration or small mean absolute prediction error reduced the percentile range, but high levels of instability remained. Conclusions: Widely used cardiovascular disease risk prediction models suffer from high levels of instability induced by sampling variation. Many models will also suffer from overfitting (a closely linked concept), but even at acceptable levels of overfitting there may still be high instability in individual risks. Stability of risk estimates should be a criterion when determining the minimum sample size needed to develop models.