
    Workforce diversity in specialist physicians: Implications of findings for religious affiliation in Anaesthesia & Intensive Care

    BACKGROUND: Minority ethnic identification between physician and patient can reduce communication and access barriers and improve the physician-patient relationship, trust, and health outcomes. Religion influences health beliefs, behaviours, treatment decisions, and outcomes. Ethically contentious dilemmas in treatment decisions are often entangled with religious beliefs. Such dilemmas feature prominently in medical specialties such as Anaesthesia & Intensive Care, with issues including informed consent for surgery, organ donation, transplantation, transfusion, and end-of-life decisions. METHODS: We investigated diversity of religious affiliation in the UK medical workforce, using data from the General Medical Council (GMC) specialist register and Health Education England (HEE) trainee applications to medical specialties. We performed Chi-squared tests with conservative Bonferroni corrections for multiple comparisons, and computed normalised mutual-information scores. We report only robust associations that persisted across all sensitivity analyses, investigating whether ethnicity or foreign primary medical qualification could explain the underlying association. FINDINGS: The only significant and robust association affecting the same religious group and specialty in both the GMC and HEE datasets was disproportionately fewer Anaesthesia & Intensive Care physicians with a religious affiliation of “Muslim”, both as consultants (RR 0.57 [0.47, 0.70]) and as trainee applicants (RR 0.27 [0.19, 0.38]). The associations were not explained by ethnicity or foreign training. We discuss the myriad implications of these findings for multi-cultural societies. CONCLUSIONS: Lack of physician workforce diversity has far-reaching consequences, especially for specialties such as Anaesthesia and Intensive Care, where ethically contentious decisions can have a large impact.
Religious beliefs and practices, or the lack thereof, may have unmeasured influences on clinical decisions and on whether patients identify with physicians, which in turn can affect health outcomes. Examining an influencing variable such as religion in healthcare decisions should be prioritised, especially considering findings from the clinician-patient concordance literature. It is also important to explore potential historical and socio-cultural barriers to entry for medical trainees into under-represented specialties such as Anaesthesia and Intensive Care.
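The statistical approach described in the methods can be sketched as follows. This is a minimal illustration on made-up counts, not the study's data: a Chi-squared test on a 2x2 contingency table for one specialty-affiliation pair, a Bonferroni-corrected significance threshold, and a relative-risk estimate.

```python
# Hedged sketch of the analysis style described above, on hypothetical counts.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = [specialty, rest of workforce],
# columns = [affiliation of interest, all other affiliations]
table = [[30, 970], [700, 9300]]
chi2, p, dof, expected = chi2_contingency(table)

# Bonferroni correction: divide alpha by the number of comparisons made
n_comparisons = 100                      # e.g. specialties x religious groups
alpha_corrected = 0.05 / n_comparisons   # conservative threshold

# Relative risk: proportion within the specialty vs outside it
rr = (30 / 1000) / (700 / 10000)
print(f"chi2={chi2:.2f}, p={p:.2e}, "
      f"significant={p < alpha_corrected}, RR={rr:.2f}")
```

With a larger numerator imbalance the association survives the corrected threshold; in the study, associations were additionally required to persist across sensitivity analyses before being reported.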

    Validation of a rapid remote digital test for impaired cognition using clinical dementia rating and mini-mental state examination: An observational research study

    BACKGROUND: The Clinical Dementia Rating (CDR) and Mini-Mental State Examination (MMSE) are useful screening tools for mild cognitive impairment (MCI). However, these tests require qualified in-person supervision, and the CDR can take up to 60 min to complete. We developed a digital cognitive screening test (M-CogScore) that can be completed remotely in under 5 min without supervision. We set out to validate M-CogScore in head-to-head comparisons with the CDR and MMSE. METHODS: To ascertain the validity of the M-CogScore, we enrolled participants as healthy controls or participants with impaired cognition, matched for age, sex, and education. Participants completed the 30-item paper MMSE Second Edition Standard Version (MMSE-2), the paper CDR, and the smartphone-based M-CogScore. The digital M-CogScore test is based on time-normalised scores from smartphone-adapted Stroop (M-Stroop), digit-symbol (M-Symbols), and delayed recall (M-Memory) tests. We used Spearman's correlation coefficient to determine the convergent validity between M-CogScore and the 30-item MMSE-2, and non-parametric tests to determine its discriminative validity against a CDR label of normal (CDR 0) or impaired cognition (CDR 0.5 or 1). M-CogScore was further compared to the MMSE-2 using the area under the receiver operating characteristic curve (AUC) with corresponding optimal cut-offs. RESULTS: 72 participants completed all three tests. M-CogScore correlated with both the MMSE-2 (rho = 0.54, p < 0.0001) and impaired cognition on the CDR (Mann-Whitney U = 187, p < 0.001). M-CogScore achieved an AUC of 0.85 (95% bootstrapped CI [0.80, 0.91]) when differentiating between normal and impaired cognition, compared to an AUC of 0.78 [0.72, 0.84] for the MMSE-2 (p = 0.21). CONCLUSION: Digital screening tests such as M-CogScore are desirable aids for rapid and remote clinical cognitive evaluations. M-CogScore was significantly correlated with established cognitive tests, including the CDR and MMSE-2.
M-CogScore can be taken remotely without supervision, is automatically scored, has less of a ceiling effect than the MMSE-2, and takes significantly less time to complete.
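The three validation statistics named above (Spearman's rho, Mann-Whitney U, and AUC) can be sketched on synthetic scores. The data below are fabricated for illustration only; the group sizes and the direction of the effect (impaired participants scoring lower) are assumptions, not the study's data.

```python
# Hedged sketch of the validation statistics, on synthetic scores.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([0] * 36 + [1] * 36)             # 0 = normal, 1 = impaired
m_cogscore = rng.normal(50, 10, 72) - 15 * labels  # impaired group scores lower
mmse = 0.6 * m_cogscore + rng.normal(0, 5, 72)     # correlated comparator test

rho, p_rho = spearmanr(m_cogscore, mmse)           # convergent validity
u_stat, p_u = mannwhitneyu(m_cogscore[labels == 0],
                           m_cogscore[labels == 1])  # discriminative validity
auc = roc_auc_score(labels, -m_cogscore)           # lower score => impaired
print(f"rho={rho:.2f}, Mann-Whitney p={p_u:.1e}, AUC={auc:.2f}")
```

On real data, an optimal cut-off would then be read off the ROC curve, and the AUC confidence interval estimated by bootstrapping, as reported in the abstract.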

    Simulation of Brain Resection for Cavity Segmentation Using Self-Supervised and Semi-Supervised Learning

    Resective surgery may be curative for drug-resistant focal epilepsy, but only 40% to 70% of patients achieve seizure freedom after surgery. Retrospective quantitative analysis could elucidate patterns in resected structures and patient outcomes to improve resective surgery. However, the resection cavity must first be segmented on the postoperative MR image. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but require large amounts of annotated data for training. Annotation of medical images is a time-consuming process that requires highly trained raters and often suffers from high inter-rater variability. Self-supervised learning can be used to generate training instances from unlabelled data. We developed an algorithm to simulate resections on preoperative MR images. We curated a new dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images from 431 patients who underwent resective surgery. In addition to EPISURG, we used three public datasets comprising 1813 preoperative MR images for training. We trained a 3D CNN on artificially resected images created on the fly during training, using images from (1) EPISURG, (2) the public datasets, and (3) both. To evaluate the trained models, we calculated the Dice score (DSC) between model segmentations and 200 manual annotations performed by three human raters. The model trained on data with manual annotations obtained a median (interquartile range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with no manual annotations, was 81.7 (14.2). For comparison, inter-rater agreement between human annotators was 84.0 (9.9). We demonstrate a training method for CNNs using simulated resection cavities that can accurately segment real resection cavities without manual annotations.
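The Dice score (DSC) used as the evaluation metric above is a standard overlap measure between two binary masks. The sketch below computes it on small synthetic 3D volumes standing in for a model segmentation and a manual annotation; the toy cube masks are illustrative, not derived from EPISURG.

```python
# Hedged sketch: Dice similarity coefficient on toy 3D binary masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks, in [0, 1]."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy volumes: two overlapping 6x6x6 cubes standing in for cavity masks
pred = np.zeros((10, 10, 10), dtype=bool)
target = np.zeros((10, 10, 10), dtype=bool)
pred[2:8, 2:8, 2:8] = True    # 216 voxels
target[3:9, 3:9, 3:9] = True  # 216 voxels; overlap is 5x5x5 = 125
print(f"DSC = {dice_score(pred, target):.3f}")  # 2*125/432 ~= 0.579
```

A DSC of 1 means perfect overlap and 0 means none; the abstract's values (e.g. 81.7) are DSCs expressed as percentages.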