
    The Significance of Site of Cut in Self-Harm in Young People

    Background: Self-cutting in young people is associated with a high risk of repetition and suicide. It is therefore important to identify characteristics of self-cutting that might influence repetition and the care provided by staff. This study aimed to explore differences in the clinical (e.g., previous self-harm) and psychological characteristics (intent, mental state, precipitants) of self-cutting in young people according to whether the site of cutting was visible or concealed. Methods: Data were drawn from a large prospective self-harm monitoring database covering hospital emergency department presentations for self-harm in the City of Manchester, UK, between 2005 and 2011. Clinical and psychological characteristics, as well as onward referral/clinical management from the emergency department, were compared using logistic regression for 799 young people aged 15–24 (1,196 episodes in total) who cut themselves in visible or concealed areas. Results: During the study period, 500 (40%) episodes involved a concealed location. Concealed self-cutting was more likely to be precipitated by specific self-reported precipitants such as abuse, and was characterised by previous self-harm, current psychiatric treatment, premeditation, and a greater risk of repetition within the study period. However, these episodes were less likely to result in a psychosocial assessment or referral to psychiatric services from the emergency department. Repetition and referral to psychiatric treatment were not significantly associated with site of injury after adjusting for other factors. Conclusions: There are meaningful differences in the characteristics associated with location of cut. We recommend that all young people who present to hospital following self-harm receive a psychosocial assessment, in line with NICE guidance.

    Prevalence of Frailty in European Emergency Departments (FEED): an international flash mob study


    The Analysis of Teaching of Medical Schools (AToMS) survey: an analysis of 47,258 timetabled teaching events in 25 UK medical schools relating to timing, duration, teaching formats, teaching content, and problem-based learning.

    BACKGROUND: It is unclear what subjects UK medical schools teach, how they teach them, and how much teaching they provide. Whether such differences in teaching matter is a separate, important question. This study provides a detailed picture of timetabled undergraduate teaching activity at 25 UK medical schools, particularly in relation to problem-based learning (PBL). METHOD: The Analysis of Teaching of Medical Schools (AToMS) survey used detailed timetables provided by 25 schools with standard 5-year courses. Timetabled teaching events were coded by course year, duration, teaching format, and teaching content. Ten schools used PBL. Teaching times from timetables were validated against two other studies that had assessed GP teaching and lecture, seminar, and tutorial times. RESULTS: A total of 47,258 timetabled teaching events in the academic year 2014/2015 were analysed, including student-selected components (SSCs) and elective studies. A typical UK medical student receives 3,960 timetabled hours of teaching during their 5-year course. There was a clear difference between the first 2 years, which mostly contained basic medical science content, and the later 3 years, which mostly consisted of clinical teaching, although some clinical teaching occurred in the first 2 years. Medical schools differed in the duration, format, and content of their teaching. Two main factors underlay most of the variation between schools: Traditional vs PBL teaching, and Structured vs Unstructured teaching. A curriculum map comparing medical schools was constructed using these factors. PBL schools differed on a number of measures, having more PBL teaching time, fewer lectures, more GP teaching, less surgery, less formal teaching of basic science, and more sessions with unspecified content. DISCUSSION: UK medical schools differ in both the format and content of their teaching. PBL and non-PBL schools clearly differ, albeit with substantial variation within each group and overlap in the middle. The important question of whether differences in teaching matter in terms of outcomes is analysed in a companion study (MedDifs), which examines how teaching differences relate to university infrastructure, entry requirements, student perceptions, and outcomes in the Foundation Programme and postgraduate training.

    Exploring UK medical school differences: the MedDifs study of selection, teaching, student and F1 perceptions, postgraduate outcomes and fitness to practise.

    BACKGROUND: Medical schools differ, particularly in their teaching, but it is unclear whether such differences matter, although influential claims are often made. The Medical School Differences (MedDifs) study brings together a wide range of measures of UK medical schools, including postgraduate performance, fitness to practise issues, specialty choice, preparedness, satisfaction, teaching styles, entry criteria and institutional factors. METHOD: Aggregated data were collected for 50 measures across 29 UK medical schools. Data include institutional history (e.g. rate of production of hospital and GP specialists in the past), curricular influences (e.g. PBL schools, spend per student, staff-student ratio), selection measures (e.g. entry grades), teaching and assessment (e.g. traditional vs PBL, specialty teaching, self-regulated learning), student satisfaction, Foundation selection scores, Foundation satisfaction, postgraduate examination performance and fitness to practise (postgraduate progression, GMC sanctions). Six specialties (General Practice, Psychiatry, Anaesthetics, Obstetrics and Gynaecology, Internal Medicine, Surgery) were examined in more detail. RESULTS: Medical school differences are stable across time (median alpha = 0.835). The 50 measures were highly correlated, 395 (32.2%) of 1225 correlations being significant with p < 0.05, and 201 (16.4%) reached a Tukey-adjusted criterion of p < 0.0025. Problem-based learning (PBL) schools differ on many measures, including lower performance on postgraduate assessments. While these are in part explained by lower entry grades, a surprising finding is that schools such as PBL schools which reported greater student satisfaction with feedback also showed lower performance at postgraduate examinations. More medical school teaching of psychiatry, surgery and anaesthetics did not result in more specialist trainees. 
Schools that taught more general practice did have more graduates entering GP training, but those graduates performed less well in MRCGP examinations; this negative correlation arose because GP trainee numbers and exam outcomes were both affected by non-traditional teaching and by a school's greater historical production of GPs. Postgraduate exam outcomes were also higher in schools with more self-regulated learning, but lower in larger medical schools. A path model for 29 measures found a complex causal nexus, with most measures causing or being caused by other measures. Postgraduate exam performance was influenced by earlier attainment at entry to Foundation training and at entry to medical school (the so-called academic backbone), and by self-regulated learning. Foundation measures of satisfaction, including preparedness, had no subsequent influence on outcomes. Fitness-to-practise issues were more frequent in schools producing more male graduates and more GPs. CONCLUSIONS: Medical schools differ in large numbers of ways that are causally interconnected. Differences between schools in postgraduate examination performance, training problems, and GMC sanctions have important implications for the quality of patient care and patient safety.
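As a sense check on the correlation figures reported above: 50 measures yield 50 × 49 / 2 = 1,225 pairwise correlations, of which only about 61 would be expected to reach p < 0.05 by chance alone under independence — far fewer than the 395 observed. A minimal sketch of that arithmetic (illustrative only, not the study's own analysis code):

```python
import math

n_measures = 50
n_pairs = math.comb(n_measures, 2)       # pairwise correlations among 50 measures
expected_by_chance = 0.05 * n_pairs      # false positives expected at p < 0.05
observed_significant = 395               # count reported in the abstract

print(n_pairs)                           # 1225
print(expected_by_chance)                # 61.25
print(observed_significant / n_pairs)    # ≈ 0.322, the 32.2% reported
```

This assumes independent tests, which correlated school-level measures are not, so it is a rough lower bound on expected chance findings rather than a formal correction.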

    The bii4africa dataset of faunal and floral population intactness estimates across Africa’s major land uses

    Sub-Saharan Africa is under-represented in global biodiversity datasets, particularly regarding the impact of land use on species' population abundances. Drawing on recent advances in expert elicitation to ensure data consistency, 200 experts were convened using a modified Delphi process to estimate 'intactness scores': the remaining proportion of an 'intact' reference population of a species group in a particular land use, on a scale from 0 (no remaining individuals) to 1 (same abundance as the reference) and, in rare cases, to 2 (populations that thrive in human-modified landscapes). The resulting bii4africa dataset contains intactness scores representing terrestrial vertebrates (tetrapods: ±5,400 amphibians, reptiles, birds, and mammals) and vascular plants (±45,000 forbs, graminoids, trees, and shrubs) in sub-Saharan Africa across the region's major land uses (urban, cropland, rangeland, plantation, protected, etc.) and intensities (e.g., large-scale vs smallholder cropland). This dataset was co-produced as part of the Biodiversity Intactness Index for Africa Project. Additional uses include assessing ecosystem condition, rectifying geographic/taxonomic biases in global biodiversity indicators and maps, and informing the Red List of Ecosystems.

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder characterised by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (n = 483). Patients with delirium were frailer than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) for CFS 4 vs 1–3; OR 12.4 (6.2–24.5) for CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) for CFS 4 vs OR 0.2 (0.1–0.7) for CFS 8]. Both associations were independent of age and dementia. Conclusion: We have demonstrated an incremental increase in risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the frailest patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.

    Adding 6 months of androgen deprivation therapy to postoperative radiotherapy for prostate cancer: a comparison of short-course versus no androgen deprivation therapy in the RADICALS-HD randomised controlled trial

    Background Previous evidence indicates that adjuvant, short-course androgen deprivation therapy (ADT) improves metastasis-free survival when given with primary radiotherapy for intermediate-risk and high-risk localised prostate cancer. However, the value of ADT with postoperative radiotherapy after radical prostatectomy is unclear. Methods RADICALS-HD was an international randomised controlled trial to test the efficacy of ADT used in combination with postoperative radiotherapy for prostate cancer. Key eligibility criteria were indication for radiotherapy after radical prostatectomy for prostate cancer, prostate-specific antigen less than 5 ng/mL, absence of metastatic disease, and written consent. Participants were randomly assigned (1:1) to radiotherapy alone (no ADT) or radiotherapy with 6 months of ADT (short-course ADT), using monthly subcutaneous gonadotropin-releasing hormone analogue injections, daily oral bicalutamide monotherapy 150 mg, or monthly subcutaneous degarelix. Randomisation was done centrally through minimisation with a random element, stratified by Gleason score, positive margins, radiotherapy timing, planned radiotherapy schedule, and planned type of ADT, in a computerised system. The allocated treatment was not masked. The primary outcome measure was metastasis-free survival, defined as distant metastasis arising from prostate cancer or death from any cause. Standard survival analysis methods were used, accounting for randomisation stratification factors. The trial had 80% power with two-sided α of 5% to detect an absolute increase in 10-year metastasis-free survival from 80% to 86% (hazard ratio [HR] 0·67). Analyses followed the intention-to-treat principle. The trial is registered with the ISRCTN registry, ISRCTN40814031, and ClinicalTrials.gov, NCT00541047. 
Findings Between Nov 22, 2007, and June 29, 2015, 1480 patients (median age 66 years [IQR 61–69]) were randomly assigned to receive no ADT (n=737) or short-course ADT (n=743) in addition to postoperative radiotherapy at 121 centres in Canada, Denmark, Ireland, and the UK. With a median follow-up of 9·0 years (IQR 7·1–10·1), metastasis-free survival events were reported for 268 participants (142 in the no ADT group and 126 in the short-course ADT group; HR 0·886 [95% CI 0·688–1·140], p=0·35). 10-year metastasis-free survival was 79·2% (95% CI 75·4–82·5) in the no ADT group and 80·4% (76·6–83·6) in the short-course ADT group. Toxicity of grade 3 or higher was reported for 121 (17%) of 737 participants in the no ADT group and 100 (14%) of 743 in the short-course ADT group (p=0·15), with no treatment-related deaths. Interpretation Metastatic disease is uncommon following postoperative prostate bed radiotherapy after radical prostatectomy. Adding 6 months of ADT to this radiotherapy did not improve metastasis-free survival compared with no ADT. These findings do not support the use of short-course ADT with postoperative radiotherapy in this patient population.
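For readers checking the design assumptions: under proportional hazards, the target hazard ratio follows directly from the two 10-year survival fractions, since S_treat(t) = S_control(t)^HR implies HR = ln(S_treat)/ln(S_control). A minimal sketch of that relationship (illustrative only; the trial's actual power calculation may have used further assumptions):

```python
import math

s_control = 0.80  # assumed 10-year metastasis-free survival, no ADT
s_treat = 0.86    # targeted 10-year metastasis-free survival, short-course ADT

# Proportional hazards: S_treat = s_control ** HR, so HR = ln(s_treat) / ln(s_control)
hr = math.log(s_treat) / math.log(s_control)
print(round(hr, 2))  # 0.68, consistent with the HR of 0.67 quoted in the abstract
```

The observed HR of 0·886 with a 95% CI spanning 1 explains why the trial did not demonstrate the targeted effect.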