
    mTORC1 Inhibition Protects Human Regulatory T Cells From Granzyme-B-Induced Apoptosis

    Regulatory T cells (T-regs) have shown great promise as a means of cellular therapy in a multitude of allo- and auto-immune diseases, due in part to their immunosuppressive potency. Nevertheless, the clinical efficacy of human T-regs in patients has been limited by their poor in vivo homeostasis. To avert apoptosis, T-regs require stable antigenic (CD3 zeta/T-cell-receptor-mediated), co-stimulatory (CD28-driven), and cytokine (IL-2-dependent) signaling. Notably, this sequence of signals supports an activated T-reg phenotype that includes a high expression of granzymes, particularly granzyme B (GrB). Previously, we have shown that aside from the functional effects of GrB in lysing target cells to modulate allo-immunity, GrB can leak out of the intracellular lysosomal granules of host T-regs, initiating pro-apoptotic pathways. Here, we assessed the role of inhibiting mechanistic target of rapamycin complex 1 (mTORC1), a recently favored drug target in the transplant field, in regulating human T-reg apoptosis via GrB. Using ex vivo models of human T-reg culture and a humanized mouse model of human skin allotransplantation, we found that inhibiting mTORC1 with rapamycin diminished the intracytoplasmic expression and functionality of GrB in host T-regs, lowering human T-reg apoptosis in part by decreasing the phosphorylation of S6K and c-Jun. These findings support the already clinically validated effects of mTORC1 inhibition in patients, most notably the stabilization of T-reg bioactivity and in vivo homeostasis.

    Antimicrobial resistance among migrants in Europe: a systematic review and meta-analysis

    BACKGROUND: Rates of antimicrobial resistance (AMR) are rising globally and there is concern that increased migration is contributing to the burden of antibiotic resistance in Europe. However, the effect of migration on the burden of AMR in Europe has not yet been comprehensively examined. Therefore, we did a systematic review and meta-analysis to identify and synthesise data for AMR carriage or infection in migrants to Europe to examine differences in patterns of AMR across migrant groups and in different settings. METHODS: For this systematic review and meta-analysis, we searched MEDLINE, Embase, PubMed, and Scopus with no language restrictions from Jan 1, 2000, to Jan 18, 2017, for primary data from observational studies reporting antibacterial resistance in common bacterial pathogens among migrants to 21 European Union-15 and European Economic Area countries. To be eligible for inclusion, studies had to report data on carriage or infection with laboratory-confirmed antibiotic-resistant organisms in migrant populations. We extracted data from eligible studies and assessed quality using piloted, standardised forms. We did not examine drug resistance in tuberculosis and excluded articles solely reporting on this parameter. We also excluded articles in which migrant status was determined by ethnicity, country of birth of participants' parents, or was not defined, and articles in which data were not disaggregated by migrant status. Outcomes were carriage of or infection with antibiotic-resistant organisms. We used random-effects models to calculate the pooled prevalence of each outcome. The study protocol is registered with PROSPERO, number CRD42016043681. FINDINGS: We identified 2274 articles, of which 23 observational studies reporting on antibiotic resistance in 2319 migrants were included. The pooled prevalence of any AMR carriage or AMR infection in migrants was 25·4% (95% CI 19·1-31·8; I2 =98%), including meticillin-resistant Staphylococcus aureus (7·8%, 4·8-10·7; I2 =92%) and antibiotic-resistant Gram-negative bacteria (27·2%, 17·6-36·8; I2 =94%). The pooled prevalence of any AMR carriage or infection was higher in refugees and asylum seekers (33·0%, 18·3-47·6; I2 =98%) than in other migrant groups (6·6%, 1·8-11·3; I2 =92%). The pooled prevalence of antibiotic-resistant organisms was slightly higher in high-migrant community settings (33·1%, 11·1-55·1; I2 =96%) than in migrants in hospitals (24·3%, 16·1-32·6; I2 =98%). We did not find evidence of high rates of transmission of AMR from migrant to host populations. INTERPRETATION: Migrants are exposed to conditions favouring the emergence of drug resistance during transit and in host countries in Europe. Increased antibiotic resistance among refugees and asylum seekers and in high-migrant community settings (such as refugee camps and detention facilities) highlights the need for improved living conditions, access to health care, and initiatives to facilitate detection of and appropriate high-quality treatment for antibiotic-resistant infections during transit and in host countries. Protocols for the prevention and control of infection and for antibiotic surveillance need to be integrated in all aspects of health care, which should be accessible for all migrant groups, and should target determinants of AMR before, during, and after migration. 
    FUNDING: UK National Institute for Health Research Imperial Biomedical Research Centre, Imperial College Healthcare Charity, the Wellcome Trust, and UK National Institute for Health Research Health Protection Research Unit in Healthcare-associated Infections and Antimicrobial Resistance at Imperial College London.
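
    The pooled prevalence estimates above were obtained with random-effects models. As an illustration of that technique only (not the authors' analysis code), the sketch below pools study-level prevalences on the logit scale with a DerSimonian-Laird estimate of between-study variance and reports I²; the study counts are hypothetical.

```python
# Illustrative DerSimonian-Laird random-effects pooling of prevalences.
# Study counts below are hypothetical, not data from the review.
import numpy as np

events = np.array([12, 45, 8, 60])        # resistant carriage/infection per study
n = np.array([50, 180, 90, 150])          # participants per study

p = events / n
y = np.log(p / (1 - p))                   # logit prevalence per study
v = 1 / events + 1 / (n - events)         # approximate within-study variance

w = 1 / v                                 # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q
df = len(y) - 1

c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)             # between-study variance
i2 = max(0.0, (q - df) / q) * 100         # I^2 heterogeneity

w_re = 1 / (v + tau2)                     # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
expit = lambda x: 1 / (1 + np.exp(-x))
lo, hi = expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)
print(f"pooled prevalence {expit(pooled):.1%} (95% CI {lo:.1%}-{hi:.1%}), I2 = {i2:.0f}%")
```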

    Global economic burden of unmet surgical need for appendicitis

    Background: There is a substantial gap in provision of adequate surgical care in many low- and middle-income countries. This study aimed to identify the economic burden of unmet surgical need for the common condition of appendicitis. Methods: Data on the incidence of appendicitis from 170 countries and two different approaches were used to estimate numbers of patients who do not receive surgery: as a fixed proportion of the total unmet surgical need per country (approach 1); and based on country income status (approach 2). Indirect costs with current levels of access and local quality, and those if quality were at the standards of high-income countries, were estimated. A human capital approach was applied, focusing on the economic burden resulting from premature death and absenteeism. Results: Excess mortality was 4185 per 100 000 cases of appendicitis using approach 1 and 3448 per 100 000 using approach 2. The economic burden of continuing current levels of access and local quality was US $92 492 million using approach 1 and $73 141 million using approach 2. The economic burden of not providing surgical care to the standards of high-income countries was $95 004 million using approach 1 and $75 666 million using approach 2. The largest share of these costs resulted from premature death (97.7 per cent) and lack of access (97.0 per cent) in contrast to lack of quality. Conclusion: For a comparatively non-complex emergency condition such as appendicitis, increasing access to care should be prioritized. Although improving quality of care should not be neglected, increasing provision of care at current standards could reduce societal costs substantially.
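
    The burden estimates rest on a human capital approach, which values premature death and absenteeism as lost productivity. The toy calculation below shows the general shape of such an estimate; every input is a hypothetical placeholder rather than a figure from the study.

```python
# Toy human-capital calculation of the economic burden of untreated appendicitis.
# All numbers below are hypothetical placeholders, not values from the study.

cases_without_surgery = 100_000        # untreated appendicitis cases per year
excess_mortality_rate = 0.04           # deaths per untreated case (assumed)
years_of_life_lost_per_death = 30      # remaining productive years (assumed)
gdp_per_capita = 5_000                 # US$ per person-year (assumed)
absenteeism_days_per_survivor = 14     # extra sick days without timely surgery
working_days_per_year = 230

deaths = cases_without_surgery * excess_mortality_rate
survivors = cases_without_surgery - deaths

# burden from premature death: lost productive years valued at GDP per capita
death_burden = deaths * years_of_life_lost_per_death * gdp_per_capita

# burden from absenteeism among survivors
absenteeism_burden = (survivors * absenteeism_days_per_survivor
                      / working_days_per_year * gdp_per_capita)

total = death_burden + absenteeism_burden
print(f"premature death: ${death_burden/1e6:,.0f}M, "
      f"absenteeism: ${absenteeism_burden/1e6:,.0f}M, total: ${total/1e6:,.0f}M")
```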

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
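
    The association between checklist use and mortality was estimated with multivariable logistic regression and bootstrapped simulation. The sketch below shows one plausible way to compute an adjusted odds ratio and a bootstrapped absolute risk difference on synthetic data; the variable names and model specification are illustrative assumptions, not the study's analysis plan.

```python
# Sketch: multivariable logistic regression for 30-day mortality plus a
# bootstrapped adjusted risk difference for checklist use.
# Synthetic data; variable names are illustrative, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "checklist": rng.integers(0, 2, n),
    "age": rng.normal(60, 15, n),
    "asa": rng.integers(1, 5, n),
    "hdi_group": rng.choice(["low", "middle", "high"], n),
})
eta = -3.5 + 0.03 * (df["age"] - 60) + 0.5 * (df["asa"] - 2) - 0.5 * df["checklist"]
df["died_30d"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = smf.logit("died_30d ~ checklist + age + asa + C(hdi_group)", data=df).fit(disp=0)
print("adjusted OR for checklist use:", np.exp(fit.params["checklist"]))

# Bootstrap the adjusted absolute risk difference (checklist vs no checklist).
diffs = []
for _ in range(200):
    boot = df.sample(len(df), replace=True)
    m = smf.logit("died_30d ~ checklist + age + asa + C(hdi_group)", data=boot).fit(disp=0)
    p1 = m.predict(boot.assign(checklist=1)).mean()
    p0 = m.predict(boot.assign(checklist=0)).mean()
    diffs.append(p1 - p0)
print("95% CI for risk difference:", np.percentile(diffs, [2.5, 97.5]))
```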

    Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study

    Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world. Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231. Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282) HDI (p < 0·001). The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001). Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication.
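
    SSI risk was modelled with Bayesian multilevel logistic regression. The following minimal sketch fits such a model (hospital-level random intercepts under HDI-group intercepts) to simulated data, assuming PyMC is available; it is not the GlobalSurg analysis code, and the odds-ratio contrast it prints is purely illustrative.

```python
# Minimal Bayesian multilevel logistic regression for 30-day SSI with
# hospital-level random intercepts and HDI-group intercepts.
# Simulated data and hypothetical names; assumes PyMC >= 5 is installed.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_hospitals, n_patients = 40, 4000
hosp_idx = rng.integers(0, n_hospitals, n_patients)      # hospital of each patient
hdi_of_hosp = rng.integers(0, 3, n_hospitals)            # 0 = high, 1 = middle, 2 = low HDI
hdi_idx = hdi_of_hosp[hosp_idx]
true_hosp = rng.normal(0, 0.4, n_hospitals)
eta_true = np.array([-2.3, -1.8, -1.2])[hdi_idx] + true_hosp[hosp_idx]
ssi = rng.binomial(1, 1 / (1 + np.exp(-eta_true)))

with pm.Model():
    hdi_intercept = pm.Normal("hdi_intercept", 0.0, 2.0, shape=3)        # one intercept per HDI group
    hosp_sd = pm.HalfNormal("hosp_sd", 1.0)
    hosp_effect = pm.Normal("hosp_effect", 0.0, hosp_sd, shape=n_hospitals)
    eta = hdi_intercept[hdi_idx] + hosp_effect[hosp_idx]
    pm.Bernoulli("obs", logit_p=eta, observed=ssi)
    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9, random_seed=1)

# Adjusted odds ratio, low- vs high-HDI, with a 95% credible interval.
post = idata.posterior["hdi_intercept"].values            # shape: (chain, draw, 3)
log_or = post[..., 2] - post[..., 0]
print(np.exp(np.quantile(log_or, [0.025, 0.5, 0.975])))
```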

    Burden of disease scenarios for 204 countries and territories, 2022–2050: a forecasting analysis for the Global Burden of Disease Study 2021

    Background: Future trends in disease burden and drivers of health are of great interest to policy makers and the public at large. This information can be used for policy and long-term health investment, planning, and prioritisation. We have expanded and improved upon previous forecasts produced as part of the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) and provide a reference forecast (the most likely future), and alternative scenarios assessing disease burden trajectories if selected sets of risk factors were eliminated from current levels by 2050. Methods: Using forecasts of major drivers of health such as the Socio-demographic Index (SDI; a composite measure of lag-distributed income per capita, mean years of education, and total fertility under 25 years of age) and the full set of risk factor exposures captured by GBD, we provide cause-specific forecasts of mortality, years of life lost (YLLs), years lived with disability (YLDs), and disability-adjusted life-years (DALYs) by age and sex from 2022 to 2050 for 204 countries and territories, 21 GBD regions, seven super-regions, and the world. All analyses were done at the cause-specific level so that only risk factors deemed causal by the GBD comparative risk assessment influenced future trajectories of mortality for each disease. Cause-specific mortality was modelled using mixed-effects models with SDI and time as the main covariates, and the combined impact of causal risk factors as an offset in the model. At the all-cause mortality level, we captured unexplained variation by modelling residuals with an autoregressive integrated moving average model with drift attenuation. These all-cause forecasts constrained the cause-specific forecasts at successively deeper levels of the GBD cause hierarchy using cascading mortality models, thus ensuring a robust estimate of cause-specific mortality. For non-fatal measures (eg, low back pain), incidence and prevalence were forecasted from mixed-effects models with SDI as the main covariate, and YLDs were computed from the resulting prevalence forecasts and average disability weights from GBD. Alternative future scenarios were constructed by replacing appropriate reference trajectories for risk factors with hypothetical trajectories of gradual elimination of risk factor exposure from current levels to 2050. The scenarios were constructed from various sets of risk factors: environmental risks (Safer Environment scenario), risks associated with communicable, maternal, neonatal, and nutritional diseases (CMNNs; Improved Childhood Nutrition and Vaccination scenario), risks associated with major non-communicable diseases (NCDs; Improved Behavioural and Metabolic Risks scenario), and the combined effects of these three scenarios. Using the Shared Socioeconomic Pathways climate scenarios SSP2-4.5 as reference and SSP1-1.9 as an optimistic alternative in the Safer Environment scenario, we accounted for climate change impact on health by using the most recent Intergovernmental Panel on Climate Change temperature forecasts and published trajectories of ambient air pollution for the same two scenarios. Life expectancy and healthy life expectancy were computed using standard methods. The forecasting framework includes computing the age-sex-specific future population for each location and separately for each scenario. 
95% uncertainty intervals (UIs) for each individual future estimate were derived from the 2·5th and 97·5th percentiles of distributions generated from propagating 500 draws through the multistage computational pipeline. Findings: In the reference scenario forecast, global and super-regional life expectancy increased from 2022 to 2050, but improvement was at a slower pace than in the three decades preceding the COVID-19 pandemic (beginning in 2020). Gains in future life expectancy were forecasted to be greatest in super-regions with comparatively low life expectancies (such as sub-Saharan Africa) compared with super-regions with higher life expectancies (such as the high-income super-region), leading to a trend towards convergence in life expectancy across locations between now and 2050. At the super-region level, forecasted healthy life expectancy patterns were similar to those of life expectancies. Forecasts for the reference scenario found that health will improve in the coming decades, with all-cause age-standardised DALY rates decreasing in every GBD super-region. The total DALY burden measured in counts, however, will increase in every super-region, largely a function of population ageing and growth. We also forecasted that both DALY counts and age-standardised DALY rates will continue to shift from CMNNs to NCDs, with the most pronounced shifts occurring in sub-Saharan Africa (60·1% [95% UI 56·8–63·1] of DALYs were from CMNNs in 2022 compared with 35·8% [31·0–45·0] in 2050) and south Asia (31·7% [29·2–34·1] to 15·5% [13·7–17·5]). This shift is reflected in the leading global causes of DALYs, with the top four causes in 2050 being ischaemic heart disease, stroke, diabetes, and chronic obstructive pulmonary disease, compared with 2022, with ischaemic heart disease, neonatal disorders, stroke, and lower respiratory infections at the top. The global proportion of DALYs due to YLDs likewise increased from 33·8% (27·4–40·3) to 41·1% (33·9–48·1) from 2022 to 2050, demonstrating an important shift in overall disease burden towards morbidity and away from premature death. The largest shift of this kind was forecasted for sub-Saharan Africa, from 20·1% (15·6–25·3) of DALYs due to YLDs in 2022 to 35·6% (26·5–43·0) in 2050. In the assessment of alternative future scenarios, the combined effects of the scenarios (Safer Environment, Improved Childhood Nutrition and Vaccination, and Improved Behavioural and Metabolic Risks scenarios) demonstrated an important decrease in the global burden of DALYs in 2050 of 15·4% (13·5–17·5) compared with the reference scenario, with decreases across super-regions ranging from 10·4% (9·7–11·3) in the high-income super-region to 23·9% (20·7–27·3) in north Africa and the Middle East. The Safer Environment scenario had its largest decrease in sub-Saharan Africa (5·2% [3·5–6·8]), the Improved Behavioural and Metabolic Risks scenario in north Africa and the Middle East (23·2% [20·2–26·5]), and the Improved Nutrition and Vaccination scenario in sub-Saharan Africa (2·0% [–0·6 to 3·6]). Interpretation: Globally, life expectancy and age-standardised disease burden were forecasted to improve between 2022 and 2050, with the majority of the burden continuing to shift from CMNNs to NCDs. That said, continued progress on reducing the CMNN disease burden will be dependent on maintaining investment in and policy emphasis on CMNN disease prevention and treatment. 
Mostly due to growth and ageing of populations, the number of deaths and DALYs due to all causes combined will generally increase. By constructing alternative future scenarios wherein certain risk exposures are eliminated by 2050, we have shown that opportunities exist to substantially improve health outcomes in the future through concerted efforts to prevent exposure to well established risk factors and to expand access to key health interventions.
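
    The forecasting framework described above combines cause-specific mixed-effects regressions (with SDI and time as covariates) and a residual time-series model with attenuated drift at the all-cause level. A stylised two-stage sketch on synthetic data is given below; the attenuation function and all variable names are assumptions made for illustration, not the GBD production pipeline.

```python
# Stylised two-stage sketch: (1) a mixed-effects model of log mortality on SDI
# and time with location random intercepts; (2) an AR(1) model with attenuated
# drift for the unexplained residual. Synthetic data and an invented attenuation
# function; this is not the GBD production pipeline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
years = np.arange(1990, 2022)
rows = []
for j in range(20):                                   # 20 synthetic locations
    sdi = np.clip(0.3 + 0.02 * j + 0.01 * (years - 1990) + rng.normal(0, 0.01, years.size), 0, 1)
    log_mx = 2.0 - 1.5 * sdi - 0.01 * (years - 1990) + 0.1 * j + rng.normal(0, 0.05, years.size)
    rows.append(pd.DataFrame({"location": f"loc{j}", "year": years, "sdi": sdi, "log_mx": log_mx}))
data = pd.concat(rows, ignore_index=True)

# Stage 1: mixed-effects regression with SDI and time as covariates.
me = smf.mixedlm("log_mx ~ sdi + year", data, groups=data["location"]).fit()
data["resid"] = data["log_mx"] - me.fittedvalues

# Stage 2: AR(1) on one location's residuals, with drift shrunk toward zero
# over the forecast horizon (an illustrative form of drift attenuation).
res = data.loc[data["location"] == "loc0", "resid"].to_numpy()
ar1 = ARIMA(res, order=(1, 0, 0), trend="c").fit()
horizon = np.arange(1, 29)                            # forecast years 2022..2050
drift = res[-1] - res[-2]
attenuated_drift = np.cumsum(drift * np.exp(-0.1 * horizon))
resid_forecast = ar1.forecast(steps=horizon.size) + attenuated_drift
print(resid_forecast[:5])
```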

    Training during the COVID-19 lockdown: knowledge, beliefs, and practices of 12,526 athletes from 142 countries and six continents

    OBJECTIVE Our objective was to explore the training-related knowledge, beliefs, and practices of athletes and the influence of lockdowns in response to the coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). METHODS Athletes (n = 12,526, comprising 13% world class, 21% international, 36% national, 24% state, and 6% recreational) completed an online survey that was available from 17 May to 5 July 2020 and explored their training behaviors (training knowledge, beliefs/attitudes, and practices), including specific questions on their training intensity, frequency, and session duration before and during lockdown (March–June 2020). RESULTS Overall, 85% of athletes wanted to “maintain training,” and 79% disagreed with the statement that it is “okay to not train during lockdown,” with a greater prevalence for both in higher-level athletes. In total, 60% of athletes considered “coaching by correspondence (remote coaching)” to be sufficient (highest amongst world-class athletes). During lockdown, < 40% were able to maintain sport-specific training (e.g., long endurance [39%], interval training [35%], weightlifting [33%]), with most (83%) training for “general fitness and health maintenance” during lockdown. Athletes trained alone (80%) and focused on bodyweight (65%) and cardiovascular (59%) exercise/training during lockdown. Compared with before lockdown, most athletes reported reduced training frequency (from between five and seven sessions per week to four or fewer), shorter training sessions (from ≥ 60 to < 60 min), and lower sport-specific intensity (~ 38% reduction), irrespective of athlete classification. CONCLUSIONS COVID-19-related lockdowns saw marked reductions in athletic training specificity, intensity, frequency, and duration, with notable within-sample differences (by athlete classification). Higher classification athletes had the strongest desire to “maintain” training and the greatest opposition to “not training” during lockdowns. These higher classification athletes retained training specificity to a greater degree than others, probably because of preferential access to limited training resources. More higher classification athletes considered “coaching by correspondence” as sufficient than did lower classification athletes. These lockdown-mediated changes in training were not conducive to maintenance or progression of athletes’ physical capacities and were also likely detrimental to athletes’ mental health. These data can be used by policy makers, athletes, and their multidisciplinary teams to modulate their practice, with a degree of individualization, in the current and continued pandemic-related scenario. Furthermore, the data may drive training-related educational resources for athletes and their multidisciplinary teams. Such upskilling would provide athletes with evidence to inform their training modifications in response to germane situations (e.g., COVID related, injury, and illness). Funding for this study was provided by the National Sports Institute of Malaysia.

    COVID-19 lockdown: a global study investigating athletes’ sport classification and sex on training practices

    PURPOSE: To investigate differences in athletes’ knowledge, beliefs, and training practices during COVID-19 lockdowns with reference to sport classification and sex. This work extends an initial descriptive evaluation focusing on athlete classification. METHODS: Athletes (12,526; 66% male; 142 countries) completed an online survey (May–July 2020) assessing knowledge, beliefs, and practices toward training. Sports were classified as team sports (45%), endurance (20%), power/technical (10%), combat (9%), aquatic (6%), recreational (4%), racquet (3%), precision (2%), parasports (1%), and others (1%). Further analysis by sex was performed. RESULTS: During lockdown, athletes practiced body-weight-based exercises routinely (67% females and 64% males), ranging from 50% (precision) to 78% (parasports). More sport-specific technical skills were performed in combat, parasports, and precision (∼50%) than other sports (∼35%). Most athletes (range: 50% [parasports] to 75% [endurance]) performed cardiorespiratory training (trivial sex differences). Compared to prelockdown, perceived training intensity was reduced by 29% to 41%, depending on sport (largest decline: ∼38% in team sports, unaffected by sex). Some athletes (range: 7%–49%) maintained their training intensity for strength, endurance, speed, plyometric, change-of-direction, and technical training. Athletes who previously trained ≥5 sessions per week reduced their volume (range: 18%–28%) during lockdown. The proportion of athletes (81%) training ≥60 min/session reduced by 31% to 43% during lockdown. Males and females had comparable moderate levels of training knowledge (56% vs 58%) and beliefs/attitudes (54% vs 56%). CONCLUSIONS: Changes in athletes’ training practices were sport-specific, with few or no sex differences. Team-based sports were generally more susceptible to changes than individual sports. Policy makers should provide athletes with specific training arrangements and educational resources to facilitate remote and/or home-based training during lockdown-type events.

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.

    Exploring UK medical school differences: the MedDifs study of selection, teaching, student and F1 perceptions, postgraduate outcomes and fitness to practise.

    BACKGROUND: Medical schools differ, particularly in their teaching, but it is unclear whether such differences matter, although influential claims are often made. The Medical School Differences (MedDifs) study brings together a wide range of measures of UK medical schools, including postgraduate performance, fitness to practise issues, specialty choice, preparedness, satisfaction, teaching styles, entry criteria and institutional factors. METHOD: Aggregated data were collected for 50 measures across 29 UK medical schools. Data include institutional history (e.g. rate of production of hospital and GP specialists in the past), curricular influences (e.g. PBL schools, spend per student, staff-student ratio), selection measures (e.g. entry grades), teaching and assessment (e.g. traditional vs PBL, specialty teaching, self-regulated learning), student satisfaction, Foundation selection scores, Foundation satisfaction, postgraduate examination performance and fitness to practise (postgraduate progression, GMC sanctions). Six specialties (General Practice, Psychiatry, Anaesthetics, Obstetrics and Gynaecology, Internal Medicine, Surgery) were examined in more detail. RESULTS: Medical school differences are stable across time (median alpha = 0.835). The 50 measures were highly correlated, 395 (32.2%) of 1225 correlations being significant with p < 0.05, and 201 (16.4%) reached a Tukey-adjusted criterion of p < 0.0025. Problem-based learning (PBL) schools differ on many measures, including lower performance on postgraduate assessments. While these are in part explained by lower entry grades, a surprising finding is that schools such as PBL schools which reported greater student satisfaction with feedback also showed lower performance at postgraduate examinations. More medical school teaching of psychiatry, surgery and anaesthetics did not result in more specialist trainees. Schools that taught more general practice did have more graduates entering GP training, but those graduates performed less well in MRCGP examinations, the negative correlation resulting from numbers of GP trainees and exam outcomes being affected both by non-traditional teaching and by greater historical production of GPs. Postgraduate exam outcomes were also higher in schools with more self-regulated learning, but lower in larger medical schools. A path model for 29 measures found a complex causal nexus, most measures causing or being caused by other measures. Postgraduate exam performance was influenced by earlier attainment, at entry to Foundation and entry to medical school (the so-called academic backbone), and by self-regulated learning. Foundation measures of satisfaction, including preparedness, had no subsequent influence on outcomes. Fitness to practise issues were more frequent in schools producing more male graduates and more GPs. CONCLUSIONS: Medical schools differ in large numbers of ways that are causally interconnected. Differences between schools in postgraduate examination performance, training problems and GMC sanctions have important implications for the quality of patient care and patient safety.
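
    The central analysis correlates 50 aggregated measures across 29 schools and applies a Tukey-adjusted significance criterion of p < 0.0025. The short sketch below screens pairwise correlations in that way on synthetic school-level data; the data-generating step is invented solely to give the correlations something to detect.

```python
# Sketch of school-level correlation screening: pairwise correlations among
# aggregated measures with an adjusted significance criterion.
# Synthetic data for 29 schools x 50 measures; not the MedDifs dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_schools, n_measures = 29, 50
latent = rng.normal(size=(n_schools, 1))                     # shared school-level factor
x = 0.6 * latent + rng.normal(size=(n_schools, n_measures))  # correlated measures

n_pairs = n_measures * (n_measures - 1) // 2                 # 1225 pairwise correlations
alpha_adj = 0.0025                                           # adjusted criterion from the paper

sig_unadj = sig_adj = 0
for i in range(n_measures):
    for j in range(i + 1, n_measures):
        r, p = stats.pearsonr(x[:, i], x[:, j])
        sig_unadj += p < 0.05
        sig_adj += p < alpha_adj
print(f"{n_pairs} pairs: {sig_unadj} significant at p < 0.05, {sig_adj} at p < {alpha_adj}")
```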