
    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse patient outcomes in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
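As an aside, the kind of analysis described above (rank correlation of an ordinal grade with a binary outcome, plus ROC-based predictive accuracy) can be sketched in a few lines. This is an illustrative sketch only: the data below are made up, and the study's own code and variables are not available.

```python
# Illustrative sketch (hypothetical data, not from the study): correlating
# an ordinal difficulty grade with a dichotomous outcome via Kendall's tau,
# and quantifying predictive accuracy with the AUROC.
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient records: Nassar grade (1-5) and whether the
# case converted to open surgery (0/1).
grade     = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
converted = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]

tau, p = kendalltau(grade, converted)    # rank correlation with the outcome
auroc = roc_auc_score(converted, grade)  # grade used as the predictor score
print(f"tau={tau:.2f} (p={p:.3f}), AUROC={auroc:.2f}")
```

Dichotomising a continuous outcome (e.g. length of stay above a threshold) allows the same AUROC machinery to be applied, as the abstract describes.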

    F4+ ETEC infection and oral immunization with F4 fimbriae elicits an IL-17-dominated immune response

    Enterotoxigenic Escherichia coli (ETEC) are an important cause of post-weaning diarrhea (PWD) in piglets. Porcine-specific ETEC strains possess different fimbrial subtypes, of which F4 fimbriae are the most frequently associated with ETEC-induced diarrhea in piglets. These F4 fimbriae are potent oral immunogens that induce protective F4-specific IgA antibody secreting cells at intestinal tissues. Recently, T-helper 17 (Th17) cells have been implicated in the protection of the host against extracellular pathogens. However, it remains unknown whether Th17 effector responses are needed to clear ETEC infections. In the present study, we aimed to elucidate whether ETEC elicits a Th17 response in piglets and whether F4 fimbriae trigger a similar response. F4+ ETEC infection upregulated IL-17A, IL-17F, IL-21 and IL-23p19, but not IL-12 and IFN-γ, mRNA expression in the systemic and mucosal immune system. Similarly, oral immunization with F4 fimbriae triggered a Th17 signature evidenced by an upregulated mRNA expression of IL-17F, RORγt, IL-23p19 and IL-21 in the peripheral blood mononuclear cells (PBMCs). Intriguingly, IL-17A mRNA levels were unaltered. To further evaluate this difference between systemic and mucosal immune responses, we assayed the cytokine mRNA profile of F4 fimbriae-stimulated PBMCs. F4 fimbriae induced IL-17A, IL-17F, IL-22 and IL-23p19, but downregulated IL-17B mRNA expression. Altogether, these data indicate a Th17-dominated response upon oral immunization with F4 fimbriae and F4+ ETEC infection. Our work also highlights that IL-17B and IL-17F participate in the immune response protecting the host against F4+ ETEC infection, which could aid in the design of future ETEC vaccines.
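Relative mRNA expression changes of the kind reported above are commonly quantified with the 2^-ΔΔCt method (the abstract does not state which method was used, so this is purely an illustrative sketch with made-up values):

```python
# Hypothetical sketch of the 2^-ΔΔCt relative-quantification method, a
# standard way to express mRNA fold change from RT-qPCR Ct values.
# The method and all numbers here are illustrative assumptions, not
# taken from the study.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_ctrl, ct_ref_ctrl):
    # Normalise target Ct to a reference (housekeeping) gene in each group,
    # then compare treated vs control.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_treated - d_ct_ctrl)

# Made-up Ct values: a target cytokine vs a housekeeping gene,
# infected vs control animals; a result > 1 means upregulation.
print(fold_change(24.0, 18.0, 27.0, 18.5))
```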

    Repurposing NGO data for better research outcomes: A scoping review of the use and secondary analysis of NGO data in health policy and systems research

    Background Non-government organisations (NGOs) collect and generate vast amounts of potentially rich data, most of which are not used for research purposes. Secondary analysis of NGO data (their use and analysis in a study for which they were not originally collected) presents an important but largely unrealised opportunity to provide new research insights in critical areas, including the evaluation of health policy and programmes. Methods A scoping review of the published literature was performed to identify the extent to which secondary analysis of NGO data has been used in health policy and systems research (HPSR). A tiered analytic approach provided a comprehensive overview and descriptive analyses of the studies which: 1) used data produced or collected by or about NGOs; 2) performed secondary analysis of the NGO data (beyond use of an NGO report as a supporting reference); 3) used NGO-collected clinical data. Results Of the 156 studies which performed secondary analysis of NGO-produced or collected data, 64% (n=100) used NGO-produced reports (e.g. to critique NGO activities and as a contextual reference) and 8% (n=13) analysed NGO-collected clinical data. Of the studies, 55% investigated service delivery research topics, with 48% undertaken in developing countries and 17% in both developing and developed countries. NGO-collected clinical data enabled HPSR within marginalised groups (e.g. migrants, people in conflict-affected areas), with some limitations such as inconsistencies and missing data. Conclusion We found evidence that NGO-collected and produced data are most commonly perceived as a source of supporting evidence for HPSR and not as primary source data. However, these data can facilitate research in under-researched marginalised groups and in contexts that are hard for academics to reach, such as conflict-affected areas. NGO–academic collaboration could help address issues of NGO data quality to facilitate their more widespread use in research. Their use could enable relevant and timely research in the areas of health policy, programme evaluation and advocacy to improve health and reduce health inequalities, especially in marginalised groups and developing countries.

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling, using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission but had surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions and complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics.
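A two-level logistic model of the kind described above (patients nested within hospitals, with a random intercept per hospital) can be sketched with statsmodels' Bayesian mixed GLM. This is a minimal illustration on synthetic data; the variable names and coefficients are invented, not the study's.

```python
# Illustrative sketch (synthetic data): a random-intercept logistic model
# with patients (level 1) nested within hospitals (level 2), as one common
# way to fit the multilevel structure the abstract describes.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "hospital": rng.integers(0, 30, n),   # level-2 cluster id
    "asa3plus": rng.integers(0, 2, n),    # hypothetical: ASA grade >= III
    "emergency": rng.integers(0, 2, n),   # hypothetical: emergency surgery
})
# Synthetic outcome: readmission more likely with higher ASA grade and
# emergency surgery, plus a small hospital-level random effect.
hosp_effect = rng.normal(0, 0.3, 30)[df["hospital"]]
logit = -2.0 + 1.2 * df["asa3plus"] + 0.6 * df["emergency"] + hosp_effect
df["readmit"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = BinomialBayesMixedGLM.from_formula(
    "readmit ~ asa3plus + emergency",
    {"hospital": "0 + C(hospital)"},      # random intercept per hospital
    df)
result = model.fit_vb()                   # variational Bayes fit
print(result.summary())
```

Fixed-effect posterior means are in `result.fe_mean`, in formula order (intercept, asa3plus, emergency).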

    CT and 3-T MRI accurately identify T3c disease in colon cancer, which strongly predicts disease-free survival

    Aim: To compare the preoperative staging accuracy of computed tomography (CT) and 3-T magnetic resonance imaging (MRI) in colon cancer, and to investigate the prognostic significance of identified risk factors. Materials and methods: Fifty-eight patients undergoing primary resection of their colon cancer were prospectively recruited, with 53 patients included for final analysis. The accuracy of CT and MRI was compared for two readers, using postoperative histology as the reference standard. Patients were followed up for a median of 39 months. Risk factors were compared by modality and reader in terms of metachronous metastases and disease-free survival (DFS), stratified for adjuvant chemotherapy. Results: Accuracy for the identification of T3c+ disease was non-significantly greater on MRI (75% and 79%) than CT (70% and 77%). Differences in the accuracy of MRI and CT for identification of T3+ disease (MRI 75% and 57%, CT 72% and 66%) and N+ disease (MRI 62% and 63%, CT 62% and 56%) were also non-significant. Identification of extramural venous invasion (EMVI+) disease was significantly greater on MRI (75% and 75%) than CT (79% and 54%) for one reader (p=0.029). T3c+ disease at histopathology was the only risk factor that demonstrated a significant difference in the rate of metachronous metastases (odds ratio [OR] 8.6, p=0.0044) and DFS stratified for adjuvant therapy (OR 4, p=0.048). Conclusion: T3c or greater disease is the strongest risk factor for predicting DFS in colon cancer, and is accurately identified on imaging. T3c+ disease may therefore be the best imaging entry criterion for trials of neoadjuvant treatment.
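The two core quantities above, staging accuracy against a histopathology reference and an odds ratio from a 2×2 table, are straightforward to compute. The counts below are hypothetical (chosen only to be on the scale of a 53-patient cohort), not the study's data.

```python
# Illustrative sketch (made-up counts): staging accuracy against the
# histopathology reference standard, and an odds ratio for metachronous
# metastases by T3c+ status, as in the abstract's style of analysis.
from scipy.stats import fisher_exact

# Hypothetical 2x2: imaging T3c+ call vs histology T3c+ truth.
tp, fp, fn, tn = 14, 6, 7, 26
accuracy = (tp + tn) / (tp + fp + fn + tn)

# Hypothetical 2x2 for metachronous metastases by histological T3c+ status:
# rows = T3c+ yes/no, columns = metastases yes/no.
table = [[9, 11], [3, 30]]
odds_ratio, p = fisher_exact(table)  # sample OR = (9*30)/(11*3)
print(f"accuracy={accuracy:.2f}, OR={odds_ratio:.1f}, p={p:.3f}")
```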

    Variation in global uptake of the Surgical Safety Checklist

    Background: The Surgical Safety Checklist (SSC) is a patient safety tool shown to reduce mortality and to improve teamwork and adherence to perioperative safety practices. The results of the original pilot work were published 10 years ago. This study aimed to determine the contemporary prevalence and predictors of SSC use globally. Methods: Pooled data from the GlobalSurg and Surgical Outcomes studies were analysed to describe SSC use in 2014-2016. The primary exposure was the Human Development Index (HDI) of the reporting country, and the primary outcome was reported SSC use. A generalized estimating equation, clustering by facility, was used to determine differences in SSC use by patient, facility and national characteristics. Results: A total of 85 957 patients from 1464 facilities in 94 countries were included. On average, facilities used the SSC in 75·4 per cent of operations. Compared with very high HDI, SSC use was less in low HDI countries (odds ratio (OR) 0·08, 95 per cent c.i. 0·05 to 0·12). The SSC was used less in urgent compared with elective operations in low HDI countries (OR 0·68, 0·53 to 0·86), but used equally for urgent and elective operations in very high HDI countries (OR 0·96, 0·87 to 1·06). SSC use was lower for obstetrics and gynaecology versus abdominal surgery (OR 0·91, 0·85 to 0·98) and where the common or official language was not one of the WHO official languages (OR 0·30, 0·23 to 0·39). Conclusion: Worldwide, SSC use is generally high, but significant variability exists. Implementation and dissemination strategies must be developed to address this variability.