
    The Seroprevalence and Seroincidence of Enterovirus71 Infection in Infants and Children in Ho Chi Minh City, Viet Nam

    Enterovirus 71 (EV71)-associated hand, foot and mouth disease has emerged as a serious public health problem in South East Asia over the last decade. To better understand the prevalence of EV71 infection, we determined EV71 seroprevalence and seroincidence amongst healthy infants and children in Ho Chi Minh City, Viet Nam. In a cohort of 200 newborns, 55% of cord blood samples contained EV71 neutralizing antibodies, and these decayed to undetectable levels by 6 months of age in 98% of infants. The EV71 neutralizing antibody seroconversion rate was 5.6% in the first year and 14% in the second year of life. Among children 5–15 years of age, seroprevalence of EV71 neutralizing antibodies was 84%, compared with 55% in cord blood. Taken together, these data suggest that the EV71 force of infection is high and highlight the need for more research into its epidemiology and pathogenesis in high disease burden countries.

    The Experience of Quality in Higher Education in the United Arab Emirates: In Times of Rapid Change and Complexities

    In less than five decades, the United Arab Emirates (UAE) has moved from offering formal education in only a few schools serving a small tribal community to providing its citizens, across seven emirates, with a choice of three public and approximately 100 private higher education institutions. This creates a unique context, and an evolution that corresponds with the country's remarkable economic growth. Quality assurance of such diverse higher education institutions requires complex schemes to ensure their fitness for purpose, while development and enhancement aspects may need time to mature. The quality of education is especially important because the UAE aspires to a diversified, knowledge-based economy, one led by its own citizens, whose contribution to the workforce is currently less than 10%. This chapter highlights contextual complexities in the UAE that may have direct or indirect impacts on quality experiences in the higher education sector, and proposes recommendations.

    Pneumonic Tularemia in Rabbits Resembles the Human Disease as Illustrated by Radiographic and Hematological Changes after Infection

    Background: Pneumonic tularemia is caused by inhalation of the Gram-negative bacterium Francisella tularensis. Because of concerns that tularemia could be used as a bioterrorism agent, vaccines and therapeutics are urgently needed. Animal models of pneumonic tularemia with a pathophysiology similar to the human disease are needed to evaluate the efficacy of these potential medical countermeasures. Principal Findings: Rabbits exposed to aerosols containing Francisella tularensis strain SCHU S4 developed a rapidly progressive fatal pneumonic disease. Clinical signs became evident on the third day after exposure, with development of a fever (>40.5°C) and a sharp decline in both food and water intake. Blood samples collected on day 4 revealed lymphopenia and a decrease in platelet counts, coupled with elevations in erythrocyte sedimentation rate, alanine aminotransferase, cholesterol, granulocytes and monocytes. Radiographs demonstrated the development of pneumonia and abnormalities of intestinal gas consistent with ileus. On average, rabbits were moribund 5.1 days after exposure; no rabbits survived exposure at any dose (190-54,000 cfu). Gross evaluation of tissues taken at necropsy showed evidence of pathology in the lungs, spleen, liver, kidney and intestines. Bacterial counts confirmed bacterial dissemination from the lungs to the liver and spleen. Conclusions/Significance: The pathophysiology of pneumonic tularemia in rabbits resembles what has been reported for humans. Rabbits therefore are a relevant model of the human disease caused by type A strains of F. tularensis. © 2011 Reed et al.

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to prospectively develop a pragmatic prognostic model to stratify patients according to risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors into the model. Internal validation was carried out by bootstrapping. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
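    The internal validation step described above, bootstrap resampling to assess a risk model's discrimination (the c-statistic), can be sketched in Python. This is an illustrative example on synthetic data, not the study's code; the cohort size, score distribution and outcome model are all invented:

```python
# Illustrative sketch (not the study's code): bootstrap internal validation
# of a risk model's discrimination (c-statistic), on synthetic data.
import random

random.seed(42)

def c_statistic(scores, labels):
    """Probability that a randomly chosen case outranks a randomly chosen non-case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (concordant + 0.5 * ties) / pairs

# Synthetic cohort: a continuous risk score weakly predictive of the outcome.
n = 300
scores = [random.random() for _ in range(n)]
labels = [1 if random.random() < 0.1 + 0.3 * s else 0 for s in scores]
apparent = c_statistic(scores, labels)

# Bootstrap: resample patients with replacement, recompute the c-statistic.
boot = []
for _ in range(100):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(c_statistic([scores[i] for i in idx], [labels[i] for i in idx]))
boot.sort()

print(f"apparent c-statistic: {apparent:.3f}")
print(f"bootstrap 95% interval: {boot[2]:.3f} to {boot[97]:.3f}")
```

    A c-statistic of 0.5 is no better than chance and 1.0 is perfect rank discrimination, which puts the model's reported 0·65 in context.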

    Mortality from gastrointestinal congenital anomalies at 264 hospitals in 74 low-income, middle-income, and high-income countries: a multicentre, international, prospective cohort study

    Background: Congenital anomalies are the fifth leading cause of mortality in children younger than 5 years globally. Many gastrointestinal congenital anomalies are fatal without timely access to neonatal surgical care, but few studies have been done on these conditions in low-income and middle-income countries (LMICs). We compared outcomes of the seven most common gastrointestinal congenital anomalies in low-income, middle-income, and high-income countries globally, and identified factors associated with mortality. // Methods: We did a multicentre, international prospective cohort study of patients younger than 16 years, presenting to hospital for the first time with oesophageal atresia, congenital diaphragmatic hernia, intestinal atresia, gastroschisis, exomphalos, anorectal malformation, and Hirschsprung's disease. Recruitment was of consecutive patients for a minimum of 1 month between October, 2018, and April, 2019. We collected data on patient demographics, clinical status, interventions, and outcomes using the REDCap platform. Patients were followed up for 30 days after primary intervention, or 30 days after admission if they did not receive an intervention. The primary outcome was all-cause, in-hospital mortality for all conditions combined and each condition individually, stratified by country income status. We did a complete case analysis. // Findings: We included 3849 patients with 3975 study conditions (560 with oesophageal atresia, 448 with congenital diaphragmatic hernia, 681 with intestinal atresia, 453 with gastroschisis, 325 with exomphalos, 991 with anorectal malformation, and 517 with Hirschsprung's disease) from 264 hospitals (89 in high-income countries, 166 in middle-income countries, and nine in low-income countries) in 74 countries. Of the 3849 patients, 2231 (58·0%) were male. Median gestational age at birth was 38 weeks (IQR 36–39) and median bodyweight at presentation was 2·8 kg (2·3–3·3). 
Mortality among all patients was 37 (39·8%) of 93 in low-income countries, 583 (20·4%) of 2860 in middle-income countries, and 50 (5·6%) of 896 in high-income countries (p<0·0001 between all country income groups). Gastroschisis had the greatest difference in mortality between country income strata (nine [90·0%] of ten in low-income countries, 97 [31·9%] of 304 in middle-income countries, and two [1·4%] of 139 in high-income countries; p≤0·0001 between all country income groups). Factors significantly associated with higher mortality for all patients combined included country income status (low-income vs high-income countries, risk ratio 2·78 [95% CI 1·88–4·11], p<0·0001; middle-income vs high-income countries, 2·11 [1·59–2·79], p<0·0001), sepsis at presentation (1·20 [1·04–1·40], p=0·016), higher American Society of Anesthesiologists (ASA) score at primary intervention (ASA 4–5 vs ASA 1–2, 1·82 [1·40–2·35], p<0·0001; ASA 3 vs ASA 1–2, 1·58 [1·30–1·92], p<0·0001), surgical safety checklist not used (1·39 [1·02–1·90], p=0·035), and ventilation or parenteral nutrition unavailable when needed (ventilation 1·96 [1·41–2·71], p=0·0001; parenteral nutrition 1·35 [1·05–1·74], p=0·018). Administration of parenteral nutrition (0·61 [0·47–0·79], p=0·0002) and use of a peripherally inserted central catheter (0·65 [0·50–0·86], p=0·0024) or percutaneous central line (0·69 [0·48–1·00], p=0·049) were associated with lower mortality. // Interpretation: Unacceptable differences in mortality exist for gastrointestinal congenital anomalies between low-income, middle-income, and high-income countries. Improving access to quality neonatal surgical care in LMICs will be vital to achieve Sustainable Development Goal 3.2 of ending preventable deaths in neonates and children younger than 5 years by 2030.

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.

    Malaria endemicity and co-infection with tissue-dwelling parasites in Sub-Saharan Africa: a review


    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain between March and June 2017. The main outcome was best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two‐thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely as men to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent versus 120 of 993, 12·1 per cent; relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut‐off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut‐off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and high rates of normal appendicectomy. Risk prediction models that identify adults at low risk of appendicitis, and so can support shared decision‐making, were identified.
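    The two metrics defined in the abstract, specificity among patients without appendicitis and the failure rate among patients classified as low risk, can be illustrated with a short Python sketch. The scores, outcomes and cut-off below are hypothetical, not data from the study:

```python
# Illustrative sketch (not the study's code): evaluating a risk-score cut-off
# by the two metrics the abstract defines, on made-up patient data.
def evaluate_cutoff(scores, has_appendicitis, cutoff):
    """Classify score <= cutoff as 'low risk'; return (specificity, failure_rate)."""
    low = [(s, y) for s, y in zip(scores, has_appendicitis) if s <= cutoff]
    negatives = sum(1 for y in has_appendicitis if not y)
    # Specificity: proportion of patients without appendicitis labelled low risk.
    specificity = sum(1 for _, y in low if not y) / negatives
    # Failure rate: proportion of 'low risk' patients who did have appendicitis.
    failure_rate = sum(1 for _, y in low if y) / len(low)
    return specificity, failure_rate

# Hypothetical risk scores and confirmed outcomes (1 = appendicitis):
scores = [2, 3, 5, 6, 8, 9, 10, 12, 13, 15]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
spec, fail = evaluate_cutoff(scores, outcomes, cutoff=8)
print(f"specificity {spec:.0%}, failure rate {fail:.0%}")
```

    In the study's terms, the best model is the one that maximizes specificity at a cut-off whose failure rate stays below 5 per cent; the toy cut-off above would fail that criterion.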

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of the GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent respectively; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation of at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.

    Adjusting soluble transferrin receptor concentrations for inflammation: Biomarkers Reflecting Inflammation and Nutritional Determinants of Anemia (BRINDA) project.

    Background: Iron deficiency is thought to be one of the most prevalent micronutrient deficiencies globally, but an accurate assessment in populations who are frequently exposed to infections is impeded by the inflammatory response, which causes iron-biomarker alterations. Objectives: We assessed the relation between soluble transferrin receptor (sTfR) concentrations and inflammation and malaria in preschool children (PSC) (age range: 6-59 mo) and women of reproductive age (WRA) (age range: 15-49 y), and investigated adjustment algorithms to account for these effects. Design: Cross-sectional data from the Biomarkers Reflecting Inflammation and Nutritional Determinants of Anemia (BRINDA) project, from 11,913 PSC in 11 surveys and 11,173 WRA in 7 surveys, were analyzed individually and combined with the use of a meta-analysis. The following 3 adjustment approaches were compared with estimated iron-deficient erythropoiesis (sTfR concentration >8.3 mg/L): 1) exclusion of individuals with C-reactive protein (CRP) concentrations >5 mg/L or α-1-acid glycoprotein (AGP) concentrations >1 g/L; 2) application of arithmetic correction factors; and 3) use of regression approaches. Results: The prevalence of elevated sTfR concentrations decreased incrementally as CRP and AGP deciles decreased for both PSC and WRA, but the effect was more pronounced for AGP than for CRP. Depending on the approach used to adjust for inflammation, the estimated prevalence of iron-deficient erythropoiesis decreased by 4.4-14.6 and 0.3-9.5 percentage points in PSC and WRA, respectively, compared with unadjusted values. The correction-factor approach yielded a more modest reduction in the estimated prevalence of iron-deficient erythropoiesis than did the regression approach. In most surveys, adjustment for malaria in addition to AGP did not significantly change the estimated prevalence of iron-deficient erythropoiesis. Conclusions: sTfR may be useful to assess iron-deficient erythropoiesis, but inflammation influences its interpretation, and adjustment of sTfR for inflammation and malaria should be considered. More research is warranted to evaluate the proposed approaches in different settings, but this study contributes to the evidence on how and when to adjust sTfR for inflammation and malaria.
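    The three adjustment approaches compared in the abstract can be sketched in Python. The sTfR, CRP and AGP cut-offs come from the abstract; the cohort values, correction factor and regression slope below are invented for illustration (BRINDA estimates such parameters from survey data):

```python
# Illustrative sketch (hypothetical values, not BRINDA's published code):
# three ways to adjust sTfR-based prevalence estimates for inflammation.
import math

STFR_CUTOFF = 8.3   # mg/L, iron-deficient erythropoiesis threshold (from abstract)
CRP_CUTOFF = 5.0    # mg/L (from abstract)
AGP_CUTOFF = 1.0    # g/L (from abstract)

# Hypothetical cohort: (sTfR mg/L, CRP mg/L, AGP g/L) per child.
children = [
    (9.1, 12.0, 1.4), (7.9, 1.0, 0.6), (10.2, 30.0, 2.1),
    (6.5, 0.5, 0.4), (8.8, 6.0, 1.2),
]

def prevalence(stfr_values):
    """Fraction of individuals above the sTfR cut-off."""
    return sum(1 for s in stfr_values if s > STFR_CUTOFF) / len(stfr_values)

# 1) Exclusion: drop individuals with elevated CRP or AGP.
kept = [s for s, crp, agp in children if crp <= CRP_CUTOFF and agp <= AGP_CUTOFF]
print("exclusion approach:", prevalence(kept))

# 2) Arithmetic correction factor applied to inflamed individuals
#    (the factor 0.9 is invented for illustration).
CF = 0.9
corrected = [s * CF if crp > CRP_CUTOFF or agp > AGP_CUTOFF else s
             for s, crp, agp in children]
print("correction-factor approach:", prevalence(corrected))

# 3) Regression approach: remove the estimated inflammation effect on the
#    log scale, relative to a low-AGP reference (slope 0.2 is invented).
SLOPE_AGP = 0.2
ref_agp = math.log(min(agp for _, _, agp in children))
adjusted = [math.exp(math.log(s) - SLOPE_AGP * (math.log(agp) - ref_agp))
            if math.log(agp) > ref_agp else s
            for s, _, agp in children]
print("regression approach:", prevalence(adjusted))
```

    As in the abstract, the correction-factor approach tends to shift fewer individuals below the cut-off than the regression approach, because it applies one uniform factor rather than scaling the adjustment with the degree of inflammation.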