
    Relating circulating thyroid hormone concentrations to serum interleukins-6 and -10 in association with non-thyroidal illnesses including chronic renal insufficiency

    Background: Because cytokines, including interleukins (IL), may play a role in the pathogenesis of systemic non-thyroidal illnesses (NTI) and in the frequently associated alterations in thyroid hormone (TH) concentrations that constitute the euthyroid sick syndrome (ESS), we aimed to elucidate the possible relation between IL-6 and IL-10 and any documented ESS in a cohort of patients with NTI. Methods: Sixty patients and twenty healthy volunteers were recruited. The patients were subdivided into three subgroups of 20 according to their underlying NTI: chronic renal insufficiency (CRI), congestive heart failure (CHF), and intensive-care patients with myocardial infarction (MI). Circulating serum levels of IL-6 and IL-10, thyroid-stimulating hormone (TSH), and total T4 and T3 were determined. Results: In the whole patient group, T3 and T4 levels were significantly lower than in control subjects (0.938 ± 0.477 vs 1.345 ± 0.44 nmol/L, p = 0.001 and 47.9 ± 28.41 vs 108 ± 19.49 nmol/L, p < 0.0001, respectively), while TSH was normal (1.08 ± 0.518 μIU/L). IL-6 was substantially higher than in controls (105.18 ± 72.01 vs 3.35 ± 1.18 ng/L, p < 0.00001) and correlated negatively with both T3 and T4 (r = -0.620, p < 0.0001 and r = -0.267, p < 0.001, respectively). IL-10 was similarly elevated (74.13 ± 52.99 vs 2.64 ± 0.92 ng/ml, p < 0.00001) and correlated negatively with T3 (r = -0.512, p < 0.0001) but not with T4. Interestingly, the two interleukins correlated positively with each other (r = 0.770, p < 0.001). Moreover, IL-6 (R² = 0.338, p = 0.001), but not IL-10, was a predictor of low T3 levels, with only borderline significance for T4 (R² = 0.082, p = 0.071). On subgroup analysis, the proportion of patients with subnormal T3, T4, and TSH levels was highest among MI patients (70%, 70%, and 72%, respectively), who also showed the highest IL-6 and IL-10 concentrations (192.5 ± 45.1 ng/L and 122.95 ± 46.1 ng/L, respectively) compared with CHF (82.95 ± 28.9 ng/L and 69.05 ± 44.0 ng/L) and CRI patients (40.05 ± 28.9 ng/L and 30.4 ± 10.6 ng/L). Surprisingly, CRI patients showed the least disturbance in IL-6 and IL-10 despite subnormal T3, T4, and TSH in a higher proportion of them than of CHF patients (40%, 45%, and 26% vs 35%, 25%, and 18%, respectively). Conclusion: The high prevalence of ESS we detected in NTI, including CRI, may be linked to IL-6 and IL-10 alterations. Furthermore, perturbation of IL-6, but not IL-10, might be involved in ESS pathogenesis, although it is not the only key player, as suggested by our findings in CRI.
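The correlation and regression analyses reported above can be illustrated with a short sketch. This is not the authors' code: the IL-6 and T3 values below are synthetic stand-ins, with only the patient count taken from the abstract.

```python
# Minimal sketch of a Pearson correlation and simple linear regression, as reported
# in the abstract above. All data are synthetic; only n = 60 comes from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60                                                   # patient count from the abstract
il6 = rng.lognormal(mean=4.0, sigma=0.7, size=n)         # serum IL-6 (ng/L), synthetic
t3 = 1.3 - 0.004 * il6 + rng.normal(0, 0.25, n)          # total T3 (nmol/L), synthetic

# Pearson correlation between IL-6 and T3 (the paper reports r = -0.620, p < 0.0001)
r, p = stats.pearsonr(il6, t3)
print(f"IL-6 vs T3: r = {r:.3f}, p = {p:.4g}")

# Simple regression of T3 on IL-6; R^2 corresponds to the reported value of 0.338
slope, intercept, r_value, p_value, stderr = stats.linregress(il6, t3)
print(f"R^2 = {r_value**2:.3f}, p = {p_value:.4g}")
```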

    Inability to predict postpartum hemorrhage: insights from Egyptian intervention data

    Background: Knowledge of how well primary postpartum hemorrhage (PPH) can be predicted can help policy makers and health providers design delivery protocols and PPH case management. The purpose of this paper is to identify risk factors for primary PPH among women expecting singleton vaginal deliveries in Egypt and to determine the predictive probabilities of those risk factors. Methods: In a prospective cohort study, 2510 pregnant women were recruited over a six-month period in Egypt in 2004. PPH was defined as blood loss ≥ 500 ml. Blood loss was measured every 20 minutes for the first 4 hours after delivery using a calibrated under-the-buttocks drape. Using all variables available in the patients' charts, we divided them into ante-partum and intra-partum factors. We employed logistic regression to analyze socio-demographic characteristics, medical and past obstetric history, and labor and delivery outcomes as potential PPH risk factors. Post-model predicted probabilities were estimated using the identified risk factors. Results: There were 93 cases of primary PPH. In multivariate models, ante-partum hemoglobin, history of previous PPH, labor augmentation, and prolonged labor were significantly associated with PPH. Post-model probability estimates showed that, even among women with three or more risk factors, PPH could be predicted in only 10% of cases. Conclusions: The predictive probability of ante-partum and intra-partum risk factors for PPH is very low. Preventive measures against PPH are therefore recommended for all women.
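A hedged sketch of the modelling approach described above: a multivariable logistic regression of PPH on candidate risk factors, followed by post-model predicted probabilities. The variable names, coefficients and data are illustrative assumptions, not the study dataset.

```python
# Sketch of a logistic PPH risk model with post-model predicted probabilities.
# Synthetic data; variable names and effect sizes are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2510                                             # cohort size from the abstract
df = pd.DataFrame({
    "hemoglobin": rng.normal(11.0, 1.5, n),          # ante-partum haemoglobin (g/dL)
    "prev_pph":   rng.binomial(1, 0.05, n),          # history of previous PPH
    "augmented":  rng.binomial(1, 0.30, n),          # labour augmentation
    "prolonged":  rng.binomial(1, 0.10, n),          # prolonged labour
})
logit = -3.0 - 0.2 * (df.hemoglobin - 11) + 1.0 * df.prev_pph \
        + 0.6 * df.augmented + 0.8 * df.prolonged
df["pph"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("pph ~ hemoglobin + prev_pph + augmented + prolonged", data=df).fit()
print(model.summary())

# Post-model predicted probability of PPH among women carrying three or more risk factors
df["pred"] = model.predict(df)
df["low_hb"] = (df.hemoglobin < 10).astype(int)      # crude ante-partum anaemia indicator
risk_count = df[["low_hb", "prev_pph", "augmented", "prolonged"]].sum(axis=1)
high_risk = df[risk_count >= 3]
print("Mean predicted PPH probability with >=3 risk factors:",
      round(high_risk.pred.mean(), 3))
```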

    Efficacy and Safety of Artemether in the Treatment of Chronic Fascioliasis in Egypt: Exploratory Phase-2 Trials

    Fasciola hepatica and F. gigantica are two liver flukes that parasitize large herbivorous mammals (e.g., sheep and cattle), as well as humans. A single drug is available to treat infections with Fasciola flukes, namely triclabendazole. Recently, laboratory studies and clinical trials in sheep and humans suffering from acute fascioliasis have shown that artesunate and artemether (drugs that are widely used against malaria) also show activity against fascioliasis. Hence, we were motivated to assess the efficacy and safety of oral artemether in patients with chronic Fasciola infections. The study was carried out in Egypt, and artemether was administered according to two different malaria treatment regimens. Cure rates observed with 6×80 mg and 3×200 mg artemether were 35% and 6%, respectively. In addition, high efficacy was observed when triclabendazole, the current drug of choice against human fascioliasis, was administered to patients remaining Fasciola positive following artemether treatment. In conclusion, monotherapy with artemether does not represent an alternative to triclabendazole against fascioliasis, but its role in combination chemotherapy regimens remains to be investigated.
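For illustration, the reported cure rates can be examined with exact binomial confidence intervals and Fisher's exact test. The group sizes below are assumptions chosen only to reproduce the quoted 35% and 6% figures; they are not the trial's actual arm sizes.

```python
# Illustrative calculation: cure rates with exact binomial CIs and a Fisher exact
# comparison of the two regimens. Counts are assumed, not the trial data.
from scipy.stats import binomtest, fisher_exact

cured_a, n_a = 6, 17     # 6x80 mg regimen: ~35% cure rate (assumed counts)
cured_b, n_b = 1, 18     # 3x200 mg regimen: ~6% cure rate (assumed counts)

for label, cured, n in [("6x80 mg", cured_a, n_a), ("3x200 mg", cured_b, n_b)]:
    ci = binomtest(cured, n).proportion_ci(confidence_level=0.95)
    print(f"{label}: cure rate {cured/n:.0%} (95% CI {ci.low:.0%}-{ci.high:.0%})")

# Fisher's exact test comparing the two regimens
odds_ratio, p = fisher_exact([[cured_a, n_a - cured_a], [cured_b, n_b - cured_b]])
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p:.3f}")
```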

    Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.

    BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy was still associated with fewer overall complications (OR 0.23 95% CI 0.11-0.44) and SSI (OR 0.21 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112
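A minimal sketch of propensity-score matching of laparoscopic against open appendectomy, the technique used for the low-/middle-HDI comparison above. The covariates, effect sizes and data are synthetic assumptions for illustration only.

```python
# Sketch of 1:1 nearest-neighbour propensity-score matching on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "age":        rng.normal(35, 15, n),
    "perforated": rng.binomial(1, 0.2, n),
})
# Treatment assignment (laparoscopy) depends on covariates -> confounding by indication
p_lap = 1 / (1 + np.exp(-(-0.5 + 0.01 * (35 - df.age) - 0.8 * df.perforated)))
df["lap"] = rng.binomial(1, p_lap)
# Outcome: surgical site infection, less likely after laparoscopy
p_ssi = 1 / (1 + np.exp(-(-2.5 + 0.02 * (df.age - 35) + 1.0 * df.perforated - 1.2 * df.lap)))
df["ssi"] = rng.binomial(1, p_ssi)

# 1. Estimate propensity scores
ps_model = LogisticRegression().fit(df[["age", "perforated"]], df.lap)
df["ps"] = ps_model.predict_proba(df[["age", "perforated"]])[:, 1]

# 2. Match each laparoscopic case to its nearest open case on the propensity score
treated, control = df[df.lap == 1], df[df.lap == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare SSI rates in the matched sample (crude OR, no variance correction)
a, b = treated.ssi.sum(), (1 - treated.ssi).sum()
c, d = matched_control.ssi.sum(), (1 - matched_control.ssi).sum()
print("Matched OR for SSI (laparoscopy vs open):", round((a * d) / (b * c), 2))
```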

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
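A hedged sketch of the type of analysis reported above: logistic regression of 30-day mortality on reported checklist use with risk adjustment, plus a simple non-parametric bootstrap of the adjusted odds ratio. Covariates, coefficients and data are assumptions, not the pooled dataset.

```python
# Sketch of an adjusted mortality model with a bootstrapped odds ratio, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4843                                         # emergency laparotomies in the pooled cohort
df = pd.DataFrame({
    "checklist": rng.binomial(1, 0.7, n),        # reported WHO checklist use
    "asa3plus":  rng.binomial(1, 0.4, n),        # crude stand-in for patient risk
    "high_hdi":  rng.binomial(1, 0.6, n),        # country development group (binary here)
})
logit = -2.0 - 0.5 * df.checklist + 1.0 * df.asa3plus - 0.4 * df.high_hdi
df["died30"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("died30 ~ checklist + asa3plus + high_hdi", data=df).fit(disp=False)
print("Adjusted OR for checklist use:", np.exp(fit.params["checklist"]).round(2))

# Non-parametric bootstrap of the adjusted odds ratio
ors = []
for _ in range(200):
    sample = df.sample(n=len(df), replace=True)
    b = smf.logit("died30 ~ checklist + asa3plus + high_hdi", data=sample).fit(disp=False)
    ors.append(np.exp(b.params["checklist"]))
print("Bootstrap 95% CI:", np.percentile(ors, [2.5, 97.5]).round(2))
```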

    The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study

    AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision, in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only. The impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male and more comorbid, to have a higher body mass index, and to have rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
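A sketch of the sensitivity analysis described above, estimating the adjusted odds ratio for complete (R0) resection at successively longer delay thresholds. All data, covariates and effects are synthetic assumptions used only to illustrate the approach.

```python
# Sketch: adjusted OR for R0 resection at several delay thresholds, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 4304                                            # operated patients without neoadjuvant therapy
df = pd.DataFrame({
    "weeks_to_surgery": rng.gamma(shape=2.0, scale=2.0, size=n),
    "age":              rng.normal(68, 11, n),
    "emergency":        rng.binomial(1, 0.15, n),
})
# R0 resection depends on emergency presentation and age, not on delay itself
p_r0 = 1 / (1 + np.exp(-(2.6 - 0.7 * df.emergency - 0.01 * (df.age - 68))))
df["r0"] = rng.binomial(1, p_r0)

for threshold in (4, 8, 12):                        # delay thresholds in weeks
    df["delayed"] = (df.weeks_to_surgery > threshold).astype(int)
    fit = smf.logit("r0 ~ delayed + age + emergency", data=df).fit(disp=False)
    or_, ci = np.exp(fit.params["delayed"]), np.exp(fit.conf_int().loc["delayed"])
    print(f">{threshold} weeks: OR {or_:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```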

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone
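The paper fitted a multilevel, multivariable logistic regression of colostomy formation versus primary anastomosis. As a simple stand-in, the sketch below clusters observations by hospital with a GEE logistic model on synthetic data; the variable names and effect sizes are assumptions.

```python
# Sketch: hospital-clustered logistic model of end colostomy vs primary anastomosis.
# GEE is used here as a simple stand-in for the paper's multilevel model; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_hosp, per_hosp = 100, 16
hosp = np.repeat(np.arange(n_hosp), per_hosp)
hosp_effect = rng.normal(0, 0.5, n_hosp)[hosp]                        # between-hospital variation
df = pd.DataFrame({
    "hospital":   hosp,
    "low_hdi":    np.repeat(rng.binomial(1, 0.2, n_hosp), per_hosp),  # country-level factor
    "emergency":  rng.binomial(1, 0.35, n_hosp * per_hosp),
    "perforated": rng.binomial(1, 0.4, n_hosp * per_hosp),
})
logit = -1.5 + 1.2 * df.low_hdi + 1.4 * df.emergency + 1.4 * df.perforated + hosp_effect
df["colostomy"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.gee("colostomy ~ low_hdi + emergency + perforated", groups="hospital",
              data=df, family=sm.families.Binomial()).fit()
print(np.exp(fit.params).round(2))    # odds ratios for end colostomy vs primary anastomosis
```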