119 research outputs found

    Can deficit irrigations be an optimum solution for increasing water productivity under arid conditions? A case study on wheat plants

    Water scarcity is a growing concern in many countries around the world, especially within arid and semi-arid zones. Accordingly, rationalizing irrigation water has become an obligation for achieving the sustainable development goals of these countries. One route is deficit irrigation, long thought to be an effective strategy to save water and improve water productivity. The current study evaluates the pros and cons of using 50 and 75 % of the irrigation requirements (IR) of wheat (deficit irrigations) versus 100 % IR, while precisely charting changes in wheat growth parameters, antioxidant enzymes in plant shoots, and the overall nutritional status of plants (NPK contents). Accordingly, a field experiment was conducted for two successive seasons, following a split-plot design in which deficit irrigations (two irrigations to achieve 50 % IR, three irrigations to attain 75 % IR, and four irrigations to fulfill 100 % IR) were placed in main plots while the four studied wheat cultivars were in subplots. The results indicate that deficit irrigation led to significant reductions in growth parameters and productivity of all wheat cultivars, especially at 50 % IR. It also decreased NPK contents within plant shoots while elevating their contents of proline and of the peroxidase and catalase enzymes. On the other hand, this type of irrigation decreased virtual water content (VWC, the amount of water used in producing one ton of wheat grains). The stress tolerance index (STI) and financial revenues per unit area were also assessed. The obtained values of grain productivity, STI, VWC and financial revenues were weighted via PCA analyses, then introduced into a novel model estimating the overall efficiency of deficit irrigation (ODEI), which decreased in the order 100 %IR > 75 %IR > 50 %IR. 
In conclusion, deficit irrigation is not deemed appropriate for rationalizing irrigation water while growing wheat on arid soils.
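The virtual water content metric defined above is a simple ratio, sketched below with entirely hypothetical per-treatment figures (chosen only so that, as the study reports, deficit irrigation lowers VWC while also lowering yield):

```python
# Illustrative sketch only -- none of these numbers are the study's data.

def virtual_water_content(water_m3: float, grain_yield_t: float) -> float:
    """Return m^3 of irrigation water used per ton of grain produced (VWC)."""
    return water_m3 / grain_yield_t

# Hypothetical per-treatment values: (water applied m^3/ha, grain yield t/ha).
treatments = {
    "100%IR": (4000.0, 5.0),   # full irrigation requirement
    "75%IR":  (3000.0, 4.2),   # moderate deficit
    "50%IR":  (2000.0, 3.0),   # severe deficit
}

for name, (water, grain) in treatments.items():
    print(f"{name}: VWC = {virtual_water_content(water, grain):.1f} m^3/t")
```

Under these assumed inputs the 50 %IR treatment uses the least water per ton despite its lower yield, which is the trade-off the ODEI model weighs against productivity, STI and revenue.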

    Global economic burden of unmet surgical need for appendicitis

    Background: There is a substantial gap in the provision of adequate surgical care in many low- and middle-income countries. This study aimed to identify the economic burden of unmet surgical need for the common condition of appendicitis. Methods: Data on the incidence of appendicitis from 170 countries were combined with two different approaches to estimate the number of patients who do not receive surgery: as a fixed proportion of the total unmet surgical need per country (approach 1); and based on country income status (approach 2). Indirect costs with current levels of access and local quality, and those if quality were at the standards of high-income countries, were estimated. A human capital approach was applied, focusing on the economic burden resulting from premature death and absenteeism. Results: Excess mortality was 4185 per 100 000 cases of appendicitis using approach 1 and 3448 per 100 000 using approach 2. The economic burden of continuing current levels of access and local quality was USD 92 492 million using approach 1 and USD 73 141 million using approach 2. The economic burden of not providing surgical care to the standards of high-income countries was USD 95 004 million using approach 1 and USD 75 666 million using approach 2. The largest share of these costs resulted from premature death (97.7 per cent) and lack of access (97.0 per cent), in contrast to lack of quality. Conclusion: For a comparatively non-complex emergency condition such as appendicitis, increasing access to care should be prioritized. Although improving quality of care should not be neglected, increasing provision of care at current standards could reduce societal costs substantially.
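The human capital approach named above typically values a premature death as the present value of output lost over the remaining productive years. A minimal sketch, with entirely hypothetical inputs (death counts, years lost, annual output, discount rate), might look like:

```python
# Hedged illustration of a human-capital calculation; all inputs are
# hypothetical and do not reproduce the study's estimates.

def premature_death_cost(excess_deaths: float,
                         productive_years_lost: int,
                         annual_output: float,
                         discount_rate: float = 0.03) -> float:
    """Present value of production lost to premature deaths."""
    pv_per_death = sum(
        annual_output / (1.0 + discount_rate) ** t
        for t in range(1, productive_years_lost + 1)
    )
    return excess_deaths * pv_per_death

# e.g. 1,000 excess deaths, 30 productive years lost each, USD 10,000/year
burden = premature_death_cost(1000, 30, 10_000)
print(f"Estimated burden: USD {burden:,.0f}")
```

Discounting is what makes deaths early in working life dominate such estimates, consistent with the abstract's finding that premature death accounts for 97.7 per cent of the burden.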

    Quality Assessment of Published Systematic Reviews in High Impact Cardiology Journals: Revisiting the Evidence Pyramid

    Objective: Systematic reviews are increasingly used as sources of evidence in clinical cardiology guidelines. In the present study, we aimed to assess the quality of published systematic reviews in high impact cardiology journals. Methods: We searched PubMed for systematic reviews published between 2010 and 2019 in the five general cardiology journals with the highest impact factor (according to Clarivate Analytics 2019). We extracted data on eligibility criteria, methodological characteristics, bias assessments, and sources of funding. Further, we assessed the quality of the retrieved reviews using the AMSTAR tool. Results: A total of 352 systematic reviews were assessed. The AMSTAR quality score was low or critically low in 71% (95% CI: 65.7–75.4) of the assessed reviews. Sixty-four reviews (18.2%, 95% CI: 14.5–22.6) registered/published their protocol. Only 221 reviews (62.8%, 95% CI: 57.6–67.7) reported adherence to the EQUATOR checklists, 208 reviews (58.4%, 95% CI: 53.9–64.1) assessed the risk of bias in the included studies, and 177 reviews (52.3%, 95% CI: 45.1–55.5) assessed the risk of publication bias in their primary outcome analysis. The primary outcome was statistically significant in 274 reviews (79.6%, 95% CI: 75.1–83.6) and showed statistical heterogeneity in 167 (48.5%, 95% CI: 43.3–53.8). The use and sources of external funding were not disclosed in 87 reviews (24.7%, 95% CI: 20.5–29.5). Data analysis showed that the existence of publication bias was significantly associated with statistical heterogeneity of the primary outcome, and that complex design, larger sample size, and higher AMSTAR quality score were associated with higher citation metrics. Conclusion: Our analysis uncovered widespread gaps in the conduct and reporting of systematic reviews in cardiology. 
These findings highlight the importance of rigorous editorial and peer-review policies in systematic review publishing, as well as the education of investigators and clinicians in the synthesis and interpretation of evidence.
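The abstract reports each proportion with a 95% confidence interval. One standard way to compute such an interval (the study's exact method is not stated, so this is only an assumption) is the Wilson score interval; the count 250/352 below is likewise an assumed value consistent with the reported 71%:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# e.g. an assumed 250 of 352 reviews rated low / critically low quality
lo, hi = wilson_ci(250, 352)
print(f"{250/352:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

For these inputs the interval is close to, though not identical to, the 65.7–75.4 reported above; small differences are expected if the authors used another interval method.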

    Antimicrobial resistance among migrants in Europe: a systematic review and meta-analysis

    BACKGROUND: Rates of antimicrobial resistance (AMR) are rising globally and there is concern that increased migration is contributing to the burden of antibiotic resistance in Europe. However, the effect of migration on the burden of AMR in Europe has not yet been comprehensively examined. Therefore, we did a systematic review and meta-analysis to identify and synthesise data for AMR carriage or infection in migrants to Europe to examine differences in patterns of AMR across migrant groups and in different settings. METHODS: For this systematic review and meta-analysis, we searched MEDLINE, Embase, PubMed, and Scopus with no language restrictions from Jan 1, 2000, to Jan 18, 2017, for primary data from observational studies reporting antibacterial resistance in common bacterial pathogens among migrants to 21 European Union-15 and European Economic Area countries. To be eligible for inclusion, studies had to report data on carriage or infection with laboratory-confirmed antibiotic-resistant organisms in migrant populations. We extracted data from eligible studies and assessed quality using piloted, standardised forms. We did not examine drug resistance in tuberculosis and excluded articles solely reporting on this parameter. We also excluded articles in which migrant status was determined by ethnicity, country of birth of participants' parents, or was not defined, and articles in which data were not disaggregated by migrant status. Outcomes were carriage of or infection with antibiotic-resistant organisms. We used random-effects models to calculate the pooled prevalence of each outcome. The study protocol is registered with PROSPERO, number CRD42016043681. FINDINGS: We identified 2274 articles, of which 23 observational studies reporting on antibiotic resistance in 2319 migrants were included. 
The pooled prevalence of any AMR carriage or AMR infection in migrants was 25·4% (95% CI 19·1–31·8; I²=98%), including meticillin-resistant Staphylococcus aureus (7·8%, 4·8–10·7; I²=92%) and antibiotic-resistant Gram-negative bacteria (27·2%, 17·6–36·8; I²=94%). The pooled prevalence of any AMR carriage or infection was higher in refugees and asylum seekers (33·0%, 18·3–47·6; I²=98%) than in other migrant groups (6·6%, 1·8–11·3; I²=92%). The pooled prevalence of antibiotic-resistant organisms was slightly higher in high-migrant community settings (33·1%, 11·1–55·1; I²=96%) than in migrants in hospitals (24·3%, 16·1–32·6; I²=98%). We did not find evidence of high rates of transmission of AMR from migrant to host populations. INTERPRETATION: Migrants are exposed to conditions favouring the emergence of drug resistance during transit and in host countries in Europe. Increased antibiotic resistance among refugees and asylum seekers and in high-migrant community settings (such as refugee camps and detention facilities) highlights the need for improved living conditions, access to health care, and initiatives to facilitate detection of and appropriate high-quality treatment for antibiotic-resistant infections during transit and in host countries. Protocols for the prevention and control of infection and for antibiotic surveillance need to be integrated in all aspects of health care, which should be accessible for all migrant groups, and should target determinants of AMR before, during, and after migration. FUNDING: UK National Institute for Health Research Imperial Biomedical Research Centre, Imperial College Healthcare Charity, the Wellcome Trust, and UK National Institute for Health Research Health Protection Research Unit in Healthcare-associated Infections and Antimicrobial Resistance at Imperial College London.
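The random-effects pooling named in the Methods is conventionally done with the DerSimonian-Laird estimator: fixed-effect inverse-variance weights are used to estimate the between-study variance τ², which is then added to each study's variance before re-weighting. A minimal sketch, with hypothetical study prevalences and sample sizes, is:

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling; the study
# prevalences and sizes below are hypothetical, not the review's data.

def dl_pooled_prevalence(prevalences, sizes):
    """Pool study prevalences with DerSimonian-Laird random-effects weights."""
    # within-study variance of a raw proportion: p(1-p)/n
    variances = [p * (1 - p) / n for p, n in zip(prevalences, sizes)]
    w_fixed = [1.0 / v for v in variances]
    mean_fixed = sum(w * p for w, p in zip(w_fixed, prevalences)) / sum(w_fixed)
    q = sum(w * (p - mean_fixed) ** 2 for w, p in zip(w_fixed, prevalences))
    df = len(prevalences) - 1
    c = sum(w_fixed) - sum(w * w for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    w_rand = [1.0 / (v + tau2) for v in variances]
    return sum(w * p for w, p in zip(w_rand, prevalences)) / sum(w_rand)

print(dl_pooled_prevalence([0.33, 0.07, 0.25], [120, 200, 150]))
```

Because τ² inflates every study's variance equally, random-effects weights are more even than fixed-effect weights, which matters here given the very high I² values reported.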

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of the differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment) who were identified retrospectively (with baseline and partial follow-up data collected from the medical records) and then followed prospectively for 0–18 months (such that the total time of follow-up was 24 months; data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). 
Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take into account differences in registry design and the impact of the recall bias and survivorship bias that are incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF: NCT01090362.
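The mortality figures above are incidence rates per 100 person-years with 95% confidence intervals. A common approximation (not necessarily the study's exact method) puts the interval on the log scale; the event and person-year counts below are assumed values chosen only so the point estimate matches the reported 3.04:

```python
import math

# Hedged sketch: event count and person-years are hypothetical inputs.

def rate_per_100py(events: int, person_years: float):
    """Rate per 100 person-years with an approximate log-normal 95% CI."""
    rate = 100.0 * events / person_years
    half = 1.96 / math.sqrt(events)
    return rate, rate * math.exp(-half), rate * math.exp(half)

rate, lo, hi = rate_per_100py(150, 4935.0)
print(f"{rate:.2f} (95% CI {lo:.2f} to {hi:.2f}) per 100 person-years")
```

For these inputs the interval is close to, though not identical to, the study's reported 3.04 (2.51 to 3.67); exact Poisson limits would differ slightly.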

    Elective Cancer Surgery in COVID-19-Free Surgical Pathways During the SARS-CoV-2 Pandemic: An International, Multicenter, Comparative Cohort Study.

    PURPOSE: As cancer surgery restarts after the first COVID-19 wave, health care providers urgently require data to determine where elective surgery is best performed. This study aimed to determine whether COVID-19-free surgical pathways were associated with lower postoperative pulmonary complication rates compared with hospitals with no defined pathway. PATIENTS AND METHODS: This international, multicenter cohort study included patients who underwent elective surgery for 10 solid cancer types without preoperative suspicion of SARS-CoV-2. Participating hospitals included patients from local emergence of SARS-CoV-2 until April 19, 2020. At the time of surgery, hospitals were defined as having a COVID-19-free surgical pathway (complete segregation of the operating theater, critical care, and inpatient ward areas) or no defined pathway (incomplete or no segregation, areas shared with patients with COVID-19). The primary outcome was 30-day postoperative pulmonary complications (pneumonia, acute respiratory distress syndrome, unexpected ventilation). RESULTS: Of 9,171 patients from 447 hospitals in 55 countries, 2,481 were operated on in COVID-19-free surgical pathways. Patients who underwent surgery within COVID-19-free surgical pathways were younger with fewer comorbidities than those in hospitals with no defined pathway but with similar proportions of major surgery. After adjustment, pulmonary complication rates were lower with COVID-19-free surgical pathways (2.2% v 4.9%; adjusted odds ratio [aOR], 0.62; 95% CI, 0.44 to 0.86). This was consistent in sensitivity analyses for low-risk patients (American Society of Anesthesiologists grade 1/2), propensity score-matched models, and patients with negative SARS-CoV-2 preoperative tests. The postoperative SARS-CoV-2 infection rate was also lower in COVID-19-free surgical pathways (2.1% v 3.6%; aOR, 0.53; 95% CI, 0.36 to 0.76). 
CONCLUSION: Within available resources, dedicated COVID-19-free surgical pathways should be established to provide safe elective cancer surgery during current and before future SARS-CoV-2 outbreaks.
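The effect sizes above are adjusted odds ratios from regression models, which cannot be reproduced from the abstract alone. A crude odds ratio from a 2×2 table, however, illustrates the direction of the comparison; the counts below are rough reconstructions from the reported percentages (2.2% of 2,481 ≈ 55 events; 4.9% of the remaining 6,690 ≈ 328), so the crude value differs from the adjusted aOR of 0.62:

```python
import math

# Hedged sketch: counts approximated from reported percentages, and a
# crude (unadjusted) odds ratio with a Woolf-type 95% CI.

def crude_odds_ratio(a: int, b: int, c: int, d: int):
    """Crude OR for the 2x2 table [[a, b], [c, d]] with Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return or_, or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se)

# pathway: 55 complications / 2426 without; no pathway: 328 / 6362
or_, lo, hi = crude_odds_ratio(55, 2426, 328, 6362)
print(f"crude OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The gap between this crude OR and the adjusted 0.62 reflects confounding (pathway patients were younger with fewer comorbidities), which is exactly why the study reports adjusted estimates.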


    Improved risk stratification of patients with atrial fibrillation: an integrated GARFIELD-AF tool for the prediction of mortality, stroke and bleed in patients with and without anticoagulation.

    OBJECTIVES: To provide an accurate, web-based tool for stratifying patients with atrial fibrillation to facilitate decisions on the potential benefits/risks of anticoagulation, based on mortality, stroke and bleeding risks. DESIGN: The new tool was developed using stepwise regression for all patients and then applied to lower-risk patients. C-statistics were compared with CHA2DS2-VASc using 30-fold cross-validation to control for overfitting. External validation was undertaken in an independent dataset, the Outcome Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). PARTICIPANTS: Data from 39 898 patients enrolled in the prospective GARFIELD-AF registry provided the basis for deriving and validating an integrated risk tool to predict stroke risk, mortality and bleeding risk. RESULTS: The discriminatory value of the GARFIELD-AF risk model was superior to CHA2DS2-VASc for patients with or without anticoagulation. C-statistics (95% CI) for all-cause mortality, ischaemic stroke/systemic embolism and haemorrhagic stroke/major bleeding (treated patients) were: 0.77 (0.76 to 0.78), 0.69 (0.67 to 0.71) and 0.66 (0.62 to 0.69), respectively, for the GARFIELD-AF risk models, and 0.66 (0.64 to 0.67), 0.64 (0.61 to 0.66) and 0.64 (0.61 to 0.68), respectively, for CHA2DS2-VASc (or HAS-BLED for bleeding). In very low to low risk patients (CHA2DS2-VASc 0 or 1 (men) and 1 or 2 (women)), the CHA2DS2-VASc and HAS-BLED (for bleeding) scores offered weak discriminatory value for mortality, stroke/systemic embolism and major bleeding. C-statistics for the GARFIELD-AF risk tool were 0.69 (0.64 to 0.75), 0.65 (0.56 to 0.73) and 0.60 (0.47 to 0.73) for each end point, respectively, versus 0.50 (0.45 to 0.55), 0.59 (0.50 to 0.67) and 0.55 (0.53 to 0.56) for CHA2DS2-VASc (or HAS-BLED for bleeding). 
Upon validation in the ORBIT-AF population, C-statistics showed that the GARFIELD-AF risk tool was effective for predicting 1-year all-cause mortality using both the full and the simplified model (C-statistics 0.75 (0.73 to 0.77) and 0.75 (0.73 to 0.77), respectively) and for predicting any stroke or systemic embolism over 1 year (C-statistic 0.68 (0.62 to 0.74)). CONCLUSIONS: Performance of the GARFIELD-AF risk tool was superior to CHA2DS2-VASc in predicting stroke and mortality, and superior to HAS-BLED for bleeding, overall and in lower-risk patients. The GARFIELD-AF tool has the potential for incorporation in routine electronic systems and, for the first time, permits simultaneous evaluation of ischaemic stroke, mortality and bleeding risks. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifiers: GARFIELD-AF (NCT01090362) and ORBIT-AF (NCT01165710).
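The C-statistic compared throughout this abstract is the probability that a randomly chosen patient who had the event received a higher predicted risk than a randomly chosen patient who did not. A minimal pairwise implementation (with made-up risk scores) is:

```python
# Illustrative concordance (C-statistic) calculation; scores are made up.

def c_statistic(scores_events, scores_nonevents):
    """Fraction of event/non-event pairs where the event case scored
    higher (ties count one half)."""
    concordant = 0.0
    for se in scores_events:
        for sn in scores_nonevents:
            if se > sn:
                concordant += 1.0
            elif se == sn:
                concordant += 0.5
    return concordant / (len(scores_events) * len(scores_nonevents))

print(c_statistic([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which is why the 0.50 reported for CHA2DS2-VASc mortality prediction in low-risk patients indicates essentially no discriminatory value.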

    Two-year outcomes of patients with newly diagnosed atrial fibrillation: results from GARFIELD-AF.

    AIMS: The relationship between outcomes and time after diagnosis for patients with non-valvular atrial fibrillation (NVAF) is poorly defined, especially beyond the first year. METHODS AND RESULTS: GARFIELD-AF is an ongoing, global observational study of adults with newly diagnosed NVAF. Two-year outcomes of 17 162 patients prospectively enrolled in GARFIELD-AF were analysed in light of baseline characteristics, risk profiles for stroke/systemic embolism (SE), and antithrombotic therapy. The mean (standard deviation) age was 69.8 (11.4) years, 43.8% were women, and the mean CHA2DS2-VASc score was 3.3 (1.6); 60.8% of patients were prescribed anticoagulant therapy with/without antiplatelet (AP) therapy, 27.4% AP monotherapy, and 11.8% no antithrombotic therapy. At 2-year follow-up, all-cause mortality, stroke/SE, and major bleeding had occurred at a rate (95% confidence interval) of 3.83 (3.62; 4.05), 1.25 (1.13; 1.38), and 0.70 (0.62; 0.81) per 100 person-years, respectively. Rates for all three major events were highest during the first 4 months. Congestive heart failure, acute coronary syndromes, sudden/unwitnessed death, malignancy, respiratory failure, and infection/sepsis accounted for 65% of all known causes of death, and strokes for <10%. Anticoagulant treatment was associated with a 35% lower risk of death. CONCLUSION: The most frequent of the three major outcome measures was death, whose most common causes are not known to be significantly influenced by anticoagulation. This suggests that a more comprehensive approach to the management of NVAF may be needed to improve outcome. This could include, in addition to anticoagulation, interventions targeting modifiable, cause-specific risk factors for death. CLINICAL TRIAL REGISTRATION: http://www.clinicaltrials.gov. Unique identifier: NCT01090362.
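The CHA2DS2-VASc score whose mean is reported above is computed from standard published weights (congestive heart failure 1, hypertension 1, age ≥75 2, diabetes 1, prior stroke/TIA 2, vascular disease 1, age 65-74 1, female sex 1); the example patient below is hypothetical:

```python
# CHA2DS2-VASc stroke-risk score using the standard published weights.
# The example patient is hypothetical.

def cha2ds2_vasc(chf: int, hypertension: int, age: int, diabetes: int,
                 stroke_tia: int, vascular: int, female: int) -> int:
    """Return the CHA2DS2-VASc score (0-9); risk-factor args are 0 or 1."""
    score = chf + hypertension + diabetes + vascular + 2 * stroke_tia + female
    if age >= 75:
        score += 2        # A2: age >= 75 scores two points
    elif age >= 65:
        score += 1        # A: age 65-74 scores one point
    return score

# e.g. a 70-year-old woman with hypertension and no other risk factors
print(cha2ds2_vasc(0, 1, 70, 0, 0, 0, 1))
```

With a cohort mean of 3.3, the typical GARFIELD-AF patient carries roughly three such points, a level at which guidelines generally favour anticoagulation.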