115 research outputs found
Climate change impact and adaptation on wheat yield, water use and water use efficiency at North Nile Delta
This is an accepted manuscript of an article published by Springer in Frontiers of Earth Science on 29/04/2020, available online: https://doi.org/10.1007/s11707-019-0806-4
The accepted version of the publication may differ from the final published version. © 2020, Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature. Investigation of climate change impacts on food security has become a global hot spot. Even so, efforts to mitigate these issues in arid regions have been insufficient. Thus, in this paper, further research is discussed based on data obtained from various crop and climate models. Two DSSAT crop models (CMs) (CERES-Wheat and N-Wheat) were calibrated with two wheat cultivars (Gemiza9 and Misr1). A baseline simulation (1981-2010) was compared with different scenarios of simulations using three Global Climate Models (GCMs) for the 2030s, 2050s and 2080s. Probable impacts of climate change were assessed using the GCMs and CMs under the high-emission Representative Concentration Pathway (RCP8.5). Results predicted decreased wheat grain yields by a mean of 8.7%, 11.4% and 13.2% in the 2030s, 2050s and 2080s, respectively, relative to the baseline yield. Negative impacts of climate change are probable, despite some uncertainties within the GCMs (i.e., 2.1%, 5.0% and 8.0%) and CMs (i.e., 2.2%, 6.0% and 9.2%). Changing the planting date by 5 or 10 days, earlier or later than the common practice, was assessed as a potentially effective adaptation option, which may partially offset the negative impacts of climate change. Delaying the sowing date by 10 days (from 20 November to 30 November) proved the optimum scenario and limited the climate-change-induced reduction in wheat yields to 5.2%, 6.8% and 8.5% in the 2030s, 2050s and 2080s, respectively, compared with the 20 November scenario. Planting 5 days earlier provided a smaller adaptation benefit, and planting 10 days earlier increased the yield reduction under projected climate change. The cultivar Misr1 was more resistant to rising temperature than Gemiza9.
Despite the negative impacts of projected climate change on wheat production, water use efficiency would slightly increase. The ensemble of multi-model estimated impacts and adaptation uncertainties of climate change can assist decision-makers in planning climate adaptation strategies.
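The ensemble-mean impact figures quoted above are percentage yield changes relative to the baseline simulation, averaged over the GCM × CM ensemble members. A minimal sketch of that calculation, using hypothetical per-member yields (the paper's per-model values are not reproduced here):

```python
# Ensemble-mean percent yield change vs. the baseline simulation.
# All yield values (t/ha) below are hypothetical, for illustration only.

baseline_yield = 7.0  # simulated 1981-2010 baseline yield, t/ha (hypothetical)

# Hypothetical 2050s yields for 3 GCMs x 2 crop models = 6 ensemble members
scenario_yields = [6.1, 6.3, 6.0, 6.4, 6.2, 6.2]

# Percent change of each member relative to the baseline
changes = [100.0 * (y - baseline_yield) / baseline_yield for y in scenario_yields]

# The statistic reported in the abstract: the mean across the ensemble
ensemble_mean_change = sum(changes) / len(changes)

print(round(ensemble_mean_change, 1))
```

The reported GCM and CM uncertainty figures would come from splitting the same per-member changes by climate model and by crop model before averaging.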
The state of ambient air quality in Pakistan—a review
Background and purpose: Pakistan, during the last decade, has seen an extensive escalation in population growth, urbanization, and industrialization, together with a great increase in motorization and energy use. As a result, a substantial rise has taken place in the types and number of emission sources of various air pollutants. However, due to the lack of air quality management capabilities, the country is suffering from deterioration of air quality. Evidence from various governmental organizations and international bodies has indicated that air pollution is a significant risk to the environment, quality of life, and health of the population. The Government has taken positive steps toward air quality management in the form of the Pakistan Clean Air Program and has recently established a small number of continuous monitoring stations. However, ambient air quality standards have not yet been established. This paper reviews the available data on the criteria air pollutants: particulate matter (PM), sulfur dioxide, ozone, carbon monoxide, nitrogen dioxide, and lead. Methods: Air pollution studies in Pakistan published in both scientific journals and by the Government have been reviewed and the reported concentrations of PM, SO2, O3, CO, NO2, and Pb collated. A comparison of the levels of these air pollutants with the World Health Organization air quality guidelines was carried out. Results: Particulate matter was the most serious air pollutant in the country. NO2 emerged as the second-highest-risk pollutant. The reported levels of PM, SO2, CO, NO2, and Pb were many times higher than the World Health Organization air quality guidelines. Only O3 concentrations were below the guidelines. Conclusions: The current state of air quality calls for immediate action. The establishment of ambient air quality standards, an extension of the continuous monitoring sites, and the development of emission control strategies are essential.
© Springer-Verlag 2009
Health sector spending and spending on HIV/AIDS, tuberculosis, and malaria, and development assistance for health: progress towards Sustainable Development Goal 3
Background: Sustainable Development Goal (SDG) 3 aims to “ensure healthy lives and promote well-being for all at all
ages”. While a substantial effort has been made to quantify progress towards SDG3, less research has focused on
tracking spending towards this goal. We used spending estimates to measure progress in financing the priority areas
of SDG3, examine the association between outcomes and financing, and identify where resource gains are most
needed to achieve the SDG3 indicators for which data are available.
Methods: We estimated domestic health spending, disaggregated by source (government, out-of-pocket, and prepaid
private) from 1995 to 2017 for 195 countries and territories. For disease-specific health spending, we estimated
spending for HIV/AIDS and tuberculosis for 135 low-income and middle-income countries, and malaria in
106 malaria-endemic countries, from 2000 to 2017. We also estimated development assistance for health (DAH) from
1990 to 2019, by source, disbursing development agency, recipient, and health focus area, including DAH for
pandemic preparedness. Finally, we estimated future health spending for 195 countries and territories from 2018 until
2030. We report all spending estimates in inflation-adjusted 2019 US$.
Findings: Global health spending reached $7·9 trillion (95% uncertainty interval 7·8–8·0) in 2017 and is expected to rise further by 2030. In 2017, spending on HIV/AIDS was $20·2 billion
(17·0–25·0) and spending on malaria in malaria-endemic countries was $5·1 billion (4·9–5·4). Of development assistance for health (DAH), $374 million was provided
for pandemic preparedness, less than 1% of DAH. Although spending has increased across HIV/AIDS, tuberculosis,
and malaria since 2015, spending has not increased in all countries, and outcomes in terms of prevalence, incidence,
and per-capita spending have been mixed. The proportion of health spending from pooled sources is expected to
increase from 81·6% (81·6–81·7) in 2015 to 83·1% (82·8–83·3) in 2030.
Interpretation: Health spending on SDG3 priority areas has increased, but not in all countries, and progress towards
meeting the SDG3 targets has been mixed and has varied by country and by target. The evidence on the scale-up of
spending and improvements in health outcomes suggest a nuanced relationship, such that increases in spending do
not always result in improvements in outcomes. Although countries will probably need more resources to achieve
SDG3, other constraints in the broader health system such as inefficient allocation of resources across interventions
and populations, weak governance systems, human resource shortages, and drug shortages, will also need to be
addressed.
Funding: The Bill & Melinda Gates Foundation.
Global, regional, and national burden of traumatic brain injury and spinal cord injury, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016.
Traumatic brain injury (TBI) and spinal cord injury (SCI) are increasingly recognised as global health priorities in view of the preventability of most injuries and the complex and expensive medical care they necessitate. We aimed to measure the incidence, prevalence, and years of life lived with disability (YLDs) for TBI and SCI from all causes of injury in every country, to describe how these measures have changed between 1990 and 2016, and to estimate the proportion of TBI and SCI cases caused by different types of injury. METHODS: We used results from the Global Burden of Diseases, Injuries, and Risk Factors (GBD) Study 2016 to measure the global, regional, and national burden of TBI and SCI by age and sex. We measured the incidence and prevalence of all causes of injury requiring medical care in inpatient and outpatient records, literature studies, and survey data. By use of clinical record data, we estimated the proportion of each cause of injury that required medical care that would result in TBI or SCI being considered as the nature of injury. We used literature studies to establish standardised mortality ratios and applied differential equations to convert incidence to prevalence of long-term disability. Finally, we applied GBD disability weights to calculate YLDs. We used a Bayesian meta-regression tool for epidemiological modelling, used cause-specific mortality rates for non-fatal estimation, and adjusted our results for disability experienced with comorbid conditions. We also analysed results on the basis of the Socio-demographic Index, a compound measure of income per capita, education, and fertility. FINDINGS: In 2016, there were 27·08 million (95% uncertainty interval [UI] 24·30-30·30 million) new cases of TBI and 0·93 million (0·78-1·16 million) new cases of SCI, with age-standardised incidence rates of 369 (331-412) per 100 000 population for TBI and 13 (11-16) per 100 000 for SCI. 
In 2016, the number of prevalent cases of TBI was 55·50 million (53·40-57·62 million) and of SCI was 27·04 million (24·98-30·15 million). From 1990 to 2016, the age-standardised prevalence of TBI increased by 8·4% (95% UI 7·7 to 9·2), whereas that of SCI did not change significantly (-0·2% [-2·1 to 2·7]). Age-standardised incidence rates increased by 3·6% (1·8 to 5·5) for TBI, but did not change significantly for SCI (-3·6% [-7·4 to 4·0]). TBI caused 8·1 million (95% UI 6·0-10·4 million) YLDs and SCI caused 9·5 million (6·7-12·4 million) YLDs in 2016, corresponding to age-standardised rates of 111 (82-141) per 100 000 for TBI and 130 (90-170) per 100 000 for SCI. Falls and road injuries were the leading causes of new cases of TBI and SCI in most regions. INTERPRETATION: TBI and SCI constitute a considerable portion of the global injury burden and are caused primarily by falls and road injuries. The increase in incidence of TBI over time might continue in view of increases in population density, population ageing, and increasing use of motor vehicles, motorcycles, and bicycles. The number of individuals living with SCI is expected to increase in view of population growth, which is concerning because of the specialised care that people with SCI can require. Our study was limited by data sparsity in some regions, and it will be important to invest greater resources in collection of data for TBI and SCI to improve the accuracy of future assessments
Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study
Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world.
Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231.
Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282 patients) HDI (p < 0·001). The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001).
Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication
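The unadjusted disparity behind these findings can be reproduced from the raw counts in the abstract. A minimal sketch (the reported adjusted odds ratio of 1·60 additionally controls for risk factors, which is not reproduced here):

```python
# Unadjusted odds ratio of 30-day SSI, low-HDI vs high-HDI countries,
# from the raw counts reported in the findings: 298 of 1282 patients
# (low HDI) vs 691 of 7339 patients (high HDI).

def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds of the event in group A divided by the odds in group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

or_low_vs_high = odds_ratio(298, 1282, 691, 7339)
print(round(or_low_vs_high, 2))  # roughly three-fold unadjusted odds
```

That the adjusted estimate (1·60) is much smaller than the unadjusted one illustrates how much of the raw difference is explained by case-mix rather than HDI itself.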
Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.
BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy was still associated with fewer overall complications (OR 0.23 95% CI 0.11-0.44) and SSI (OR 0.21 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112
Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy
Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
Worldwide trends in body-mass index, underweight, overweight, and obesity from 1975 to 2016: a pooled analysis of 2416 population-based measurement studies in 128·9 million children, adolescents, and adults.
BACKGROUND: Underweight, overweight, and obesity in childhood and adolescence are associated with adverse health consequences throughout the life-course. Our aim was to estimate worldwide trends in mean body-mass index (BMI) and a comprehensive set of BMI categories that cover underweight to obesity in children and adolescents, and to compare trends with those of adults. METHODS: We pooled 2416 population-based studies with measurements of height and weight on 128·9 million participants aged 5 years and older, including 31·5 million aged 5-19 years. We used a Bayesian hierarchical model to estimate trends from 1975 to 2016 in 200 countries for mean BMI and for prevalence of BMI in the following categories for children and adolescents aged 5-19 years: more than 2 SD below the median of the WHO growth reference for children and adolescents (referred to as moderate and severe underweight hereafter), 2 SD to more than 1 SD below the median (mild underweight), 1 SD below the median to 1 SD above the median (healthy weight), more than 1 SD to 2 SD above the median (overweight but not obese), and more than 2 SD above the median (obesity). FINDINGS: Regional change in age-standardised mean BMI in girls from 1975 to 2016 ranged from virtually no change (-0·01 kg/m2 per decade; 95% credible interval -0·42 to 0·39, posterior probability [PP] of the observed decrease being a true decrease=0·5098) in eastern Europe to an increase of 1·00 kg/m2 per decade (0·69-1·35, PP>0·9999) in central Latin America and an increase of 0·95 kg/m2 per decade (0·64-1·25, PP>0·9999) in Polynesia and Micronesia. The range for boys was from a non-significant increase of 0·09 kg/m2 per decade (-0·33 to 0·49, PP=0·6926) in eastern Europe to an increase of 0·77 kg/m2 per decade (0·50-1·06, PP>0·9999) in Polynesia and Micronesia. 
Trends in mean BMI have recently flattened in northwestern Europe and the high-income English-speaking and Asia-Pacific regions for both sexes, southwestern Europe for boys, and central and Andean Latin America for girls. By contrast, the rise in BMI has accelerated in east and south Asia for both sexes, and southeast Asia for boys. Global age-standardised prevalence of obesity increased from 0·7% (0·4-1·2) in 1975 to 5·6% (4·8-6·5) in 2016 in girls, and from 0·9% (0·5-1·3) in 1975 to 7·8% (6·7-9·1) in 2016 in boys; the prevalence of moderate and severe underweight decreased from 9·2% (6·0-12·9) in 1975 to 8·4% (6·8-10·1) in 2016 in girls and from 14·8% (10·4-19·5) in 1975 to 12·4% (10·3-14·5) in 2016 in boys. Prevalence of moderate and severe underweight was highest in India, at 22·7% (16·7-29·6) among girls and 30·7% (23·5-38·0) among boys. Prevalence of obesity was more than 30% in girls in Nauru, the Cook Islands, and Palau; and boys in the Cook Islands, Nauru, Palau, Niue, and American Samoa in 2016. Prevalence of obesity was about 20% or more in several countries in Polynesia and Micronesia, the Middle East and north Africa, the Caribbean, and the USA. In 2016, 75 (44-117) million girls and 117 (70-178) million boys worldwide were moderately or severely underweight. In the same year, 50 (24-89) million girls and 74 (39-125) million boys worldwide were obese. INTERPRETATION: The rising trends in children's and adolescents' BMI have plateaued in many high-income countries, albeit at high levels, but have accelerated in parts of Asia, with trends no longer correlated with those of adults. FUNDING: Wellcome Trust, AstraZeneca Young Health Programme
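The SD-band definitions in the methods map directly onto WHO growth-reference z-scores. A minimal sketch of the classification used for children and adolescents aged 5-19 years (handling of values exactly at ±1 and ±2 SD follows the abstract's "more than" wording, which is an interpretive assumption):

```python
# Classify a BMI-for-age z-score (SDs from the median of the WHO growth
# reference for children and adolescents) into the abstract's five categories.

def bmi_category(z: float) -> str:
    if z < -2:
        return "moderate and severe underweight"  # more than 2 SD below median
    if z < -1:
        return "mild underweight"                 # 2 SD to more than 1 SD below
    if z <= 1:
        return "healthy weight"                   # within 1 SD of the median
    if z <= 2:
        return "overweight but not obese"         # more than 1 SD to 2 SD above
    return "obesity"                              # more than 2 SD above median

print(bmi_category(0.4))   # healthy weight
print(bmi_category(2.3))   # obesity
```

Prevalence of each category in the study is then the share of the population falling into each band, estimated by the Bayesian hierarchical model rather than by direct classification of every individual.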
The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study
AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after treatment decision, in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only. The impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male, and comorbid, to have a higher body mass index, and to have rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), which was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
Determinants of recovery from post-COVID-19 dyspnoea: analysis of UK prospective cohorts of hospitalised COVID-19 patients and community-based controls
Background The risk factors for recovery from COVID-19 dyspnoea are poorly understood. We investigated determinants of recovery from dyspnoea in adults with COVID-19 and compared these to determinants of recovery from non-COVID-19 dyspnoea. Methods We used data from two prospective cohort studies: PHOSP-COVID (patients hospitalised between March 2020 and April 2021 with COVID-19) and COVIDENCE UK (community cohort studied over the same time period). PHOSP-COVID data were collected during hospitalisation and at 5-month and 1-year follow-up visits. COVIDENCE UK data were obtained through baseline and monthly online questionnaires. Dyspnoea was measured in both cohorts with the Medical Research Council Dyspnoea Scale. We used multivariable logistic regression to identify determinants associated with a reduction in dyspnoea between 5-month and 1-year follow-up. Findings We included 990 PHOSP-COVID and 3309 COVIDENCE UK participants. We observed higher odds of improvement between 5-month and 1-year follow-up among PHOSP-COVID participants who were younger (odds ratio 1.02 per year, 95% CI 1.01–1.03), male (1.54, 1.16–2.04), neither obese nor severely obese (1.82, 1.06–3.13 and 4.19, 2.14–8.19, respectively), had no pre-existing anxiety or depression (1.56, 1.09–2.22) or cardiovascular disease (1.33, 1.00–1.79), and shorter hospital admission (1.01 per day, 1.00–1.02). Similar associations were found in those recovering from non-COVID-19 dyspnoea, excluding age (and length of hospital admission). Interpretation Factors associated with dyspnoea recovery at 1-year post-discharge among patients hospitalised with COVID-19 were similar to those among community controls without COVID-19. Funding PHOSP-COVID is supported by a grant from the MRC-UK Research and Innovation and the Department of Health and Social Care through the National Institute for Health Research (NIHR) rapid response panel to tackle COVID-19. 
The views expressed in the publication are those of the author(s) and not necessarily those of the National Health Service (NHS), the NIHR or the Department of Health and Social Care. COVIDENCE UK is supported by the UK Research and Innovation, the National Institute for Health Research, and Barts Charity. The views expressed are those of the authors and not necessarily those of the funders