
    Dynamics of Charcoal Alteration in a Tropical Biome: A Biochar-Based Study

    Pyrogenic carbon (PyC) is a polyaromatic residue of the incomplete combustion of biomass or fossil fuels. There is growing recognition that PyC forms an important part of carbon budgets, due to production rates of 116–385 Tg C yr⁻¹ and the size and ubiquity of PyC stocks in global carbon reservoirs. At least a proportion of PyC exists in a highly recalcitrant chemical form, raising the prospect of long-term carbon sequestration through soil amendment with “biochar,” which is generally produced with the aim of making a particularly recalcitrant form of PyC. However, there is growing evidence that some PyC, including biochar, can be both physically and chemically altered and degraded upon exposure to the environment over annual timescales, yet there is a lack of information concerning the mechanisms and determining factors of degradation. Here, we investigate three main factors: production temperature, feedstock composition, and the characteristics of the environment to which the material is exposed (e.g., pH, organic matter composition, oxygen availability) by analysis of biochar samples in a litterbag experiment before and after a year-long field study in the tropical rainforests of northeast Australia. We find that non-lignocellulosic feedstock has lower aromaticity, plus lower O/C and H/C ratios for a given temperature, and consequently lower carbon sequestration potential. The rate at which samples are altered is production temperature-dependent; however, even in the highest temperature samples, loss of the semi-labile aromatic carbon component is observed over 1 year. The results of ¹³C-MAS-NMR measurements suggest that direct oxygenation of aromatic structures may be even more important than carboxylation in environmental alteration of biochar (as a subset of PyC). There is a clear effect of depositional environment on biochar alteration even after the relatively short timescale of this study, as changes are most extensive in the most oxygenated material that was exposed on the soil surface. This is most likely the result of mineral ingress and colonization by soil microbiota. Consequently, oxygen availability and physical or chemical protection from sunlight and/or rainwater are vital in determining the alteration trajectory of this material.
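    The O/C and H/C ratios compared in this abstract are atomic (van Krevelen) ratios derived from elemental analysis. As a worked illustration only, the short sketch below converts hypothetical mass-percent compositions into those ratios; the sample values are invented, not data from the study.

```python
# Illustrative sketch: converting elemental analysis (mass %) into the
# atomic O/C and H/C ratios used to compare biochar aromaticity.
# The sample compositions below are hypothetical, not the study's data.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def atomic_ratios(mass_pct):
    """Return (O/C, H/C) atomic ratios from mass-percent composition."""
    moles = {el: mass_pct[el] / ATOMIC_MASS[el] for el in ("C", "H", "O")}
    return moles["O"] / moles["C"], moles["H"] / moles["C"]

# Higher production temperature usually drives both ratios down as the
# material becomes more aromatic and condensed.
samples = {
    "low_temp_char":  {"C": 65.0, "H": 4.0, "O": 28.0},
    "high_temp_char": {"C": 85.0, "H": 2.0, "O": 10.0},
}
for name, comp in samples.items():
    oc, hc = atomic_ratios(comp)
    print(f"{name}: O/C = {oc:.2f}, H/C = {hc:.2f}")
```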

    Using an online survey of healthcare-seeking behaviour to estimate the magnitude and severity of the 2009 H1N1v influenza epidemic in England

    Background: During the 2009 H1N1v influenza epidemic, the total number of symptomatic cases was estimated by combining influenza-like illness (ILI) consultations, virological surveillance and assumptions about healthcare-seeking behaviour. Changes in healthcare-seeking behaviour due to changing scientific information, media coverage and public anxiety were not included in case estimates. The purpose of the study was to improve estimates of the number of symptomatic H1N1v cases and the case fatality rate (CFR) in England by quantifying healthcare-seeking behaviour using an internet-based survey carried out during the course of the 2009 H1N1v influenza epidemic. Methods: We used an online survey that ran continuously from July 2009 to March 2010 to estimate the proportion of ILI cases that sought healthcare during the 2009 H1N1v influenza epidemic. We used dynamic age- and gender-dependent measures of healthcare-seeking behaviour to re-interpret consultation numbers and estimate the true number of cases of symptomatic ILI in 2009 and the case fatality rate (CFR). Results: There were significant differences between age groups in healthcare usage. From the start to the end of the epidemic, the percentage of individuals with influenza-like symptoms who sought medical attention decreased from 43% to 32% (p < 0.0001). Adjusting official numbers accordingly, we estimate that there were 1.1 million symptomatic cases in England, over 320,000 (40%) more cases than previously estimated, and that the autumn epidemic wave was 45% bigger than previously thought. Combining symptomatic case numbers with reported deaths leads to a reduced overall CFR estimate of 17 deaths per 100,000 cases, with the largest reduction in adults. Conclusions: Active surveillance of healthcare-seeking behaviour, which can be achieved using novel data collection methods, is vital for providing accurate real-time estimates of epidemic size and disease severity. The differences in healthcare-seeking between different population groups and changes over time have significant implications for estimates of total case numbers and the case fatality rate.
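    The core adjustment is simple arithmetic: scale consultation counts up by the surveyed proportion who sought care, then recompute the CFR. In this back-of-envelope sketch the consultation count and care-seeking proportion are hypothetical placeholders; the 1.1 million cases and ~17-per-100,000 CFR are the abstract's figures, with the death count (~187) back-calculated from them.

```python
# Sketch of the case-count adjustment described above. Inputs marked
# "hypothetical" are placeholders, not the study's surveillance data.

def symptomatic_cases(consultations, p_seek_care):
    """Scale ILI consultations up by the surveyed share who sought care."""
    return consultations / p_seek_care

def cfr_per_100k(deaths, cases):
    """Case fatality rate per 100,000 symptomatic cases."""
    return 1e5 * deaths / cases

print(f"estimated cases: {symptomatic_cases(350_000, 0.32):,.0f}")  # hypothetical inputs
print(f"CFR: {cfr_per_100k(187, 1.1e6):.1f} per 100,000")           # reproduces ~17
```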

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables, or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open and 30-day mortality (AUROC = 0.903, 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
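    As a synthetic-data sketch of the validation statistics named above, the snippet below computes Kendall's tau between a difficulty grade and a dichotomous outcome, and an AUROC for how well the grade predicts it. The grades and the outcome model are invented, not the CholeS or reference datasets.

```python
# Minimal sketch: ordinal grade vs dichotomous outcome, assessed with
# Kendall's tau and AUROC. All data here are simulated.
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
grade = rng.integers(1, 6, size=500)        # Nassar grade 1-5
# Hypothetical dichotomous outcome whose risk rises with grade,
# e.g. conversion to open surgery
outcome = rng.random(500) < 0.02 * grade**2

tau, p = kendalltau(grade, outcome)
auc = roc_auc_score(outcome, grade)
print(f"Kendall tau = {tau:.2f} (p = {p:.3g}), AUROC = {auc:.3f}")
```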

    Seroprevalence and Severity of 2009 Pandemic Influenza A H1N1 in Taiwan

    BACKGROUND: This study aimed to determine the seroprevalence of the pandemic influenza A H1N1 virus (pH1N1) in Taiwan before and after the 2009 pandemic, and to estimate the relative severity of pH1N1 infections among different age groups. METHODOLOGY/PRINCIPAL FINDINGS: A total of 1544 and 1558 random serum samples were collected from the general population in Taiwan in 2007 and 2010, respectively. Seropositivity was defined by a hemagglutination inhibition titer to pH1N1 (A/Taiwan/126/09) ≥1:40. The seropositivity rate of pH1N1 among unvaccinated subjects and national surveillance data were used to compare the proportion of infections that led to severe disease and fatalities among different age groups. The overall seroprevalence of pH1N1 was 0.91% (95% confidence interval [CI] 0.43-1.38) in 2007 and increased significantly to 29.9% (95% CI 27.6-32.2) in 2010 (p<0.0001), with the peak attack rate (55.4%) in 10-17 year-old adolescents and the lowest (14.1%) in the elderly aged ≥65 years. The overall attack rate was 20.6% (188/912) in unvaccinated subjects. Among the unvaccinated but infected populations, the estimated rates of severe cases per 100,000 infections were significantly higher in children aged 0-5 years (54.9 cases, odds ratio [OR] 4.23, 95% CI 3.04-5.90) and the elderly aged ≥65 years (22.4 cases, OR 2.76, 95% CI 1.99-3.83) compared to adolescents aged 10-17 years (13.0 cases). The overall case-fatality rate was 0.98 per 100,000 infections, without a significant difference between age groups. CONCLUSIONS/SIGNIFICANCE: Pre-existing immunity against pH1N1 was rarely identified in Taiwanese people of any age in 2007. Young children and the elderly, the two groups with the lowest seroprotection, showed the greatest vulnerability to clinical severity after pH1N1 infection. These results imply that both age groups should have higher priority for immunization in the coming flu season.
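    Two of the quantities above can be reconstructed directly. The sketch below computes a Wilson 95% CI for the unvaccinated attack rate (188/912 is the abstract's count) and rebuilds the children-vs-adolescents odds ratio from the quoted 54.9 and 13.0 severe cases per 100,000 infections; everything else is standard formula, not study code.

```python
# Sketch: Wilson CI for a binomial proportion, and an OR from two
# per-100,000 event rates. Counts are taken from the abstract.
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p, denom = k / n, 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(188, 912)
print(f"attack rate {188/912:.1%} (95% CI {lo:.1%}-{hi:.1%})")

def odds_ratio(r1, r0, per=100_000):
    """OR comparing two event rates expressed per `per` infections."""
    return (r1 / (per - r1)) / (r0 / (per - r0))

print(f"children vs adolescents OR = {odds_ratio(54.9, 13.0):.2f}")  # ~ the abstract's 4.23
```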

    The community impact of the 2009 influenza pandemic in the WHO European Region: a comparison with historical seasonal data from 28 countries

    BACKGROUND: The world has recently experienced the first influenza pandemic of the 21st century, which lasted 14 months, from June 2009 to August 2010. This study aimed to compare the timing, geographic spread and community impact during the winter wave of the influenza pandemic A (H1N1) 2009 to historical influenza seasons in countries of the WHO European Region. METHODS: We assessed the timing of the pandemic by comparing the median peak of influenza activity in countries of the region during the last seven influenza seasons. The peaks of influenza activity were selected by two independent researchers using predefined rules. The geographic spread was assessed by correlating the peak week of influenza activity in included countries against the longitude and latitude of the central point of each country. To assess the community impact of pandemic influenza, we constructed linear regression models to compare the total and age-specific influenza-like illness (ILI) or acute respiratory infection (ARI) rates reported by the countries in the pandemic season to those observed in the previous six influenza seasons. RESULTS: We found that influenza activity reached its peak during the pandemic, on average, 10.5 weeks (95% CI 6.4-14.2) earlier than during the previous six seasons in the Region, and there was a west-to-east spread of the pandemic A(H1N1) influenza virus in the western part of the Region. Regression analysis showed that the total ILI or ARI rates were not higher than historical rates in 19 of the 28 countries. However, among countries with age-specific data, there were significantly higher consultation rates in the 0-4 and/or 5-14 age groups in 11 of the 20 countries. CONCLUSIONS: Using routine influenza surveillance data, we found that the pandemic influenza had several features that differed from historical seasons in the region: it arrived earlier, caused a significantly higher number of outpatient consultations in children in most countries, and followed a west-to-east spread previously observed during some influenza seasons dominated by A (H3N2) influenza viruses. The results of this study help to understand the epidemiology of the 2009 influenza pandemic and can be used for pandemic preparedness planning.
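    The geographic-spread analysis described in the Methods amounts to regressing each country's peak week on the coordinates of its central point. The sketch below shows that correlation for longitude only, with invented longitudes and peak weeks standing in for the surveillance data.

```python
# Sketch of the west-to-east spread test: regress peak week of
# influenza activity on longitude. Data points are placeholders.
import numpy as np
from scipy.stats import linregress

longitude = np.array([-8.0, 2.3, 10.4, 19.1, 25.0, 37.6])   # degrees east (invented)
peak_week = np.array([46, 47, 47, 48, 49, 50])              # ISO week of peak (invented)

fit = linregress(longitude, peak_week)
print(f"slope = {fit.slope:.2f} weeks/degree, r = {fit.rvalue:.2f}, "
      f"p = {fit.pvalue:.3g}")
# A positive slope with a small p-value would indicate west-to-east spread.
```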

    HIV-1 drug resistance mutations emerging on darunavir therapy in PI-naive and -experienced patients in the UK

    Background: Darunavir is considered to have a high genetic barrier to resistance. Most darunavir-associated drug resistance mutations (DRMs) have been identified through correlation of baseline genotype with virological response in clinical trials. However, there is little information on DRMs that are directly selected by darunavir in clinical settings. Objectives: We examined darunavir DRMs emerging in clinical practice in the UK. Patients and methods: Baseline and post-exposure protease genotypes were compared for individuals in the UK Collaborative HIV Cohort Study who had received darunavir; analyses were stratified by PI history. A selection analysis was used to compare the evolution of subtype B proteases in darunavir recipients and matched PI-naive controls. Results: Of 6918 people who had received darunavir, 386 had resistance tests pre- and post-exposure. Overall, 2.8% (11/386) of these participants developed emergent darunavir DRMs. The prevalence of baseline DRMs was 1.0% (2/198) among PI-naive participants and 13.8% (26/188) among PI-experienced participants. Emergent DRMs developed in 2.0% of the PI-naive group (4 mutations) and 3.7% of the PI-experienced group (12 mutations). Codon 77 was positively selected in the PI-naive darunavir cases, but not in the control group. Conclusions: Our findings suggest that although emergent darunavir resistance is rare, it may be more common among PI-experienced patients than those who are PI-naive. Further investigation is required to explore whether codon 77 is a novel site involved in darunavir susceptibility.
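    A natural check on the naive-vs-experienced contrast is a test on the two emergent-DRM proportions. In the sketch below, the 4 PI-naive participants come from the abstract; the 7 PI-experienced participants are back-calculated from the quoted 3.7% of 188 (the "12" in the abstract counts mutations, not people), so treat both the table and the test as illustrative.

```python
# Sketch: Fisher's exact test on emergent-DRM counts by PI history.
# The experienced-group participant count (7) is an approximation
# back-calculated from the abstract's percentage.
from scipy.stats import fisher_exact

naive_drm, naive_total = 4, 198
exp_drm, exp_total = 7, 188

table = [[naive_drm, naive_total - naive_drm],
         [exp_drm, exp_total - exp_drm]]
odds, p = fisher_exact(table)
print(f"OR = {odds:.2f}, p = {p:.3f}")
```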

    Virological failure and development of new resistance mutations according to CD4 count at combination antiretroviral therapy initiation

    Objectives: No randomized controlled trials have yet reported an individual patient benefit of initiating combination antiretroviral therapy (cART) at CD4 counts > 350 cells/μL. It is hypothesized that earlier initiation of cART in asymptomatic and otherwise healthy individuals may lead to poorer adherence and subsequently higher rates of resistance development. Methods: In a large cohort of HIV-positive individuals, we investigated the emergence of new resistance mutations upon virological treatment failure according to the CD4 count at the initiation of cART. Results: Of 7918 included individuals, 6514 (82.3%), 996 (12.6%) and 408 (5.2%) started cART with a CD4 count ≤ 350, 351-499 and ≥ 500 cells/μL, respectively. Virological rebound occurred while on cART in 488 (7.5%), 46 (4.6%) and 30 (7.4%) with a baseline CD4 count ≤ 350, 351-499 and ≥ 500 cells/μL, respectively. Only four (13.0%) individuals with a baseline CD4 count > 350 cells/μL in receipt of a resistance test at viral load rebound were found to have developed new resistance mutations. This compared to 107 (41.2%) of those with virological failure who had initiated cART with a CD4 count < 350 cells/μL. Conclusions: We found no evidence of increased rates of resistance development when cART was initiated at CD4 counts above 350 cells/μL.
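    The key comparison is between the two new-resistance proportions at failure. In the sketch below the denominators are back-calculated from the abstract's percentages (4/13.0% ~ 31 tested; 107/41.2% ~ 260 tested), so they are approximations rather than published counts.

```python
# Sketch: two-proportion z-test on new-resistance rates by baseline
# CD4 stratum. Denominators are approximate reconstructions.
from statsmodels.stats.proportion import proportions_ztest

counts = [4, 107]     # new mutations: CD4 > 350, CD4 <= 350
nobs = [31, 260]      # resistance tests at rebound (back-calculated)

stat, p = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {p:.4f}")
```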

    Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.

    BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy was still associated with fewer overall complications (OR 0.23, 95% CI 0.11-0.44) and SSI (OR 0.21, 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112.
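    A minimal sketch of propensity-score matching, the technique used for the low-/middle-HDI comparison above: model the probability of receiving laparoscopy from case-mix covariates, then pair each treated patient with the nearest-score control. The covariates, treatment model and all data below are synthetic assumptions, not the study's dataset or code.

```python
# Propensity-score matching sketch: fit P(laparoscopy | case-mix),
# then greedy 1:1 nearest-neighbour matching on the score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))                   # hypothetical case-mix covariates
p_treat = 1 / (1 + np.exp(-X[:, 0]))          # treatment depends on case-mix
treated = rng.random(n) < p_treat

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

controls = np.flatnonzero(~treated)
used, pairs = set(), []
for i in np.flatnonzero(treated):
    free = [c for c in controls if c not in used]
    if not free:
        break
    j = min(free, key=lambda c: abs(ps[c] - ps[i]))   # closest score
    used.add(j)
    pairs.append((i, j))
print(f"matched {len(pairs)} treated/control pairs")
```

    Outcomes (e.g. SSI rates) would then be compared within the matched pairs, which is what yields the adjusted ORs quoted above.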

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission but had surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions and complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics.
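    The two-level model described here is a logistic regression with a hospital-level random intercept. As a hedged sketch only, the snippet below fits such a model with statsmodels' variational-Bayes mixed GLM, one of several ways to do this in Python; the column names (hospital, asa, op_minutes, readmitted) and all data are hypothetical, and this is not the study's modelling code.

```python
# Sketch: random-intercept (patients within hospitals) logistic model
# for 30-day readmission, on simulated data.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(2)
n, hospitals = 2000, 40
df = pd.DataFrame({
    "hospital": rng.integers(hospitals, size=n),
    "asa": rng.integers(1, 5, size=n),            # ASA fitness grade
    "op_minutes": rng.normal(60, 20, size=n),     # duration of surgery
})
hosp_effect = rng.normal(0, 0.3, hospitals)[df["hospital"]]
logit = -3 + 0.4 * df["asa"] + 0.01 * df["op_minutes"] + hosp_effect
df["readmitted"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

vc = {"hospital": "0 + C(hospital)"}              # level-2 random intercept
model = BinomialBayesMixedGLM.from_formula(
    "readmitted ~ asa + op_minutes", vc, df)
print(model.fit_vb().summary())
```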

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) than in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
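    The Methods pair multivariable logistic regression with bootstrapped simulation; the sketch below shows that combination on synthetic data, estimating an adjusted odds ratio for checklist use with a bootstrap confidence interval. Variable names, effect sizes and data are all assumptions, not the study's model.

```python
# Sketch: logistic model for 30-day mortality with checklist use as
# the exposure, plus a bootstrap CI for its odds ratio. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
checklist = rng.random(n) < 0.7
age = rng.normal(55, 15, size=n)
logit = -4 + 0.03 * (age - 55) - 0.5 * checklist
died = rng.random(n) < 1 / (1 + np.exp(-logit))
X = np.column_stack([checklist, age])

def checklist_or(idx):
    # Weak regularisation (large C) so coefficients approximate plain MLE
    fit = LogisticRegression(C=1e6, max_iter=1000).fit(X[idx], died[idx])
    return np.exp(fit.coef_[0][0])          # OR for checklist use

boots = [checklist_or(rng.integers(n, size=n)) for _ in range(200)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"OR = {checklist_or(np.arange(n)):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```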