
    Effects of Blended Fertilizers on Yields of Mature Tea Clones TRFK 6/8 and BBK 35 Grown in Kenyan Highlands

    Kenya's tea industry depends predominantly on imported NPK fertilizers to replenish nutrients removed through plucking. Two blended fertilizers, NPKS 25:5:5:4+9Ca+2.6Mg and NPKS 23:5:5:4+10Ca+3Mg with trace elements, have been produced in the country; however, their contribution to optimal tea yields had not been determined. This study aimed to establish the optimal application rates of the two blended fertilizers for tea grown in the Kenyan highlands. The fertilizers were evaluated at two sites, Timbilil estate in Kericho and Kagochi farm in Nyeri. The trial was laid out in a randomized complete block design with the two blended fertilizers and the standard NPK 26:5:5 as a control, applied at four rates (0 [control], 75, 150 and 225 kg N ha⁻¹ yr⁻¹) with three replications. Application of the blended fertilizer NPKS 25:5:5:4+9Ca+2.6Mg at 225 kg N ha⁻¹ yr⁻¹ in Timbilil produced a mean yield of 2,995 kg of made tea (mt) ha⁻¹, compared with 3,099 kg mt ha⁻¹ from the standard NPK. In Kagochi, the highest yield, 1,975 kg mt ha⁻¹, was obtained from the same blended fertilizer applied at 75 kg N ha⁻¹ yr⁻¹. The highest yields at both sites were obtained during the warm-dry season, except in 2015-2016. Based on the annual and seasonal yields, the study concluded that the two blended fertilizers and the standard type were equally effective, irrespective of clone and site, although fertilizer rate did affect tea yield.
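    The randomized complete block design described above could be analysed as a standard two-factor ANOVA with a block term. Below is a minimal sketch of such an analysis in Python; the file name "tea_yields.csv" and its column names are hypothetical placeholders, not artifacts of the study.

```python
# Hypothetical RCBD analysis sketch for a fertilizer trial; "tea_yields.csv"
# and its column names are illustrative assumptions, not study data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("tea_yields.csv")  # columns: yield_kg, fertilizer, rate_kgN, block

# Fertilizer type and N rate as crossed factors, block as the replication term.
model = smf.ols("yield_kg ~ C(fertilizer) * C(rate_kgN) + C(block)", data=df).fit()
print(anova_lm(model, typ=2))  # F-tests for fertilizer, rate, and their interaction
```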

    A cost effectiveness and capacity analysis for the introduction of universal rotavirus vaccination in Kenya: comparison between Rotarix and RotaTeq vaccines

    Background: Diarrhoea is an important cause of death in the developing world, and rotavirus is the single most important cause of diarrhoea-associated mortality. Two vaccines (Rotarix and RotaTeq) are available to prevent rotavirus disease. This analysis was undertaken to aid the decision in Kenya as to which vaccine to choose when introducing rotavirus vaccination. Methods: Cost-effectiveness modelling, using national and sentinel surveillance data, and an impact assessment on the cold chain. Results: The median estimated incidence of rotavirus disease in Kenya was 3015 outpatient visits, 279 hospitalisations and 65 deaths per 100,000 children under five years of age per year. Cumulated over the first five years of life, vaccination was predicted to prevent 34% of the outpatient visits, 31% of the hospitalisations and 42% of the deaths. The estimated prevented costs accumulated over five years totalled US$1,782,761 (direct and indirect costs), with an associated 48,585 DALYs averted. From a societal perspective, Rotarix had a cost-effectiveness ratio of US$142 per DALY (US$5 for the full course of two doses) and RotaTeq US$288 per DALY (US$10.5 for the full course of three doses). RotaTeq will have a bigger impact on the cold chain than Rotarix. Conclusion: Vaccination against rotavirus disease is cost-effective for Kenya irrespective of the vaccine. Of the two vaccines, Rotarix was the preferred choice due to a better cost-effectiveness ratio, the presence of a vaccine vial monitor, the requirement of fewer doses and less storage space, and proven thermostability.
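    The headline figures above follow the usual cost-per-DALY arithmetic: net programme cost divided by DALYs averted. The sketch below reproduces that calculation; the prevented costs and DALY figures come from the abstract, but the programme cost is a hypothetical placeholder chosen only to show how a ratio near US$142 per DALY arises.

```python
# Minimal cost-effectiveness arithmetic sketch. The programme cost below is a
# hypothetical placeholder, not the study's actual figure.

def cost_per_daly(programme_cost_usd: float,
                  prevented_costs_usd: float,
                  dalys_averted: float) -> float:
    """Net cost per DALY averted from a societal perspective:
    (cost of vaccination programme - treatment costs prevented) / DALYs averted."""
    return (programme_cost_usd - prevented_costs_usd) / dalys_averted

# Prevented costs and DALYs from the abstract; programme cost is illustrative.
print(cost_per_daly(programme_cost_usd=8_700_000,
                    prevented_costs_usd=1_782_761,
                    dalys_averted=48_585))  # ~142 USD per DALY
```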

    Spectral signatures of reorganised brain networks in disorders of consciousness.

    Theoretical advances in the science of consciousness have proposed that it is concomitant with balanced cortical integration and differentiation, enabled by efficient networks of information transfer across multiple scales. Here, we apply graph theory to compare key signatures of such networks in high-density electroencephalographic data from 32 patients with chronic disorders of consciousness, against normative data from healthy controls. Based on connectivity within canonical frequency bands, we found that patient networks had reduced local and global efficiency, and fewer hubs in the alpha band. We devised a novel topographical metric, termed modular span, which showed that the alpha network modules in patients were also spatially circumscribed, lacking the structured long-distance interactions commonly observed in the healthy controls. Importantly, however, these differences in graph-theoretic metrics were partially reversed in delta and theta band networks, which were also significantly more similar to each other in patients than in controls. Going further, we found that metrics of alpha network efficiency also correlated with the degree of behavioural awareness. Intriguingly, some patients in behaviourally unresponsive vegetative states who demonstrated evidence of covert awareness with functional neuroimaging stood out from this trend: they had alpha networks that were remarkably well preserved and similar to those observed in the controls. Taken together, our findings inform current understanding of disorders of consciousness by highlighting the distinctive brain networks that characterise them. In the significant minority of vegetative patients who follow commands in neuroimaging tests, they point to putative network mechanisms that could support cognitive function and consciousness despite profound behavioural impairment.

    This work was supported by grants from the Wellcome Trust [WT093811MA to T.B.]; the James S. McDonnell Foundation [to A.M.O. and J.D.P.]; the UK Medical Research Council [U.1055.01.002.00001.01 to A.M.O. and J.D.P.]; the Canada Excellence Research Chairs program [to A.M.O.]; the National Institute for Health Research Cambridge Biomedical Research Centre [to J.D.P.]; and the National Institute for Health Research Senior Investigator and Healthcare Technology Cooperative awards [to J.D.P.]. This is the final version of the article. It first appeared from PLOS via http://dx.doi.org
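    The local and global efficiency metrics reported above are standard graph-theoretic quantities that networkx can compute directly from a thresholded connectivity matrix. The sketch below shows the general recipe on random placeholder data; it does not implement the authors' novel modular span metric, and the threshold and hub criterion are illustrative assumptions.

```python
# Graph-efficiency sketch on a random stand-in for a connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
conn = rng.random((32, 32))            # placeholder for alpha-band connectivity
conn = (conn + conn.T) / 2             # symmetrise
np.fill_diagonal(conn, 0)

G = nx.from_numpy_array((conn > 0.8).astype(int))  # keep strongest connections

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))

# Hubs can be screened by degree, e.g. nodes in the top decile of degree.
degrees = dict(G.degree())
cut = np.percentile(list(degrees.values()), 90)
print("candidate hubs:", [n for n, d in degrees.items() if d >= cut])
```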

    The Predictive Performance of a Pneumonia Severity Score in Human Immunodeficiency Virus-negative Children Presenting to Hospital in 7 Low- and Middle-income Countries.

    BACKGROUND: In 2015, pneumonia remained the leading cause of mortality in children aged 1-59 months. METHODS: Data from 1802 human immunodeficiency virus (HIV)-negative children aged 1-59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) study with severe or very severe pneumonia during 2011-2014 were used to build a parsimonious multivariable model predicting mortality using backwards stepwise logistic regression. The PERCH severity score, derived from model coefficients, was validated on a second, temporally discrete dataset of a further 1819 cases and compared to other available scores using the C statistic. RESULTS: Predictors of mortality, across 7 low- and middle-income countries, were age <1 year, female sex, ≥3 days of illness prior to presentation to hospital, low weight for height, unresponsiveness, deep breathing, hypoxemia, grunting, and the absence of cough. The model discriminated well between those who died and those who survived (C statistic = 0.84), but the predictive capacity of the PERCH 5-stratum score derived from the coefficients was moderate (C statistic = 0.76). The performance of the Respiratory Index of Severity in Children score was similar (C statistic = 0.76). The number of World Health Organization (WHO) danger signs demonstrated the highest discrimination (C statistic = 0.82; 1.5% died if no danger signs, 10% if 1 danger sign, and 33% if ≥2 danger signs). CONCLUSIONS: The PERCH severity score could be used to interpret geographic variations in pneumonia mortality and etiology. The number of WHO danger signs on presentation to hospital could be the most useful of the currently available tools to aid clinical management of pneumonia.
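    The C statistic used above is the area under the ROC curve: the probability that a randomly chosen child who died received a higher score than a randomly chosen child who survived. A minimal sketch on simulated placeholder data:

```python
# C-statistic check for a severity score; all data are random placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
died = rng.integers(0, 2, size=1819)   # 1 = died, 0 = survived
# Toy score: higher on average for children who died.
score = died * rng.normal(3, 1, 1819) + rng.normal(0, 1, 1819)

print("C statistic:", roc_auc_score(died, score))
```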

    Cognitive Motor Dissociation in Disorders of Consciousness.

    Patients with brain injury who are unresponsive to commands may perform cognitive tasks that are detected on functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). This phenomenon, known as cognitive motor dissociation, has not been systematically studied in a large cohort of persons with disorders of consciousness. In this prospective cohort study conducted at six international centers, we collected clinical, behavioral, and task-based fMRI and EEG data from a convenience sample of 353 adults with disorders of consciousness. We assessed the response to commands on task-based fMRI or EEG in participants without an observable response to verbal commands (i.e., those with a behavioral diagnosis of coma, vegetative state, or minimally conscious state-minus) and in participants with an observable response to verbal commands. The presence or absence of an observable response to commands was assessed with the use of the Coma Recovery Scale-Revised (CRS-R). Data from fMRI only or EEG only were available for 65% of the participants, and data from both fMRI and EEG were available for 35%. The median age of the participants was 37.9 years, the median time between brain injury and assessment with the CRS-R was 7.9 months (25% of the participants were assessed with the CRS-R within 28 days after injury), and brain trauma was an etiologic factor in 50%. We detected cognitive motor dissociation in 60 of the 241 participants (25%) without an observable response to commands, of whom 11 had been assessed with the use of fMRI only, 13 with the use of EEG only, and 36 with the use of both techniques. Cognitive motor dissociation was associated with younger age, longer time since injury, and brain trauma as an etiologic factor. In contrast, responses on task-based fMRI or EEG occurred in 43 of 112 participants (38%) with an observable response to verbal commands. Approximately one in four participants without an observable response to commands performed a cognitive task on fMRI or EEG, as compared with one in three participants with an observable response to commands.

    Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study

    Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world. Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231. Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282 patients) HDI (p < 0·001). The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001). Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication.
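    The risk adjustment described above used Bayesian multilevel logistic regression. Below is a minimal sketch of that model class, with a random intercept per hospital; the covariates, priors, and sample sizes are illustrative assumptions on simulated placeholder data, not the study's model specification.

```python
# Random-intercept Bayesian logistic regression sketch; data are simulated.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_patients, n_hospitals, n_covariates = 500, 20, 3
hospital = rng.integers(0, n_hospitals, n_patients)   # hospital index per patient
X = rng.normal(size=(n_patients, n_covariates))       # e.g. HDI group, wound class
y = rng.integers(0, 2, n_patients)                    # 1 = SSI within 30 days

with pm.Model() as ssi_model:
    mu = pm.Normal("mu", 0.0, 1.5)                    # grand intercept
    sigma = pm.HalfNormal("sigma", 1.0)               # between-hospital spread
    alpha = pm.Normal("alpha", mu, sigma, shape=n_hospitals)  # hospital intercepts
    beta = pm.Normal("beta", 0.0, 1.0, shape=n_covariates)    # covariate effects
    p = pm.math.invlogit(alpha[hospital] + pm.math.dot(X, beta))
    pm.Bernoulli("ssi", p=p, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)

# exp(beta) posteriors then give adjusted odds ratios with credible intervals.
```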

    Causes of severe pneumonia requiring hospital admission in children without HIV infection from Africa and Asia: the PERCH multi-country case-control study

    Background Pneumonia is the leading cause of death among children younger than 5 years. In this study, we estimated causes of pneumonia in young African and Asian children, using novel analytical methods applied to clinical and microbiological findings. Methods We did a multi-site, international case-control study in nine study sites in seven countries: Bangladesh, The Gambia, Kenya, Mali, South Africa, Thailand, and Zambia. All sites enrolled participants for 24 months. Cases were children aged 1–59 months admitted to hospital with severe pneumonia. Controls were age-group-matched children randomly selected from communities surrounding study sites. Nasopharyngeal and oropharyngeal (NP-OP), urine, blood, induced sputum, lung aspirate, pleural fluid, and gastric aspirates were tested with cultures, multiplex PCR, or both. Primary analyses were restricted to cases without HIV infection and with abnormal chest x-rays and to controls without HIV infection. We applied a Bayesian, partial latent class analysis to estimate probabilities of aetiological agents at the individual and population level, incorporating case and control data. Findings Between Aug 15, 2011, and Jan 30, 2014, we enrolled 4232 cases and 5119 community controls. The primary analysis group comprised 1769 (41·8% of 4232) cases without HIV infection and with positive chest x-rays and 5102 (99·7% of 5119) community controls without HIV infection. Wheezing was present in 555 (31·7%) of 1752 cases (range by site 10·6–97·3%). The 30-day case-fatality ratio was 6·4% (114 of 1769 cases). Blood cultures were positive in 56 (3·2%) of 1749 cases, and Streptococcus pneumoniae was the most common bacterium isolated (19 [33·9%] of 56). Almost all cases (98·9%) and controls (98·0%) had at least one pathogen detected by PCR in the NP-OP specimen. The detection of respiratory syncytial virus (RSV), parainfluenza virus, human metapneumovirus, influenza virus, S pneumoniae, Haemophilus influenzae type b (Hib), H influenzae non-type b, and Pneumocystis jirovecii in NP-OP specimens was associated with case status. The aetiology analysis estimated that viruses accounted for 61·4% (95% credible interval [CrI] 57·3–65·6) of causes, whereas bacteria accounted for 27·3% (23·3–31·6) and Mycobacterium tuberculosis for 5·9% (3·9–8·3). Viruses were less common (54·5%, 95% CrI 47·4–61·5 vs 68·0%, 62·7–72·7) and bacteria more common (33·7%, 27·2–40·8 vs 22·8%, 18·3–27·6) in very severe pneumonia cases than in severe cases. RSV had the greatest aetiological fraction (31·1%, 95% CrI 28·4–34·2) of all pathogens. Human rhinovirus, human metapneumovirus A or B, human parainfluenza virus, S pneumoniae, M tuberculosis, and H influenzae each accounted for 5% or more of the aetiological distribution. We observed differences in aetiological fraction by age for Bordetella pertussis, parainfluenza types 1 and 3, parechovirus–enterovirus, P jirovecii, RSV, rhinovirus, Staphylococcus aureus, and S pneumoniae, and differences by severity for RSV, S aureus, S pneumoniae, and parainfluenza type 3. The leading ten pathogens of each site accounted for 79% or more of the site's aetiological fraction. Interpretation In our study, a small set of pathogens accounted for most cases of pneumonia requiring hospital admission. Preventing and treating a subset of pathogens could substantially affect childhood pneumonia outcomes.
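    The partial latent class analysis mentioned above is far more elaborate than can be shown here, but its case-control intuition can be illustrated with a toy Bayes calculation: a pathogen detected in a case is weighted against how often the same pathogen is carried by healthy controls. Everything below (pathogen list, priors, detection rates) is an illustrative assumption, not PERCH data or the PERCH model.

```python
# Toy Bayes illustration of case-control aetiology weighting; all numbers
# are made-up placeholders, and the real model is far richer than this.
import numpy as np

pathogens = ["RSV", "S. pneumoniae", "M. tuberculosis"]
prior = np.array([1/3, 1/3, 1/3])                  # prior aetiological fractions
p_detect_if_cause = np.array([0.9, 0.6, 0.3])      # NP-OP PCR sensitivity per cause
p_detect_in_controls = np.array([0.1, 0.5, 0.01])  # background carriage in controls

detected = np.array([1, 1, 0])  # this case: RSV and pneumococcus PCR-positive

# Likelihood of the detections under each candidate cause: the causal pathogen
# is detected with its sensitivity, the others at control carriage rates.
likelihood = np.ones(len(pathogens))
for cause in range(len(pathogens)):
    for j in range(len(pathogens)):
        p = p_detect_if_cause[j] if j == cause else p_detect_in_controls[j]
        likelihood[cause] *= p if detected[j] else (1 - p)

posterior = prior * likelihood
posterior /= posterior.sum()
for name, post in zip(pathogens, posterior):
    print(f"P(cause = {name} | data) = {post:.2f}")
```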

    Association of C-reactive protein with bacterial and respiratory syncytial virus-associated pneumonia among children aged <5 years in the PERCH study

    Background. Lack of a gold standard for identifying bacterial and viral etiologies of pneumonia has limited evaluation of C-reactive protein (CRP) for identifying bacterial pneumonia. We evaluated the sensitivity and specificity of CRP for identifying bacterial vs respiratory syncytial virus (RSV) pneumonia in the Pneumonia Etiology Research for Child Health (PERCH) multicenter case-control study. Methods. We measured serum CRP levels in cases with World Health Organization-defined severe or very severe pneumonia and a subset of community controls. We evaluated the sensitivity and specificity of elevated CRP for "confirmed" bacterial pneumonia (positive blood culture or positive lung aspirate or pleural fluid culture or polymerase chain reaction [PCR]) compared to "RSV pneumonia" (nasopharyngeal/oropharyngeal or induced sputum PCR-positive without confirmed/suspected bacterial pneumonia). Receiver operating characteristic (ROC) curves were constructed to assess the performance of elevated CRP in distinguishing these cases. Results. Among 601 human immunodeficiency virus (HIV)-negative controls tested, 3% had CRP ≥40 mg/L. Among 119 HIV-negative cases with confirmed bacterial pneumonia, 77% had CRP ≥40 mg/L, compared with 17% of 556 RSV pneumonia cases. The ROC analysis produced an area under the curve of 0.87, indicating very good discrimination; a cut-point of 37.1 mg/L best discriminated confirmed bacterial pneumonia (sensitivity 77%) from RSV pneumonia (specificity 82%). CRP ≥100 mg/L substantially improved specificity over CRP ≥40 mg/L, though at a loss to sensitivity. Conclusions. Elevated CRP was positively associated with confirmed bacterial pneumonia and negatively associated with RSV pneumonia in PERCH. CRP may be useful for distinguishing bacterial from RSV-associated pneumonia, although its role in distinguishing bacterial pneumonia from pneumonia associated with other respiratory viruses needs further study.
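    The cut-point analysis above is a standard ROC exercise: sweep candidate thresholds and pick the one maximising Youden's J (sensitivity + specificity − 1). A minimal sketch on simulated CRP values; the distributions below are placeholders, not PERCH data.

```python
# ROC cut-point sketch for CRP; CRP values are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
# 1 = confirmed bacterial, 0 = RSV pneumonia; log-normal CRP, higher if bacterial.
label = np.r_[np.ones(119), np.zeros(556)].astype(int)
crp = np.where(label == 1,
               rng.lognormal(4.3, 0.8, label.size),
               rng.lognormal(2.9, 0.9, label.size))

fpr, tpr, thresholds = roc_curve(label, crp)
print("AUC:", roc_auc_score(label, crp))

# Youden's J picks the best-discriminating cut-point.
best = np.argmax(tpr - fpr)
print(f"cut-point {thresholds[best]:.1f} mg/L: "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```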

    An open dataset of Plasmodium falciparum genome variation in 7,000 worldwide samples.

    MalariaGEN is a data-sharing network that enables groups around the world to work together on the genomic epidemiology of malaria. Here we describe a new release of curated genome variation data on 7,000 Plasmodium falciparum samples from MalariaGEN partner studies in 28 malaria-endemic countries. High-quality genotype calls on 3 million single nucleotide polymorphisms (SNPs) and short indels were produced using a standardised analysis pipeline. Copy number variants associated with drug resistance and structural variants that cause failure of rapid diagnostic tests were also analysed. Almost all samples showed genetic evidence of resistance to at least one antimalarial drug, and some samples from Southeast Asia carried markers of resistance to six commonly used drugs. Genes expressed during the mosquito stage of the parasite life-cycle are prominent among loci that show strong geographic differentiation. By continuing to enlarge this open data resource we aim to facilitate research into the evolutionary processes affecting malaria control and to accelerate development of the surveillance toolkit required for malaria elimination.
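    Geographic differentiation of the kind described above is commonly screened with FST. The sketch below applies Hudson's per-SNP estimator (as formulated by Bhatia et al. 2013) to simulated allele frequencies; the sample sizes and frequencies are placeholders, not MalariaGEN data.

```python
# Per-SNP Hudson FST screen on simulated allele-frequency data.
import numpy as np

rng = np.random.default_rng(4)
n_snps, n1, n2 = 1000, 200, 200   # SNPs and haploid sample sizes per population

p1 = rng.random(n_snps)                              # allele freq, population 1
p2 = np.clip(p1 + rng.normal(0, 0.1, n_snps), 0, 1)  # correlated freq, population 2

# Hudson's estimator, computed per SNP.
num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
den = p1 * (1 - p2) + p2 * (1 - p1)
fst = num / den

top = np.argsort(fst)[-10:][::-1]   # the ten most differentiated SNPs
print("most differentiated SNP indices:", top)
```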

    Socializing One Health: an innovative strategy to investigate social and behavioral risks of emerging viral threats

    In an effort to strengthen global capacity to prevent, detect, and control infectious diseases in animals and people, the United States Agency for International Development's (USAID) Emerging Pandemic Threats (EPT) PREDICT project funded development of regional, national, and local One Health capacities for early disease detection, rapid response, disease control, and risk reduction. From the outset, the EPT approach was inclusive of social science research methods designed to understand the contexts and behaviors of communities living and working at human-animal-environment interfaces considered high-risk for virus emergence. Using qualitative and quantitative approaches, PREDICT behavioral research aimed to identify and assess a range of socio-cultural behaviors that could be influential in zoonotic disease emergence, amplification, and transmission. This broad approach to behavioral risk characterization enabled us to identify and characterize human activities that could be linked to the transmission dynamics of new and emerging viruses. This paper discusses the implementation of a social science approach within a zoonotic surveillance framework. We conducted in-depth ethnographic interviews and focus groups to better understand the individual- and community-level knowledge, attitudes, and practices that potentially put participants at risk for zoonotic disease transmission from the animals they live and work with, across six interface domains. When we asked highly exposed individuals (i.e., bushmeat hunters and wildlife or guano farmers) about the risk they perceived in their occupational activities, most did not perceive them as risky, whether because the activities had been normalized by years (or generations) of practice or because of a lack of information about potential risks. Integrating the social sciences allows investigation of the specific human activities hypothesized to drive disease emergence, amplification, and transmission, in order to better substantiate behavioral disease drivers along with the social dimensions of infection and transmission dynamics. Understanding these dynamics is critical to achieving health security, that is, protection from threats to health, which requires investments in both collective and individual health security. Incorporating the behavioral sciences into zoonotic disease surveillance allowed us to push toward fuller community integration and engagement, and toward dialogue on and implementation of recommendations for disease prevention and improved health security.