
    Increasing understanding of the relationship between geographic access and gendered decision-making power for treatment-seeking for febrile children in the Chikwawa district of Malawi

    Background: This study used qualitative methods to investigate the relationship between geographic access and gendered intra-household hierarchies and how these influence treatment-seeking decision-making for childhood fever within the Chikwawa district of Malawi. Previous cross-sectional survey findings in the district indicated that distance from facility and associated costs are important determinants of health facility attendance in the district. This paper uses qualitative data to add depth of understanding to these findings by exploring the relationship between distance from services, anticipated costs and cultural norms of intra-household decision-making, and to identify potential intervention opportunities to reduce challenges experienced by those in remote locations. Qualitative data collection included 12 focus group discussions and 22 critical incident interviews conducted in the local language, with primary caregivers of children who had recently experienced a febrile episode. Results: Low geographic accessibility to facilities inhibited care-seeking, sometimes by extending the ‘assessment period’ for a child’s illness episode, and led to delays in seeking formal treatment, particularly when the illness occurred at night. Although carers attempted to avoid incurring costs, cash was often needed for transport and food. Whilst in all communities fathers were normatively responsible for treatment costs, mothers generally had greater access to and control over resources and autonomy in decision-making in the matrilineal and matrilocal communities in the central part of the district, which were also closer to formal facilities. Conclusions: This study illustrates the complex interplay between geographic access and gender dynamics in shaping decisions on whether and when formal treatment is sought for febrile children in Chikwawa District. Geographic marginality and cultural norms intersect in remote areas both to increase the logistical and anticipated financial barriers to utilising services and to reduce caretakers’ autonomy to act quickly once they recognize the need for formal care. Health education campaigns should be based within communities, engaging all involved in treatment-seeking decision-making, including men and grandmothers, and should aim to promote the ability of junior women to influence the treatment-seeking process. Both mothers’ financial autonomy and fathers’ financial contributions are important to enable timely access to effective healthcare for children with malaria.

    High Efficiency Colloidal Quantum Dot Infrared Light Emitting Diodes via Engineering at the Supra-Nanocrystalline Level

    Colloidal quantum dot (CQD) light-emitting diodes (LEDs) deliver a compelling performance in the visible, yet infrared CQD LEDs underperform their visible-emitting counterparts, largely due to their low photoluminescence quantum efficiency. Here we employ a ternary blend of CQD thin film that comprises a binary host matrix that serves to electronically passivate as well as to cater for an efficient and balanced carrier supply to the emitting quantum dot species. In doing so, we report infrared PbS CQD LEDs with an external quantum efficiency of ~7.9% and a power conversion efficiency of ~9.3%, thanks to their very low density of trap states, on the order of 10¹⁴ cm⁻³, and very high photoluminescence quantum efficiency, of more than 60%, in electrically conductive quantum dot solids. When these blend devices operate as solar cells they deliver an open-circuit voltage that approaches their radiative limit thanks to the synergistic effect of the reduced trap-state density and the density-of-states modification in the nanocomposite.
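    As a point of reference for the two figures of merit quoted above, external quantum efficiency (EQE) is the ratio of photons emitted to electrons injected, and power conversion efficiency (PCE) is the ratio of optical output power to electrical input power. The short sketch below is a generic definitional calculation with hypothetical numbers, not the authors' measurement procedure.

```python
# Generic definitions of EQE and PCE for an LED; all input values below are
# hypothetical and chosen only to give efficiencies of the same order as those
# reported in the abstract.
ELECTRON_CHARGE = 1.602e-19   # coulombs

photon_rate = 4.0e15          # photons emitted per second (assumed)
injected_current = 8.1e-3     # amperes of drive current (assumed)
optical_power = 0.93e-3       # watts of emitted infrared light (assumed)
electrical_power = 10.0e-3    # watts of electrical input (assumed)

eqe = photon_rate / (injected_current / ELECTRON_CHARGE)   # photons out per electron in
pce = optical_power / electrical_power                     # optical watts out per electrical watt in

print(f"EQE = {eqe:.1%}, PCE = {pce:.1%}")   # ~7.9% and ~9.3% with these assumed inputs
```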

    Structure-Based Predictive Models for Allosteric Hot Spots

    In allostery, a binding event at one site in a protein modulates the behavior of a distant site. Identifying residues that relay the signal between sites remains a challenge. We have developed predictive models using support-vector machines, a widely used machine-learning method. The training data set consisted of residues classified as either hotspots or non-hotspots based on experimental characterization of point mutations from a diverse set of allosteric proteins. Each residue had an associated set of calculated features. Two sets of features were used, one consisting of dynamical, structural, network, and informatic measures, and another of structural measures defined by Daily and Gray [1]. The resulting models performed well on an independent data set consisting of hotspots and non-hotspots from five allosteric proteins. For the independent data set, our top 10 models using Feature Set 1 recalled 68–81% of known hotspots, and among total hotspot predictions, 58–67% were actual hotspots. Hence, these models have precision P = 58–67% and recall R = 68–81%. The corresponding models for Feature Set 2 had P = 55–59% and R = 81–92%. We combined the features from each set that produced models with optimal predictive performance. The top 10 models using this hybrid feature set had R = 73–81% and P = 64–71%, the best overall performance of any of the sets of models. Our methods identified hotspots in structural regions of known allosteric significance. Moreover, our predicted hotspots form a network of contiguous residues in the interior of the structures, in agreement with previous work. In conclusion, we have developed models that discriminate between known allosteric hotspots and non-hotspots with high accuracy and sensitivity. Moreover, the pattern of predicted hotspots corresponds to known functional motifs implicated in allostery, and is consistent with previous work describing sparse networks of allosterically important residues.
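    A minimal sketch of the classification setup described above: an SVM trained on per-residue features and scored by precision and recall. The synthetic features and labels below are placeholders; the study's actual feature sets and training data are not reproduced here.

```python
# Sketch of hotspot classification with a support-vector machine, assuming a
# labelled table of per-residue features (1 = hotspot, 0 = non-hotspot).
# Data here are synthetic placeholders, not the study's training set.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n_residues = 400
X = rng.normal(size=(n_residues, 6))   # e.g. dynamical/structural/network/informatic measures
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_residues) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = SVC(kernel="rbf", class_weight="balanced")   # hotspots are the rarer class
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print(f"precision = {precision_score(y_test, pred):.2f}")  # predicted hotspots that are real
print(f"recall    = {recall_score(y_test, pred):.2f}")     # real hotspots recovered
```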

    Approaches in biotechnological applications of natural polymers

    Natural polymers, such as gums and mucilage, are biocompatible, cheap, easily available and non-toxic materials of native origin. These polymers are increasingly preferred over synthetic materials for industrial applications due to their intrinsic properties, and they are also considered alternative sources of raw materials since they present characteristics of sustainability, biodegradability and biosafety. By definition, gums and mucilages are polysaccharides or complex carbohydrates consisting of one or more monosaccharides or their derivatives linked in a bewildering variety of linkages and structures. Natural gums are polysaccharides naturally occurring in varieties of plant seeds and exudates, tree or shrub exudates, seaweed extracts, fungi, bacteria, and animal sources. Water-soluble gums, also known as hydrocolloids, are considered exudates and are pathological products; therefore, they do not form a part of the cell wall. On the other hand, mucilages are part of the cell and are physiological products. It is important to highlight that gums represent the largest amounts of polymer materials derived from plants. Gums have broad applications in both food and non-food industries, being commonly used as thickening, binding, emulsifying, suspending and stabilizing agents and as matrices for drug release in the pharmaceutical and cosmetic industries. In the food industry, their gelling properties and the ability to mold edible films and coatings are extensively studied. The use of gums depends on the intrinsic properties that they provide, often at costs below those of synthetic polymers. To upgrade their value, gums are being processed into various forms, including the most recent nanomaterials, for various biotechnological applications. Thus, the main natural polymers, including galactomannans, cellulose, chitin, agar, carrageenan, alginate, cashew gum, pectin and starch, and current research on them are reviewed in this article. To the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for fellowships (LCBBC and MGCC) and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) (PBSA). This study was supported by the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of the UID/BIO/04469/2013 unit, the Project RECI/BBB-EBI/0179/2012 (FCOMP-01-0124-FEDER-027462) and COMPETE 2020 (POCI-01-0145-FEDER-006684) (JAT).

    Mapping geographical inequalities in access to drinking water and sanitation facilities in low-income and middle-income countries, 2000-17

    Background Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high-resolution geospatial estimates of access to drinking water and sanitation facilities. Methods We used a Bayesian geostatistical model and data from 600 sources across more than 88 low-income and middle-income countries (LMICs) to estimate access to drinking water and sanitation facilities on continuous continent-wide surfaces from 2000 to 2017, and aggregated results to policy-relevant administrative units. We estimated mutually exclusive and collectively exhaustive subcategories of facilities for drinking water (piped water on or off premises, other improved facilities, unimproved, and surface water) and sanitation facilities (septic or sewer sanitation, other improved, unimproved, and open defecation) with use of ordinal regression. We also estimated the number of diarrhoeal deaths in children younger than 5 years attributed to unsafe facilities and estimated deaths that were averted by increased access to safe facilities in 2017, and analysed geographical inequality in access within LMICs. Findings Across LMICs, access to both piped water and improved water overall increased between 2000 and 2017, with progress varying spatially. For piped water, the safest water facility type, access increased from 40.0% (95% uncertainty interval [UI] 39.4-40.7) to 50.3% (50.0-50.5), but was lowest in sub-Saharan Africa, where access to piped water was mostly concentrated in urban centres. Access to both sewer or septic sanitation and improved sanitation overall also increased across all LMICs during the study period. For sewer or septic sanitation, access was 46.3% (95% UI 46.1-46.5) in 2017, compared with 28.7% (28.5-29.0) in 2000. Although some units improved access to the safest drinking water or sanitation facilities since 2000, a large absolute number of people continued to lack access in several units with high access to such facilities (>80%) in 2017. More than 253 000 people did not have access to sewer or septic sanitation facilities in the city of Harare, Zimbabwe, despite 88.6% (95% UI 87.2-89.7) access overall. Many units were able to transition from the least safe facilities in 2000 to safe facilities by 2017; for units in which populations primarily practised open defecation in 2000, 686 (95% UI 664-711) of the 1830 (1797-1863) units transitioned to the use of improved sanitation. Geographical disparities in access to improved water across units decreased in 76.1% (95% UI 71.6-80.7) of countries from 2000 to 2017, and in 53.9% (50.6-59.6) of countries for access to improved sanitation, but remained evident subnationally in most countries in 2017. Interpretation Our estimates, combined with geospatial trends in diarrhoeal burden, identify where efforts to increase access to safe drinking water and sanitation facilities are most needed. By highlighting areas with successful approaches or in need of targeted interventions, our estimates can enable precision public health to effectively progress towards universal access to safe water and sanitation. Copyright (C) 2020 The Author(s). Published by Elsevier Ltd.
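    The mutually exclusive, collectively exhaustive facility categories described in the Methods can be illustrated with a small aggregation example. The sketch below simply computes population-weighted category shares per administrative unit from hypothetical cluster records; it is not the Bayesian geostatistical or ordinal-regression model used in the study.

```python
# Illustrative aggregation of mutually exclusive drinking-water categories to
# administrative units; cluster records and populations are hypothetical.
import pandas as pd

clusters = pd.DataFrame({
    "admin_unit":     ["A", "A", "A", "B", "B"],
    "water_category": ["piped", "other_improved", "surface", "piped", "unimproved"],
    "population":     [1200, 800, 500, 2000, 1500],
})

by_category = clusters.groupby(["admin_unit", "water_category"])["population"].sum()
unit_totals = clusters.groupby("admin_unit")["population"].sum()

# Shares are exclusive and exhaustive, so each unit's row sums to 1.
shares = by_category.div(unit_totals, level="admin_unit")
print(shares.unstack(fill_value=0).round(3))
```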

    Global burden of 369 diseases and injuries in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019


    Five insights from the Global Burden of Disease Study 2019

    The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 provides a rules-based synthesis of the available evidence on levels and trends in health outcomes, a diverse set of risk factors, and health system responses. GBD 2019 covered 204 countries and territories, as well as first administrative level disaggregations for 22 countries, from 1990 to 2019. Because GBD is highly standardised and comprehensive, spanning both fatal and non-fatal outcomes, and uses a mutually exclusive and collectively exhaustive list of hierarchical disease and injury causes, the study provides a powerful basis for detailed and broad insights on global health trends and emerging challenges. GBD 2019 incorporates data from 281 586 sources and provides more than 3.5 billion estimates of health outcome and health system measures of interest for global, national, and subnational policy dialogue. All GBD estimates are publicly available and adhere to the Guidelines on Accurate and Transparent Health Estimate Reporting. From this vast amount of information, five key insights that are important for health, social, and economic development strategies have been distilled. These insights are subject to the many limitations outlined in each of the component GBD capstone papers.

    Glycogen metabolism has a key role in the cancer microenvironment and provides new targets for cancer therapy


    Global patterns in monthly activity of influenza virus, respiratory syncytial virus, parainfluenza virus, and metapneumovirus: a systematic analysis

    Background Influenza virus, respiratory syncytial virus, parainfluenza virus, and metapneumovirus are the most common viruses associated with acute lower respiratory infections in young children (<5 years) and older adults (≥65 years). A global report of the monthly activity of these viruses is needed to inform public health strategies and programmes for their control. Methods In this systematic analysis, we compiled data from a systematic literature review of studies published between Jan 1, 2000, and Dec 31, 2017; online datasets; and unpublished research data. Studies were eligible for inclusion if they reported laboratory-confirmed incidence data of human infection of influenza virus, respiratory syncytial virus, parainfluenza virus, or metapneumovirus, or a combination of these, for at least 12 consecutive months (or 52 weeks equivalent); stable testing practice throughout all years reported; virus results among residents in well-defined geographical locations; and aggregated virus results at least on a monthly basis. Data were extracted through a three-stage process, from which we calculated monthly annual average percentage (AAP) as the relative strength of virus activity. We defined duration of epidemics as the minimum number of months to account for 75% of annual positive samples, with each component month defined as an epidemic month. Furthermore, we modelled monthly AAP of influenza virus and respiratory syncytial virus using site-specific temperature and relative humidity for the prediction of local average epidemic months. We also predicted global epidemic months of influenza virus and respiratory syncytial virus on a 5 degrees by 5 degrees grid. The systematic review in this study is registered with PROSPERO, number CRD42018091628. Findings We initially identified 37 335 eligible studies. Of 21 065 studies remaining after exclusion of duplicates, 1081 full-text articles were assessed for eligibility, of which 185 were identified as eligible. We included 246 sites for influenza virus, 183 sites for respiratory syncytial virus, 83 sites for parainfluenza virus, and 65 sites for metapneumovirus. Influenza virus had clear seasonal epidemics in winter months in most temperate sites but timing of epidemics was more variable and less seasonal with decreasing distance from the equator. Unlike influenza virus, respiratory syncytial virus had clear seasonal epidemics in both temperate and tropical regions, starting in late summer months in the tropics of each hemisphere and reaching most temperate sites in winter months. In most temperate sites, influenza virus epidemics occurred later than respiratory syncytial virus epidemics (by 0.3 months [95% CI -0.3 to 0.9]), while no clear temporal order was observed in the tropics. Parainfluenza virus epidemics were found mostly in spring and early summer months in each hemisphere. Metapneumovirus epidemics occurred in late winter and spring in most temperate sites but the timing of epidemics was more diverse in the tropics. Influenza virus epidemics had shorter duration (3.8 months [3.6 to 4.0]) in temperate sites and longer duration (5.2 months [4.9 to 5.5]) in the tropics. Duration of epidemics was similar across all sites for respiratory syncytial virus (4.6 months [4.3 to 4.8]), as it was for metapneumovirus (4.8 months [4.4 to 5.1]). By comparison, parainfluenza virus had longer duration of epidemics (6.3 months [6.0 to 6.7]). Our model had good predictability in the average epidemic months of influenza virus in temperate regions and respiratory syncytial virus in both temperate and tropical regions. Through leave-one-out cross validation, the overall prediction error in the onset of epidemics was within 1 month (influenza virus -0.2 months [-0.6 to 0.1]; respiratory syncytial virus 0.1 months [-0.2 to 0.4]). Interpretation This study is the first to provide global representations of month-by-month activity of influenza virus, respiratory syncytial virus, parainfluenza virus, and metapneumovirus. Our model is helpful in predicting the local onset month of influenza virus and respiratory syncytial virus epidemics. The seasonality information has important implications for health services planning, the timing of respiratory syncytial virus passive prophylaxis, and the strategy of influenza virus and future respiratory syncytial virus vaccination. Copyright (C) 2019 The Author(s). Published by Elsevier Ltd.
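    The duration-of-epidemics definition in the Methods (the minimum number of months accounting for 75% of a site's annual positive samples, each counted as an epidemic month) can be written down directly. The sketch below applies it to hypothetical monthly counts; it illustrates the rule, not the study's processing pipeline.

```python
# Epidemic duration as defined above: the fewest months that together account
# for at least 75% of annual positive samples; those months are the epidemic
# months. The monthly counts below are hypothetical.
import numpy as np

def epidemic_months(monthly_positives, threshold=0.75):
    counts = np.asarray(monthly_positives, dtype=float)
    order = np.argsort(counts)[::-1]                      # months ranked by positives, descending
    cumulative = np.cumsum(counts[order]) / counts.sum()  # cumulative share of annual positives
    duration = int(np.searchsorted(cumulative, threshold) + 1)
    months = sorted(int(m) + 1 for m in order[:duration])  # 1-based calendar months
    return duration, months

# Hypothetical temperate-site counts with a winter peak (Jan..Dec).
counts = [55, 70, 40, 12, 5, 2, 1, 2, 4, 10, 25, 45]
duration, months = epidemic_months(counts)
print(duration, months)   # 4 [1, 2, 3, 12] -> a December-March epidemic
```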

    Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background Achieving universal health coverage (UHC) involves all people receiving the health services they need, of high quality, without experiencing financial hardship. Making progress towards UHC is a policy priority for both countries and global institutions, as highlighted by the agenda of the UN Sustainable Development Goals (SDGs) and WHO's Thirteenth General Programme of Work (GPW13). Measuring effective coverage at the health-system level is important for understanding whether health services are aligned with countries' health profiles and are of sufficient quality to produce health gains for populations of all ages. Methods Based on the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, we assessed UHC effective coverage for 204 countries and territories from 1990 to 2019. Drawing from a measurement framework developed through WHO's GPW13 consultation, we mapped 23 effective coverage indicators to a matrix representing health service types (eg, promotion, prevention, and treatment) and five population-age groups spanning from reproductive and newborn to older adults (≥65 years). Effective coverage indicators were based on intervention coverage or outcome-based measures such as mortality-to-incidence ratios to approximate access to quality care; outcome-based measures were transformed to values on a scale of 0–100 based on the 2·5th and 97·5th percentile of location-year values. We constructed the UHC effective coverage index by weighting each effective coverage indicator relative to its associated potential health gains, as measured by disability-adjusted life-years for each location-year and population-age group. For three tests of validity (content, known-groups, and convergent), UHC effective coverage index performance was generally better than that of other UHC service coverage indices from WHO (ie, the current metric for SDG indicator 3.8.1 on UHC service coverage), the World Bank, and GBD 2017. We quantified frontiers of UHC effective coverage performance on the basis of pooled health spending per capita, representing UHC effective coverage index levels achieved in 2019 relative to country-level government health spending, prepaid private expenditures, and development assistance for health. To assess current trajectories towards the GPW13 UHC billion target—1 billion more people benefiting from UHC by 2023—we estimated additional population equivalents with UHC effective coverage from 2018 to 2023. Findings Globally, performance on the UHC effective coverage index improved from 45·8 (95% uncertainty interval 44·2–47·5) in 1990 to 60·3 (58·7–61·9) in 2019, yet country-level UHC effective coverage in 2019 still spanned from 95 or higher in Japan and Iceland to lower than 25 in Somalia and the Central African Republic. Since 2010, sub-Saharan Africa showed accelerated gains on the UHC effective coverage index (at an average increase of 2·6% [1·9–3·3] per year up to 2019); by contrast, most other GBD super-regions had slowed rates of progress in 2010–2019 relative to 1990–2010. Many countries showed lagging performance on effective coverage indicators for non-communicable diseases relative to those for communicable diseases and maternal and child health, despite non-communicable diseases accounting for a greater proportion of potential health gains in 2019, suggesting that many health systems are not keeping pace with the rising non-communicable disease burden and associated population health needs. 
    In 2019, the UHC effective coverage index was associated with pooled health spending per capita (r=0·79), although countries across the development spectrum had much lower UHC effective coverage than is potentially achievable relative to their health spending. Under maximum efficiency of translating health spending into UHC effective coverage performance, countries would need to reach $1398 pooled health spending per capita (US$ adjusted for purchasing power parity) in order to achieve 80 on the UHC effective coverage index. From 2018 to 2023, an estimated 388·9 million (358·6–421·3) more population equivalents would have UHC effective coverage, falling well short of the GPW13 target of 1 billion more people benefiting from UHC during this time. Current projections point to an estimated 3·1 billion (3·0–3·2) population equivalents still lacking UHC effective coverage in 2023, with nearly a third (968·1 million [903·5–1040·3]) residing in south Asia. Interpretation The present study demonstrates the utility of measuring effective coverage and its role in supporting improved health outcomes for all people—the ultimate goal of UHC and its achievement. Global ambitions to accelerate progress on UHC service coverage are increasingly unlikely unless concerted action on non-communicable diseases occurs and countries can better translate health spending into improved performance. Focusing on effective coverage and accounting for the world's evolving health needs lays the groundwork for better understanding how close—or how far—all populations are in benefiting from UHC. Funding Bill & Melinda Gates Foundation.
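    Two steps in the Methods above lend themselves to a compact numerical sketch: rescaling an outcome-based measure to a 0-100 scale using the 2.5th and 97.5th percentiles of location-year values, and combining indicator scores into a composite index weighted by their associated potential health gains. The numbers and weights below are illustrative assumptions, not GBD estimates, and the orientation of the transform (whether higher raw values are better or worse) would need to match each indicator.

```python
# Sketch of (1) percentile-based rescaling to 0-100 and (2) a health-gain
# weighted composite index; all values and weights are illustrative.
import numpy as np

def rescale_0_100(values):
    """Map values onto 0-100 between the 2.5th and 97.5th percentiles, clipped."""
    lo, hi = np.percentile(values, [2.5, 97.5])
    return np.clip((np.asarray(values, dtype=float) - lo) / (hi - lo), 0, 1) * 100

# Hypothetical outcome-based measures (e.g. derived from mortality-to-incidence
# ratios) across a set of location-years.
raw_scores = np.array([0.12, 0.35, 0.50, 0.64, 0.91])
scaled = rescale_0_100(raw_scores)

# Hypothetical indicator scores for one location-year, weighted by the share of
# potential health gains (DALYs) attributed to each indicator.
indicator_scores = np.array([72.0, 55.0, 38.0])   # already on the 0-100 scale
daly_weights = np.array([0.5, 0.3, 0.2])          # must sum to 1
composite_index = float(indicator_scores @ daly_weights)

print(scaled.round(1), round(composite_index, 1))   # composite = 60.1 with these assumptions
```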