    Mannose-binding lectin-deficient genotypes as a risk factor of pneumococcal meningitis in infants

    OBJECTIVES: The objective of this study was to evaluate the role of mannose-binding lectin (MBL)-deficient genotypes in pneumococcal meningitis (PM) in children. METHODS: We performed a 16-year retrospective study (January 2001 to March 2016) including patients ≤ 18 years with PM. Variables including the attack rate of the pneumococcal serotype (high or low invasive capacity) and MBL2 genotypes associated with low serum MBL levels were recorded. RESULTS: Forty-eight patients were included in the study. Median age was 18.5 months, and 17/48 episodes (35.4%) occurred in children ≤ 12 months old. Serotypes with high invasive disease potential were identified in 15/48 episodes (31.2%). MBL2-deficient genotypes accounted for 18.8% (9/48). Children ≤ 12 months old had a 7-fold higher risk (95% CI: 1.6-29.9) of carrying an MBL2-deficient genotype than children > 12 months old. A sub-analysis of patients by age group revealed significant proportions of carriers of MBL2-deficient genotypes among those ≤ 12 months old with PM caused by opportunistic serotypes (54.5%), admitted to the Pediatric Intensive Care Unit (PICU) (46.7%), and of White ethnicity (35.7%). These proportions were significantly higher than in older children (all p < 0.05). CONCLUSIONS: Our results suggest that differences in MBL2 genotype in children ≤ 12 months old affect susceptibility to PM and may play an important role in episodes caused by serotypes without high invasive disease potential.
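
    The headline estimate above (roughly a 7-fold risk with a wide 95% CI) is the kind of figure obtained from a simple 2x2 cross-tabulation of age group against genotype carriage. The sketch below is illustrative only: the counts are invented rather than the study's data, and the Wald interval is just one common way such a confidence interval is computed.

```python
import numpy as np

# Hypothetical 2x2 table (NOT the study's data): rows = age group,
# columns = MBL2-deficient genotype carrier (yes / no).
#                 carrier  non-carrier
# <= 12 months       a          b
# >  12 months       c          d
a, b, c, d = 8, 9, 3, 28  # illustrative counts only

# Odds ratio and Wald 95% confidence interval on the log scale
or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {or_hat:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")
```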

    The HIV continuum of care in European Union countries in 2013: data and challenges

    BACKGROUND: UNAIDS has set a 90-90-90 target to curb the HIV epidemic by 2020, but methods used to assess whether countries have reached this target are not standardised, hindering comparisons. METHODS: Through a collaboration formed by the European Centre for Disease Prevention and Control (ECDC) with European HIV cohorts and surveillance agencies, we constructed a standardised, four-stage continuum of HIV care for 11 European Union (EU) countries for 2013. Stages were defined as: 1) number of people living with HIV (PLHIV) in the country by end of 2013; 2) proportion of stage 1 ever diagnosed; 3) proportion of stage 2 that ever initiated ART; and 4) proportion of stage 3 who became virally suppressed (≤200 copies/mL). Case surveillance data were used primarily to derive stages 1 (using back-calculation models) and 2, and cohort data for stages 3 and 4. RESULTS: In 2013, 674,500 people in the 11 countries were estimated to be living with HIV, ranging from 5,500 to 153,400 in each country. Overall HIV prevalence was 0.22% (range 0.09%-0.36%). Overall proportions of each previous stage were 84% diagnosed, 84% on ART, and 85% virally suppressed (60% of PLHIV). Two countries achieved ≥90% for all stages, and over half had reached ≥90% for at least one stage. CONCLUSIONS: EU countries are nearing the 90-90-90 target. Reducing the proportion undiagnosed remains the greatest barrier to achieving this target, suggesting further efforts are needed to improve HIV testing rates. Standardising methods to derive comparable continuums of care remains a challenge.
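
    The four-stage continuum defined above is a chain of conditional proportions; multiplying them gives the overall share of people living with HIV who are virally suppressed. A minimal sketch of that arithmetic, using the overall proportions reported above rather than any single country's figures:

```python
# Four-stage HIV continuum of care, as defined above:
# stage 1: estimated number of people living with HIV (PLHIV)
# stage 2: proportion of stage 1 ever diagnosed
# stage 3: proportion of stage 2 that ever initiated ART
# stage 4: proportion of stage 3 virally suppressed (<= 200 copies/mL)

plhiv = 674_500          # stage 1 (overall estimate reported above)
p_diagnosed = 0.84       # stage 2
p_on_art = 0.84          # stage 3 (of those diagnosed)
p_suppressed = 0.85      # stage 4 (of those on ART)

stages = {
    "diagnosed": plhiv * p_diagnosed,
    "on ART": plhiv * p_diagnosed * p_on_art,
    "virally suppressed": plhiv * p_diagnosed * p_on_art * p_suppressed,
}
for name, n in stages.items():
    print(f"{name:>20}: {n:,.0f} ({n / plhiv:.0%} of PLHIV)")

# 90-90-90 check: each conditional proportion should reach at least 0.90
meets_target = all(p >= 0.90 for p in (p_diagnosed, p_on_art, p_suppressed))
print("meets 90-90-90:", meets_target)
```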

    The Human Immunodeficiency Virus Continuum of Care in European Union Countries in 2013: Data and Challenges

    The Joint United Nations Programme on HIV/AIDS (UNAIDS) has set a "90-90-90" target to curb the human immunodeficiency virus (HIV) epidemic by 2020, but methods used to assess whether countries have reached this target are not standardized, hindering comparisons. Through a collaboration formed by the European Centre for Disease Prevention and Control (ECDC) with European HIV cohorts and surveillance agencies, we constructed a standardized, 4-stage continuum of HIV care for 11 European Union countries for 2013. Stages were defined as (1) number of people living with HIV in the country by end of 2013; (2) proportion of stage 1 ever diagnosed; (3) proportion of stage 2 that ever initiated ART; and (4) proportion of stage 3 who became virally suppressed (≤200 copies/mL). Case surveillance data were used primarily to derive stages 1 (using back-calculation models) and 2, and cohort data for stages 3 and 4. In 2013, 674500 people in the 11 countries were estimated to be living with HIV, ranging from 5500 to 153400 in each country. Overall HIV prevalence was 0.22% (range, 0.09%-0.36%). Overall proportions of each previous stage were 84% diagnosed, 84% on ART, and 85% virally suppressed (60% of people living with HIV). Two countries achieved ≥90% for all stages, and more than half had reached ≥90% for at least 1 stage. European Union countries are nearing the 90-90-90 target. Reducing the proportion undiagnosed remains the greatest barrier to achieving this target, suggesting that further efforts are needed to improve HIV testing rates. Standardizing methods to derive comparable continuums of care remains a challenge.

    Comparison of dynamic monitoring strategies based on CD4 cell counts in virally suppressed, HIV-positive individuals on combination antiretroviral therapy in high-income countries: a prospective, observational study

    BACKGROUND: Clinical guidelines vary with respect to the optimal monitoring frequency of HIV-positive individuals. We compared dynamic monitoring strategies based on time-varying CD4 cell counts in virologically suppressed HIV-positive individuals. METHODS: In this observational study, we used data from prospective studies of HIV-positive individuals in Europe (France, Greece, the Netherlands, Spain, Switzerland, and the UK) and North and South America (Brazil, Canada, and the USA) in The HIV-CAUSAL Collaboration and The Centers for AIDS Research Network of Integrated Clinical Systems. We compared three monitoring strategies that differ in the threshold used to measure CD4 cell count and HIV RNA viral load every 3–6 months (when below the threshold) or every 9–12 months (when above the threshold). The strategies were defined by the threshold CD4 counts of 200 cells per μL, 350 cells per μL, and 500 cells per μL. Using inverse probability weighting to adjust for baseline and time-varying confounders, we estimated hazard ratios (HRs) of death and of AIDS-defining illness or death, risk ratios of virological failure, and mean differences in CD4 cell count. FINDINGS: 47 635 individuals initiated an antiretroviral therapy regimen between Jan 1, 2000, and Jan 9, 2015, and met the eligibility criteria for inclusion in our study. During follow-up, CD4 cell count was measured on average every 4·0 months and viral load every 3·8 months. 464 individuals died (107 in threshold 200 strategy, 157 in threshold 350, and 200 in threshold 500) and 1091 had AIDS-defining illnesses or died (267 in threshold 200 strategy, 365 in threshold 350, and 459 in threshold 500). Compared with threshold 500, the mortality HR was 1·05 (95% CI 0·86–1·29) for threshold 200 and 1·02 (0·91–1·14) for threshold 350. Corresponding estimates for death or AIDS-defining illness were 1·08 (0·95–1·22) for threshold 200 and 1·03 (0·96–1·12) for threshold 350. Compared with threshold 500, the 24 month risk ratios of virological failure (viral load more than 200 copies per mL) were 2·01 (1·17–3·43) for threshold 200 and 1·24 (0·89–1·73) for threshold 350, and 24 month mean CD4 cell count differences were 0·4 (−25·5 to 26·3) cells per μL for threshold 200 and −3·5 (−16·0 to 8·9) cells per μL for threshold 350. INTERPRETATION: Decreasing monitoring to annually when CD4 count is higher than 200 cells per μL compared with higher than 500 cells per μL does not worsen the short-term clinical and immunological outcomes of virally suppressed HIV-positive individuals. However, more frequent virological monitoring might be necessary to reduce the risk of virological failure. Further follow-up studies are needed to establish the long-term safety of these strategies. FUNDING: National Institutes of Health.
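
    The analysis above compares dynamic strategies using inverse probability weighting to adjust for time-varying confounding. As a rough, generic illustration only (simulated data, invented variable names, scikit-learn for the weight models, and none of the collaboration's actual code), the sketch below builds stabilized inverse-probability-of-censoring weights on person-interval data; in the study such weights feed a weighted outcome model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated person-interval data (purely illustrative): one row per person
# per 3-month interval, with a time-varying CD4 count and an indicator of
# whether the person remained adherent to ("uncensored" under) an assigned
# monitoring strategy during that interval.
n = 5000
df = pd.DataFrame({
    "id": np.repeat(np.arange(n // 4), 4),
    "interval": np.tile(np.arange(4), n // 4),
    "cd4": rng.normal(450, 150, n).clip(50, 1200),
    "age": np.repeat(rng.integers(18, 70, n // 4), 4),
})
# In the simulation, the chance of staying adherent depends on current CD4
p_adhere = 1 / (1 + np.exp(-(0.5 + 0.004 * (df["cd4"] - 450))))
df["uncensored"] = rng.binomial(1, p_adhere.to_numpy())

# Denominator model: P(uncensored | time-varying and baseline covariates)
denom = LogisticRegression(max_iter=1000).fit(
    df[["cd4", "age", "interval"]], df["uncensored"])
p_denom = denom.predict_proba(df[["cd4", "age", "interval"]])[:, 1]

# Numerator model for stabilized weights: baseline covariates and time only
num = LogisticRegression(max_iter=1000).fit(
    df[["age", "interval"]], df["uncensored"])
p_num = num.predict_proba(df[["age", "interval"]])[:, 1]

# Stabilized inverse-probability-of-censoring weights, cumulated over time.
# Rows after a deviation are zero-weighted here for simplicity; in a real
# analysis that person-time would be censored.
df["w"] = np.where(df["uncensored"] == 1, p_num / p_denom, 0.0)
df["cum_w"] = df.groupby("id")["w"].cumprod()

# These cumulative weights would then enter a weighted outcome model
# (e.g. a weighted pooled logistic or Cox model) comparing the strategies.
print(df["cum_w"].describe())
```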

    Using observational data to emulate a randomized trial of dynamic treatment switching strategies

    BACKGROUND: When a clinical treatment fails or shows suboptimal results, the question of when to switch to another treatment arises. Treatment switching strategies are often dynamic because the time of switching depends on the evolution of an individual's time-varying covariates. Dynamic strategies can be directly compared in randomized trials. For example, HIV-infected individuals receiving antiretroviral therapy could be randomized to switching therapy within 90 days of HIV-1 RNA crossing above a threshold of either 400 copies/ml (tight-control strategy) or 1000 copies/ml (loose-control strategy). METHODS: We review an approach to emulate a randomized trial of dynamic switching strategies using observational data from the Antiretroviral Therapy Cohort Collaboration, the Centers for AIDS Research Network of Integrated Clinical Systems and the HIV-CAUSAL Collaboration. We estimated the comparative effect of tight-control vs. loose-control strategies on death and AIDS or death via inverse-probability weighting. RESULTS: Of 43 803 individuals who initiated an eligible antiretroviral therapy regimen in 2002 or later, 2001 met the baseline inclusion criteria for the mortality analysis and 1641 for the AIDS or death analysis. There were 21 deaths and 33 AIDS or death events in the tight-control group, and 28 deaths and 41 AIDS or death events in the loose-control group. Compared with tight control, the adjusted hazard ratios (95% confidence interval) for loose control were 1.10 (0.73, 1.66) for death, and 1.04 (0.86, 1.27) for AIDS or death. CONCLUSIONS: Although our effective sample sizes were small and our estimates imprecise, the described methodological approach can serve as an example for future analyses.
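
    One way to emulate such a trial, consistent with the design described above, is to "clone" every eligible person into each strategy at baseline, artificially censor a clone when the observed data deviate from that strategy, and then reweight for the censoring. The fragment below sketches only the clone-and-censor step on invented toy data; the column names, grace-period handling, and simplifications (no confirmation measurements, only one kind of deviation) are illustrative assumptions, not the collaboration's implementation.

```python
import pandas as pd

# Toy longitudinal data (invented): one row per person per month.
obs = pd.DataFrame({
    "id":       [1] * 6 + [2] * 6,
    "month":    list(range(6)) * 2,
    "rna":      [50, 600, 800, 900, 100, 80,     # person 1: fails, then switches
                 50, 450, 500, 600, 700, 900],   # person 2: fails, never switches
    "switched": [0, 0, 0, 1, 0, 0,
                 0, 0, 0, 0, 0, 0],
})

def censor_under_strategy(df: pd.DataFrame, threshold: int, grace: int = 3) -> pd.DataFrame:
    """Keep each clone's rows until its observed data deviate from the strategy:
    the strategy requires switching within `grace` months of HIV-1 RNA first
    crossing `threshold` copies/mL. (Simplified: confirmation measurements and
    other kinds of deviation handled in the real analysis are ignored.)"""
    kept = []
    for _, person in df.sort_values(["id", "month"]).groupby("id"):
        crossings = person.loc[person["rna"] > threshold, "month"]
        deadline = crossings.iloc[0] + grace if len(crossings) else float("inf")
        switches = person.loc[person["switched"] == 1, "month"]
        switch_month = switches.iloc[0] if len(switches) else float("inf")
        # the clone is artificially censored at the deadline if no switch happened by then
        censor_month = deadline if switch_month > deadline else float("inf")
        kept.append(person[person["month"] <= censor_month])
    return pd.concat(kept, ignore_index=True)

# One clone of every person per strategy: tight control (400) vs loose control (1000)
clones = pd.concat([
    censor_under_strategy(obs, threshold=400).assign(strategy="tight (400 copies/mL)"),
    censor_under_strategy(obs, threshold=1000).assign(strategy="loose (1000 copies/mL)"),
], ignore_index=True)

# Months of follow-up each clone contributes before artificial censoring
print(clones.groupby(["strategy", "id"])["month"].max())
```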

    Healthcare Access and Quality Index based on mortality from causes amenable to personal health care in 195 countries and territories, 1990-2015: a novel analysis from the Global Burden of Disease Study 2015

    Background National levels of personal health-care access and quality can be approximated by measuring mortality rates from causes that should not be fatal in the presence of effective medical care (ie, amenable mortality). Previous analyses of mortality amenable to health care only focused on high-income countries and faced several methodological challenges. In the present analysis, we use the highly standardised cause of death and risk factor estimates generated through the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) to improve and expand the quantification of personal health-care access and quality for 195 countries and territories from 1990 to 2015. Methods We mapped the most widely used list of causes amenable to personal health care developed by Nolte and McKee to 32 GBD causes. We accounted for variations in cause of death certification and misclassifications through the extensive data standardisation processes and redistribution algorithms developed for GBD. To isolate the effects of personal health-care access and quality, we risk-standardised cause-specific mortality rates for each geography-year by removing the joint effects of local environmental and behavioural risks, and adding back the global levels of risk exposure as estimated for GBD 2015. We employed principal component analysis to create a single, interpretable summary measure, the Healthcare Access and Quality (HAQ) Index, on a scale of 0 to 100. The HAQ Index showed strong convergence validity as compared with other health-system indicators, including health expenditure per capita (r = 0.88), an index of 11 universal health coverage interventions (r = 0.83), and human resources for health per 1000 population (r = 0.77). We used free disposal hull analysis with bootstrapping to produce a frontier based on the relationship between the HAQ Index and the Socio-demographic Index (SDI), a measure of overall development consisting of income per capita, average years of education, and total fertility rates. This frontier allowed us to better quantify the maximum levels of personal health-care access and quality achieved across the development spectrum, and pinpoint geographies where gaps between observed and potential levels have narrowed or widened over time. Findings Between 1990 and 2015, nearly all countries and territories saw their HAQ Index values improve; nonetheless, the difference between the highest and lowest observed HAQ Index was larger in 2015 than in 1990, ranging from 28.6 to 94.6. Of 195 geographies, 167 had statistically significant increases in HAQ Index levels since 1990, with South Korea, Turkey, Peru, China, and the Maldives recording among the largest gains by 2015. Performance on the HAQ Index and individual causes showed distinct patterns by region and level of development, yet substantial heterogeneities emerged for several causes, including cancers in highest-SDI countries; chronic kidney disease, diabetes, diarrhoeal diseases, and lower respiratory infections among middle-SDI countries; and measles and tetanus among lowest-SDI countries. While the global HAQ Index average rose from 40.7 (95% uncertainty interval, 39.0-42.8) in 1990 to 53.7 (52.2-55.4) in 2015, far less progress occurred in narrowing the gap between observed HAQ Index values and maximum levels achieved; at the global level, the difference between the observed and frontier HAQ Index only decreased from 21.2 in 1990 to 20.1 in 2015.
If every country and territory had achieved the highest observed HAQ Index by their corresponding level of SDI, the global average would have been 73.8 in 2015. Several countries, particularly in eastern and western sub-Saharan Africa, reached HAQ Index values similar to or beyond their development levels, whereas others, namely in southern sub-Saharan Africa, the Middle East, and south Asia, lagged behind what geographies of similar development attained between 1990 and 2015. Interpretation This novel extension of the GBD Study shows the untapped potential for personal health-care access and quality improvement across the development spectrum. Amid substantive advances in personal health care at the national level, heterogeneous patterns for individual causes in given countries or territories suggest that few places have consistently achieved optimal health-care access and quality across health-system functions and therapeutic areas. This is especially evident in middle-SDI countries, many of which have recently undergone or are currently experiencing epidemiological transitions. The HAQ Index, if paired with other measures of health-system characteristics such as intervention coverage, could provide a robust avenue for tracking progress on universal health coverage and identifying local priorities for strengthening personal health-care quality and access throughout the world.
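
    The index construction described above hinges on taking the first principal component of risk-standardised, cause-specific amenable-mortality rates and rescaling it to 0-100. The sketch below illustrates just that step on random placeholder data; it is not a reproduction of the GBD pipeline, and the log-transform, standardisation, and orientation choices are assumptions made for the illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder matrix: 195 geographies x 32 amenable causes of death, holding
# (already risk-standardised) mortality rates. Random data for illustration only.
n_geo, n_causes = 195, 32
rates = rng.lognormal(mean=2.0, sigma=0.6, size=(n_geo, n_causes))

# Log-transform and standardise the cause-specific rates, then take the first
# principal component as a single summary of access and quality.
X = StandardScaler().fit_transform(np.log(rates))
pc1 = PCA(n_components=1).fit_transform(X)[:, 0]

# Higher mortality should mean a lower index, so orient the component
# (its sign is arbitrary) and rescale to a 0-100 index.
if np.corrcoef(pc1, np.log(rates).mean(axis=1))[0, 1] > 0:
    pc1 = -pc1
haq_like = 100 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())

print(f"index range: {haq_like.min():.1f}-{haq_like.max():.1f}, "
      f"median: {np.median(haq_like):.1f}")
```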

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable to more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.