
    WHO decides what is fair? International HIV treatment guidelines, social value judgements and equitable provision of lifesaving antiretroviral therapy

    The new 2013 WHO Consolidated Guidelines on the Use of Antiretroviral Therapy (ART) make aspirational recommendations for ART delivery in low- and middle-income countries. Comprehensive assessments of available evidence were undertaken, and the recommendations made are likely to improve individual health outcomes. However, feasibility was downplayed: the Guidelines represent high-cost policy options, not all of which are compatible with the core public health principles of decentralization, task-shifting, and a commitment to universality. Critically, their impact on equity and the population-level distribution of health outcomes was not fully considered. We analyze the likely distribution of health outcomes resulting from alternative ways of realising the 2013 Guidelines and assess practicality, feasibility and health attainment amongst different sections of the population in the context of financial and human resource constraints. A claim can be made that direct interpretation of the Guidelines follows a "human rights"-based approach in seeking to provide individual patients with the best alternatives amongst those available on the basis of current evidence. However, there is a basic conflict between this and "consequentialist", public health-based approaches that provide more equal population-level outcomes. When determining how to respond to the 2013 Guidelines and fairly allocate scarce lifesaving resources, national policymakers must carefully consider the distribution of outcomes and the underpinning social value judgements required to inform policy choice. It is important to consider whose values should determine what is a just distribution of health outcomes. The WHO Guidelines committees are well placed to compile evidence on the costs and effects of health care alternatives. However, their mandate for making distributional social value judgements remains unclear.

    Combining factorial and multi-arm multi-stage platform designs to evaluate multiple interventions efficiently

    BACKGROUND: Factorial designs and multi-arm multi-stage (MAMS) platform designs each have many advantages, but the practical advantages and disadvantages of combining the two have not been explored. METHODS: We propose practical methods for a combined design within the platform trial paradigm, where some interventions are not expected to interact and could be given together. RESULTS: We describe the combined design and suggest diagrams that can be used to represent it. Many properties are common both to standard factorial designs, including the need to consider interactions between interventions and the impact of intervention efficacy on the power of other comparisons, and to standard multi-arm multi-stage designs, including the need to pre-specify procedures for starting and stopping intervention comparisons. We also identify some specific features of the factorial-MAMS design: timing of interim and final analyses should be determined by calendar time or total observed events; some non-factorial modifications may be useful; eligibility criteria should be broad enough to include any patient eligible for any part of the randomisation; stratified randomisation may conveniently be performed sequentially; and analysis requires special care to use only concurrent controls. CONCLUSION: A combined factorial-MAMS design brings together the efficiencies of factorial trials and multi-arm multi-stage platform trials. It allows us to address multiple research questions under one protocol and to test multiple new treatment options, which is particularly important when facing a new emergent infection such as COVID-19.

    Bone mineral density among children living with HIV failing first-line anti-retroviral therapy in Uganda: A sub-study of the CHAPAS-4 trial

    BACKGROUND: Children living with perinatally acquired HIV (CLWH) survive into adulthood on antiretroviral therapy (ART). HIV, ART, and malnutrition can all lead to low bone mineral density (BMD). Few studies have described bone health among CLWH in Sub-Saharan Africa. We determined the prevalence of and factors associated with low BMD among CLWH switching to second-line ART in the CHAPAS-4 trial (ISRCTN22964075) in Uganda. METHODS: BMD was determined using dual-energy X-ray absorptiometry (DXA). BMD Z-scores were adjusted for age, sex, height and race. Demographic characteristics were summarized using medians (interquartile range [IQR]) for continuous variables and proportions for categorical variables. Logistic regression was used to determine the associations between each variable and low BMD. RESULTS: A total of 159 children were enrolled (50% male), with median (IQR) age 10 (7-12) years, median duration of first-line ART 5.2 (3.3-6.8) years, CD4 count 774 (528-1083) cells/mm3, weight-for-age Z-score -1.36 (-2.19, -0.65) and body mass index Z-score (BMIZ) -1.31 (-2.06, -0.6). Low (Z-score ≤ -2) total body less head (TBLH) BMD was observed in 28 (18%) children, 21 (13%) had low lumbar spine (LS) BMD, and 15 (9%) had both. Low TBLH BMD was associated with increasing age (adjusted odds ratio [aOR] 1.37; 95% CI: 1.13-1.65, p = 0.001), female sex (aOR 3.8; 95% CI: 1.31-10.81, p = 0.014), low BMI (aOR 0.36; 95% CI: 0.21-0.61, p < 0.001), and first-line zidovudine exposure (aOR 3.68; 95% CI: 1.25-10.8, p = 0.018). CD4 count, viral load and first-line ART duration were not associated with TBLH BMD. Low LS BMD was associated with increasing age (aOR 1.42; 95% CI: 1.16-1.74, p = 0.001) and female sex (aOR 3.41; 95% CI: 1.18-9.8, p = 0.023). CONCLUSION: Nearly 20% of CLWH failing first-line ART had low BMD, which was associated with female sex, older age, first-line ZDV exposure, and low BMI. Prevention, monitoring, and implications following transition to adult care should be prioritized to identify poor bone health in HIV-positive adolescents entering adulthood.
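    As a rough illustration of the analysis described above (logistic regression relating each factor to low BMD, reported as adjusted odds ratios), the following is a minimal Python sketch on synthetic data. The column names, simulated coefficients and sample values are assumptions for illustration only, not the CHAPAS-4 dataset or analysis code.

    # Minimal sketch: adjusted odds ratios for low BMD from a logistic regression.
    # Synthetic data; column names and coefficients are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 159
    df = pd.DataFrame({
        "age": rng.integers(7, 13, n),     # years
        "female": rng.integers(0, 2, n),   # 1 = female
        "bmiz": rng.normal(-1.3, 1.0, n),  # BMI-for-age Z-score
        "zdv": rng.integers(0, 2, n),      # 1 = first-line zidovudine exposure
    })
    # Simulate low TBLH BMD with odds rising with age, female sex and ZDV exposure
    # and falling with BMI Z-score, mirroring the directions reported above.
    logit = -6 + 0.3 * df.age + 1.3 * df.female - 1.0 * df.bmiz + 1.3 * df.zdv
    df["low_bmd"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    fit = smf.logit("low_bmd ~ age + female + bmiz + zdv", data=df).fit(disp=0)
    print(np.exp(fit.params))      # adjusted odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale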

    Limited sampling models to predict the pharmacokinetics of nevirapine, stavudine, and lamivudine in HIV-infected children treated with pediatric fixed-dose combination tablets.

    Full 12-hour pharmacokinetic profiles of nevirapine, stavudine, and lamivudine in HIV-infected children taking fixed-dose combination antiretroviral tablets have been reported previously by us. Further studies with these formulations could benefit from less-intensive pharmacokinetic sampling. Data from 65 African children were used to relate the area under the plasma concentration versus time curve over 12 hours (AUC) to plasma concentrations of nevirapine, stavudine, or lamivudine at times t = 0, 1, 2, 4, 6, 8, and 12 hours after intake using linear regression. Limited sampling models were developed using leave-one-out cross-validation. The predictive performance of each model was evaluated using the mean relative prediction error (mpe%) as an indicator of bias and the root mean squared relative prediction error (rmse%) as a measure of precision. Criteria set a priori to accept a limited sampling model were: the 95% confidence limit of the mpe% should include 0, rmse% less than 10%, a high correlation coefficient, and as few (convenient) samples as possible. Using only one sample did not lead to acceptable AUC predictions for stavudine or lamivudine, although the 6-hour sample was acceptable for nevirapine (mpe%: -0.8%, 95% confidence interval: -2.2 to +0.6; rmse%: 5.8%; r: 0.98). Using two samples, AUC predictions for stavudine and lamivudine improved considerably but did not meet the predefined acceptance criteria. Using three samples (1, 2, 6 hours), an accurate and precise limited sampling model for stavudine AUC (mpe%: -0.6%, 95% confidence interval: -2.2 to +1.0; rmse%: 6.5%; r: 0.98) and lamivudine AUC (mpe%: -0.3%, 95% confidence interval: -1.7 to +1.1; rmse%: 5.6%; r: 0.99) was found; this model was also highly accurate and precise for nevirapine AUC (mpe%: -0.2%, 95% confidence interval: -1.0 to +0.7; rmse%: 3.4%; r: 0.99). A limited sampling model using three time points (1, 2, 6 hours) can therefore be used to predict nevirapine, stavudine, and lamivudine AUC accurately and precisely in HIV-infected African children.
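    The modelling step above (regress AUC on a small set of concentration time points, then judge bias and precision via mpe% and rmse% under leave-one-out cross-validation) can be sketched as follows in Python. This uses simulated concentrations and AUCs purely to show the mechanics; it is not the study's data or code.

    # Minimal sketch of a limited sampling model with leave-one-out cross-validation.
    # Simulated concentration/AUC data; all values are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(1)
    n = 65
    # Pretend the columns are drug concentrations at 1, 2 and 6 hours post-dose.
    conc = rng.lognormal(mean=1.0, sigma=0.3, size=(n, 3))
    auc = conc @ np.array([2.0, 3.0, 5.0]) + rng.normal(0, 0.5, n)  # "true" AUC0-12

    pred = np.empty(n)
    for train, test in LeaveOneOut().split(conc):
        model = LinearRegression().fit(conc[train], auc[train])
        pred[test] = model.predict(conc[test])

    rel_err = (pred - auc) / auc * 100
    mpe = rel_err.mean()                   # bias: mean relative prediction error (%)
    rmse = np.sqrt((rel_err ** 2).mean())  # precision: root mean squared relative error (%)
    r = np.corrcoef(pred, auc)[0, 1]
    print(f"mpe% {mpe:.1f}, rmse% {rmse:.1f}, r {r:.2f}")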

    Experiences of transition to adult care and readiness to self-manage care in young people with perinatal HIV in England

    Background: There are few data on young people’s own experiences of transferring from paediatric to adult care, or on their readiness to self-manage care. Methods: A total of 132 young people living with perinatal HIV, aged 14–25 years, answered questions about transition experiences. Results: Of the participants, 45 (34%), with a median age of 16 years (interquartile range [IQR] 16–17), were in paediatric care, of whom 89% reported that transition discussions had begun, at a median age of 15 (IQR 14–16) years. Young people in adult care were more likely than those in paediatric care to self-manage appointments (90% vs 42%, respectively, P < 0.001) and to know their antiretroviral therapy (ART) drugs (55% vs 37%, P = 0.033). Knowledge of the most recent CD4 T-cell count/viral load (VL) was slightly better for those in adult care (48% vs 31%, P = 0.059); naming side effects of ART was similar (71% vs 60%, P = 0.119). Conclusions: Transition discussions occurred before movement from paediatric to adult care. Further education around ART, potential side effects, and CD4 T-cell count/viral load knowledge is required.
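    For illustration, the kind of two-group comparison behind figures such as "90% vs 42%, P < 0.001" can be reproduced with a chi-squared test in Python. The counts below are approximations back-calculated from the percentages and group sizes quoted above (45 in paediatric care, 132 - 45 = 87 in adult care), not the source data.

    # Illustrative chi-squared test of proportions; counts are approximate, not source data.
    from scipy.stats import chi2_contingency

    adult_n, paed_n = 87, 45
    adult_yes = round(0.90 * adult_n)  # self-manage appointments, adult care
    paed_yes = round(0.42 * paed_n)    # self-manage appointments, paediatric care
    table = [[adult_yes, adult_n - adult_yes],
             [paed_yes, paed_n - paed_yes]]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}")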

    Immediate Versus Triggered Transfusion for Children with Uncomplicated Severe Anaemia

    Background: The World Health Organization recommends a haemoglobin transfusion threshold of 0.2) nor evidence of differences between groups in re-admissions (p=0.36), serious adverse events (p=0.36) or haemoglobin recovery at 180 days (p=0.08). Mean length of stay was 0.9 days longer in the triggered group. Conclusions: There was no evidence of differences in clinical outcomes over 6 months with triggered vs immediate transfusion. Triggered transfusion reduced blood-volume requirements by 60% but increased length of stay by 20% and required repeated haemoglobin monitoring and surveillance.

    Marginal structural models for repeated measures where intercept and slope are correlated: An application exploring the benefit of nutritional supplements on weight gain in HIV-infected children initiating antiretroviral therapy

    Background: The impact of nutritional supplements on weight gain in HIV-infected children on antiretroviral treatment (ART) remains uncertain. Starting supplements depends upon current weight-for-age or other acute malnutrition indicators, producing time-dependent confounding. However, weight-for-age at ART initiation may affect subsequent weight gain, independent of supplement use. Implications for marginal structural models (MSMs) with inverse probability of treatment weights (IPTW) are unclear. Methods: In the ARROW trial, non-randomised supplement use and weight-for-age were recorded monthly from ART initiation. The effect of supplements on weight-for-age over the first year was estimated using generalised estimating equation MSMs with IPTW, both with and without interaction terms between baseline weight-for-age and time. Separately, data were simulated assuming no supplement effect, with use depending on current weight-for-age, and weight-for-age trajectory depending on baseline weight-for-age, to investigate potential bias associated with different MSM specifications. Results: In simulations, despite correctly specifying IPTW, omitting an interaction in the MSM between baseline weight-for-age and time produced increasingly biased estimates as associations between baseline weight-for-age and subsequent weight trajectory increased. Estimates were unbiased when the interaction between baseline weight-for-age and time was included, even if the data were simulated with no such interaction. In ARROW, without an interaction the estimated effect was +0.09 (95% CI +0.02, +0.16) greater weight-for-age gain per month's supplement use; this reduced to +0.03 (-0.04, +0.10) when the interaction was included. Discussion: This study highlights a specific situation in which MSM model misspecification can occur and impact the resulting estimate. Since an interaction in the MSM (outcome) model does not bias the estimate of effect if the interaction does not exist, it may be advisable to include such a term when fitting MSMs for repeated measures.
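    A minimal Python sketch of the kind of simulation and IPTW marginal structural model described above is given below. It generates data in which supplement use depends on current weight-for-age, the weight trajectory depends on baseline weight-for-age, and there is no true supplement effect, then fits weighted outcome models with and without the baseline-by-time interaction. Variable names, coefficients and sample sizes are assumptions for illustration, not the ARROW analysis; weighted regression with cluster-robust standard errors is used as a simple stand-in for the weighted GEE in the paper.

    # Sketch: simulate time-dependent confounding with no true supplement effect,
    # then fit an IPTW marginal structural model. All names and coefficients are
    # illustrative assumptions, not the trial analysis.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for i in range(300):                      # children
        waz0 = rng.normal(-2, 1)              # baseline weight-for-age Z-score (WAZ)
        waz = waz0
        for t in range(12):                   # monthly visits over one year
            p_supp = 1 / (1 + np.exp(2 + 1.5 * waz))  # lower current WAZ -> more likely supplemented
            supp = rng.binomial(1, p_supp)
            rows.append(dict(id=i, t=t, waz0=waz0, waz=waz, supp=supp))
            waz = waz + 0.05 * (-waz0) + rng.normal(0, 0.1)  # trajectory depends on baseline WAZ only
    df = pd.DataFrame(rows)

    # Stabilised inverse probability of treatment weights (cumulative product over visits).
    num = smf.logit("supp ~ waz0 + t", data=df).fit(disp=0).predict(df)
    den = smf.logit("supp ~ waz0 + t + waz", data=df).fit(disp=0).predict(df)
    ratio = np.where(df["supp"] == 1, num / den, (1 - num) / (1 - den))
    df["w"] = pd.Series(ratio).groupby(df["id"]).cumprod()

    # Outcome (MSM) models: weighted regression with cluster-robust standard errors,
    # a simple stand-in for the weighted GEE used in the paper.
    def fit_msm(formula):
        return smf.wls(formula, data=df, weights=df["w"]).fit(
            cov_type="cluster", cov_kwds={"groups": df["id"]})

    m_no_int = fit_msm("waz ~ supp + t + waz0")
    m_int = fit_msm("waz ~ supp + t + waz0 + waz0:t")
    print(m_no_int.params["supp"], m_int.params["supp"])  # true supplement effect is 0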

    The evolution of HIV-1 reverse transcriptase in route to acquisition of Q151M multi-drug resistance is complex and involves mutations in multiple domains

    Background: The Q151M multi-drug resistance (MDR) pathway in HIV-1 reverse transcriptase (RT) confers reduced susceptibility to all nucleoside reverse transcriptase inhibitors (NRTIs) except tenofovir (TDF). This pathway emerges after long-term failure of therapy and is increasingly observed in the resource-poor world, where antiretroviral therapy is rarely accompanied by intensive virological monitoring. In this study we examined the genotypic, phenotypic and fitness correlates associated with the development of Q151M MDR in the absence of viral load monitoring. Results: Single-genome sequencing (SGS) of full-length RT was carried out on sequential samples from an HIV-infected individual enrolled in ART rollout. The emergence of Q151M MDR occurred in the order A62V, V75I, and finally Q151M on the same genome at 4, 17 and 37 months after initiation of therapy, respectively. This was accompanied by a parallel cumulative acquisition of mutations at 20 other codon positions, seven of which were located in the connection subdomain. We established that fourteen of these mutations are also observed in Q151M-containing sequences submitted to the Stanford University HIV database. Phenotypic drug susceptibility testing demonstrated that the Q151M-containing RT had reduced susceptibility to all NRTIs except TDF. RT domain-swapping of patient and wild-type RTs showed that patient-derived connection subdomains were not associated with reduced NRTI susceptibility. However, the virus expressing patient-derived Q151M RT at 37 months demonstrated approximately 44% of the replicative capacity of the virus at 4 months. This was further reduced to approximately 22% when the Q151M-containing DNA pol domain was expressed with a wild-type C-terminal domain, but was fully compensated by coexpression of the coevolved connection subdomain. Conclusions: We demonstrate a complex interplay between drug susceptibility and replicative fitness in the acquisition of Q151M MDR, with serious implications for second-line regimen options. The acquisition of the Q151M pathway occurred sequentially over a long period of failing NRTI therapy and was associated with mutations in multiple RT domains.

    Cost effectiveness analysis of clinically driven versus routine laboratory monitoring of antiretroviral therapy in Uganda and Zimbabwe.

    BACKGROUND: Despite funding constraints for treatment programmes in Africa, the costs and economic consequences of routine laboratory monitoring for efficacy and toxicity of antiretroviral therapy (ART) have rarely been evaluated. METHODS: Cost-effectiveness analysis was conducted in the DART trial (ISRCTN13968779). Adults in Uganda/Zimbabwe starting ART were randomised to clinically-driven monitoring (CDM) or laboratory and clinical monitoring (LCM); individual patient data on healthcare resource utilisation and outcomes were valued with primary economic costs and utilities. Total costs of first/second-line ART, routine 12-weekly CD4 and biochemistry/haematology tests, additional diagnostic investigations, clinic visits, concomitant medications and hospitalisations were considered from the public healthcare sector perspective. A Markov model was used to extrapolate costs and benefits 20 years beyond the trial. RESULTS: 3316 (1660 LCM; 1656 CDM) symptomatic, immunosuppressed, ART-naive adults (median (IQR) age 37 (32, 42) years; CD4 count 86 (31, 139) cells/mm3) were followed for a median of 4.9 years. LCM had a mean 0.112-year (41-day) survival benefit at an additional mean cost of $765 [95% CI: $685, $845], translating into an adjusted incremental cost of $7,386 [$3,277, dominated] per life-year gained and $7,793 [$4,442, $39,179] per quality-adjusted life year gained. Routine toxicity tests were prominent cost-drivers and had no benefit. With 12-weekly CD4 monitoring from year 2 on ART and low-cost second-line ART, but without toxicity monitoring, CD4 test costs need to fall below $3.78 to become cost-effective (<3x per-capita GDP, following WHO benchmarks). CD4 monitoring at current costs as undertaken in DART was not cost-effective in the long term. CONCLUSIONS: There is no rationale for routine toxicity monitoring, which did not affect outcomes and was costly. Even though beneficial, there is little justification for routine 12-weekly CD4 monitoring of ART at current test costs in low-income African countries. CD4 monitoring, restricted to the second year on ART onwards, could be cost-effective with lower-cost second-line therapy and development of a cheaper, ideally point-of-care, CD4 test.
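    As a hedged arithmetic check on the headline figures above: the crude (unadjusted) incremental cost-effectiveness ratio is simply the extra cost divided by the extra benefit, which differs from the adjusted $7,386 reported by the trial. The per-capita GDP figure in the sketch below is a placeholder assumption, not a value from the paper.

    # Back-of-the-envelope ICER from the figures quoted above (unadjusted; the trial's
    # adjusted estimates are $7,386 per life-year and $7,793 per QALY gained).
    delta_cost = 765.0      # additional mean cost of LCM vs CDM, US$
    delta_lys = 0.112       # mean survival benefit in life-years (about 41 days)
    icer = delta_cost / delta_lys
    print(f"Crude ICER: ${icer:,.0f} per life-year gained")  # about $6,830

    # WHO-style benchmark used in the abstract: cost-effective if ICER < 3x per-capita GDP.
    gdp_per_capita = 500.0  # hypothetical per-capita GDP in US$ (assumption, not source data)
    print("Below 3x per-capita GDP threshold:", icer < 3 * gdp_per_capita)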

    The impact of different CD4 monitoring and switching strategies on mortality in HIV-infected African adults on antiretroviral therapy; an application of dynamic marginal structural models

    In Africa, antiretroviral therapy (ART) is delivered with limited laboratory monitoring, often none. In 2003–2004, investigators in the Development of Antiretroviral Therapy in Africa (DART) Trial randomized persons initiating ART in Uganda and Zimbabwe to either laboratory and clinical monitoring (LCM) or clinically driven monitoring (CDM). CD4 cell counts were measured every 12 weeks in both groups but were only returned to treating clinicians for management in the LCM group. Follow-up continued through 2008. In observational analyses, dynamic marginal structural models on pooled randomized groups were used to estimate survival under different monitoring-frequency and clinical/immunological switching strategies. Assumptions included no direct effect of randomized group on mortality or on confounders, and no unmeasured confounders influencing both treatment switch and mortality, or treatment switch and time-dependent covariates. After 48 weeks of first-line ART, 2,946 individuals contributed 11,351 person-years of follow-up, 625 switches, and 179 deaths. The estimated survival probability after a further 240 weeks, for a post-48-week switch at the first CD4 cell count less than 100 cells/mm3 or non-Candida World Health Organization stage 4 event (with CD4 count <250 cells/mm3), was 0.96 (95% confidence interval (CI): 0.94, 0.97) with 12-weekly CD4 testing, 0.96 (95% CI: 0.95, 0.97) with 24-weekly CD4 testing, 0.95 (95% CI: 0.93, 0.96) with a single CD4 test at 48 weeks (baseline), and 0.92 (95% CI: 0.91, 0.94) with no CD4 testing. Comparing randomized groups by 48-week CD4 count, the mortality risk associated with CDM versus LCM was greater in persons with CD4 counts of <100 (hazard ratio = 2.4, 95% CI: 1.3, 4.3) than in those with CD4 counts of ≥100 (hazard ratio = 1.1, 95% CI: 0.8, 1.7; interaction P = 0.04). These findings support a benefit from identifying patients immunologically failing first-line ART at 48 weeks.
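    The randomized-group comparison at the end of the abstract (a CDM-versus-LCM hazard ratio that differs by 48-week CD4 stratum, interaction P = 0.04) is the kind of test sketched below in Python on simulated survival data with a proportional hazards model. The data, event rates and coefficients are invented for illustration; this is not the DART dataset or the dynamic marginal structural model analysis itself.

    # Sketch of a treatment-by-CD4-stratum interaction test in a Cox model (statsmodels PHReg).
    # Simulated data; effect sizes are illustrative assumptions only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2946
    df = pd.DataFrame({
        "cdm": rng.integers(0, 2, n),      # 1 = clinically driven monitoring
        "low_cd4": rng.integers(0, 2, n),  # 1 = 48-week CD4 < 100 cells/mm3
    })
    # Hazard raised for CDM mainly in the low-CD4 stratum (the qualitative pattern reported above).
    rate = 0.02 * np.exp(0.1 * df["cdm"] + 0.5 * df["low_cd4"] + 0.8 * df["cdm"] * df["low_cd4"])
    df["time"] = rng.exponential(1 / rate)
    df["event"] = (df["time"] < 5).astype(int)  # administrative censoring at 5 years
    df["time"] = df["time"].clip(upper=5)

    model = sm.PHReg.from_formula("time ~ 0 + cdm + low_cd4 + cdm:low_cd4",
                                  status=df["event"], data=df)
    print(model.fit().summary())  # the cdm:low_cd4 row is the interaction test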