
    The impact of different CD4 monitoring and switching strategies on mortality in HIV-infected African adults on antiretroviral therapy; an application of dynamic marginal structural models

    In Africa, antiretroviral therapy (ART) is delivered with limited laboratory monitoring, often none. In 2003–2004, investigators in the Development of Antiretroviral Therapy in Africa (DART) Trial randomized persons initiating ART in Uganda and Zimbabwe to either laboratory and clinical monitoring (LCM) or clinically driven monitoring (CDM). CD4 cell counts were measured every 12 weeks in both groups but were returned to treating clinicians for patient management only in the LCM group. Follow-up continued through 2008. In observational analyses, dynamic marginal structural models on pooled randomized groups were used to estimate survival under different monitoring-frequency and clinical/immunological switching strategies. Assumptions included no direct effect of randomized group on mortality or confounders, and no unmeasured confounders influencing both treatment switch and mortality or both treatment switch and time-dependent covariates. After 48 weeks of first-line ART, 2,946 individuals contributed 11,351 person-years of follow-up, 625 switches, and 179 deaths. The estimated survival probability after a further 240 weeks, for post-48-week switch at the first CD4 cell count less than 100 cells/mm3 or at a non-Candida World Health Organization stage 4 event (with CD4 count <250), was 0.96 (95% confidence interval (CI): 0.94, 0.97) with 12-weekly CD4 testing, 0.96 (95% CI: 0.95, 0.97) with 24-weekly CD4 testing, 0.95 (95% CI: 0.93, 0.96) with a single CD4 test at 48 weeks (baseline), and 0.92 (95% CI: 0.91, 0.94) with no CD4 testing. Comparing randomized groups by 48-week CD4 count, the mortality risk associated with CDM versus LCM was greater in persons with CD4 counts of <100 (hazard ratio = 2.4, 95% CI: 1.3, 4.3) than in those with CD4 counts of ≥100 (hazard ratio = 1.1, 95% CI: 0.8, 1.7; interaction P = 0.04). These findings support a benefit from identifying patients immunologically failing first-line ART at 48 weeks.
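    The post-48-week switching strategy evaluated in the abstract can be sketched as a simple decision rule. The function name and boolean encoding below are illustrative, not taken from the paper:

```python
def should_switch(cd4_count, who4_event_non_candida):
    """Illustrative encoding of the post-48-week switching rule analysed in
    the DART dynamic-MSM analysis: switch at the first CD4 count below
    100 cells/mm3, or at a non-Candida WHO stage 4 event with CD4 < 250."""
    if cd4_count < 100:
        return True  # immunological trigger
    if who4_event_non_candida and cd4_count < 250:
        return True  # clinical trigger, conditional on CD4
    return False
```

For example, a patient at CD4 150 switches only if a qualifying stage 4 event occurs, while a patient at CD4 90 switches regardless.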

    Virological outcomes of second-line protease inhibitor-based treatment for human immunodeficiency virus type 1 in a high-prevalence rural South African setting: a competing-risks prospective cohort analysis

    Background. Second-line antiretroviral therapy (ART) based on ritonavir-boosted protease inhibitors (bPIs) represents the only available option after first-line failure for the majority of individuals living with human immunodeficiency virus (HIV) worldwide. Maximizing their effectiveness is imperative. Methods. This cohort study was nested within the French National Agency for AIDS and Viral Hepatitis Research (ANRS) 12249 Treatment as Prevention (TasP) cluster-randomized trial in rural KwaZulu-Natal, South Africa. We prospectively investigated risk factors for virological failure (VF) of bPI-based ART in the combined study arms. VF was defined by a plasma viral load >1000 copies/mL ≥6 months after initiating bPI-based ART. Cumulative incidence of VF was estimated and competing risk regression was used to derive the subdistribution hazard ratio (SHR) of the associations between VF and patient clinical and demographic factors, taking into account death and loss to follow-up. Results. One hundred one participants contributed 178.7 person-years of follow-up. Sixty-five percent were female; the median age was 37.4 years. Second-line ART regimens were based on ritonavir-boosted lopinavir, combined with zidovudine or tenofovir plus lamivudine or emtricitabine. The incidence of VF on second-line ART was 12.9 per 100 person-years (n = 23), and prevalence of VF at censoring was 17.8%. Thirteen of these 23 (56.5%) virologic failures resuppressed after a median of 8.0 months (interquartile range, 2.8-16.8 months) in this setting where viral load monitoring was available. Tuberculosis treatment was associated with VF (SHR, 11.50 [95% confidence interval, 3.92-33.74]; P < .001). Conclusions. Second-line VF was frequent in this setting. Resuppression occurred in more than half of failures, highlighting the value of viral load monitoring of second-line ART. 
Tuberculosis treatment was associated with VF; novel approaches to optimize the effectiveness of bPI-based ART in high-tuberculosis-burden settings are therefore needed.
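    As a quick arithmetic check, the reported VF incidence follows directly from the event count and person-time given in the abstract:

```python
# Reproduce the reported incidence rate: 23 virological failures observed
# over 178.7 person-years of follow-up, expressed per 100 person-years.
events = 23
person_years = 178.7
incidence_per_100py = 100 * events / person_years
print(round(incidence_per_100py, 1))  # 12.9, matching the abstract
```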

    Cost effectiveness analysis of clinically driven versus routine laboratory monitoring of antiretroviral therapy in Uganda and Zimbabwe.

    BACKGROUND: Despite funding constraints for treatment programmes in Africa, the costs and economic consequences of routine laboratory monitoring for efficacy and toxicity of antiretroviral therapy (ART) have rarely been evaluated. METHODS: Cost-effectiveness analysis was conducted in the DART trial (ISRCTN13968779). Adults in Uganda/Zimbabwe starting ART were randomised to clinically-driven monitoring (CDM) or laboratory and clinical monitoring (LCM); individual patient data on healthcare resource utilisation and outcomes were valued with primary economic costs and utilities. Total costs of first/second-line ART, routine 12-weekly CD4 and biochemistry/haematology tests, additional diagnostic investigations, clinic visits, concomitant medications and hospitalisations were considered from the public healthcare sector perspective. A Markov model was used to extrapolate costs and benefits 20 years beyond the trial. RESULTS: 3316 (1660 LCM; 1656 CDM) symptomatic, immunosuppressed ART-naive adults (median (IQR) age 37 (32,42); CD4 86 (31,139) cells/mm3) were followed for a median 4.9 years. LCM had a mean 0.112 year (41 days) survival benefit at an additional mean cost of US$765 [95% CI: 685, 845], translating into an adjusted incremental cost of US$7386 [3277, dominated] per life-year gained and US$7793 [4442, 39179] per quality-adjusted life-year gained. Routine toxicity tests were prominent cost-drivers and had no benefit. With 12-weekly CD4 monitoring from year 2 on ART and low-cost second-line ART, but without toxicity monitoring, CD4 test costs need to fall below US$3.78 to become cost-effective (<3× per-capita GDP, following WHO benchmarks). CD4 monitoring at current costs as undertaken in DART was not cost-effective in the long term.
    CONCLUSIONS: There is no rationale for routine toxicity monitoring, which did not affect outcomes and was costly. Although beneficial, routine 12-weekly CD4 monitoring of ART is also hard to justify at current test costs in low-income African countries. CD4 monitoring restricted to the second year on ART onwards could be cost-effective with lower-cost second-line therapy and development of a cheaper, ideally point-of-care, CD4 test.
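    The trial's headline figure is an incremental cost-effectiveness ratio, ICER = ΔCost / ΔEffect. A crude, unadjusted version can be computed from the abstract's own numbers; note the paper reports a covariate-adjusted US$7386 per life-year gained, so this is only a rough consistency check, not a reproduction of the trial analysis:

```python
# Unadjusted ICER sketch from the abstract's summary figures.
delta_cost = 765.0     # mean additional cost of LCM vs CDM, US$
delta_effect = 0.112   # mean survival benefit in life-years (41 days)
icer = delta_cost / delta_effect
print(round(icer))     # ~6830 US$/life-year; the adjusted estimate is 7386
```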

    High Rate of HIV Re-suppression After Viral Failure on First Line Antiretroviral Therapy in the Absence of Switch to Second Line.

    In a randomised comparison of nevirapine or abacavir with zidovudine + lamivudine, routine viral load monitoring was not performed, yet 27% of individuals with viral failure at week 48 re-suppressed by week 96 without switching. This supports WHO recommendations that suspected viral failure should trigger adherence counselling and a repeat measurement before a treatment switch is considered.

    Utility of total lymphocyte count as a surrogate marker for CD4 counts in HIV-1 infected children in Kenya

    Background: In resource-limited settings, such as Kenya, access to CD4 testing is limited. Evaluation of less expensive laboratory diagnostics is therefore urgently needed to diagnose immunosuppression in children. Objectives: To evaluate the utility of total lymphocyte count (TLC) as a surrogate marker for CD4 count in HIV-infected children. Methods: This was a hospital-based retrospective study conducted in three HIV clinics in Kisumu and Nairobi in Kenya. TLC, CD4 count and CD4 percent data were abstracted from hospital records of 487 antiretroviral-naive HIV-infected children aged 1 month to 12 years. Results: TLC and CD4 count were positively correlated (r = 0.66, p < 0.001), with the highest correlation seen in children with severe immunosuppression (r = 0.72, p < 0.001) and children >59 months of age (r = 0.68, p < 0.001). Children were considered to have severe immunosuppression if they met the following WHO CD4 count thresholds: age below 12 months (CD4 count < 1500 cells/mm3), age 12-35 months (CD4 count < 750 cells/mm3), age 36-59 months (CD4 count < 350 cells/mm3), and age above 59 months (CD4 count < 200 cells/mm3). WHO-recommended TLC threshold values for severe immunosuppression of 4000, 3000, 2500 and 2000 cells/mm3 for age categories <12, 12-35, 36-59 and >59 months had low sensitivity of 25%, 23%, 33% and 62% respectively in predicting severe immunosuppression using CD4 count as the gold standard. Raising TLC thresholds to 7000, 6000, 4500 and 3000 cells/mm3 for each of the stated age categories increased sensitivity to 71%, 64%, 56% and 86%, with positive predictive values of 85%, 61%, 37% and 68% respectively, but reduced specificity to 73%, 62%, 54% and 68%, with negative predictive values of 54%, 65%, 71% and 87% respectively. Conclusion: TLC is positively correlated with absolute CD4 count in children, but current WHO age-specific thresholds had low sensitivity for identifying severely immunosuppressed Kenyan children. Sensitivity, and therefore utility, of TLC to identify immunosuppressed children may be improved by raising the TLC cut-off levels across the age categories.
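    The age-specific TLC cut-offs quoted above can be encoded as a small classifier. The function name and structure are illustrative only; the raised cut-offs the authors propose can be passed in as an alternative threshold set:

```python
def severe_immunosuppression_by_tlc(age_months, tlc, thresholds=None):
    """Classify severe immunosuppression from total lymphocyte count
    (cells/mm3) using the WHO age-specific cut-offs quoted in the
    abstract. Pass the raised thresholds to use the authors' proposal.
    Illustrative sketch, not code from the study."""
    if thresholds is None:
        # WHO-recommended cut-offs for <12, 12-35, 36-59, >59 months
        thresholds = {"<12": 4000, "12-35": 3000, "36-59": 2500, ">59": 2000}
    if age_months < 12:
        cut = thresholds["<12"]
    elif age_months <= 35:
        cut = thresholds["12-35"]
    elif age_months <= 59:
        cut = thresholds["36-59"]
    else:
        cut = thresholds[">59"]
    return tlc < cut

# The raised cut-offs reported to improve sensitivity:
RAISED = {"<12": 7000, "12-35": 6000, "36-59": 4500, ">59": 3000}
```

For example, a 48-month-old with TLC 2600 cells/mm3 is classified as not severely immunosuppressed under the WHO cut-offs but as severely immunosuppressed under the raised cut-offs.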

    Risk factors for virological failure and subtherapeutic antiretroviral drug concentrations in HIV-positive adults treated in rural northwestern Uganda

    BACKGROUND: Little is known about immunovirological treatment outcomes and adherence in HIV/AIDS patients on antiretroviral therapy (ART) treated using a simplified management approach in rural areas of developing countries, or about the main factors influencing those outcomes in clinical practice. METHODS: Cross-sectional immunovirological, pharmacological, and adherence outcomes were evaluated in all patients alive and on fixed-dose ART combinations for 24 months, and in a random sample of those treated for 12 months. Risk factors for virological failure (>1,000 copies/mL) and subtherapeutic antiretroviral (ARV) concentrations were investigated with multiple logistic regression. RESULTS: At 12 and 24 months of ART, 72% (n=701) and 70% (n=369) of patients, respectively, were alive and in care. About 8% and 38% of patients, respectively, were diagnosed with immunological failure, and 75% and 72%, respectively, had undetectable HIV RNA (<400 copies/mL). Risk factors for virological failure were poor adherence, tuberculosis diagnosed after ART initiation, subtherapeutic NNRTI concentrations, general clinical symptoms, and lower weight than at baseline. About 14% of patients had low ARV plasma concentrations. Digestive symptoms and poor adherence to ART were risk factors for low ARV plasma concentrations. CONCLUSIONS: Efforts to improve both access to care and patient management are necessary to achieve better immunological and virological outcomes on ART and to maximize the duration of first-line therapy.

    Modelling imperfect adherence to HIV induction therapy

    Background: Induction-maintenance therapy is a treatment regime in which patients are prescribed an intense course of treatment for a short period (the induction phase), followed by a simplified long-term regimen (maintenance). Since induction therapy carries a significantly higher chance of pill fatigue than maintenance therapy, patients might take drug holidays during this period. Without guidance, patients who choose to stop therapy will each be making individual decisions with no scientific basis. Methods: We use mathematical modelling to investigate the effect of imperfect adherence during the induction phase. We address the following research questions: 1. Can we theoretically determine the maximal length of a possible drug holiday and the minimal number of doses that must subsequently be taken while still avoiding resistance? 2. How many drug holidays can be taken during the induction phase? Results: For a 180-day therapeutic program, a patient can take several drug holidays, but then has to follow each drug holiday with a strict, but fairly straightforward, drug-taking regimen. Since the results depend on the drug regimen, we calculated the length and number of drug holidays for all fifteen protease-sparing triple-drug cocktails approved by the US Food and Drug Administration. Conclusions: Induction therapy with partial adherence is tolerable, but the outcome depends on the drug cocktail. Our theoretical predictions are in line with recent results from pilot studies of short-cycle treatment-interruption strategies and may be useful in guiding the design of future clinical trials.
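    The modelling question (how long a drug holiday can last before resistance is risked) can be illustrated with a minimal one-compartment pharmacokinetic sketch assuming first-order elimination. All parameter values below are invented for illustration and are not taken from the paper, which analyses specific FDA-approved cocktails:

```python
import math

# One-compartment sketch: during a drug holiday the trough concentration
# decays exponentially, C(t) = C0 * exp(-k*t). The holiday must end before
# C(t) falls below a resistance-risk threshold. Numbers are illustrative.
half_life_h = 12.0                       # assumed elimination half-life (hours)
k = math.log(2) / half_life_h            # first-order elimination rate constant
c_steady = 4.0                           # assumed steady-state trough (arb. units)
c_resistance = 1.0                       # assumed resistance-risk threshold

# Solve c_steady * exp(-k*t) = c_resistance for t:
max_holiday_h = math.log(c_steady / c_resistance) / k
print(round(max_holiday_h, 1))           # 24.0 h for these illustrative numbers
```

With these made-up values the trough halves every 12 hours, so a four-fold decay margin buys exactly two half-lives of holiday; the paper's actual bounds come from a full viral-dynamics model per drug cocktail.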

    Twenty-four-week safety and tolerability of nevirapine vs. abacavir in combination with zidovudine/lamivudine as first-line antiretroviral therapy: a randomized double-blind trial (NORA).

    OBJECTIVE: To compare the safety/tolerability of abacavir and nevirapine in HIV-infected adults starting antiretroviral (ARV) therapy in Uganda. METHODS: Twenty-four-week randomized double-blind trial conducted in 600 symptomatic ARV-naive adults with CD4 <200 cells/mm3, allocated to zidovudine/lamivudine plus 300 mg abacavir (A) and nevirapine placebo (n = 300) or 200 mg nevirapine (N) and abacavir placebo (n = 300) twice daily. The primary endpoint was any serious adverse event (SAE) definitely/probably related, or of uncertain relationship, to blinded nevirapine/abacavir. Secondary endpoints were adverse events leading to permanent discontinuation of blinded nevirapine/abacavir, and grade 4 events. RESULTS: Seventy-two per cent of participants were women; 19% had WHO stage 4 disease; the median age was 37 years (range 18-66); the median baseline CD4 count was 99 cells/mm3 (range 1-199). Ninety-five per cent completed 24 weeks; 4% died and 1% were lost to follow-up. Thirty-seven SAEs occurred on blinded drug in 36 participants. Twenty events [6 (2.0%) abacavir, 14 (4.7%) nevirapine participants] were considered serious adverse reactions definitely/probably related, or of uncertain relationship, to blinded abacavir/nevirapine [HR = 0.42 (95% CI 0.16-1.09), P = 0.06]. Only 2.0% of abacavir participants [six patients (0.7-4.3%)] experienced a suspected hypersensitivity reaction (HSR). In total, 14 (4.7%) abacavir and 30 (10.0%) nevirapine participants discontinued blinded abacavir/nevirapine (P = 0.02): because of toxicity (6A, 15N; P = 0.07; all rash/possible HSR and/or hepatotoxicity), anti-tuberculosis therapy (6A, 13N), or other reasons (2A, 2N). CONCLUSIONS: There was a trend towards a lower rate of serious adverse reactions in Ugandan adults with low CD4 counts starting ARV regimens with abacavir than with nevirapine, suggesting that abacavir could be used more widely in resource-limited settings without major safety concerns.