
    Ageing with HIV: medication use and risk for potential drug-drug interactions

    Objectives To compare the use of co-medication, the potential drug-drug interactions (PDDIs) and the effect on antiretroviral therapy (ART) tolerability and efficacy in HIV-infected individuals according to age, ≥50 years or <50 years. Methods All ART-treated participants were prospectively included once during a follow-up visit of the Swiss HIV Cohort Study. Information on any current medication was obtained by participant self-report and medical prescription history. The complete treatment was subsequently screened for PDDIs using a customized version of the Liverpool drug interaction database. Results Drug prescriptions were analysed for 1497 HIV-infected individuals: 477 aged ≥50 years and 1020 aged <50 years. Older patients were more likely to receive one or more co-medications than younger patients (82% versus 61%; P < 0.001) and thus had more frequent PDDIs (51% versus 35%; P < 0.001). Furthermore, older patients tended to use a higher number of co-medications and certain therapeutic drug classes more often, such as cardiovascular drugs (53% versus 19%; P < 0.001), gastrointestinal medications (10% versus 6%; P = 0.004) and hormonal agents (6% versus 3%; P = 0.04). PDDIs with ART occurred mainly with cardiovascular drugs (27%), CNS agents (22%) and methadone (6%) in older patients, and with CNS agents (27%), methadone (15%) and cardiovascular drugs (11%) in younger patients. The response to ART did not differ between the two groups. Conclusions The risk for PDDIs with ART was increased in older patients, who take more drugs than their younger HIV-infected counterparts. However, medication use in older and younger patients did not differ in terms of its effect on antiretroviral tolerability and response.
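Conceptually, the screening step described above amounts to checking every pair of drugs in a patient's regimen against an interaction table. A minimal sketch in Python, with a made-up two-entry table standing in for the customized Liverpool database (drug names and severity labels are illustrative only):

```python
from itertools import combinations

# Hypothetical interaction table standing in for the Liverpool database:
# unordered drug pair -> severity label.
INTERACTIONS = {
    frozenset({"ritonavir", "simvastatin"}): "contraindicated",
    frozenset({"efavirenz", "methadone"}): "dose adjustment",
}

def screen(medications):
    """Return every potential interaction among a medication list."""
    hits = []
    for a, b in combinations(medications, 2):
        severity = INTERACTIONS.get(frozenset({a, b}))
        if severity is not None:
            hits.append((a, b, severity))
    return hits

print(screen(["ritonavir", "simvastatin", "metformin"]))
```

A real screen would of course also encode regimen-level rules (e.g. boosting agents) rather than only pairwise lookups.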

    Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection that is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (≤12 months) or older infection by 26 different algorithms. Incident infection rates (IIR) were calculated based on the diagnostic sensitivity and specificity of each algorithm and the rule that the total of incident results is the sum of true-incident and false-incident results, each of which can be calculated from the pre-determined sensitivity and specificity. Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. 
Other adjustment models yielded different absolute IIRs, although the relative changes between the cohorts were identical for all models. Conclusions: The method can be used for comparing IIRs in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity for detecting incident infection, is advisable, as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and sampling bias.
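The back-calculation rule stated in the Methods (observed incident results are the sum of true-incident and false-incident results) can be inverted to recover the adjusted incident rate from an observed one. A sketch, assuming only that the algorithm's sensitivity and specificity are pre-determined, as described above:

```python
def true_incident_rate(observed_rate, sensitivity, specificity):
    """Invert  p_obs = sens*p + (1 - spec)*(1 - p)  for the true rate p,
    where observed incident results = true positives + false positives."""
    false_pos = 1.0 - specificity
    return (observed_rate - false_pos) / (sensitivity - false_pos)

# With the mean raw sensitivity (59.4%) and specificity (95.1%) of the
# 10 best algorithms, an observed incident fraction of 0.35 implies a
# higher true fraction, because the low sensitivity misses many cases:
print(round(true_incident_rate(0.35, 0.594, 0.951), 3))
```

This is the unadjusted form; the abstract's preferred model additionally corrects the sensitivity for overrepresentation of very recent infections.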

    Population pharmacokinetic modelling and evaluation of different dosage regimens for darunavir and ritonavir in HIV-infected individuals

    Objectives Darunavir is a protease inhibitor that is administered with low-dose ritonavir to enhance its bioavailability. It is prescribed at standard dosage regimens of 600/100 mg twice daily in treatment-experienced patients and 800/100 mg once daily in naive patients. A population pharmacokinetic approach was used to characterize the pharmacokinetics of both drugs and their interaction in a cohort of unselected patients and to compare darunavir exposure expected under alternative dosage regimens. Methods The study population included 105 HIV-infected individuals who provided darunavir and ritonavir plasma concentrations. Firstly, a population pharmacokinetic analysis for darunavir and ritonavir was conducted, with inclusion of patients' demographic, clinical and genetic characteristics as potential covariates (NONMEM®). Then, the interaction between darunavir and ritonavir was studied while incorporating levels of both drugs into different inhibitory models. Finally, model-based simulations were performed to compare trough concentrations (Cmin) between the recommended dosage regimen and alternative combinations of darunavir and ritonavir. Results A one-compartment model with first-order absorption adequately characterized darunavir and ritonavir pharmacokinetics. The between-subject variability in both compounds was substantial [coefficient of variation (CV%) 34% and 47% for darunavir and ritonavir clearance, respectively]. Lopinavir and ritonavir exposure (AUC) affected darunavir clearance, while body weight and darunavir AUC influenced ritonavir elimination. None of the tested genetic variants showed any influence on darunavir or ritonavir pharmacokinetics. The simulations predicted darunavir Cmin well above the IC50 thresholds for wild-type and protease inhibitor-resistant HIV-1 strains (55 and 550 ng/mL, respectively) under standard dosing in >98% of experienced and naive patients. 
Alternative regimens of darunavir/ritonavir 1200/100 or 1200/200 mg once daily also yielded predicted adequate Cmin (>550 ng/mL) in 84% and 93% of patients, respectively. Reduction of the darunavir/ritonavir dosage to 600/50 mg twice daily led to a 23% reduction in average Cmin, still with only 3.8% of patients having concentrations below the IC50 for resistant strains. Conclusions The substantial variability in darunavir and ritonavir pharmacokinetics is poorly explained by clinical covariates and genetic influences. In experienced patients, treatment simplification strategies guided by drug level measurements and adherence monitoring could be proposed.
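The structural model retained here (one compartment, first-order absorption) has a closed-form steady-state trough, which is the quantity the simulations compare against the IC50 thresholds. A sketch with purely illustrative parameter values (NOT the published population estimates):

```python
import math

def cmin_steady_state(dose_mg, bioavail, ka, cl, vol, tau):
    """Steady-state trough (Cmin) for a one-compartment model with
    first-order absorption. dose_mg in mg, cl in L/h, vol in L,
    ka in 1/h, tau (dosing interval) in h; returns mg/L."""
    ke = cl / vol  # first-order elimination rate constant
    coeff = (bioavail * dose_mg * ka) / (vol * (ka - ke))
    return coeff * (math.exp(-ke * tau) / (1.0 - math.exp(-ke * tau))
                    - math.exp(-ka * tau) / (1.0 - math.exp(-ka * tau)))

# Illustrative once-daily 800 mg dose with assumed PK parameters:
trough = cmin_steady_state(dose_mg=800, bioavail=0.8, ka=1.0,
                           cl=10.0, vol=100.0, tau=24.0)
print(round(trough * 1000))  # mg/L -> ng/mL, to compare with 550 ng/mL
```

With these assumed values the trough lands around 0.7 mg/L (~700 ng/mL), above the 550 ng/mL resistant-strain threshold; the published analysis draws such parameters from the fitted population distributions instead.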

    Assessing the Paradox Between Transmitted and Acquired HIV Type 1 Drug Resistance Mutations in the Swiss HIV Cohort Study From 1998 to 2012

    Background. Transmitted human immunodeficiency virus type 1 (HIV-1) drug resistance (TDR) mutations are transmitted from nonresponding patients (defined as patients with no initial response to treatment and those with an initial response for whom treatment later failed) or from patients who are naive to treatment. Although the prevalence of drug resistance in patients who are not responding to treatment has declined in developed countries, the prevalence of TDR mutations has not. Mechanisms causing this paradox are poorly explored. Methods. We included recently infected, treatment-naive patients with genotypic resistance tests performed ≤1 year after infection and before 2013. Potential risk factors for TDR mutations were analyzed using logistic regression. The association between the prevalence of TDR mutations and population viral load (PVL) among treated patients during 1997-2011 was estimated with Poisson regression for all TDR mutations and individually for the most frequent resistance mutations against each drug class (ie, M184V/L90M/K103N). Results. We included 2421 recently infected, treatment-naive patients and 5399 patients with no response to treatment. The prevalence of TDR mutations fluctuated considerably over time. Two opposing developments could explain these fluctuations: generally continuous increases in the prevalence of TDR mutations (odds ratio, 1.13; P = .010), punctuated by sharp decreases in the prevalence when new drug classes were introduced. Overall, the prevalence of TDR mutations increased with decreasing PVL (rate ratio [RR], 0.91 per 1000 decrease in PVL; P = .033). Additionally, we observed that the transmitted high-fitness-cost mutation M184V was positively associated with the PVL of nonresponding patients carrying M184V (RR, 1.50 per 100 increase in PVL; P < .001). Such an association was absent for K103N (RR, 1.00 per 100 increase in PVL; P = .99) and negative for L90M (RR, 0.75 per 100 increase in PVL; P = .022). Conclusions. 
Transmission of antiretroviral drug resistance is temporarily reduced by the introduction of new drug classes and is driven by nonresponding and treatment-naive patients. These findings suggest a continuous need for new drugs and early detection/treatment of HIV-1 infection.
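The rate ratios above are reported per fixed increment of PVL; for larger changes they compound multiplicatively on the log-linear Poisson scale. A one-line illustration:

```python
def scaled_rate_ratio(rr_per_step, n_steps):
    """Scale a rate ratio reported per fixed increment to a multiple of
    that increment: e.g. RR 0.91 per 1000-unit change in PVL implies
    0.91**3 for a 3000-unit change (log-linear Poisson model)."""
    return rr_per_step ** n_steps

print(round(scaled_rate_ratio(0.91, 3), 3))
```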

    Origin of Minority Drug-Resistant HIV-1 Variants in Primary HIV-1 Infection

    Background. Drug-resistant human immunodeficiency virus type 1 (HIV-1) minority variants (MVs) are present in some antiretroviral therapy (ART)-naive patients. They may result from de novo mutagenesis or transmission. To date, the latter has not been proven. Methods. MVs were quantified by allele-specific polymerase chain reaction in 204 acute or recent seroconverters from the Zurich Primary HIV Infection study and 382 ART-naive, chronically infected patients. Phylogenetic analyses identified transmission clusters. Results. Three lines of evidence support transmission of MVs. First, potential transmitters were identified for 12 of 16 acute or recent seroconverters harboring M184V MVs. These variants were also detected in plasma and/or peripheral blood mononuclear cells at the estimated time of transmission in 3 of 4 potential transmitters who experienced virological failure accompanied by selection of the M184V mutation before transmission. Second, the prevalence of MVs harboring the frequent mutation M184V differed highly significantly from that of MVs harboring the particularly uncommon integrase mutation N155H in acute or recent seroconverters (8.2% vs 0.5%; P < .001). Third, the prevalence of the less-fit M184V MVs was significantly higher in acutely or recently than in chronically HIV-1-infected patients (8.2% vs 2.5%; P = .004). Conclusions. Drug-resistant HIV-1 MVs can be transmitted. To what extent the origin of these variants (transmission vs sporadic appearance) determines their impact on ART needs to be further explored.

    Assessing efficacy of different nucleos(t)ide backbones in NNRTI-containing regimens in the Swiss HIV Cohort Study

    Background The NRTI combinations most commonly recommended as first-line antiretroviral treatment for HIV-1 infection in resource-rich settings are tenofovir/emtricitabine, abacavir/lamivudine, tenofovir/lamivudine and zidovudine/lamivudine. Efficacy studies of these combinations that also consider pill number, dosing frequency and ethnicity are rare. Methods We included patients starting first-line combination ART (cART) with, or switching from first-line cART without treatment failure to, tenofovir/emtricitabine, abacavir/lamivudine, tenofovir/lamivudine or zidovudine/lamivudine plus efavirenz or nevirapine. Cox proportional hazards regression was used to investigate the effect of the different NRTI combinations on two primary outcomes: virological failure (VF) and emergence of NRTI resistance. Additionally, we performed a pill burden analysis and adjusted the model for pill number and dosing frequency. Results Failure events per treated patient for the four NRTI combinations were as follows: 19/1858 (tenofovir/emtricitabine), 9/387 (abacavir/lamivudine), 11/344 (tenofovir/lamivudine) and 45/1244 (zidovudine/lamivudine). Compared with tenofovir/emtricitabine, abacavir/lamivudine had an adjusted HR for VF of 2.01 (95% CI 0.86-4.55), tenofovir/lamivudine 2.89 (1.22-6.88) and zidovudine/lamivudine 2.28 (1.01-5.14), whereas for the emergence of NRTI resistance abacavir/lamivudine had an HR of 1.17 (0.11-12.2), tenofovir/lamivudine 11.3 (2.34-55.3) and zidovudine/lamivudine 4.02 (0.78-20.7). Differences among regimens disappeared when the models were additionally adjusted for pill burden. However, non-white ethnicity (compared with white) and a higher pill number per day were associated with increased risks of VF and emergence of NRTI resistance: the HR of non-white ethnicity for VF was 2.85 (1.64-4.96) and for NRTI resistance 3.54 (1.20-10.4); the HR of pill burden for VF was 1.41 (1.01-1.96) and for NRTI resistance 1.72 (0.97-3.02). 
Conclusions Although rates of VF and emergence of resistance were very low in the population studied, tenofovir/emtricitabine appears to be superior to abacavir/lamivudine, tenofovir/lamivudine and zidovudine/lamivudine. However, it is unclear whether these differences are due to the substances as such or to an association of tenofovir/emtricitabine regimens with lower pill burden.

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Ristola M. is a member of the D:A:D Study Group, the Royal Free Hospital Clinic Cohort, the INSIGHT Study Group, the SMART Study Group and the ESPRIT Study Group.
    Background Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >= 3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR <= 60 ml/min/1.73 m2. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group (risk score < 0), with correspondingly higher 5-y risks in the medium risk group and the high risk group (risk score >= 5, 505 events). 
Number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 developed CKD (3.7%) during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 developed CKD (1.6%) during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance, allowing patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD. Peer reviewed.
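The two building blocks of the abstract, a points-based risk score (scaled rate ratios summed per patient) and the number needed to harm (the reciprocal of the absolute risk increase), can be sketched as follows. The point values below are hypothetical, not the published weights:

```python
# Hypothetical point values for the nine predictors; the published score
# scales and sums the adjusted incidence rate ratios instead.
POINTS = {"older_age": 3, "ivdu": 2, "hcv_coinfection": 2,
          "low_baseline_egfr": 4, "female": 1, "low_cd4_nadir": 2,
          "hypertension": 2, "diabetes": 3, "cvd": 3}

def risk_score(patient_factors):
    """Sum the points of every risk factor the patient has."""
    return sum(POINTS[f] for f in patient_factors)

def nnth(risk_on_drug, risk_off_drug):
    """Number needed to harm = 1 / absolute risk increase over the
    time horizon (here, 5-y cumulative CKD probabilities)."""
    return 1.0 / (risk_on_drug - risk_off_drug)

print(risk_score(["older_age", "diabetes"]))  # hypothetical patient
print(round(nnth(0.02, 0.01)))                # illustrative risks
```

A small NNTH (as in the high-risk group above) flags regimens whose nephrotoxic risk may outweigh their benefit for that patient.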