
    Antipsychotics and Torsadogenic Risk: Signals Emerging from the US FDA Adverse Event Reporting System Database

    Background: Drug-induced torsades de pointes (TdP) and related clinical entities represent a current regulatory and clinical burden. Objective: As part of the FP7 ARITMO (Arrhythmogenic Potential of Drugs) project, we explored the publicly available US FDA Adverse Event Reporting System (FAERS) database to detect signals of torsadogenicity for antipsychotics (APs). Methods: Four groups of events in decreasing order of drug-attributable risk were identified: (1) TdP, (2) QT-interval abnormalities, (3) ventricular fibrillation/tachycardia, and (4) sudden cardiac death. The reporting odds ratio (ROR) with 95% confidence interval (CI) was calculated through a cumulative analysis from group 1 to 4. For groups 1+2, ROR was adjusted for age, gender, and concomitant drugs (e.g., antiarrhythmics) and stratified for AZCERT drugs, lists I and II (http://www.azcert.org, as of June 2011). A potential signal of torsadogenicity was defined if a drug met all of the following criteria: (a) four or more cases in group 1+2; (b) significant ROR in group 1+2 that persists through the cumulative approach; (c) significant adjusted ROR for group 1+2 in the stratum without AZCERT drugs; (d) not included in the AZCERT lists (as of June 2011). Results: Over the 7-year period, 37 APs were reported in 4,794 cases of arrhythmia: 140 (group 1), 883 (group 2), 1,651 (group 3), and 2,120 (group 4). Based on our criteria, the following potential signals of torsadogenicity were found: amisulpride (25 cases; adjusted ROR in the stratum without AZCERT drugs = 43.94, 95% CI 22.82-84.60), cyamemazine (11; 15.48, 6.87-34.91), and olanzapine (189; 7.74, 6.45-9.30). Conclusions: This pharmacovigilance analysis of the FAERS found three potential signals of torsadogenicity for drugs not previously associated with this risk.
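
    The core disproportionality measure here is the reporting odds ratio computed from a 2x2 table of spontaneous reports. The sketch below is a minimal illustration with hypothetical counts, not the ARITMO analysis code; the adjusted and stratified RORs described in the Methods would additionally require case-level logistic regression.

        import math

        def reporting_odds_ratio(a, b, c, d):
            """Crude ROR with 95% CI from a 2x2 table of reports:
            a = target event, drug of interest;  b = other events, drug of interest;
            c = target event, all other drugs;   d = other events, all other drugs.
            Uses the usual log-normal approximation for the interval."""
            ror = (a * d) / (b * c)
            se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lower = math.exp(math.log(ror) - 1.96 * se_log)
            upper = math.exp(math.log(ror) + 1.96 * se_log)
            return ror, lower, upper

        # Hypothetical counts for illustration only (not FAERS figures).
        print(reporting_odds_ratio(25, 1_000, 5_000, 2_000_000))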

    Bio-analytical Assay Methods used in Therapeutic Drug Monitoring of Antiretroviral Drugs - A Review


    Non-AIDS defining cancers in the D:A:D Study - time trends and predictors of survival: a cohort study

    BACKGROUND: Non-AIDS defining cancers (NADC) are an important cause of morbidity and mortality in HIV-positive individuals. Using data from a large international cohort of HIV-positive individuals, we described the incidence of NADC from 2004 to 2010, together with subsequent mortality and its predictors. METHODS: Individuals were followed from 1st January 2004/enrolment in the study until the earliest of a new NADC, 1st February 2010, death, or six months after the patient's last visit. Incidence rates were estimated for each year of follow-up, overall and stratified by gender, age, and mode of HIV acquisition. Cumulative risk of mortality following NADC diagnosis was summarised using Kaplan-Meier methods, with follow-up for these analyses from the date of NADC diagnosis until the patient's death, 1st February 2010, or 6 months after the patient's last visit. Factors associated with mortality following NADC diagnosis were identified using multivariable Cox proportional hazards regression. RESULTS: Over 176,775 person-years (PY), 880 (2.1%) patients developed a new NADC (incidence: 4.98/1000 PY [95% confidence interval 4.65, 5.31]). Over a third of these patients (327, 37.2%) had died by 1st February 2010. Time trends for lung cancer, anal cancer, and Hodgkin's lymphoma were broadly consistent. Kaplan-Meier cumulative mortality estimates at 1, 3, and 5 years after NADC diagnosis were 28.2% [95% CI 25.1-31.2], 42.0% [38.2-45.8], and 47.3% [42.4-52.2], respectively. Significant predictors of poorer survival after diagnosis of NADC were lung cancer (compared to other cancer types), male gender, non-white ethnicity, and smoking status. Later year of diagnosis and higher CD4 count at NADC diagnosis were associated with improved survival. The incidence of NADC remained stable over the period 2004-2010 in this large observational cohort. CONCLUSIONS: The prognosis after diagnosis of NADC, in particular lung cancer and disseminated cancer, is poor but has improved somewhat over time. Modifiable risk factors, such as smoking and low CD4 counts, were associated with mortality following a diagnosis of NADC.
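
    The headline incidence figure can be reproduced from the reported counts. The sketch below assumes a simple Poisson model for the event count with a normal approximation for the interval; it is an illustration, not the study's own code.

        import math

        def incidence_per_1000py(events, person_years):
            """Crude incidence rate per 1,000 person-years with a 95% CI based on
            a normal approximation to the Poisson count (reasonable for large counts)."""
            rate = events / person_years * 1000
            se = math.sqrt(events) / person_years * 1000
            return rate, rate - 1.96 * se, rate + 1.96 * se

        # Reported in the abstract: 880 NADC over 176,775 PY -> about 4.98 (4.65, 5.31).
        print(incidence_per_1000py(880, 176_775))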

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Ristola M. is a member of the DAD Study Grp, Royal Free Hosp Clin Cohort, INSIGHT Study Grp, SMART Study Grp, and ESPRIT Study Grp working groups. Background: Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings: A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >= 3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR <= 60 ml/min/1.73 m2. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group, with progressively higher risks in the medium and high risk groups (risk score >= 5, 505 events). The number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 (3.7%) developed CKD during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 (1.6%) developed CKD during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions: Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance for patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD. Peer reviewed.
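
    The abstract describes scaling and summing adjusted incidence rate ratios into a points-based score and converting absolute 5-year risks into a number needed to harm (NNTH). The sketch below is illustrative only: the scaling rule and the example risks are assumptions, not the published D:A:D coefficients.

        def points_from_log_irrs(log_irrs, reference=None):
            """Turn model coefficients (log incidence rate ratios) into integer points
            by dividing by a reference coefficient and rounding. The actual D:A:D
            scaling rule is not given in the abstract; this is a generic sketch."""
            reference = reference or min(abs(v) for v in log_irrs.values() if v != 0)
            return {name: round(v / reference) for name, v in log_irrs.items()}

        def nnth(risk_without_drug, risk_with_drug):
            """Number needed to harm over the horizon = 1 / absolute risk increase."""
            return 1 / (risk_with_drug - risk_without_drug)

        # Hypothetical 5-year CKD risks for a low-risk individual without and with a
        # potentially nephrotoxic antiretroviral (values invented for illustration).
        print(round(nnth(1 / 393, 1 / 393 + 0.0006)))  # roughly 1,700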

    Int J Drug Policy

    Drug checking is a service for people who use drugs that includes product analysis and an individual interview with results feedback and harm reduction counselling. It relies on different analytical methods, but few studies have demonstrated their value in current practice. The main objective of this work is to compare the analytical performance of IR spectroscopy with the laboratory reference method in the context of drug checking in a harm reduction centre. The secondary objectives are to describe the people who use drugs who request product analysis, and to compare the assumed compositions of the products purchased with their real compositions. During 2018, all requests for drug checking analysis were included for on-site analysis by IR spectrometry in a harm reduction centre and verified by the reference method (UPLC-HRMS) at Bordeaux University Hospital Center. Socioeconomic and product data were also collected. One hundred and thirty-six samples were collected, and the results obtained with IR and UPLC-HRMS were compared. IR spectrometry results did not match the reference method in 8% (n=11) of cases, corresponding to blotters, cannabis, and some psychoactive substances present in mixtures or in small quantities. Among the products collected, only 5.1% (n=7) did not correspond to the declared product, either alone or with adulterants. The IR spectrometer allows simple and rapid detection of at least one molecule, most often the one of interest. However, it is limited to powder and tablet matrices and is not suitable for blotters, cannabis, or mixed or low-content substances, for which high-resolution mass spectrometry remains the reference method. © 2020 Elsevier B.V.
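
    As a small addition to the reported 8% discordance (11 of 136 samples), the sketch below shows how a binomial confidence interval for that proportion could be computed; the interval is not reported in the abstract and the Wilson method is an assumption.

        import math

        def wilson_ci(successes, n, z=1.96):
            """Wilson score 95% confidence interval for a binomial proportion."""
            p = successes / n
            denom = 1 + z ** 2 / n
            centre = (p + z ** 2 / (2 * n)) / denom
            half_width = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
            return p, centre - half_width, centre + half_width

        # 11 of 136 on-site IR results disagreed with UPLC-HRMS (about 8%).
        print(wilson_ci(11, 136))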

    Torsadogenic risk of antipsychotics: combining adverse event reports with drug utilization data across Europe

    Antipsychotics (APs) have been associated with a risk of torsades de pointes (TdP), which has important public health implications. Therefore, (a) we exploited the public FDA Adverse Event Reporting System (FAERS) to characterize their torsadogenic profile, and (b) we collected drug utilization data from 12 European countries to assess population exposure over the 2005-2010 period. FAERS data (2004-2010) were analyzed based on the following criteria: (1) ≥ 4 cases of TdP/QT abnormalities; (2) significant reporting odds ratio (ROR) [lower limit of the 95% confidence interval > 1] for TdP/QT abnormalities, adjusted and stratified (Arizona CERT drugs as effect modifiers); (3) ≥ 4 cases of ventricular arrhythmia/sudden cardiac death (VA/SCD); (4) significant ROR for VA/SCD; (5) significant ROR, combined by aggregating TdP/QT abnormalities with VA and SCD. Torsadogenic signals were characterized in terms of signal strength, from Group A (very strong torsadogenic signal: all criteria fulfilled) to Group E (unclear/uncertain signal: only 2/5 criteria). Consumption data were retrieved from 12 European countries and expressed as defined daily doses per 1,000 inhabitants per day (DID). Thirty-five antipsychotics met at least one criterion; 9 agents were classified in Group A (amisulpride, chlorpromazine, clozapine, cyamemazine, haloperidol, olanzapine, quetiapine, risperidone, ziprasidone). In 2010, the overall exposure to antipsychotics varied from 5.94 DID (Estonia) to 13.99 DID (France, 2009). A considerable increase in exposure to Group A agents was found in several countries (+3.47 DID in France): exposure to olanzapine increased across all countries (+1.84 DID in France) and peaked at 2.96 DID in Norway; cyamemazine was typically used only in France (2.81 DID in 2009). Among Group B drugs, levomepromazine peaked at 3.78 DID (Serbia) and fluphenazine at 1.61 DID (Slovenia). This parallel approach, combining spontaneous reporting and drug utilization analyses, highlighted drug- and country-specific scenarios requiring potential regulatory consideration: levomepromazine (Serbia), fluphenazine (Slovenia), olanzapine (across Europe), and cyamemazine (France). This synergy should be encouraged to support future pharmacovigilance activities.
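
    Exposure is expressed as defined daily doses per 1,000 inhabitants per day (DID). The sketch below shows the standard conversion from consumption and population data; the figures in the example are hypothetical and not taken from the study.

        def did(total_ddd, population, days=365):
            """Defined daily doses per 1,000 inhabitants per day:
            total DDDs dispensed over the period / (population x days) x 1,000."""
            return total_ddd / (population * days) * 1000

        # Hypothetical example: 30 million DDDs dispensed in one year in a country
        # of 10 million inhabitants -> roughly 8.2 DID.
        print(round(did(30_000_000, 10_000_000), 2))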