5 research outputs found

    Abstracts from the Food Allergy and Anaphylaxis Meeting 2016


    Generic-reference and generic-generic bioequivalence of forty-two, randomly-selected, on-market generic products of fourteen immediate-release oral drugs

    No full text
    Abstract
    Background: The extents of generic-reference and generic-generic average bioequivalence and intra-subject variation of on-market drug products have not been prospectively studied on a large scale.
    Methods: We assessed bioequivalence of 42 generic products of 14 immediate-release oral drugs with the highest number of generic products on the Saudi market. We conducted 14 four-sequence, randomized, crossover studies on the reference product and three randomly-selected generic products of amlodipine, amoxicillin, atenolol, cephalexin, ciprofloxacin, clarithromycin, diclofenac, ibuprofen, fluconazole, metformin, metronidazole, paracetamol, omeprazole, and ranitidine. Geometric mean ratios of maximum concentration (Cmax) and area under the concentration-time curve to the last measured concentration (AUCT), extrapolated to infinity (AUCI), or truncated to the Cmax time of the reference product (AUCReftmax) were calculated using the non-compartmental method, and their 90% confidence intervals (CI) were compared to the 80.00%–125.00% bioequivalence range. Percentages of individual ratios falling outside the ±25% range were also determined.
    Results: Mean (SD) age and body-mass-index of the 700 healthy volunteers (28–80/study) were 32.2 (6.2) years and 24.4 (3.2) kg/m2, respectively. In the 42 generic-reference comparisons, 100% of AUCT and AUCI CIs showed bioequivalence, 9.5% of Cmax CIs barely failed to show bioequivalence, and 66.7% of AUCReftmax CIs failed to show bioequivalence/showed bioinequivalence. Adjusting for 6 comparisons, 2.4% of AUCT and AUCI CIs and 21.4% of Cmax CIs failed to show bioequivalence. In the 42 generic-generic comparisons, 2.4% of AUCT, AUCI, and Cmax CIs failed to show bioequivalence, and 66.7% of AUCReftmax CIs failed to show bioequivalence/showed bioinequivalence. Adjusting for 6 comparisons, 2.4% of AUCT and AUCI CIs and 14.3% of Cmax CIs failed to show bioequivalence. The average geometric mean ratio deviation from 100% was ≤3.2 and ≤5.4 percentage points for AUCI and Cmax, respectively, in both generic-reference and generic-generic comparisons. Individual generic/reference and generic/generic ratios, respectively, were within the ±25% range in >75% of individuals for 79% and 71% of the 14 drugs for AUCT and for 36% and 29% for Cmax.
    Conclusions: On-market generic drug products continue to be reference-bioequivalent and are bioequivalent to each other based on AUCT, AUCI, and Cmax, but not AUCReftmax. Average deviation of geometric mean ratios and intra-subject variations are similar between reference-generic and generic-generic comparisons.
    Trial registration: ClinicalTrials.gov identifier NCT01344070 (registered April 3, 2011).
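    The core calculation described above (geometric mean ratios with 90% confidence intervals tested against the 80.00%–125.00% acceptance range) can be illustrated with a minimal sketch. This is not the study's analysis: it assumes within-subject, log-transformed PK values from a simple two-period crossover and uses a paired t-interval, whereas the trial used four-sequence designs and the standard average-bioequivalence model; all names and data below are hypothetical.

        # Minimal sketch (hypothetical data): 90% CI for the geometric mean
        # ratio of a PK parameter (e.g., Cmax or AUCT) from a two-period
        # crossover, using a paired t-interval on log-transformed values.
        import numpy as np
        from scipy import stats

        def be_ratio_ci(test, reference, alpha=0.10):
            """Geometric mean ratio (test/reference) and its 90% CI."""
            log_diff = np.log(np.asarray(test, float)) - np.log(np.asarray(reference, float))
            n = log_diff.size
            mean = log_diff.mean()
            se = log_diff.std(ddof=1) / np.sqrt(n)
            t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
            return np.exp(mean), np.exp(mean - t_crit * se), np.exp(mean + t_crit * se)

        # Hypothetical within-subject Cmax values for one generic-reference pair
        test = [105.0, 98.0, 112.0, 95.0, 101.0, 108.0, 99.0, 104.0]
        ref = [100.0, 102.0, 109.0, 97.0, 99.0, 103.0, 96.0, 107.0]

        gmr, lo, hi = be_ratio_ci(test, ref)
        bioequivalent = lo >= 0.80 and hi <= 1.25  # 80.00%-125.00% acceptance range
        print(f"GMR = {gmr:.3f}, 90% CI = ({lo:.3f}, {hi:.3f}), BE = {bioequivalent}")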

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    No full text
    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
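    As an illustration of the summary statistics reported above (median effect size per protocol, the difference in r between protocols, and the shrinkage of replication effects relative to the original studies), here is a minimal sketch; the per-study correlations are invented for illustration and do not reproduce the paper's data or its exact aggregation method.

        # Minimal sketch (hypothetical per-study correlations): summarizing
        # replication effect sizes against the originals in the same style
        # as the abstract (median r per protocol, delta-r, percent shrinkage).
        import numpy as np

        original = np.array([0.50, 0.37, 0.19, 0.42, 0.30])  # original-study r values
        rpp = np.array([0.06, 0.04, 0.00, 0.10, 0.03])        # RP:P-protocol replications
        revised = np.array([0.08, 0.05, 0.01, 0.09, 0.04])    # revised, peer-reviewed protocol

        print("median r (original):", np.median(original))
        print("median r (RP:P)    :", np.median(rpp))
        print("median r (revised) :", np.median(revised))

        # Difference between protocols, analogous to the abstract's delta-r
        print(f"mean delta-r (revised - RP:P): {(revised - rpp).mean():+.3f}")

        # Per-study shrinkage of the replication estimate relative to the
        # original, averaged across studies (analogous to the "% smaller" figure)
        print(f"mean shrinkage vs. originals: {(1 - revised / original).mean():.0%}")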
