59 research outputs found

    Bayesian model selection in logistic regression for the detection of adverse drug reactions

    Full text link
    Motivation: Spontaneous adverse event reports have a high potential for detecting adverse drug reactions. However, owing to their size, exploring such databases requires statistical methods; in this context, disproportionality measures are commonly used. By projecting the data onto contingency tables, these measures become sensitive to co-prescriptions and masking effects. Recently, logistic regressions with a lasso-type penalty have been used to detect associations between drugs and adverse events, but the choice of the penalty value is open to criticism even though it strongly influences the results. Results: In this paper, we propose a logistic regression whose sparsity is viewed as a model selection challenge. Since the model space is huge, a Metropolis-Hastings algorithm carries out the model selection by maximizing the BIC criterion, thereby avoiding the calibration of a penalty or threshold. In an application to the French pharmacovigilance database, the proposed method is compared to well-established approaches on a reference data set and obtains better rates of positive and negative controls. However, many signals are not detected by the proposed method, so we conclude that it should be used in parallel with existing pharmacovigilance measures. Comment: 7 pages, 3 figures, submitted to Biometrical Journal
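    The idea of a Metropolis-Hastings walk over sparse logistic models targeting exp(-BIC/2) can be sketched as follows. This is a minimal illustration under assumed toy data and a simple flip-one-covariate proposal, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_loglik(X, y, iters=25):
    """Maximised log-likelihood of a logistic regression (Newton-Raphson)."""
    Xc = np.column_stack([np.ones(len(y)), X])  # intercept column
    beta = np.zeros(Xc.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xc @ beta))
        H = (Xc * (p * (1 - p))[:, None]).T @ Xc + 1e-8 * np.eye(len(beta))
        beta += np.linalg.solve(H, Xc.T @ (y - p))
    p = np.clip(1 / (1 + np.exp(-Xc @ beta)), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def bic(X, y, mask):
    """BIC of the logistic model restricted to the covariates flagged in `mask`."""
    k = int(mask.sum())
    return -2 * logistic_loglik(X[:, mask], y) + (k + 1) * np.log(len(y))

def mh_select(X, y, n_iter=400):
    """Metropolis-Hastings over the model space, targeting exp(-BIC/2)."""
    d = X.shape[1]
    mask = np.zeros(d, dtype=bool)
    cur = bic(X, y, mask)
    best_mask, best = mask.copy(), cur
    for _ in range(n_iter):
        cand = mask.copy()
        cand[rng.integers(d)] ^= True             # propose flipping one covariate in/out
        b = bic(X, y, cand)
        if np.log(rng.random()) < (cur - b) / 2:  # Metropolis acceptance rule
            mask, cur = cand, b
            if b < best:
                best_mask, best = cand.copy(), b
    return best_mask

# toy data: only the first two of ten covariates influence the outcome
X = rng.normal(size=(400, 10))
y = (rng.random(400) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))).astype(float)
selected = mh_select(X, y)
print(np.nonzero(selected)[0])
```

    The chain keeps the lowest-BIC model visited, so no penalty or threshold needs calibrating; only the number of iterations is a tuning choice.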

    On the parametric maximum likelihood estimator for independent but non-identically distributed observations with application to truncated data

    Get PDF
    We investigate the parametric maximum likelihood estimator for truncated data when the truncation value differs across observed individuals or items. We extend Lehmann's proof (1983) of the asymptotic properties of the parametric maximum likelihood estimator to the case of independent, non-identically distributed observations. Two cases are considered: the number of distinct probability distribution functions that can be observed in the population from which the sample comes is either finite or infinite. Sufficient conditions for consistency and asymptotic normality are provided for both cases.

    Contributing to improved therapeutic targeting in oncology through a new phase II trial methodology

    Get PDF
    The majority of phase III clinical trials, despite being conducted after promising phase II trials, are "negative," with the new therapy ultimately proving too toxic or insufficiently efficacious. One explanation is the heterogeneity of the populations participating in the various phases of development, which results in an erroneous estimation of toxicity and a diluted treatment effect. This may lead to termination of a therapy's evaluation even though a sub-population, defined by a particular characteristic, might stand to benefit from it. In this thesis, we first propose a close examination of the methodological aspects of phase II trials that would permit improved early identification of toxic therapies and of the most responsive populations, so that phase III trials may be designed only for better-targeted populations. We then present a new phase II clinical trial methodology, an extension of Fleming's two-stage design, which we developed to take into account trial-population heterogeneity and its importance in current clinical practice. With this method, drug development is less often stopped for the entire population, and fewer patients who are not sensitive to the new therapy are exposed to potentially toxic molecules during stage 2 of the phase II trial or later in the phase III trial.

    Estimating time-to-onset of adverse drug reactions from spontaneous reporting databases.

    Get PDF
    BACKGROUND: Analyzing the time-to-onset of adverse drug reactions from treatment exposure contributes to meeting pharmacovigilance objectives, i.e. identification and prevention. Post-marketing data are available from reporting systems. Times-to-onset from such databases are right-truncated, because some patients who were exposed to the drug and will eventually develop the adverse drug reaction may do so after the time of analysis and are thus not included in the data. The methods developed for right-truncated data are not widely known and have never been used in pharmacovigilance. We assess the use of appropriate methods, as well as the consequences of not taking right truncation into account (the naïve approach), on parametric maximum likelihood estimation of the time-to-onset distribution. METHODS: Both approaches, naïve and accounting for right truncation, were compared in a simulation study. We used twelve scenarios for the exponential distribution and twenty-four for the Weibull and log-logistic distributions. These scenarios are defined by a set of parameters: the parameters of the time-to-onset distribution, the probability of this distribution falling within an interval of observable values, and the sample size. An application to lymphomas reported after anti-TNF-α treatment in the French pharmacovigilance database is presented. RESULTS: The simulation study shows that the bias and the mean squared error can be unacceptably large when right truncation is ignored, whereas the truncation-based estimator always performs better, often satisfactorily so, and the gap between the two can be large. For the real dataset, the estimated expected times-to-onset differ by at least 58 weeks between the two approaches, which is not negligible. This difference is obtained under the Weibull model, for which the estimated probability of the distribution falling within the interval of observable values is close to 1. CONCLUSIONS: It is necessary to take right truncation into account when estimating the time-to-onset of adverse drug reactions from spontaneous reporting databases.
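    The contrast between the naïve and truncation-aware estimators can be sketched for an exponential model: the truncated likelihood conditions each observed onset on having occurred within its observation window, i.e. it uses f(t)/F(τ_i). The truncation times and parameter values below are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(1)
rate_true = 0.5
tau = rng.uniform(1.0, 4.0, size=20000)       # per-report truncation time (time of analysis)
t_all = rng.exponential(1 / rate_true, size=20000)
obs = t_all <= tau                             # only onsets before tau are reported
t, tau_obs = t_all[obs], tau[obs]

def nll_naive(rate):
    """Naive likelihood: pretend the observed sample is complete."""
    return -np.sum(expon.logpdf(t, scale=1 / rate))

def nll_truncated(rate):
    """Right-truncation-aware likelihood: condition on t <= tau_i, i.e. f(t)/F(tau_i)."""
    return -np.sum(expon.logpdf(t, scale=1 / rate)
                   - expon.logcdf(tau_obs, scale=1 / rate))

naive = minimize_scalar(nll_naive, bounds=(1e-3, 10), method="bounded").x
trunc = minimize_scalar(nll_truncated, bounds=(1e-3, 10), method="bounded").x
print(naive, trunc)  # naive inflates the rate (short times over-represented); truncated stays near 0.5
```

    Because only early onsets are reported, the naïve fit badly underestimates the expected time-to-onset, which mirrors the bias the simulation study quantifies.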

    Automated signal detection in pharmacovigilance (a statistical approach based on multiple comparisons)

    No full text

    New adaptive lasso approaches for variable selection in automated pharmacovigilance signal detection

    No full text
    Adverse effects of drugs are often identified after market introduction. Post-marketing pharmacovigilance aims to detect them as early as possible and relies on spontaneous reporting systems that collect suspicious cases. Signal detection tools have been developed to mine these large databases, and counts of reports are analysed with disproportionality methods. To address the biases of disproportionality methods, recent approaches work on individual observations, taking into account all exposures for the same patient. In particular, the logistic lasso provides an efficient variable selection framework, yet the choice of the regularization parameter is a challenging issue and lasso variable selection may give inconsistent results.
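    The adaptive-lasso idea the title refers to can be sketched as a two-step weighted L1 fit: an initial fit produces per-coefficient weights that penalise weak coefficients more, which mitigates the lasso's selection inconsistency. The toy data, proximal-gradient solver, and tuning values below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 2000, 20
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [1.0, -1.0, 0.8]               # three true signals among twenty covariates
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def lasso_logistic(X, y, lam, w, iters=3000):
    """ISTA for logistic loss with weighted L1 penalty lam * sum_j w_j |b_j|."""
    n = X.shape[0]
    step = 4 * n / np.linalg.norm(X, 2) ** 2   # 1/L, since L <= ||X||_2^2 / (4n)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        z = b - step * (X.T @ (p - y) / n)     # gradient step on the logistic loss
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0)  # soft-thresholding
    return b

# Step 1: nearly unpenalised fit gives the adaptive weights 1/|b0_j|
b0 = lasso_logistic(X, y, lam=1e-4, w=np.ones(d))
w = 1 / (np.abs(b0) + 1e-6)
# Step 2: weighted lasso shrinks noise coefficients hard, true signals lightly
b_ad = lasso_logistic(X, y, lam=0.05, w=w)
print(np.nonzero(b_ad)[0])
```

    The data-driven weights remove the single global penalty's dilemma between over-shrinking true signals and admitting noise, which is one motivation for adaptive variants in signal detection.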

    Instrumental variable analysis in the context of dichotomous outcome and exposure with a numerical experiment in pharmacoepidemiology

    No full text
    BACKGROUND: In pharmacoepidemiology, prescription preference-based instrumental variables (IV) are often used with linear models to address the endogeneity caused by unobserved confounders, even when the outcome and the endogenous treatment are dichotomous. Using such an instrumental variable, we ran Monte Carlo simulations to compare the IV-based generalized method of moments (IV-GMM) and the two-stage residual inclusion (2SRI) method in this context. METHODS: We established formulas for computing the instrument's strength and the confounding level in the context of logistic regression models. We then varied the instrument's strength and the confounding level to cover a large range of scenarios in the simulation study. We also explored two prescription preference-based instruments. RESULTS: We found that 2SRI is less biased than the other methods and yields satisfactory confidence intervals. The proportion of a physician's previous patients who were prescribed the treatment of interest performed well as a proxy for the physician's preference instrument. CONCLUSIONS: This work shows that, when analysing real data with a dichotomous outcome and exposure, an appropriate 2SRI estimation can be used in the presence of unmeasured confounding.
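    The 2SRI mechanics can be sketched in a few lines: regress the treatment on the instrument, then include the stage-1 residual as an extra covariate in the outcome model so it absorbs the unmeasured confounding. The data-generating values below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000

def expit(x):
    return 1 / (1 + np.exp(-x))

u = rng.normal(size=n)                              # unmeasured confounder
z = rng.integers(0, 2, size=n).astype(float)        # binary instrument (e.g. prescriber preference)
d = (rng.random(n) < expit(1.5 * z + u)).astype(float)       # dichotomous treatment
y = (rng.random(n) < expit(-1 + 0.7 * d + u)).astype(float)  # dichotomous outcome, true log-OR 0.7

def logit_fit(cols, y, iters=30):
    """Logistic regression via Newton-Raphson; returns (coefficients, fitted probabilities)."""
    Xc = np.column_stack([np.ones(len(y))] + cols)
    beta = np.zeros(Xc.shape[1])
    for _ in range(iters):
        p = expit(Xc @ beta)
        H = (Xc * (p * (1 - p))[:, None]).T @ Xc
        beta += np.linalg.solve(H, Xc.T @ (y - p))
    return beta, expit(Xc @ beta)

# Stage 1: model the treatment on the instrument, keep the raw residual
_, d_fit = logit_fit([z], d)
resid = d - d_fit
# Stage 2: outcome model on treatment plus the stage-1 residual (the control function)
b_naive = logit_fit([d], y)[0][1]
b_2sri = logit_fit([d, resid], y)[0][1]
print(b_naive, b_2sri)
```

    The naïve coefficient is inflated by the shared confounder, while including the residual should pull the estimate back toward the true effect, which is the behaviour the simulation study evaluates systematically.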

    Analyzing cohort studies with interval-censored data: A new model-based linear rank-type test

    No full text
    To compare two or more survival distributions with interval-censored data, various nonparametric tests have been proposed. Some are based on the G^ρ family introduced by Harrington and Fleming (1991), which allows flexibility in situations where the hazard ratio decreases monotonically to unity. However, it is unclear how to choose an appropriate value of the parameter ρ. In this work, we propose a novel linear rank-type test for analyzing interval-censored data, derived from a proportional reversed hazard model, and we show its relationship to decreasing hazard ratios. This test statistic provides an alternative to the G^ρ-based test statistics by bypassing the choice of ρ. Simulation results show its good behavior. Two studies, on breast cancer and on drug users, illustrate its practical use and highlight findings that would have been overlooked had other tests been used. The test is easy to implement with standard software and can be used in a wide range of situations with interval-censored data to test the equality of survival distributions between two or more independent groups.

    Accounting for indirect protection in the benefit–risk ratio estimation of rotavirus vaccination in children under the age of 5 years, France, 2018

    No full text
    Background: Rotavirus is a major cause of severe gastroenteritis in children worldwide, and the disease burden has been substantially reduced in countries where rotavirus vaccines are used. Given the risk of vaccine-induced intussusception, the benefit-risk balance of rotavirus vaccination has been assessed in several countries, though mostly without considering indirect protection effects. Aim: We performed a benefit-risk analysis of rotavirus vaccination accounting for indirect protection in France among the 2018 population of children under the age of 5 years. Methods: To incorporate indirect protection effects in the benefit formula, we adopted a pseudo-vaccine approach involving mathematical approximation and used a simulation design to provide uncertainty intervals. We derived background incidence distributions from quasi-exhaustive health claims data and examined different coverage levels and assumptions regarding waning effects and the intussusception case fatality rate. Results: With the current vaccination coverage of < 10%, the indirect effectiveness was estimated at 6.4% (± 0.4). For each hospitalisation for intussusception, 288.2 (95% uncertainty interval: 173.8-480.0) hospitalisations for rotavirus gastroenteritis were prevented. Should 90% of infants be vaccinated, the indirect effectiveness would reach 57.9% (± 3.7) and the benefit-risk ratio would be 297.6 (95% uncertainty interval: 179.4-497.3). Indirect protection accounted for almost half of the prevented rotavirus gastroenteritis cases across all coverage levels, and the balance remained in favour of the vaccine even under a high assumption for the intussusception case fatality rate. Conclusions: These findings contribute to a better assessment of the rotavirus vaccine's benefit-risk balance.
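    The simulation design for the uncertainty interval around a benefit-risk ratio can be sketched as below: draw the benefit (prevented hospitalisations) and the risk (vaccine-attributable intussusceptions) from their uncertainty distributions and take percentiles of the ratio. All numeric inputs here are invented placeholders, not the study's French incidence data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 10000

# hypothetical annual counts in a vaccinated birth cohort (placeholder values)
prevented_rvge = rng.normal(14000, 1500, n_sim)  # prevented rotavirus gastroenteritis hospitalisations
excess_intus = rng.normal(48, 8, n_sim)          # vaccine-attributable intussusception hospitalisations

ratio = prevented_rvge / excess_intus            # one benefit-risk ratio per simulated draw
lo, med, hi = np.percentile(ratio, [2.5, 50, 97.5])
print(round(med, 1), (round(lo, 1), round(hi, 1)))
```

    Propagating both sources of uncertainty through the ratio, rather than dividing point estimates, is what yields the asymmetric 95% uncertainty intervals reported in the abstract.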