
    Estimating time-to-onset of adverse drug reactions from spontaneous reporting databases.

    BACKGROUND: Analyzing the time-to-onset of adverse drug reactions after treatment exposure contributes to the objectives of pharmacovigilance, i.e. identification and prevention. Post-marketing data are available from spontaneous reporting systems. Times-to-onset from such databases are right-truncated, because some patients who were exposed to the drug and who will eventually develop the adverse drug reaction may do so after the time of analysis and are therefore not included in the data. The methodological developments adapted to right-truncated data are not widely known, and these methods had never been used in pharmacovigilance. We assess the use of appropriate methods, as well as the consequences of not taking right truncation into account (the naïve approach), on parametric maximum likelihood estimation of the time-to-onset distribution. METHODS: Both approaches, naïve and truncation-based, were compared in a simulation study. We used twelve scenarios for the exponential distribution and twenty-four for the Weibull and log-logistic distributions. These scenarios are defined by a set of parameters: the parameters of the time-to-onset distribution, the probability that this distribution falls within an interval of observable values, and the sample size. An application to lymphoma reported after anti-TNF-α treatment in the French pharmacovigilance database is presented. RESULTS: The simulation study shows that the bias and mean squared error can in some instances be unacceptably large when right truncation is not considered, whereas the truncation-based estimator always performs better, often satisfactorily, and the gap may be large. For the real dataset, the estimated expected time-to-onset differs by at least 58 weeks between the two approaches, which is not negligible. This difference is obtained under the Weibull model, for which the estimated probability of the distribution falling within the interval of observable values is close to 1. CONCLUSIONS: Right truncation must be taken into account when estimating the time-to-onset of adverse drug reactions from spontaneous reporting databases.
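    The correction at stake is easy to state: each reported time-to-onset t is observed only if it falls before its truncation limit T (the delay between exposure and the analysis date), so its likelihood contribution is f(t)/F(T) rather than f(t). Below is a minimal sketch, not the authors' code, of both estimators for a Weibull model on simulated data; all variable names and simulation settings are illustrative.

```python
# Sketch of naive vs truncation-adjusted maximum likelihood for a Weibull
# time-to-onset model. Each report contributes f(t_i)/F(T_i), where T_i is
# the right-truncation limit; the naive approach drops the F(T_i) term.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, t, T, truncated=True):
    shape, scale = np.exp(params)          # log-parametrization keeps both > 0
    logf = weibull_min.logpdf(t, shape, scale=scale)
    if truncated:
        logF = weibull_min.logcdf(T, shape, scale=scale)
        return -np.sum(logf - logF)        # conditional likelihood f(t)/F(T)
    return -np.sum(logf)                   # naive likelihood

# Simulated example: true shape=1.5, scale=100 weeks; analysis takes place
# 150 weeks after each exposure, so only onsets before that limit are seen.
t_all = weibull_min.rvs(1.5, scale=100.0, size=5000, random_state=0)
T_all = np.full_like(t_all, 150.0)
observed = t_all <= T_all                  # right truncation
t, T = t_all[observed], T_all[observed]

for trunc in (False, True):
    res = minimize(neg_log_lik, x0=np.log([1.0, 50.0]), args=(t, T, trunc))
    shape, scale = np.exp(res.x)
    print(f"truncation accounted={trunc}: shape={shape:.2f}, scale={scale:.1f}")
```

    On such data the naive fit underestimates the scale (long times-to-onset are systematically missing), while the truncation-based fit recovers parameters close to the truth, which is the behavior the simulation study quantifies.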

    Net Efficacy Adjusted for Risk (NEAR): A Simple Procedure for Measuring Risk:Benefit Balance

    BACKGROUND: Although several mathematical models have been proposed to assess the risk:benefit of drugs in a single measure, their use in practice has been rather limited. Our objective was to design a simple, easily applicable model. In this respect, the proportion of patients who respond favorably to treatment without being affected by adverse drug reactions (ADR) could be a suitable endpoint. However, remarkably few published clinical trials report the data required to calculate this proportion. As an approach to the problem, we calculated the expected proportion of such patients. METHODOLOGY/PRINCIPAL FINDINGS: Theoretically, the expected number of responders without ADR may be obtained by multiplying the total number of responders by the total number of subjects who did not suffer an ADR, and dividing the product by the total number of subjects studied. When two drugs are compared, the same calculation is repeated for the second drug. Then, by constructing a 2 x 2 table with the expected frequencies of responders with and without ADR and non-responders with and without ADR, the odds ratio and relative risk with their confidence intervals may be easily calculated and represented graphically on a logarithmic scale. These measures represent "net efficacy adjusted for risk" (NEAR). We tested the model with results extracted from several published clinical trials and meta-analyses. On comparing our results with those originally reported by the authors, marked differences were found in some cases, with ADR emerging as a relevant factor in balancing the clinical benefit obtained. The particular features of the adverse reaction that must be weighed against the benefit are discussed in the paper. CONCLUSION: NEAR, representing overall risk:benefit, may contribute to improving knowledge of the clinical usefulness of drugs. As most published clinical trials tend to overestimate benefits and underestimate toxicity, our measure represents an effort to change this trend.
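    The NEAR computation itself is elementary and reproducible from the counts usually reported in trials. The sketch below, with made-up counts and hypothetical helper names, builds the expected 2 x 2 table for two arms and derives the odds ratio and relative risk; the log-scale Wald intervals are one common choice, since the abstract does not pin down the interval method.

```python
# Sketch of the NEAR computation: expected responders without ADR in an arm
# is responders * (subjects without ADR) / n; the two arms are then compared
# in a 2 x 2 table via odds ratio and relative risk with Wald CIs.
import math

def near_cell(responders, adr, n):
    """Expected count of responders free of ADR in one arm."""
    return responders * (n - adr) / n

def ratio_with_ci(a, b, c, d, z=1.96):
    """OR and RR for table [[a, b], [c, d]] (rows = arms, columns =
    responder-without-ADR vs rest), with log-scale Wald intervals."""
    or_ = (a * d) / (b * c)
    se_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    rr = (a / (a + b)) / (c / (c + d))
    se_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    ci = lambda est, se: (est * math.exp(-z * se), est * math.exp(z * se))
    return (or_, ci(or_, se_or)), (rr, ci(rr, se_rr))

# Hypothetical trial, counts fabricated for illustration only
n1, resp1, adr1 = 200, 120, 40        # drug arm
n2, resp2, adr2 = 200, 90, 10         # comparator arm
a = near_cell(resp1, adr1, n1); b = n1 - a
c = near_cell(resp2, adr2, n2); d = n2 - c
(or_, or_ci), (rr, rr_ci) = ratio_with_ci(a, b, c, d)
print(f"NEAR odds ratio {or_:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
print(f"NEAR relative risk {rr:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f})")
```

    Note how a drug with the higher raw response rate can lose its apparent advantage once its ADR rate is folded in, which is exactly the reversal the authors report when re-analyzing published trials.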

    Stat Methods Med Res

    BACKGROUND: All methods routinely used to generate safety signals from pharmacovigilance databases rely on disproportionality analyses of counts that aggregate patients' spontaneous reports. Recently, it was proposed to analyze individual spontaneous reports directly using Bayesian lasso logistic regressions. However, this raises the issue of choosing an adequate regularization parameter in a variable selection framework while accounting for the computational constraints imposed by the high dimension of the data. PURPOSE: Our main objective is to propose a method that exploits the subsampling idea from Stability Selection, a variable selection procedure combining subsampling with a high-dimensional selection algorithm, and adapts it to the specificities of spontaneous reporting data, which are characterized by their large size, binary nature, and sparsity. MATERIALS AND METHODS: Given the large imbalance between the presence and absence of a given adverse event, we propose an alternative to the subsampling scheme of Stability Selection that over-represents the minority class and drastically reduces the number of observations in each subsample. Simulations are used to help define the detection threshold with respect to the average proportion of false signals, and to compare the performance of the proposed sampling scheme with that originally proposed for Stability Selection. Finally, we compare the proposed method with the gamma Poisson shrinker, a disproportionality method, and with a lasso logistic regression approach, in an empirical study conducted on the French national pharmacovigilance database using two sets of reference signals. RESULTS: Simulations show that the proposed sampling strategy yields fewer false discoveries and is faster than the equiprobable sampling of Stability Selection. The empirical evaluation shows that the proposed method outperforms the gamma Poisson shrinker and the lasso in terms of the number of reference signals retrieved.
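    As a rough illustration of the subsampling idea, the sketch below (not the published implementation; the data, subsample sizes, penalty, and selection-frequency threshold are all arbitrary) over-represents reports containing the event in each subsample, fits an L1-penalized logistic regression on a sparse binary drug-exposure matrix, and flags drugs selected in a large fraction of subsamples.

```python
# Sketch of imbalance-aware stability selection for spontaneous reports:
# each subsample keeps half the (rare) event reports plus as many controls,
# so subsamples are small and balanced rather than 50% equiprobable draws.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_reports, n_drugs = 20000, 300
X = sparse_random(n_reports, n_drugs, density=0.02, format="csr", random_state=0)
X.data[:] = 1.0                                # binary drug-exposure indicators
y = rng.binomial(1, 0.01, size=n_reports)      # rare adverse event (fake data)

cases, controls = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
n_sub, sel_counts = 50, np.zeros(n_drugs)
for _ in range(n_sub):
    s_cases = rng.choice(cases, size=len(cases) // 2, replace=False)
    s_ctrl = rng.choice(controls, size=len(s_cases), replace=False)
    idx = np.concatenate([s_cases, s_ctrl])
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X[idx], y[idx])
    sel_counts += (clf.coef_.ravel() != 0)     # which drugs survived the lasso

signals = np.flatnonzero(sel_counts / n_sub >= 0.6)  # selection-frequency cutoff
print(f"{len(signals)} drugs flagged as potential signals")
```

    In the actual method the threshold is calibrated by simulation against a target average proportion of false signals, rather than fixed a priori as it is here.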

    Spontaneous Reporting System Modelling for Data Mining Methods Evaluation in Pharmacovigilance

    Pharmacovigilance aims at detecting adverse effects of marketed drugs. It is based on the spontaneous reporting of events suspected to be adverse drug effects. The Spontaneous Reporting System (SRS) feeds huge databases that pharmacovigilance experts cannot exploit exhaustively without data mining tools. Data mining methods have been proposed in the literature, but none of them has reached a consensus in terms of applicability and efficiency, largely because of the difficulty of evaluating the methods on real data. In this context, this paper proposes a model of the SRS in order to simulate realistic data, allowing a more complete evaluation and comparison of methods, with the perspective of helping to define surveillance strategies. Since the status of the drug-event relations is known in the simulated dataset, each signal generated by a data mining method can be labelled as "true" or "false". The spontaneous reporting process is viewed as a Poisson process depending on the drug exposure frequency, the delay since the drug's launch, the background incidence and seriousness of the adverse events, and a reporting probability. This reporting probability, quantitatively unknown, is derived from qualitative knowledge found in the literature and expressed by experts. This knowledge is represented and exploited by means of a fuzzy characterisation of variables and a set of fuzzy rules. The simulated data are described, and two Bayesian data mining methods are applied to illustrate the kind of information on method performance that can be derived from the SRS model and the data simulation.
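    A stripped-down version of such a generator is easy to write. In the sketch below, which is only an assumption-laden stand-in for the model described above, report counts for each (drug, event) pair are drawn from a Poisson distribution whose mean combines exposure, background incidence, a planted relative risk, and a reporting probability; the fuzzy-rule machinery is replaced by a crude closed-form reporting probability.

```python
# Sketch of an SRS simulator: Poisson report counts per (drug, event) pair,
# with known planted signals so that detection methods can be scored.
import numpy as np

rng = np.random.default_rng(1)
n_drugs, n_events = 30, 20
exposure = rng.uniform(1e3, 1e5, size=(n_drugs, 1))       # exposed patients/year
incidence = rng.uniform(1e-5, 1e-3, size=(1, n_events))   # background incidence
seriousness = rng.uniform(0, 1, size=(1, n_events))
years_since_launch = rng.uniform(0, 10, size=(n_drugs, 1))

# plant known true drug-event associations ("true" vs "false" labels)
relative_risk = np.ones((n_drugs, n_events))
true_signals = rng.random((n_drugs, n_events)) < 0.05
relative_risk[true_signals] = 5.0

# crude stand-in for the fuzzy-rule reporting probability: serious events
# are reported more often, and reporting fades as the drug ages
p_report = 0.1 + 0.4 * seriousness * np.exp(-years_since_launch / 5.0)

counts = rng.poisson(exposure * incidence * relative_risk * p_report)
print(f"simulated {counts.sum()} reports; {true_signals.sum()} pairs are true signals")
```

    Any disproportionality or Bayesian data mining method can then be run on the counts matrix, and its output compared against the known true_signals mask, which is precisely the evaluation loop the paper argues for.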

    Drug Saf

    Background: Pregnant women are widely exposed to medications, yet knowledge about the effects of these medications on pregnancy and the fetus is lacking. Objective: This study sought to evaluate the potential of high-dimensional propensity scores and high-dimensional disease risk scores for automated signal detection in pregnant women from medico-administrative databases, in the context of drug-induced prematurity. Methods: We used healthcare claims and hospital discharge data from a 1/97th representative sample of the French population. We tested the association between prematurity and drug exposure during the trimester before delivery for every drug prescribed in at least five pregnancies. We compared different strategies (1) for building the two scores, including two machine-learning methods, and (2) for accounting for these scores in the final logistic regression models: adjustment, weighting, and matching. We also proposed a new signal detection criterion derived from these scores: the p value relative decrease. Evaluation was performed by assessing the relevance of the signals through a literature review and clinical expertise. Results: Screening 400 drugs in a cohort of 57,407 pregnancies, we observed that the choice between the two machine-learning methods had little impact on the generated signals. Score adjustment performed better than weighting and matching. The p value relative decrease efficiently filtered out spurious signals while retaining a number of relevant signals similar to score adjustment. Most of the relevant signals belonged to the psychotropic class: benzodiazepines, antidepressants, and antipsychotics. Conclusions: Mining complex healthcare databases with statistical methods from the high-dimensional inference field may improve signal detection in pregnant women.
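    The p value relative decrease criterion is not given in closed form in this abstract. One plausible reading, sketched below on fabricated data, is the relative change in the exposure coefficient's p value between a crude logistic model and one adjusted for the high-dimensional score; a confounded (spurious) association loses significance after adjustment, so thresholding this quantity filters it out.

```python
# Sketch of one interpretation of the "p value relative decrease": compare
# the exposure p value before and after adjusting for the estimated score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
score = rng.normal(size=n)                    # stand-in for the hdPS / hdDRS
exposed = rng.binomial(1, 1 / (1 + np.exp(-score)))          # confounded exposure
premature = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * score))))

def exposure_pvalue(y, X):
    """p value of the exposure term (first column of X) in a logit model."""
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    return fit.pvalues[1]

p_crude = exposure_pvalue(premature, exposed)
p_adj = exposure_pvalue(premature, np.column_stack([exposed, score]))
relative_decrease = (p_crude - p_adj) / p_crude
print(f"crude p={p_crude:.3g}, adjusted p={p_adj:.3g}, "
      f"relative decrease={relative_decrease:.2f}")
# Here the exposure-outcome link is pure confounding, so the adjusted p value
# is much larger and the "relative decrease" is strongly negative: a signal
# that would be filtered out by a threshold on this quantity.
```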