
    On nonparametric estimators and tests for conditional distributions and copulas

    To model a random vector in the presence of a covariate, one can first turn to the conditional distribution function, which contains all the information about the behaviour of the vector given a value taken by the covariate. It can also be convenient to separate the study of the joint behaviour of the vector from that of the individual behaviour of each of its components. This is done with the conditional copula, which completely characterizes the conditional dependence governing the associations between the variables. In both cases, devising an estimation and inference strategy is an essential step toward their use in practice. When no prior information is available about a possible model choice, nonparametric methods become the natural option. The first article of this thesis, co-written with Jean-François Quessy, proposes a way to resample nonparametric estimators of conditional distributions; it was published in Statistics and Computing. Among other things, we show how to obtain confidence intervals for statistics expressed in terms of the conditional distribution function. The second article, co-written with Taoufik Bouezmarni and Jean-François Quessy, studies two nonparametric estimators of the conditional copula proposed by Gijbels et al. in the presence of serial data; it was submitted to Statistics and Probability Letters. We identify the asymptotic distribution of each of these estimators under mixing conditions. The third article, co-written with Taoufik Bouezmarni and Jean-François Quessy, proposes a new way to study causal relationships between two time series; it was submitted to the Electronic Journal of Statistics. We use the conditional copula to characterize a local version of Granger causality, and we then propose causality measures based on the conditional copula. The fourth article, co-written with Taoufik Bouezmarni and Anouar El Ghouch, proposes a method for adequately estimating the conditional copula in the presence of incomplete data; it was submitted to the Scandinavian Journal of Statistics. The asymptotic properties of the proposed estimator are also studied there. Finally, the last part of this thesis contains unpublished work on statistical tests for determining whether two conditional copulas are concordant. Besides presenting original results, this study illustrates the usefulness of the resampling techniques developed in our first article.
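
    The abstract leaves the central object implicit; the standard conditional version of Sklar's theorem, on which this literature builds, makes it concrete. A sketch in LaTeX (standard notation, not quoted from the thesis):

        % Conditional Sklar decomposition (standard in this literature, not quoted from the thesis).
        % H_x: conditional joint CDF; F_{1x}, F_{2x}: conditional margins; C_x: conditional copula.
        H_x(y_1, y_2) = \Pr\left(Y_1 \le y_1,\, Y_2 \le y_2 \mid X = x\right)
                      = C_x\bigl(F_{1x}(y_1),\, F_{2x}(y_2)\bigr),
        \qquad
        C_x(u, v) = H_x\bigl(F_{1x}^{-1}(u),\, F_{2x}^{-1}(v)\bigr)
        \quad \text{(for continuous margins)}.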

    Determination of Confidence Intervals in Non-normal Data: Application of the Bootstrap to Cocaine Concentration in Femoral Blood

    Calculating a confidence interval is a common procedure in data analysis and is readily done for normally distributed populations with the familiar formula. However, when working with non-normally distributed data, determining the confidence interval is not as obvious. For this type of data, there are fewer references in the literature, and they are much less accessible. We describe, in simple language, the percentile and the bias-corrected and accelerated (BCa) variants of the bootstrap method for calculating confidence intervals. This method can be applied to a wide variety of parameters (mean, median, slope of a calibration curve, etc.) and is appropriate for both normal and non-normal data sets. As a worked example, the confidence interval around the median concentration of cocaine in femoral blood is calculated using bootstrap techniques. The median of the non-toxic concentrations was 46.7 ng/mL with a 95% confidence interval of 23.9–85.8 ng/mL in the non-normally distributed set of 45 postmortem cases. Using this method should lead to more statistically sound and accurate confidence intervals for non-normally distributed populations, such as reference values of therapeutic and toxic drug concentrations, as well as in situations where concentration values are truncated near the limit of quantification or the cutoff of a method.
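
    The percentile variant described above is short enough to sketch. A minimal Python example, assuming a numpy array of concentrations (the data below are simulated, not the paper's 45 postmortem cases):

        import numpy as np

        rng = np.random.default_rng(1)

        def percentile_bootstrap_ci(data, stat=np.median, n_boot=10_000, alpha=0.05):
            """Percentile bootstrap: resample with replacement, recompute the
            statistic, and take the empirical alpha/2 and 1 - alpha/2 quantiles."""
            data = np.asarray(data)
            boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                             for _ in range(n_boot)])
            return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

        # Illustrative skewed sample, NOT the paper's postmortem data.
        concentrations = rng.lognormal(mean=3.8, sigma=0.9, size=45)
        lo, hi = percentile_bootstrap_ci(concentrations)
        print(f"median = {np.median(concentrations):.1f} ng/mL, "
              f"95% CI = [{lo:.1f}, {hi:.1f}]")

    For the bias-corrected and accelerated variant, scipy.stats.bootstrap(..., method='BCa') (SciPy ≥ 1.7) implements the corrections without hand-rolled code.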

    Procedure for the selection and validation of a calibration model: I — Description and Application

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selecting the calibration model degrades quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection, and tests to validate the resulting model, are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x^2 was made by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramér–von Mises or Kolmogorov–Smirnov normality testing of the standardized residuals. The performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests and outcomes of the developed procedure using real LC-MS/MS results for the quantification of cocaine and naltrexone.
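
    The first two decision steps lend themselves to a short sketch. A minimal Python version, assuming replicate response arrays at each calibration level; the weighting criterion is one plausible reading of the abstract, not the published R script:

        import numpy as np
        from scipy import stats

        def needs_weighting(lloq_reps, uloq_reps, alpha=0.05):
            """Unilateral F-test: is the variance of the ULOQ replicates
            significantly larger than that of the LLOQ replicates?"""
            f = np.var(uloq_reps, ddof=1) / np.var(lloq_reps, ddof=1)
            p = stats.f.sf(f, len(uloq_reps) - 1, len(lloq_reps) - 1)
            return p < alpha

        def pick_weight(levels, reps_by_level):
            """Choose between 1/x and 1/x^2 by the spread of weighted,
            normalized replicate variances across calibration levels
            (one plausible reading of the criterion; the published
            script may differ in its details)."""
            spreads = {}
            for k in (1, 2):
                wv = np.array([np.var(r, ddof=1) / x**k
                               for x, r in zip(levels, reps_by_level)])
                wv /= wv.mean()                       # normalize
                spreads[k] = wv.max() - wv.min()
            return min(spreads, key=spreads.get)      # 1 -> 1/x, 2 -> 1/x^2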

    Procedure for the selection and validation of a calibration model: II — Theoretical basis

    In the first part of this paper (I — Description and application), an automated, stepwise and analyst-independent process for the selection and validation of calibration models was put forward and applied to two model analytes. This second part presents the mathematical reasoning and experimental work underlying the selection of the different components of this procedure. Different replicate analysis designs (intra-/inter-day and intra-/inter-extraction) were tested and their impact on test results was evaluated. For most methods, the use of intra-day/intra-extraction measurement replicates is recommended because of its lower variability. This process should be repeated three times during validation in order to assess the time stability of the underlying model. Strategies for identifying heteroscedasticity and their potential weaknesses were examined, and a unilateral F-test using the replicates at the lower and upper limits of quantification was chosen. Three options for model-order selection were examined and tested: ANOVA lack-of-fit (LOF), the partial F-test, and the significance of the second-order term. Examination of the mathematical assumptions of each test, together with LC-MS/MS experimental results, led to the selection of the partial F-test as the most suitable. The advantages and drawbacks of ANOVA-LOF, examination of the standardized-residuals graph, and normality testing of the residuals (Kolmogorov–Smirnov or Cramér–von Mises) for validation of the calibration model were examined, with the last option proving the best in light of its robustness and accuracy. Choosing the correct calibration model improves QC accuracy, and simulations have shown that this automated scheme performs much better than the more traditional method of fitting increasingly complex models until QC accuracy errors fall below a threshold.
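
    A minimal sketch of the partial F-test the authors retained, comparing nested linear and quadratic least-squares fits (plain numpy/scipy; variable names are illustrative):

        import numpy as np
        from scipy import stats

        def partial_f_test(x, y, alpha=0.05):
            """Partial F-test for the second-order term: does adding x^2 to
            a linear fit significantly reduce the residual sum of squares?"""
            x, y = np.asarray(x, float), np.asarray(y, float)
            X1 = np.column_stack([np.ones_like(x), x])        # linear model
            X2 = np.column_stack([np.ones_like(x), x, x**2])  # quadratic model
            rss = []
            for X in (X1, X2):
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                r = y - X @ beta
                rss.append(r @ r)
            df2 = x.size - X2.shape[1]
            f = (rss[0] - rss[1]) / (rss[1] / df2)   # one extra parameter
            p = stats.f.sf(f, 1, df2)
            return ("quadratic" if p < alpha else "linear"), p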

    Development and Validation of a Predictive Model of Pain Modulation Profile to Guide Chronic Pain Treatment: A Study Protocol

    Introduction: Quantitative sensory testing is frequently used in research to assess endogenous pain modulation mechanisms, such as temporal summation (TS) and conditioned pain modulation (CPM), which reflect excitatory and inhibitory mechanisms, respectively. Numerous studies have found that a dysregulation of these mechanisms is associated with chronic pain conditions. In turn, such a patient "profile" (increased TS and/or weakened CPM) could be used to recommend different pharmacological treatments. However, the procedure used to evaluate these mechanisms is time-consuming and requires expensive equipment that is not available in the clinical setting. In this study, we aim to identify psychological, physiological and sociodemographic markers that could serve as proxies, allowing healthcare professionals to identify these pain phenotypes in the clinic and consequently optimize pharmacological treatments. Method: We aim to recruit a healthy cohort (n = 360) and a chronic pain cohort (n = 108). Independent variables will include psychological questionnaires, pain measurements, physiological measures and sociodemographic characteristics. Dependent variables will include TS and CPM, which will be measured using quantitative sensory testing in a single session. We will evaluate one prediction model and two validation models (for healthy and chronic pain participants) using multiple regression analysis between TS/CPM and our independent variables, with the significance threshold set at p = 0.05. Perspectives: This study will allow us to develop a predictive model that computes the pain modulation profile of individual patients from their biopsychosocial characteristics. The development of this predictive model is the first step toward the overarching goal of providing clinicians with a set of quick and inexpensive tests, easily applicable in clinical practice, to orient pharmacological treatments.
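
    The planned analysis is ordinary multiple regression of TS/CPM on the candidate predictors. A minimal sketch of that fitting step on synthetic data (the predictors are placeholders, not the study's questionnaire variables):

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder design matrix: three columns standing in for psychological,
        # physiological and sociodemographic predictors (NOT the study's variables).
        n = 360
        X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
        # Synthetic CPM outcome with known coefficients plus noise.
        cpm = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(scale=1.0, size=n)

        beta, *_ = np.linalg.lstsq(X, cpm, rcond=None)   # ordinary least squares
        pred = X @ beta
        r2 = 1 - np.sum((cpm - pred)**2) / np.sum((cpm - cpm.mean())**2)
        print(f"coefficients = {np.round(beta, 2)}, R^2 = {r2:.2f}")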

    Qualitative method validation and uncertainty evaluation via the binary output: I – Validation guidelines and theoretical foundations

    Qualitative methods have an important place in forensic toxicology, filling central needs in, among others, screening and analyses linked to per se legislation. Nevertheless, bioanalytical method validation guidelines either do not discuss this type of method or describe method validation procedures ill-adapted to qualitative methods. The output of qualitative methods is typically categorical, binary results such as "presence"/"absence" or "above cut-off"/"below cut-off". Since the goal of any method validation is to demonstrate fitness for use under production conditions, guidelines should evaluate performance by relying on these discrete results instead of the continuous measurements from which they are derived (e.g., peak height, area ratio). We have developed a tentative validation guideline for decision-point qualitative methods by modeling the behaviour of measurements and of the derived binary results, based on the literature and on experimental results. This preliminary guideline was applied to an LC-MS/MS method for 40 analytes, each with a defined cut-off concentration. The standard deviation of measurements at the cut-off was estimated from 10 spiked samples. Analytes were binned according to their %RSD (8.00%, 16.5%, 25.0%). Validation parameters calculated from the analysis of 30 samples spiked above and below the cut-off concentration (false negative rate, false positive rate, selectivity rate, sensitivity rate and reliability rate) showed a surprisingly high failure rate: overall, 13 of the 40 analytes were not considered validated. Subsequent examination found that this was attributable to an appreciable shift in the standard deviation of the area ratio between batches of samples. Keeping this behaviour in mind when setting the validation concentrations, the developed guideline can be used to validate qualitative decision-point methods, relying on binary results for performance evaluation and taking measurement uncertainty into account. An application of this method validation scheme is presented in the accompanying paper (II – Application to a multi-analyte LC-MS/MS method for oral fluid).
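
    The validation parameters listed above are simple proportions over the binary calls. A minimal sketch, assuming boolean arrays of calls on the spiked samples (rate definitions follow common usage and may differ in detail from the paper's):

        import numpy as np

        def validation_rates(calls_above, calls_below):
            """Rates from binary 'positive' calls on samples spiked above the
            cut-off (should all be positive) and below it (should all be
            negative)."""
            above = np.asarray(calls_above, dtype=bool)
            below = np.asarray(calls_below, dtype=bool)
            fn = np.mean(~above)                      # false negative rate
            fp = np.mean(below)                       # false positive rate
            correct = above.sum() + (~below).sum()    # all correct calls
            return {"false_negative": fn,
                    "false_positive": fp,
                    "sensitivity": 1 - fn,
                    "selectivity": 1 - fp,
                    "reliability": correct / (above.size + below.size)}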

    Qualitative threshold method validation and uncertainty evaluation: A theoretical framework and application to a 40 analytes liquid chromatography–tandem mass spectrometry method

    Qualitative methods hold an important place in drug testing, filling central needs in screening and analyses linked, among others, to per se legislation. Nevertheless, bioanalytical method validation guidelines either do not discuss this type of method or describe method validation procedures ill-adapted to qualitative methods. The output of qualitative methods is typically categorical, binary results, such as presence/absence or above cut-off/below cut-off. As the goal of any method validation is to demonstrate fitness for use under production conditions, qualitative validation guidelines should evaluate performance based on these discrete, binary results instead of the continuous measurements obtained from the instrument (e.g., area). A tentative validation guideline for threshold qualitative methods was developed by in silico modelling of measurements and of the derived binary results. This preliminary guideline was applied to a liquid chromatography–tandem mass spectrometry method for 40 analytes, each with a defined threshold concentration. Validation parameters calculated from the analysis of 30 samples spiked above and below the threshold concentration (false negative rate, false positive rate, selectivity rate, sensitivity rate and reliability rate) showed a surprisingly high failure rate: overall, 13 of the 40 analytes were not considered validated. A subsequent examination found that this was attributable to an appreciable day-to-day shift in the standard deviation of the area ratio, a previously undescribed and unaccounted-for behaviour in the qualitative threshold method validation literature. Consequently, the developed guideline was modified and used to validate a qualitative threshold method, relying on binary results for performance evaluation and incorporating measurement uncertainty.
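
    The failure mode identified here, a day-to-day shift in the measurement standard deviation, is easy to reproduce in silico. A minimal sketch with illustrative numbers (the 2 SD spiking level is an assumption, not the paper's design):

        import numpy as np

        rng = np.random.default_rng(42)

        def false_negative_rate(spike, cutoff, sd, n=30):
            """Fraction of spiked-positive samples measured below the cut-off."""
            return np.mean(rng.normal(loc=spike, scale=sd, size=n) < cutoff)

        cutoff = 1.0
        sd_validation = 0.08                  # 8% RSD at the cut-off
        spike = cutoff + 2 * sd_validation    # positives spiked 2 SD above

        # Same design, but the measurement SD doubles on another day:
        for sd_batch in (0.08, 0.16):
            fnr = false_negative_rate(spike, cutoff, sd_batch)
            print(f"SD = {sd_batch}: FN rate ~ {fnr:.2f}")

    Doubling the SD moves the spike from 2 SD to only 1 SD above the cut-off, so the expected false negative rate jumps from roughly 2% to roughly 16%, which is the kind of batch-dependent failure the abstract describes.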

    Promoting healthy eating in early pregnancy in individuals at risk of gestational diabetes mellitus: does it improve glucose homeostasis? A study protocol for a randomized control trial

    Background: Healthy eating during pregnancy has favorable effects on glycemic control and is associated with a lower risk of gestational diabetes mellitus (GDM). According to Diabetes Canada, there is a need for an effective and acceptable intervention that could improve glucose homeostasis and support pregnant individuals at risk for GDM. Aims: This unicentric randomized controlled trial (RCT) aims to evaluate the effects of a nutritional intervention initiated early in pregnancy on glucose homeostasis in 150 pregnant individuals at risk for GDM, compared to usual care. Methods: Population: 150 pregnant individuals ≥18 years old, at ≤14 weeks of pregnancy, and presenting ≥1 risk factor for GDM according to Diabetes Canada guidelines. Intervention: The nutritional intervention, initiated in the first trimester, is based on health behavior change theory during pregnancy and on Canada's Food Guide recommendations. It includes (1) four individual counseling sessions with a registered dietitian using motivational interviewing (12, 18, 24, and 30 weeks), with post-interview phone call follow-ups, aiming to develop and achieve S.M.A.R.T. nutritional objectives (specific, measurable, attainable, relevant, and time-bound); (2) 10 informative video clips on healthy eating during pregnancy, developed by our team and based on national guidelines; and (3) a virtual support community via a Facebook group. Control: Usual prenatal care. Protocol: This RCT includes three on-site visits (10–14, 24–26, and 34–36 weeks) during which a 2-h oral glucose tolerance test is done and blood samples are taken. At each trimester and 3 months postpartum, participants complete web-based questionnaires, including three validated 24-h dietary recalls to assess their diet quality using the Healthy Eating Food Index 2019. Primary outcome: Difference in the change in fasting blood glucose (from the first to the third trimester) between groups. This study has been approved by the Ethics Committee of the Centre de recherche du CHU de Québec-Université Laval. Discussion: This RCT will determine whether a nutritional intervention initiated early in pregnancy can improve glucose homeostasis in individuals at risk for GDM and will inform Canadian stakeholders on improving care trajectories and policies for pregnant individuals at risk for GDM. Clinical trial registration: https://clinicaltrials.gov/study/NCT05299502, NCT05299502.
