45 research outputs found

    Gibbs sampling methods for Pitman-Yor mixture models

    We introduce a new sampling strategy for the two-parameter Poisson-Dirichlet process mixture model, also known as the Pitman-Yor process mixture model (PYM). Since the PYM includes it as a special case, our sampler also applies to the well-known Dirichlet process mixture model (DPM). Inference in DPM and PYM is usually performed via Markov chain Monte Carlo (MCMC) methods, specifically the Gibbs sampler. These sampling methods are usually divided into two classes: marginal and conditional algorithms. Each class has its merits and limitations. The aim of this paper is to propose a new sampler that combines the main advantages of each class. The key idea of the proposed sampler is to replace the standard posterior updating of the mixing measure, based on the stick-breaking representation, with the posterior characterization of Pitman (1996), which represents the posterior law under a Pitman-Yor process as the sum of a jump part and a continuous part. We sample the continuous part in two ways, leading to two variants of the proposed sampler. We also propose a threshold to improve mixing in the first variant of our algorithm. The two variants of our sampler are compared with a marginal method, namely the celebrated Algorithm 8 of Neal (2000), and with two conditional algorithms based on the stick-breaking representation: the efficient slice sampler of Kalli et al. (2011) and the truncated blocked Gibbs sampler of Ishwaran and James (2001). We also investigate the effects of removing the proposed threshold in the first variant of our algorithm and of introducing the threshold into the efficient slice sampler of Kalli et al. (2011). Results on real and simulated data sets illustrate that our algorithms outperform the other conditional samplers in terms of mixing properties.
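    The stick-breaking representation that the conditional algorithms above rely on can be illustrated with a minimal sketch. This is not the paper's proposed sampler, only the standard truncated Pitman-Yor stick-breaking construction; the truncation level K and the parameter values are illustrative.

```python
import numpy as np

def pitman_yor_stick_breaking(alpha, d, K, rng):
    """Truncated stick-breaking weights for a Pitman-Yor process.

    V_k ~ Beta(1 - d, alpha + k*d), w_k = V_k * prod_{j<k} (1 - V_j).
    Setting d = 0 recovers the Dirichlet process. K is the truncation level.
    """
    v = rng.beta(1.0 - d, alpha + d * np.arange(1, K + 1))
    v[-1] = 1.0  # close the stick so the truncated weights sum to one
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w

rng = np.random.default_rng(0)
w = pitman_yor_stick_breaking(alpha=1.0, d=0.25, K=50, rng=rng)
```

    In a blocked Gibbs sampler these weights would be resampled jointly with the cluster locations at each iteration.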

    Impact of day of delivery on obstetric and perinatal outcome: a 10-year retrospective descriptive and analytical study at the Philippe Maguilen Senghor Health Centre, Dakar, Senegal

    Background: For many women, childbirth is still a feared moment. Despite considerable progress in the management of childbirth and its complications, maternal and neonatal morbidity and mortality remain a major problem, even in developed countries. The aim of this study was to evaluate the influence of the day of delivery on obstetric and perinatal outcome. Methods: Retrospective cohort study conducted at the Philippe Maguilen Senghor health center maternity ward from January 1, 2011 to June 30, 2019, on patients with a pregnancy of more than 22 weeks of amenorrhea who were admitted for delivery management. Delivery periods were divided according to whether they occurred on a working day (Monday to Friday, excluding public holidays) or on a weekend or public holiday (Saturdays, Sundays and days declared public holidays according to the Gregorian and Senegalese event calendars). The data were extracted from our E-perinatal database and analysed with the Statistical Package for the Social Sciences (SPSS 24, Mac version). Results: Over 102 months, we recorded 42,870 deliveries. The average age of the patients was 27 years, with extremes of 13 and 50 years. Nearly one in three deliveries took place on a holiday or weekend (n=13,566; 31.6%). The rate of caesarean delivery on weekends/holidays (18.8%) was lower than on weekdays (21%). The odds ratio of undergoing a weekend/holiday caesarean section was 0.87 (CI 0.83-0.92, p<0.0001). Our results suggest that patients who deliver on weekdays are more likely to receive a caesarean section than those who deliver on weekends or holidays. Perineal injury, World Health Organization obstetric complications, and neonatal outcome showed no significant difference by day of delivery. Conclusions: Our results contradict the idea that deliveries on weekends and holidays are riskier for patients and their children.
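    The reported odds ratio can be reproduced from the published summary figures. The 2x2 counts below are reconstructed approximately from the reported rates (18.8% caesareans among 13,566 weekend/holiday deliveries; 21% among the remaining 29,304 weekday deliveries), so this is a sanity-check sketch, not the study's exact data.

```python
import math

# Approximate 2x2 table reconstructed from the reported percentages:
# 42,870 deliveries in total, 13,566 on weekends/holidays.
cs_we, no_we = 2550, 13566 - 2550    # weekend/holiday: caesarean vs not
cs_wd, no_wd = 6154, 29304 - 6154    # weekday: caesarean vs not

odds_ratio = (cs_we * no_wd) / (no_we * cs_wd)

# Wald 95% confidence interval on the log odds ratio
se = math.sqrt(1 / cs_we + 1 / no_we + 1 / cs_wd + 1 / no_wd)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))  # 0.87 0.83 0.92
```

    The result matches the reported OR of 0.87 (CI 0.83-0.92), confirming the counts are internally consistent.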

    Stochastic modeling of pharmacokinetic processes, with application to space-time Positron Emission Tomography (PET) reconstruction

    The aim of this work is to develop new statistical methods for spatial (3D) and space-time (3D+t) Positron Emission Tomography (PET) reconstruction. The objective is to propose efficient reconstruction methods in a context of low injected doses while maintaining the quality of the interpretation. We tackle the reconstruction problem as a spatial or space-time inverse problem with point observations, in a Bayesian nonparametric framework. The Bayesian modeling makes it possible to regularize the ill-posed inverse problem via the introduction of prior information. Furthermore, by characterizing the unknowns through their posterior distributions, the Bayesian context allows us to handle the uncertainty associated with the reconstruction process. The nonparametric setting offers robustness and flexibility in the modeling. In the proposed methodology, we view the image to reconstruct as a probability density (for a reconstruction in k dimensions) and seek the solution in the space of all such probability densities. However, due to the size of the data, posterior estimators are intractable and approximation techniques are needed for posterior inference. Most of these techniques are based on Markov chain Monte Carlo (MCMC) methods. In the Bayesian nonparametric approach, a major difficulty arises in randomly sampling infinite-dimensional objects on a computer. We have developed a new sampling method that combines good mixing properties with the possibility of being implemented on a parallel computer, in order to deal with large data sets. With this approach, we obtain 3D spatial reconstructions without any ad hoc voxellization of space, and 4D space-time reconstructions without any discretization, in either space or time. Furthermore, one can quantify the error associated with the statistical estimation using credible intervals.
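    The last point, quantifying estimation error through credible intervals, can be sketched generically: given posterior MCMC draws of the quantity of interest, an equal-tailed credible interval is just a pair of quantiles. The toy draws below are synthetic and stand in for posterior samples of, say, the activity at one spatial location.

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from posterior MCMC draws.

    samples: array of shape (n_draws, ...) holding posterior samples of the
    quantity of interest, one draw per MCMC iteration (after burn-in).
    """
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(samples, [tail, 1.0 - tail], axis=0)
    return lo, hi

# Toy example: synthetic posterior draws standing in for the estimated
# activity at one spatial location (the method itself needs no voxel grid).
rng = np.random.default_rng(1)
draws = rng.normal(loc=2.0, scale=0.1, size=5000)
lo, hi = credible_interval(draws)
```

    Passing `axis=0` over a stack of sampled images would yield a pointwise credible band for the whole reconstruction.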


    Quantifying Uncertainty in Knee Osteoarthritis Diagnosis

    Knee osteoarthritis (OA) is one of the most common causes of physical disability in the world, imposing a large personal and socioeconomic burden. Visual assessment of OA still suffers from subjectivity. Deep learning (DL), and in particular convolutional neural networks (CNNs), has recently led to remarkable improvements in knee OA detection. However, traditional deep-learning-based knee OA classification algorithms lack the ability to quantify decision uncertainty. This is a key point in the medical field where, due to the high cost of labelling, we are faced with a lack of sufficient data to train a learning model. We propose here an alternative approach based on the concept of Evidential Deep Learning (EDL). Unlike Bayesian neural networks, which indirectly infer prediction uncertainty through uncertainties in the network weights, EDL approaches explicitly model this uncertainty using the theory of subjective logic. Experimental results on the Osteoarthritis Initiative (OAI) database demonstrate the potential of the proposed approach.
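    The subjective-logic mechanism behind EDL can be sketched in a few lines. This follows the standard EDL recipe (non-negative per-class evidence parameterizing a Dirichlet), not necessarily the exact architecture of this paper; the five-class setting is only an example, motivated by the Kellgren-Lawrence grading of knee OA.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Subjective-logic belief and uncertainty from per-class evidence.

    evidence: non-negative network outputs for K classes. alpha = evidence + 1
    parameterizes a Dirichlet over class probabilities; u = K / sum(alpha)
    is the explicit uncertainty mass, and belief masses plus u sum to one.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0
    S = alpha.sum()
    belief = evidence / S   # per-class belief masses
    u = K / S               # explicit uncertainty mass
    prob = alpha / S        # expected class probabilities
    return belief, u, prob

# Strong evidence for one class -> small uncertainty mass
b, u, p = evidential_uncertainty([40.0, 1.0, 0.5, 0.2, 0.1])
# No evidence at all -> total uncertainty (u == 1)
b0, u0, p0 = evidential_uncertainty([0.0, 0.0, 0.0, 0.0, 0.0])
```

    Unlike a softmax, which always produces a sharp-looking distribution, this construction lets the network say "I don't know" through u.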

    A Simple and Efficient Method for Sampling Mixture Models based on Dirichlet and Pitman-Yor processes

    We introduce a simple and efficient sampling strategy for the Dirichlet process mixture model (DPM) and its two-parameter extension, the Poisson-Dirichlet process mixture model, also known as the Pitman-Yor process mixture model (PYM). Inference in DPM and PYM is usually performed using Markov chain Monte Carlo (MCMC) methods, specifically the Gibbs sampler. These sampling methods are usually divided into two classes: marginal and conditional algorithms. Each method has its own merits and limitations. The aim of this paper is to propose a simple and effective strategy that combines the main advantages of each class. Extensive experiments on simulated and real data highlight that the proposed sampler is relevant and performs much better than its competitors.
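    The marginal class of algorithms mentioned above rests on the Pitman-Yor predictive rule (the generalized Chinese restaurant process), which can be sketched as follows. This is the generic predictive step underlying marginal samplers such as Neal's Algorithm 8, not this paper's combined strategy; parameter values are illustrative.

```python
import numpy as np

def py_crp_assign(counts, alpha, d, rng):
    """One draw from the Pitman-Yor predictive (Chinese restaurant) rule.

    counts: current cluster sizes n_1..n_k. A new point joins cluster j with
    probability proportional to (n_j - d), or opens a new cluster with
    probability proportional to (alpha + d * k). d = 0 gives the Dirichlet
    process predictive rule.
    """
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    weights = np.concatenate((counts - d, [alpha + d * k]))
    probs = weights / weights.sum()
    return rng.choice(k + 1, p=probs)  # returning k means "new cluster"

rng = np.random.default_rng(0)
sizes = [5, 3, 1]
choice = py_crp_assign(sizes, alpha=1.0, d=0.25, rng=rng)
```

    A marginal Gibbs sampler applies this rule to each observation in turn after removing it from its current cluster, whereas conditional samplers instead update explicit stick-breaking weights.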