26 research outputs found

    Fast Digital Filtering of Spectrometric Data for Pile-up Correction

    This paper considers a problem stemming from the analysis of spectrometric data. When performing experiments on highly radioactive matter, electrical pulses recorded by the spectrometer tend to overlap, severely distorting the computed histogram of pulse energies. We propose a fast recursive algorithm that efficiently estimates this histogram from measurements of the durations and energies of overlapping pulses. Its good performance is demonstrated on both simulations and real data. Furthermore, its low algorithmic complexity makes it well suited to real-time implementation.
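    The distortion described above can be illustrated with a toy simulation: when two pulses arrive within the detector's resolving time, they are recorded as a single event whose energy is the sum of the two, which shifts and depletes the energy histogram. The following sketch is purely illustrative (the merging rule, names, and parameters are assumptions, not the paper's algorithm):

```python
import numpy as np

def pileup_histogram(energies, arrival_times, dead_time, bins):
    """Histogram pulse energies, merging any pulse that arrives within
    `dead_time` of the previous one (energies sum), to mimic pile-up.
    Toy illustration only, not the paper's correction algorithm."""
    merged = []
    current_e = energies[0]
    last_t = arrival_times[0]
    for e, t in zip(energies[1:], arrival_times[1:]):
        if t - last_t < dead_time:
            current_e += e          # overlapping pulses pile up
        else:
            merged.append(current_e)
            current_e = e
        last_t = t
    merged.append(current_e)
    return np.histogram(merged, bins=bins)[0]

rng = np.random.default_rng(0)
n = 10000
energies = rng.exponential(100.0, n)
times = np.cumsum(rng.exponential(1.0, n))   # Poisson arrival process
h_ideal = np.histogram(energies, bins=50)[0]  # no pile-up
h_piled = pileup_histogram(energies, times, dead_time=0.5, bins=50)
```

    Comparing `h_ideal` and `h_piled` shows the loss of events and the spurious high-energy counts that a pile-up correction algorithm must undo.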

    Nonparametric posterior learning for emission tomography

    We continue studies of the uncertainty quantification problem in emission tomographies such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) when additional multimodal data (anatomical magnetic resonance imaging (MRI) images) are available. To solve this problem we adapt the recently proposed nonparametric posterior learning technique to the context of Poisson-type data in emission tomography. Using this approach we derive sampling algorithms that are trivially parallelizable, scalable, and very easy to implement. In addition, we prove conditional consistency and tightness for the distribution of produced samples in the small-noise limit (i.e., when the acquisition time tends to infinity) and derive a new geometrical and necessary condition on how MRI images must be used. This condition arises naturally in the context of the identifiability problem for misspecified generalized Poisson models with wrong design. We also contrast our approach with Bayesian Markov chain Monte Carlo sampling based on a data augmentation scheme that is very popular in the context of expectation-maximization algorithms for PET and SPECT. We show theoretically and numerically that such data augmentation significantly increases mixing times for the Markov chain. In view of this, our algorithms seem to give a reasonable trade-off between design complexity, scalability, numerical load, and uncertainty assessment.

    PET reconstruction of the posterior image probability, including multimodal images

    In PET image reconstruction, it would be useful to obtain the entire posterior probability distribution of the image, because it allows both estimating image intensity and assessing the uncertainty of the estimation, leading to more reliable interpretation. We propose a new, entirely probabilistic model: the prior is a distribution over possible smooth regions (distance-driven Chinese restaurant process), and the posterior distribution is estimated using a Gibbs MCMC sampler. Data from other modalities (here one or several MR images) are introduced into the model as additional observed data, providing side information about likely smooth regions in the image. The reconstructed image is the posterior mean, and the uncertainty is presented as an image of the widths of the 95% posterior intervals. The reconstruction was compared to the MLEM and OSEM algorithms, with and without post-smoothing, and to a penalized-ML (MAP) method that also uses additional images from other modalities. Qualitative and quantitative tests were performed on realistic simulated data with statistical replicates and on several clinical examinations presenting pathologies. The proposed method has appealing properties in terms of bias, variance, spatial regularization, and use of multimodal data, and in addition produces potentially valuable uncertainty information.
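    The two outputs described above (posterior-mean image and 95%-interval-width image) are straightforward to compute once a sampler has produced a stack of posterior image samples. A minimal sketch, with random stand-ins for the MCMC samples and illustrative names (not the paper's code):

```python
import numpy as np

# Stack of posterior image samples, shape (n_samples, H, W).
# Here random gamma draws stand in for output of a Gibbs sampler.
rng = np.random.default_rng(1)
samples = rng.gamma(shape=4.0, scale=2.0, size=(500, 64, 64))

# Reported reconstruction: voxel-wise posterior mean.
posterior_mean = samples.mean(axis=0)

# Uncertainty map: width of the voxel-wise 95% posterior interval.
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
interval_width = hi - lo
```

    Presenting `interval_width` alongside `posterior_mean` gives the reader both the estimate and a per-voxel measure of its reliability.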

    Discrete stochastic flows (Flots stochastiques discrets)

    In this article we extend Gaussian mixture models into a probabilistic mixture model (discrete stochastic flows, DSF) whose weights are flexible functions defined by neural networks. We show that DSFs can be used in two classes of variational problems: density estimation and variational inference. For a fixed number of components, the DSF yields a substantial gain in flexibility, which we illustrate with a two-dimensional example.
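    The core idea, a mixture whose weights are input-dependent functions rather than fixed constants, can be sketched with a single linear layer followed by a softmax standing in for the neural network. Everything below (names, architecture, one-dimensional setting) is an assumed illustration, not the model from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_with_flexible_weights(x, means, stds, W, b):
    """Gaussian mixture whose component weights depend on the input x
    through a tiny 'network' (one linear layer + softmax). A minimal
    stand-in for the flexible weight functions described above."""
    feats = x[:, None] * W + b                 # (n, K) hypothetical features
    weights = softmax(feats)                   # input-dependent weights
    comps = (np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2)
             / (stds * np.sqrt(2 * np.pi)))    # component densities
    return (weights * comps).sum(axis=1)

x = np.linspace(-3.0, 3.0, 100)
p = mixture_with_flexible_weights(
    x,
    means=np.array([-1.0, 1.0]),
    stds=np.array([0.5, 0.5]),
    W=np.array([2.0, -2.0]),                   # toy layer parameters
    b=np.zeros(2),
)
```

    With fixed weights this reduces to an ordinary two-component mixture; letting the weights vary with `x` is what buys the extra flexibility at a fixed number of components.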

    Semi-supervised Bayesian learning via generative modeling (Apprentissage Bayésien semi-supervisé par modélisation générative)

    In this article, we compare generative and discriminative modeling through the lens of epistemic uncertainty quantification, and then confront the two approaches with the problem of semi-supervised learning. We explain that the generative approach makes it possible to take unlabeled data into account in the model, which is structurally impossible with a discriminative approach. Finally, we propose a sampling algorithm enabling semi-supervised learning and quantification of the epistemic uncertainty of a generative model.
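    The structural point, that a generative model can absorb unlabeled data while a discriminative one cannot, comes down to the likelihood: labeled points contribute through the joint p(x, y), unlabeled points through the marginal p(x) = Σ_y p(y) p(x|y), whereas a discriminative model p(y|x) simply has no term for an unlabeled x. A minimal sketch with one-dimensional Gaussian class-conditionals (an assumed toy setup, not the paper's model):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2

def semi_supervised_loglik(x_lab, y_lab, x_unl, mus, sigmas, priors):
    """Generative log-likelihood: labeled points use log p(x, y);
    unlabeled points contribute via the marginal p(x) = sum_y p(y) p(x|y).
    Illustrative 1-D Gaussian class-conditionals."""
    ll = 0.0
    for x, y in zip(x_lab, y_lab):
        ll += np.log(priors[y]) + gauss_logpdf(x, mus[y], sigmas[y])
    for x in x_unl:
        ll += np.log(sum(priors[k] * np.exp(gauss_logpdf(x, mus[k], sigmas[k]))
                         for k in range(len(priors))))
    return ll

x_lab = np.array([-1.1, 0.9])
y_lab = [0, 1]
x_unl = np.array([0.2, -0.5])          # unlabeled data still inform the fit
ll = semi_supervised_loglik(x_lab, y_lab, x_unl,
                            mus=[-1.0, 1.0], sigmas=[1.0, 1.0],
                            priors=[0.5, 0.5])
```

    Maximizing (or sampling from a posterior built on) this objective lets the unlabeled points shape the class-conditional parameters, which is exactly the leverage the generative approach provides.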

    Reconstruction, analysis and interpretation of posterior probability distributions of PET images, using the posterior bootstrap

    The uncertainty of reconstructed PET images remains difficult to assess and interpret for use in diagnostic and quantification tasks. Here we provide (1) an easy-to-use methodology for uncertainty assessment for almost any Bayesian model in PET reconstruction from single datasets and (2) a detailed analysis and interpretation of the produced posterior image distributions. We apply a recent posterior bootstrap framework to the PET image reconstruction inverse problem and obtain simple parallelizable algorithms based on random weights and on existing maximum a posteriori (MAP) optimization-based algorithms. Posterior distributions are produced, analyzed, and interpreted for several common Bayesian models. Their relationship with the distribution of the MAP image estimate over multiple dataset realizations is exposed. The coverage properties of the posterior distributions are validated. Further insight is obtained into the interpretation of posterior distributions, opening the way for including uncertainty information in diagnostic and quantification tasks.
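    The recipe described above, draw random weights on the data terms, solve a weighted MAP problem, repeat, is easy to illustrate on a toy problem where the weighted MAP has a closed form. Below, a single Poisson rate with a Gamma prior stands in for the full PET reconstruction; the exponential weights and all names are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def posterior_bootstrap_samples(y, n_samples, alpha=1.0, beta=1e-3, rng=None):
    """Posterior-bootstrap sketch for one Poisson rate: each sample is
    the MAP of a randomly reweighted log-posterior. In the paper the
    inner step is an existing MAP reconstruction algorithm; here the
    weighted MAP is available in closed form."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    samples = np.empty(n_samples)
    for s in range(n_samples):
        w = rng.exponential(1.0, n)                    # random data weights
        # MAP of weighted Poisson log-likelihood + Gamma(alpha, beta) prior
        samples[s] = (np.dot(w, y) + alpha - 1.0) / (w.sum() + beta)
    return samples

y = np.random.default_rng(2).poisson(5.0, 200)         # toy measured counts
samps = posterior_bootstrap_samples(y, 300)
```

    Each sample requires only an independent weighted MAP solve, which is why the resulting algorithms are trivially parallelizable.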

    Dynamic and clinical PET data reconstruction: A nonparametric Bayesian approach

    We propose a nonparametric Bayesian method for reconstructing dynamic Positron Emission Tomography (PET) images from clinical data. PET is a nuclear medicine imaging modality that uses molecules labeled with a positron-emitting radionuclide, making it possible to image molecular interactions of biological processes in vivo. Our approach is nonparametric in the sense that the image representing the 4D (3D+t) activity distribution is viewed as a probability density on R^3 × R^+ and inferred directly from the data, without any prior space or time discretization; we do not pre-assume any particular functional form for this space-time distribution. Formulating the nonparametric problem in the Bayesian framework allows the entire 4D distribution of the unknown to be characterized and gives direct access to the reconstruction error. The ability of the proposed model is assessed using data from clinical studies, and we evaluate its performance against the conventional independent time-frame reconstruction approach using the maximum-likelihood expectation-maximization (ML-EM) algorithm.
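    The ML-EM baseline mentioned above is the classic multiplicative update for Poisson inverse problems, x ← x / (Aᵀ1) · Aᵀ(y / (Ax)), where A is the system matrix and y the measured counts. A generic sketch on a random toy system (not the paper's nonparametric reconstruction):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classic ML-EM iteration for Poisson data:
        x <- x / (A^T 1) * A^T (y / (A x))
    Preserves nonnegativity at every iteration."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, (40, 10))             # toy system matrix
x_true = rng.uniform(1.0, 5.0, 10)
y = rng.poisson(A @ x_true)                     # Poisson-noisy measurements
x_hat = mlem(A, y)
```

    Running this frame by frame on time-binned data is the "independent time-frame" strategy the paper compares against; the proposed method instead infers the full space-time density jointly.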

    IGRT kV-CBCT dose calculations using Virtual Source Models and validated in phantoms using OSL

    Purpose or Objective: With the growing use of X-ray imaging equipment in Image-Guided RadioTherapy (IGRT), the need to evaluate the dose-to-organs delivered by kV-CBCT imaging acquisitions increases. This study aims to propose accurate Monte Carlo (MC) calculations of the patient dose-to-organs delivered by two commercially available kV-CBCT systems: the XVI from Elekta's VERSA HD accelerator and the OBI from Varian's TrueBeam system. Simulations are validated using in-phantom OSL measurements.

    Material and Methods: For both kV-CBCT systems, the kV irradiation head geometry was implemented in the MC simulation code Penelope. As a first step, the resulting photon distributions were expressed as Virtual Source Models (VSM) for every standard irradiation condition (kVp, filtration, collimation); these were then validated and adjusted using water-phantom measurements performed with a calibrated Farmer-type ionization chamber. In a second step, the validated VSMs were used to simulate the dose delivered by both the XVI and OBI systems in anthropomorphic phantoms, using standard clinical imaging protocols. Simulated doses-to-organs were then compared with dose measurements performed using OSLs inserted into the same phantoms, following a dosimetric protocol for OSLs previously established [1]. In addition, VSM results were compared with their direct MC counterparts in order to evaluate the benefit of the technique.

    Results: The current study highlights the possibility of reproducing OSL dose-to-organ measurements using VSM-driven Monte Carlo simulation with an overall agreement better than 20%. In addition, the use of a VSM in the MC simulation speeds up the calculation by a factor better than two (for the same statistical uncertainty) compared with direct MC simulation. Nevertheless, while direct and VSM calculations agree inside the irradiation field, outside the field VSM results tend to be significantly lower (10-30%).

    Conclusion: The use of a VSM was demonstrated to simplify and accelerate MC simulations for personalized kV-CBCT dose estimation. In addition, OSLs enable the low-dose measurements in the kV range needed for in-phantom dose QA of X-ray imaging equipment. This study is to be completed in the near future by the addition of other standard X-ray imaging equipment dedicated to IGRT.