16 research outputs found

    Compendium of methods for characterizing the geometry of brain tissue from water-diffusion magnetic resonance imaging.

    Get PDF
    221 p. FIDMAG Hermanas Hospitalarias Research Foundation; CIBERSAM: Centro de Investigación Biomédica en Red de Salud Mental.

    Contributions to MCMC Methods in Constrained Domains with Applications to Neuroimaging

    Full text link
    Markov chain Monte Carlo (MCMC) methods form a rich class of computational techniques that allow users to draw samples from target distributions when direct sampling is not possible or when closed forms are intractable. Over the years, MCMC methods have been used in innumerable situations because of their flexibility and generalizability, even for nonlinear and/or highly parametrized models. This dissertation presents two major works relating to MCMC methods. The first is the development of a method to identify the number and directions of nerve fibers from diffusion-weighted MRI measurements. The biological problem is first formulated as a joint model selection and estimation problem. Using the framework of reversible jump MCMC, a novel Bayesian scheme is proposed that performs both tasks simultaneously with customizable priors and proposal distributions. The method allows users to set a prior level of spatial separation between nerve fibers, so that more crossing paths can be detected when desired, or fewer so that only robust nerve tracts are retained. Estimation specific to a given region of interest within the brain can therefore be performed. In simulated examples, the method has been shown to resolve up to four fibers even in highly noisy data. Comparative analysis with other state-of-the-art methods on in-vivo data showed the method's ability to detect more crossing nerve fibers. The second work is the construction of an MCMC algorithm that efficiently performs Bayesian sampling of parameters with support constraints. The method works by embedding a transformation, called inversion in a sphere, within the Metropolis-Hastings sampler. This creates an image of the constrained support that is amenable to sampling with standard proposals such as the Gaussian. The proposed strategy is tested on three domains: the standard simplex, a sector of an n-sphere, and hypercubes. In each domain, a comparison is made with existing sampling techniques.
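    As a rough illustration of the general strategy described above (mapping a constrained support to an image that plain Gaussian proposals can explore), the Python sketch below runs a random-walk Metropolis-Hastings sampler on the standard simplex via an additive log-ratio reparameterization. The Dirichlet target, step size, and choice of transform are assumptions for illustration; the dissertation's inversion-in-a-sphere transformation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = np.array([2.0, 3.0, 4.0])          # Dirichlet parameters (illustrative)

def y_to_x(y):
    """Additive log-ratio inverse: map unconstrained y in R^(K-1) to the simplex."""
    z = np.concatenate([np.exp(y), [1.0]])
    return z / z.sum()

def log_target(y):
    """Dirichlet log-density in y-space, including the log-Jacobian of the transform."""
    x = y_to_x(y)
    return np.sum(alpha * np.log(x))       # (alpha_i - 1)*log x_i plus Jacobian term sum(log x_i)

def rw_metropolis(n_samples, step=0.5):
    y = np.zeros(alpha.size - 1)           # start at the centre of the simplex
    lp = log_target(y)
    samples = np.empty((n_samples, alpha.size))
    for t in range(n_samples):
        y_prop = y + step * rng.standard_normal(y.shape)   # plain Gaussian proposal
        lp_prop = log_target(y_prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
            y, lp = y_prop, lp_prop
        samples[t] = y_to_x(y)
    return samples

draws = rw_metropolis(20000)
print("posterior means:", draws[5000:].mean(axis=0))       # should approach alpha / alpha.sum()
```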

    Bayesian Processing for the Detection of Radioactive Contraband from Uncertain Measurements

    Get PDF
    With the increase in terrorist activities throughout the world, the ability to detect radioactive contraband in a timely manner is a critical requirement. The development of Bayesian processors for contraband detection stems from the fact that the posterior distribution is clearly multimodal, which rules out the usual Gaussian-based processors. The development of a sequential bootstrap processor for this problem is discussed, and the processor is shown to provide an enhanced signal for eventual detection.
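    For readers unfamiliar with sequential bootstrap (particle filter) processing, the sketch below implements a generic bootstrap filter on a toy counting problem. The state-space model, Poisson likelihood, and all parameter values are assumptions chosen for illustration and are not taken from the paper above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative state-space model (not the model of the paper above):
#   latent log-intensity:  x_t = x_{t-1} + N(0, q^2)
#   observed counts:       y_t ~ Poisson(exp(x_t) + background)
q, background, T, n_particles = 0.05, 5.0, 100, 2000

# Simulate synthetic data
x_true = np.cumsum(q * rng.standard_normal(T)) + np.log(3.0)
y = rng.poisson(np.exp(x_true) + background)

def log_poisson(y_t, rate):
    """Poisson log-likelihood up to an additive constant in y_t."""
    return y_t * np.log(rate) - rate

# Bootstrap (sequential importance resampling) filter
particles = np.log(3.0) + 0.5 * rng.standard_normal(n_particles)
x_filtered = np.empty(T)
for t in range(T):
    particles += q * rng.standard_normal(n_particles)         # propagate through the prior
    logw = log_poisson(y[t], np.exp(particles) + background)  # weight by the likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)           # multinomial resampling
    particles = particles[idx]
    x_filtered[t] = particles.mean()

print("final filtered source intensity:", np.exp(x_filtered[-1]))
```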

    Application of the Monte Carlo method to analyze materials used in flat panel detectors to obtain X-ray spectra

    Full text link
    Accurate knowledge of the photon spectra emitted by X-ray tubes in radiodiagnostics is essential to better estimate the dose imparted to patients and to improve the image quality obtained with these devices. In this work, the use of a flat panel detector together with a PMMA wedge is proposed to estimate the actual X-ray spectrum using the Monte Carlo method and unfolding techniques. The MCNP5 code has been used to model different flat panels (based on indirect and direct methods of producing charge carriers from absorbed X-rays) and to obtain the dose curves and system response functions. Most current flat panel devices use scintillator materials that present K-edge discontinuities in the mass energy-absorption coefficient, which strongly affect the response matrix. In this paper, the applicability of different flat panels for reconstructing X-ray spectra is studied. The effect of the mass energy-absorption coefficient of the scintillator material on the response matrix, and consequently on the reconstructed spectra, has been studied. Different unfolding methods are tested to reconstruct the actual X-ray spectrum knowing the dose curve and the response function. It is concluded that the Modified Truncated Singular Value Decomposition (MTSVD) regularization method is appropriate for unfolding X-ray spectra in all the scintillators studied.
    Gallardo Bermell, S.; Pozuelo, F.; Querol Vives, A.; Ródenas Diago, J.; Verdú Martín, G.J. (2015). Application of the Monte Carlo method to analyze materials used in flat panel detectors to obtain X-ray spectra. Annals of Nuclear Energy, 82, 240-251. doi:10.1016/j.anucene.2014.08.065
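    The following sketch shows the basic idea of SVD-based spectrum unfolding from a dose curve and a response matrix. It uses plain truncated SVD rather than the Modified TSVD of the paper, and the response model, energy grid, and wedge geometry are toy assumptions, not the MCNP5-derived quantities described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy response matrix R (rows: measured dose points, cols: energy bins) -- purely
# illustrative, not the MCNP5-derived response of the paper above.
n_meas, n_bins = 40, 60
E = np.linspace(10, 150, n_bins)                      # energy grid in keV (assumed)
depth = np.linspace(0, 20, n_meas)[:, None]           # PMMA wedge thickness in cm (assumed)
R = np.exp(-depth * 5.0 / E[None, :])                 # crude exponential attenuation model

# Synthetic "true" spectrum and noisy dose curve
s_true = np.exp(-0.5 * ((E - 60.0) / 15.0) ** 2)
d = R @ s_true
d_noisy = d + 0.01 * d.max() * rng.standard_normal(n_meas)

def tsvd_unfold(R, d, k):
    """Truncated SVD solution of R s = d, keeping the k largest singular values."""
    U, sig, Vt = np.linalg.svd(R, full_matrices=False)
    coeff = (U.T @ d)[:k] / sig[:k]
    return Vt[:k].T @ coeff

s_rec = tsvd_unfold(R, d_noisy, k=8)                  # truncation level chosen by hand here
print("relative reconstruction error:",
      np.linalg.norm(s_rec - s_true) / np.linalg.norm(s_true))
```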

    A Bayesian approach to understanding multiple mineral phase age spectra and a reassessment of the 40K decay modes

    Get PDF
    The application of 40Ar/39Ar geochronology is extensive: geological ages have been determined for events spanning from the early Solar System to recent historical volcanic eruptions. The ubiquity of the parent isotope 40K means the method is also applicable across all of the terrestrial planets and their satellites. However, the technique has limitations. The aim of this thesis is to address two of them: (1) is it possible to obtain meaningful ages from multiple-grain assemblages that contain multiple ages and phases? and (2) does the 40K decay constant need revisiting? To address the first question, I develop a Bayesian modelling framework that fits both the age spectrum and the cumulative release of 39Ar and infers the parameters that best describe the data (e.g., the age, diffusion kinetics, and mixture weights). Chapter 2 sets out this modelling framework and shows that, for simple mixtures with highly informative priors, it is possible to infer geologically meaningful information. Nevertheless, this approach makes a number of limiting assumptions, which are addressed in Chapter 3. Chapter 3 extends the method to a non-parametric framework in which the number of distinct age components is also treated as an unknown. By modelling a range of mixtures, I show that it is possible to obtain geologically meaningful ages with the limited prior information available in nature. The final part of this research question involves establishing the sources of the glacial flood sediments from Moses Coulee and the Ephrata fan. These sediments are linked to the catastrophic flooding of fresh water across the Columbia Basin, down the Columbia River, and out to the Pacific Ocean during the last ice age. During the period from ~22,000 to ~13,000 years ago, there were potentially hundreds of such flooding events in the region. I use the non-parametric Bayesian model to estimate the component ages of both sediments and link them back to known source lithologies. The determination of Mesoproterozoic age components in both samples requires that glacial Lake Missoula is a source of both samples. These data and model output may also suggest that the Moses Coulee formed before the Grand Coulee, or that the glacial megafloods in the Pacific Northwest were larger than previously estimated. The second question addresses the 40K decay constant. In this section I construct a theoretical argument based on energy conservation to motivate the inclusion of an electron capture (EC) to ground state decay mode. This decay mode has been ignored in previous studies because of a lack of experimental verification. I use the theory of β decay to estimate the strength of this decay mode and show that, with it included, K-Ar ages (≤1 Ga) may shift by up to 2% toward younger values than previously estimated.
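    As a back-of-the-envelope illustration of how an additional EC-to-ground-state branch would shift K-Ar ages, the Python sketch below evaluates the standard K-Ar age equation with and without a hypothetical extra branch. The conventional decay constants are the Steiger and Jäger (1977) values; the 0.2% branching fraction is purely illustrative and is not the value derived in the thesis.

```python
import numpy as np

# Conventional 40K decay constants (per year), Steiger & Jaeger (1977) values:
lam_ec   = 0.581e-10      # electron capture to 40Ar (via the excited state)
lam_beta = 4.962e-10      # beta-minus decay to 40Ca
lam_tot  = lam_ec + lam_beta

def k_ar_age(ar40_k40, lam_ar, lam_total):
    """Standard K-Ar age equation for a measured radiogenic 40Ar*/40K ratio."""
    return np.log(1.0 + (lam_total / lam_ar) * ar40_k40) / lam_total

# Hypothetical extra EC-to-ground-state branch; the 0.2% branching fraction is
# purely illustrative, not a value taken from the thesis above.
extra_frac = 0.002
lam_extra  = extra_frac * lam_tot
lam_ar_new, lam_tot_new = lam_ec + lam_extra, lam_tot + lam_extra

# 40Ar*/40K ratio corresponding to a nominal 1 Ga age under the conventional constants
t_nominal = 1.0e9
ratio = (lam_ec / lam_tot) * (np.exp(lam_tot * t_nominal) - 1.0)

t_revised = k_ar_age(ratio, lam_ar_new, lam_tot_new)
print(f"age shift: {100.0 * (t_revised - t_nominal) / t_nominal:+.2f} %")
```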

    Uncertainty Quantification in Emission Quantitative Imaging

    Full text link
    Imaging detectors have the potential to improve the reliability of plutonium holdup measurements. Holdup measurement is a significant challenge for nuclear safeguards and criticality safety. To infer holdup mass today, inspectors must combine data from counting (non-imaging) detectors with spatial measurements, process knowledge, and survey estimates. This process results in limited certainty about the holdup mass. Imaging detectors provide more information about the spatial distribution of the source, increasing certainty. In this dissertation we focus on the emission quantitative imaging problem using a fast-neutron coded aperture detector. We seek a reliable way to infer the total intensity of a neutron source with an unknown spatial distribution. The source intensity can be combined with other measurements to infer the holdup mass. To do this we first create and validate a model of the imager. This model solves the forward problem of estimating data given a known source distribution. We use cross-validation to show that the model reliably predicts new measurements (with predictable residuals). We then demonstrate a non-Bayesian approach to process new imager data. The approach solves the inverse problem of inferring source intensity, given various sources of information (imager data, physical constraints) and uncertainty (measurement noise, modeling error, absence of information, etc.). Bayesian approaches are also considered, but preliminary findings indicate the need for advanced Markov chain algorithms beyond the scope of this dissertation. The non-Bayesian results reliably provide confidence intervals for medium-scale problems, as demonstrated using a blind-inspector measurement. However, the confidence interval is quite large, due chiefly to modeling error.
    PhD dissertation, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136929/1/ambevill_1.pd
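    As a minimal sketch of the kind of constrained inverse problem described above, the following example infers a total source intensity from a linear forward model using non-negative least squares, with a crude bootstrap interval. The forward matrix, source configuration, noise model, and interval method are invented for illustration and do not represent the validated imager model or the dissertation's uncertainty analysis.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Toy forward model: d = A s, where A maps source-voxel intensities to detector
# counts.  A here is random and purely illustrative -- it stands in for the
# validated coded-aperture imager model discussed above.
n_det, n_vox = 120, 40
A = rng.uniform(0.0, 1.0, size=(n_det, n_vox))
s_true = np.zeros(n_vox)
s_true[[5, 17, 30]] = [4.0, 2.0, 1.0]                 # a few hot voxels (assumed)
d = rng.poisson(A @ s_true * 50.0) / 50.0             # noisy measurement

# Constrained (non-negative) least-squares inversion
s_hat, _ = nnls(A, d)
print("estimated total intensity:", s_hat.sum(), " true:", s_true.sum())

# Crude bootstrap interval on the total intensity (resampling detector pixels);
# a generic stand-in, not the uncertainty analysis of the dissertation.
totals = []
for _ in range(200):
    idx = rng.choice(n_det, n_det, replace=True)
    totals.append(nnls(A[idx], d[idx])[0].sum())
print("approx. 95% interval:", np.percentile(totals, [2.5, 97.5]))
```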

    Efficient stochastic sampling with a Bernoulli mixture-of-Gaussians model for solving sparse inverse problems

    Get PDF
    This thesis deals with sparse inverse problems in which the observed data can be considered a linear combination of a small number of elements called "atoms" (e.g., pulses, shifted instrumental responses, or sinusoids). These problems are encountered in various domains, ranging from ultrasonic non-destructive testing to spectroscopy and spectral analysis. In the Bayesian framework, they can be addressed by placing prior models on the parameters of interest that account for sparsity explicitly through the introduction of binary variables (e.g., the Bernoulli-Gaussian model). The parameters are then estimated by computing the posterior mean estimator from samples generated by Markov chain Monte Carlo (MCMC) methods. The major advantage of MCMC methods in the Bayesian framework, compared to deterministic optimization approaches, is that the estimation of the model hyper-parameters (e.g., the variance of the observation noise) can be integrated without much difficulty, as can semi-blind or blind settings, i.e., cases where the atoms are partially or totally unknown. However, MCMC methods are generally computationally expensive and must be handled carefully to ensure convergence. Efficient sampling approaches based on the Partially Collapsed Gibbs Sampler (PCGS) have been developed for the Bernoulli-Gaussian model. However, they cannot be used with other sparsity-enforcing priors, such as long-tailed distributions (e.g., Bernoulli-Laplace), which are preferable to the Gaussian because they induce less regularization, or distributions supported on a bounded interval (e.g., Bernoulli-Exponential), which guarantee a non-negativity constraint. One is then restricted to the computationally expensive classical MCMC methods. The objective of this thesis is to reconcile PCGS sampling with models other than the Bernoulli-Gaussian that take sparsity into account explicitly. The main contribution is the introduction and study of a new prior model called the "Bernoulli Mixture of Gaussians" (BMG). Based on continuous Gaussian mixtures, it improves the convergence properties of MCMC methods thanks to an efficient numerical implementation of PCGS algorithms. Moreover, the model is presented in a general framework that accommodates, in a systematic way, a rich family of probability distributions. More precisely, the BMG relies on the LSMG (Location and Scale Mixture of Gaussians) family, which is little studied in the literature and which we characterize more precisely. The second major contribution extends the field of application of the BMG model to probability distributions supported on a bounded interval. To this end, we propose a new approach to approximating probability distributions, called "asymptotically Exact Location-Scale Approximations" (ELSA), for which we show good behavior, both in theory and in practice, and empirically validate its efficiency compared to state-of-the-art approaches. Finally, the efficiency of the BMG model, its PCGS sampler, and the ELSA approximations is studied and validated in the context of sparse inverse problems on an example of spike-train deconvolution.
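    To make the Bernoulli-Gaussian baseline concrete, the sketch below implements a plain single-site Gibbs sampler for spike-train deconvolution: each spike indicator is drawn from its marginal posterior (amplitude integrated out), then the amplitude is drawn conditionally. It is not the partially collapsed (PCGS) sampler or the BMG/ELSA machinery introduced in the thesis, and all model sizes and hyper-parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy deconvolution: y = H x + noise, sparse spike train x, Gaussian-shaped
# impulse response h.  Sizes and hyper-parameters are illustrative.
n, lam, sig_x, sig_n = 120, 0.05, 1.0, 0.05
h = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
H = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T

x_true = rng.binomial(1, lam, n) * rng.normal(0.0, sig_x, n)
y = H @ x_true + sig_n * rng.standard_normal(n)

x = np.zeros(n)
hh = np.sum(H ** 2, axis=0)                                  # ||h_i||^2 for every column
for sweep in range(100):                                     # single-site Gibbs sweeps
    for i in range(n):
        r = y - H @ x + H[:, i] * x[i]                       # residual with column i removed
        s1 = 1.0 / (hh[i] / sig_n ** 2 + 1.0 / sig_x ** 2)   # amplitude variance if spike present
        m1 = s1 * (H[:, i] @ r) / sig_n ** 2                 # amplitude mean if spike present
        log_odds = (np.log(lam / (1.0 - lam))
                    + 0.5 * np.log(s1 / sig_x ** 2)
                    + 0.5 * m1 ** 2 / s1)                    # marginal posterior odds of a spike
        if rng.uniform() < 1.0 / (1.0 + np.exp(-log_odds)):
            x[i] = m1 + np.sqrt(s1) * rng.standard_normal()  # draw amplitude given a spike
        else:
            x[i] = 0.0

print("true spike locations:     ", np.flatnonzero(x_true))
print("estimated spike locations:", np.flatnonzero(x))
```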

    Background-Source separation in astronomical images with Bayesian Probability Theory

    Get PDF