
    Bayesian Estimation of White Matter Atlas from High Angular Resolution Diffusion Imaging

    We present a Bayesian probabilistic model to estimate the brain white matter atlas from high angular resolution diffusion imaging (HARDI) data. The model incorporates a shape prior on the white matter anatomy and the likelihood of individual observed HARDI datasets. We first assume that the atlas is generated from a known hyperatlas through a flow of diffeomorphisms and that its shape prior can be constructed within the framework of large deformation diffeomorphic metric mapping (LDDMM). LDDMM characterizes a nonlinear diffeomorphic shape space by a linear space of initial momenta that uniquely determine diffeomorphic geodesic flows from the hyperatlas. The shape prior of the HARDI atlas can therefore be modeled using a centered Gaussian random field (GRF) model of the initial momentum. To construct the likelihood of observed HARDI datasets, it is necessary to study the diffeomorphic transformation of individual observations relative to the atlas and the probabilistic distribution of orientation distribution functions (ODFs). To this end, we construct the likelihood related to the transformation using the same construction as for the shape prior of the atlas. The probabilistic distribution of ODFs is then constructed on the ODF Riemannian manifold: we assume that the observed ODFs are generated by an exponential map of random tangent vectors at the deformed atlas ODF, so the likelihood of the ODFs can be modeled using a GRF of their tangent vectors in the ODF Riemannian manifold. We solve for the maximum a posteriori estimate using the Expectation-Maximization (EM) algorithm and derive the corresponding update equations. Finally, we illustrate the HARDI atlas constructed from a Chinese aging cohort of 94 adults and compare it with an atlas generated by averaging the spherical harmonic coefficients of the ODFs across subjects.
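
    As a schematic of the model structure described above (the notation here is assumed for illustration and may differ from the paper's own symbols), the construction can be summarized as:

        % Shape prior: centered GRF on the initial momentum m_0 that drives
        % the geodesic flow from the hyperatlas (notation assumed).
        p(m_0) \propto \exp\!\left( -\tfrac{1}{2} \langle m_0, \Sigma_0^{-1} m_0 \rangle \right)

        % Observed ODFs J_i as exponential maps of random tangent vectors v_i
        % at the ODF of the atlas A deformed by subject transformation \phi_i:
        J_i = \mathrm{Exp}_{\phi_i \cdot A}(v_i), \qquad v_i \sim \mathrm{GRF}(0, \Sigma_v)

        % MAP estimation of the atlas, solved with the EM algorithm:
        \hat{A} = \arg\max_{A} \; p(A) \prod_{i=1}^{n} p(J_i \mid A)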

    Phenomenological model of diffuse global and regional atrophy using finite-element methods

    The main goal of this work is the generation of ground-truth data for the validation of atrophy measurement techniques, commonly used in the study of neurodegenerative diseases such as dementia. Several techniques have been used to measure atrophy in cross-sectional and longitudinal studies, but it is extremely difficult to compare their performance since they have been applied to different patient populations. Furthermore, assessment of performance based on phantom measurements or simple scaled images overestimates these techniques' ability to capture the complexity of neurodegeneration of the human brain. We propose a method for atrophy simulation in structural magnetic resonance (MR) images based on finite-element methods. The method produces cohorts of brain images with known change that is physically and clinically plausible, providing data for objective evaluation of atrophy measurement techniques. Atrophy is simulated in different tissue compartments or in different neuroanatomical structures with a phenomenological model. This model of diffuse global and regional atrophy is based on volumetric measurements of structures such as the brain or the hippocampus from patients with known disease, and is guided by clinical knowledge of the relative pathological involvement of regions and tissues. The consequent biomechanical readjustment of structures is modelled using conventional physics-based techniques based on biomechanical tissue properties, simulating plausible tissue deformations with finite-element methods. A thermoelastic model of tissue deformation is employed, controlling the rate of progression of atrophy by means of a set of thermal coefficients, each one corresponding to a different type of tissue. Tissue characterization is performed by meshing a labelled brain atlas, creating a reference volumetric mesh that is passed to a finite-element solver to create the simulated deformations. Preliminary work on the simulation of acquisition artefacts is also presented.
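
    To make the thermoelastic idea concrete, the sketch below solves a 1D finite-element analogue in which a per-tissue "thermal" coefficient prescribes a contraction eigenstrain and the solver returns the consequent readjustment. All labels, coefficient values and boundary conditions are invented for illustration; the paper works on a meshed 3D labelled brain atlas.

        import numpy as np

        # Hypothetical 1D analogue of the thermoelastic atrophy model: a bar
        # of N linear elements, each labelled with a tissue type. A per-tissue
        # "thermal" coefficient prescribes a contraction eigenstrain; the FEM
        # solve yields the consequent displacement field.
        N, L = 100, 1.0
        h = L / N                                 # element size
        E = np.full(N, 1.0)                       # stiffness per element

        labels = np.zeros(N, dtype=int)           # 0=CSF, 1=grey, 2=white (toy)
        labels[20:50], labels[50:90] = 1, 2
        alpha = np.array([0.0, 0.05, 0.02])       # assumed volume-loss fractions
        eps0 = -alpha[labels]                     # prescribed contraction

        # Assemble the stiffness matrix K and the eigenstrain load vector f.
        K = np.zeros((N + 1, N + 1))
        f = np.zeros(N + 1)
        for e in range(N):
            K[e:e+2, e:e+2] += E[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
            f[e:e+2] += E[e] * eps0[e] * np.array([-1.0, 1.0])

        # Fix both ends (the skull acts as a rigid boundary) and solve.
        free = np.arange(1, N)
        u = np.zeros(N + 1)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

        strain = np.diff(u) / h                   # realized local volume change
        for t, name in enumerate(("CSF", "grey", "white")):
            print(name, "mean strain:", strain[labels == t].mean())

    With both ends held fixed, the contracting tissue regions shrink while the CSF region expands to fill the vacated space, loosely mirroring the compensatory CSF expansion the full 3D model produces.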

    The BSM-AI project: SUSY-AI - Generalizing LHC limits on Supersymmetry with Machine Learning

    A key research question at the Large Hadron Collider (LHC) is the test of models of new physics. Testing whether a particular parameter set of such a model is excluded by LHC data is a challenge: it requires the time-consuming generation of scattering events, the simulation of the detector response, the event reconstruction, cross section calculations, and analysis code to test against several hundred signal regions defined by the ATLAS and CMS experiments. In the BSM-AI project we attack this challenge with a new approach. Machine learning tools are taught to predict, within a fraction of a millisecond, whether a model is excluded or not, directly from the model parameters. A first example is SUSY-AI, trained on the phenomenological supersymmetric standard model (pMSSM). About 300,000 pMSSM model sets - each tested against 200 signal regions by ATLAS - have been used to train and validate SUSY-AI. The code is currently able to reproduce the ATLAS exclusion regions in 19 dimensions with an accuracy of at least 93 percent. It has been validated further within the constrained MSSM and a minimal natural supersymmetric model, again showing high accuracy. SUSY-AI and its future BSM derivatives will help to solve the problem of recasting LHC results for any model of new physics. SUSY-AI can be downloaded at http://susyai.hepforge.org/. An on-line interface to the program for quick testing purposes can be found at http://www.susy-ai.org/.
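
    A minimal sketch of the underlying idea, i.e. learning an exclusion boundary directly from model parameters: the data here are synthetic, and SUSY-AI's actual classifier, features and training set are not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the ~300,000 ATLAS-tested pMSSM points:
        # 19 scaled parameters per point and a fake excluded/allowed label.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(10000, 19))
        y = np.linalg.norm(X[:, :3], axis=1) < 0.8    # invented decision rule

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)

        # Once trained, classifying a new parameter point takes a fraction of
        # a millisecond, versus hours of event generation, detector simulation
        # and reconstruction per point.
        print("held-out accuracy:", clf.score(X_te, y_te))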

    Constraining Supersymmetry using the relic density and the Higgs boson

    Recent measurements by Planck, the LHC experiments, and XENON100 have a significant impact on supersymmetric models and their parameters. We first illustrate the constraints in the mSUGRA plane and then perform a detailed analysis of the general MSSM with 13 free parameters. Using SFitter, Bayesian and profile likelihood approaches are applied and their results compared. The allowed structures in the parameter spaces are largely defined by the different mechanisms of dark matter annihilation in combination with the light Higgs mass prediction. In mSUGRA, the pseudoscalar Higgs funnel and stau co-annihilation processes still evade experimental pressure. In the MSSM, stau co-annihilation, the light Higgs funnel, a mixed bino-higgsino region including the heavy Higgs funnel, and a large higgsino region predict the correct relic density. Volume effects and changes in the model parameters affect the extracted mSUGRA and MSSM parameter regions in the Bayesian analysis.
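
    The difference between the two statistical treatments, including the volume effects mentioned above, can be illustrated with a toy two-parameter model; the Gaussian "measurements" and predictions below are placeholders, not the actual SFitter inputs.

        import numpy as np

        # Toy model: a parameter of interest theta and one nuisance direction
        # nu, with invented predictions for the relic density and Higgs mass.
        theta = np.linspace(0.0, 10.0, 200)
        nu = np.linspace(0.0, 10.0, 200)
        T, V = np.meshgrid(theta, nu, indexing="ij")
        omega_pred = 0.02 * T + 0.01 * V          # fake relic density
        mh_pred = 120.0 + 0.8 * T - 0.3 * V       # fake light Higgs mass (GeV)

        chi2 = (((omega_pred - 0.12) / 0.01) ** 2
                + ((mh_pred - 125.1) / 1.5) ** 2)
        like = np.exp(-0.5 * chi2)

        profile = like.max(axis=1)     # profile likelihood: maximize over nu
        marginal = like.sum(axis=1)    # Bayesian: marginalize with a flat prior

        # Volume effects: the marginal peak shifts relative to the profile
        # peak when the allowed nuisance volume varies with theta.
        print("profile peak at theta =", theta[profile.argmax()])
        print("marginal peak at theta =", theta[marginal.argmax()])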

    Lessons and Prospects from the pMSSM after LHC Run I: Neutralino LSP

    We study SUSY signatures at the 7, 8 and 14 TeV LHC employing the 19-parameter, R-parity conserving p(henomenological)MSSM, in the scenario with a neutralino LSP. Our results were obtained via a fast Monte Carlo simulation of the ATLAS SUSY analysis suite. The flexibility of this framework allows us to study a wide variety of SUSY phenomena simultaneously and to probe for weak spots in existing SUSY search analyses. We determine the ranges of the sparticle masses that are either disfavored or allowed after the searches with the 7 and 8 TeV data sets are combined. We find that natural SUSY models with light squarks and gluinos remain viable. We extrapolate to 14 TeV with both 300 fb^-1 and 3 ab^-1 of integrated luminosity and determine the expected sensitivity of the jets + MET and stop searches to the pMSSM parameter space. We find that the high-luminosity LHC will be powerful in probing SUSY with neutralino LSPs and can provide a more definitive statement on the existence of natural supersymmetry.
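
    The luminosity extrapolation rests on a standard back-of-envelope scaling: with fixed signal and background cross sections, the expected significance of a counting search grows like the square root of the integrated luminosity. The cross-section values in the sketch below are invented placeholders, not the paper's analysis inputs.

        import math

        def significance(sigma_s_fb, sigma_b_fb, lumi_fb):
            """Simple s/sqrt(b) significance for a counting experiment."""
            s = sigma_s_fb * lumi_fb          # expected signal events
            b = sigma_b_fb * lumi_fb          # expected background events
            return s / math.sqrt(b)

        # Benchmarks used in the extrapolation: 300 fb^-1 and 3 ab^-1.
        for lumi in (300.0, 3000.0):
            z = significance(0.5, 100.0, lumi)    # placeholder cross sections
            print(f"{lumi:.0f} fb^-1 -> {z:.2f} sigma")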

    Investigating Multiple Solutions in the Constrained Minimal Supersymmetric Standard Model

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived for the CMSSM from collider experiments and cosmological observations, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues that can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions fall into two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
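
    The instability argument can be seen in one dimension: fixed point iteration on x = g(x) only converges where |g'(x)| < 1, so solutions near which the iteration is unstable are never found, while a direct root search recovers all of them. The map g below is invented for illustration and is only a loose analogue of the boundary-value problem the shooting method solves.

        import numpy as np
        from scipy.optimize import brentq

        def g(x):
            return 2.5 * x * (1.0 - x)    # toy map with fixed points 0 and 0.6

        # Fixed point iteration converges to the stable solution x = 0.6
        # (|g'| < 1 there) no matter how close we start to the unstable one.
        x = 0.05
        for _ in range(200):
            x = g(x)
        print("fixed point iteration finds:", round(x, 4))

        # A bracketing root search on f(x) = g(x) - x finds both solutions,
        # including the unstable fixed point at x = 0.
        f = lambda t: g(t) - t
        grid = np.linspace(-0.5, 1.5, 200)
        roots = [brentq(f, a, b) for a, b in zip(grid[:-1], grid[1:])
                 if f(a) * f(b) < 0]
        print("root search finds:", [round(r, 4) for r in roots])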
