
    Semiparametric Bayesian models for human brain mapping

    Functional magnetic resonance imaging (fMRI) has led to enormous progress in human brain mapping. Adequate analysis of the massive spatiotemporal data sets generated by this imaging technique poses challenging problems in statistical modelling. Complex hierarchical Bayesian models that combine parametric and non-parametric components, together with computer-intensive Markov chain Monte Carlo inference, are promising tools. The purpose of this paper is twofold. First, it provides a review of general semiparametric Bayesian models for the analysis of fMRI data. Most approaches focus on important but separate temporal or spatial aspects of the overall problem, or they proceed by stepwise procedures. Therefore, as a second aim, we suggest a complete spatiotemporal model for analysing fMRI data within a unified semiparametric Bayesian framework. An application to data from a visual stimulation experiment illustrates our approach and demonstrates its computational feasibility.
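
    The abstract describes hierarchical Bayesian modelling of fMRI time series with MCMC inference. The sketch below is not the paper's spatiotemporal model; it is a minimal, assumed single-voxel Gibbs sampler for a Bayesian linear model with a toy stimulus regressor, conjugate Normal prior on the regression coefficients, and an Inverse-Gamma prior on the noise variance. All design choices (regressor, prior scales, iteration counts) are illustrative.

    # Minimal sketch (not the paper's full spatiotemporal model): Gibbs sampling
    # for a single-voxel Bayesian linear model y = X @ beta + noise.
    import numpy as np

    rng = np.random.default_rng(0)

    T = 200                                   # number of fMRI time points
    X = np.column_stack([np.ones(T),          # baseline
                         np.sin(2 * np.pi * np.arange(T) / 20)])  # toy stimulus regressor
    beta_true = np.array([10.0, 1.5])
    y = X @ beta_true + rng.normal(scale=2.0, size=T)

    tau2 = 100.0                              # prior variance of beta (assumed)
    a0, b0 = 2.0, 2.0                         # Inverse-Gamma hyperparameters (assumed)

    sigma2 = 1.0
    draws = []
    for it in range(2000):
        # beta | sigma2, y  ~  Normal(mu, Sigma)
        Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
        mu = Sigma @ (X.T @ y) / sigma2
        beta = rng.multivariate_normal(mu, Sigma)
        # sigma2 | beta, y  ~  Inverse-Gamma(a0 + T/2, b0 + RSS/2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + T / 2, 1.0 / (b0 + resid @ resid / 2))
        if it >= 500:                         # discard burn-in
            draws.append(beta)

    print("posterior mean of beta:", np.mean(draws, axis=0))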

    Data augmentation in Rician noise model and Bayesian Diffusion Tensor Imaging

    Mapping white matter tracts is an essential step towards understanding brain function. Diffusion Magnetic Resonance Imaging (dMRI) is the only noninvasive technique which can detect in vivo anisotropies in the 3-dimensional diffusion of water molecules, which correspond to nerve fibers in the living brain. In this process, spectral data from the displacement distribution of water molecules is collected by a magnetic resonance scanner. From the statistical point of view, inverting the Fourier transform from such sparse and noisy spectral measurements leads to a non-linear regression problem. Diffusion tensor imaging (DTI) is the simplest modeling approach, postulating a Gaussian displacement distribution at each volume element (voxel). Typically the inference is based on a linearized log-normal regression model that can fit the spectral data at low frequencies. However, such an approximation fails to fit the high-frequency measurements, which contain information about the details of the displacement distribution but have a low signal-to-noise ratio. In this paper, we work directly with the Rice noise model and cover the full range of b-values. Using data augmentation to represent the likelihood, we reduce the non-linear regression problem to the framework of generalized linear models. Then we construct a Bayesian hierarchical model in order to perform estimation and regularization of the tensor field simultaneously. Finally, the Bayesian paradigm is implemented using Markov chain Monte Carlo. Comment: 37 pages, 3 figures
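
    To make the Rician-versus-log-normal contrast concrete, here is a hedged sketch that evaluates a Rice log-likelihood for a Gaussian (tensor) signal model and compares it with the weighted log-linearised Gaussian approximation used by standard DTI fits. The tensor, b-values, gradient directions, and noise level are illustrative, not taken from the paper, and this is not the paper's data-augmentation scheme.

    import numpy as np
    from scipy.special import i0e   # exponentially scaled Bessel I0, numerically stable

    def dti_signal(S0, D, bvals, bvecs):
        """Gaussian (tensor) model: S = S0 * exp(-b * g^T D g)."""
        quad = np.einsum('ij,jk,ik->i', bvecs, D, bvecs)
        return S0 * np.exp(-bvals * quad)

    def rician_loglik(m, A, sigma):
        """log p(m | A, sigma) under the Rice distribution."""
        z = m * A / sigma**2
        # log I0(z) = log(i0e(z)) + z avoids overflow for large z
        return (np.log(m / sigma**2) - (m**2 + A**2) / (2 * sigma**2)
                + np.log(i0e(z)) + z)

    rng = np.random.default_rng(1)
    D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # toy tensor (mm^2/s)
    bvals = np.full(30, 1000.0)                     # s/mm^2
    bvecs = rng.normal(size=(30, 3))
    bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)

    S0, sigma = 100.0, 5.0
    A = dti_signal(S0, D, bvals, bvecs)
    # Rician-distributed magnitudes: |signal + complex Gaussian noise|
    noisy = np.abs(A + rng.normal(scale=sigma, size=30)
                   + 1j * rng.normal(scale=sigma, size=30))

    print("Rician log-likelihood:", rician_loglik(noisy, A, sigma).sum())
    # Weighted least-squares kernel of the log-linearised fit (Var(log S) ~ sigma^2 / A^2):
    print("log-linearised approx.:", -0.5 * np.sum(((np.log(noisy) - np.log(A)) * A / sigma)**2))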

    Impact on the tensor-to-scalar ratio of incorrect Galactic foreground modelling

    A key goal of many Cosmic Microwave Background experiments is the detection of gravitational waves, through their B-mode polarization signal at large scales. To extract such a signal requires modelling contamination from the Galaxy. Using the Planck experiment as an example, we investigate the impact of incorrectly modelling foregrounds on estimates of the polarized CMB, quantified by the bias in the tensor-to-scalar ratio r and the optical depth tau. We use a Bayesian parameter estimation method to estimate the CMB, synchrotron, and thermal dust components from simulated observations spanning 30-353 GHz, starting from a model that fits the simulated data, returning r < 0.03 at 95% confidence for an r = 0 model, and r = 0.09 +/- 0.03 for an r = 0.1 model. We then introduce a set of mismatches between the simulated data and the assumed model. Including a curvature of the synchrotron spectral index with frequency, but assuming a power-law model, can bias r high by ~1 sigma (delta r ~ 0.03). A similar bias is seen for thermal dust with a modified black-body frequency dependence that is incorrectly modelled as a power law. If too much freedom is allowed in the model, for example fitting for spectral indices in 3-degree pixels over the sky with physically reasonable priors, we find r can be biased high by up to ~3 sigma, by effectively setting the indices to the wrong values. Increasing the signal-to-noise ratio by reducing the number of parameters, or adding additional foreground data, reduces the bias. We also find that neglecting a 1% polarized free-free or spinning dust component has a negligible effect on r. These tests highlight the importance of modelling the foregrounds in a way that allows for sufficient complexity, while minimizing the number of free parameters. Comment: 11 pages, 7 figures, submitted to MNRAS
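
    The mismatches discussed above come down to the assumed frequency scalings of the foregrounds. The sketch below, under assumed illustrative parameter values, writes down a synchrotron power law with and without spectral curvature and a modified-blackbody dust spectrum (in Rayleigh-Jeans brightness-temperature units); it is a generic illustration of these parametric forms, not the paper's component-separation pipeline.

    import numpy as np

    h, k_B = 6.62607015e-34, 1.380649e-23      # SI Planck and Boltzmann constants

    def sync_powerlaw(nu, nu0, beta_s):
        # plain power law in frequency
        return (nu / nu0) ** beta_s

    def sync_curved(nu, nu0, beta_s, c_s):
        # curvature term: effective index runs logarithmically with frequency
        return (nu / nu0) ** (beta_s + c_s * np.log(nu / nu0))

    def dust_mbb(nu, nu0, beta_d, T_d):
        # modified blackbody in Rayleigh-Jeans brightness-temperature units
        x, x0 = h * nu / (k_B * T_d), h * nu0 / (k_B * T_d)
        return (nu / nu0) ** (beta_d + 1) * (np.expm1(x0) / np.expm1(x))

    # illustrative frequencies within the 30-353 GHz range described above
    nu = np.array([30e9, 70e9, 143e9, 353e9])
    print(sync_powerlaw(nu, 30e9, -3.0))
    print(sync_curved(nu, 30e9, -3.0, -0.3))    # the curvature a power-law fit would miss
    print(dust_mbb(nu, 353e9, 1.6, 19.0))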

    Bayesian uncertainty quantification in linear models for diffusion MRI

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, with the coefficients determined using some variation of least squares. However, such approaches lack any notion of uncertainty, which could be valuable in, e.g., group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed form. We simulated measurements from single- and double-tensor models, where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation into probabilistic models capable of accurate uncertainty quantification. Comment: Added results from a group analysis and a comparison with residual bootstrap
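
    A minimal sketch of the closed-form machinery described above, under assumed toy values: Bayesian linear least-squares with a Gaussian prior yields a Gaussian posterior on the coefficients, and any affine function a^T w + b of them therefore also has a closed-form Gaussian posterior. The design matrix, noise variance, and prior precision are illustrative placeholders, not the dMRI bases named in the abstract.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 60, 6
    X = rng.normal(size=(n, p))                # toy design matrix of basis functions
    w_true = rng.normal(size=p)
    sigma2 = 0.5                               # assumed (known) noise variance
    y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

    lam = 1e-2                                 # prior precision on coefficients (assumed)
    Sigma_post = np.linalg.inv(X.T @ X / sigma2 + lam * np.eye(p))
    mu_post = Sigma_post @ X.T @ y / sigma2

    # Posterior of an affine quantity q = a^T w + b (e.g. a derived scalar feature):
    a, b = rng.normal(size=p), 0.1
    q_mean = a @ mu_post + b
    q_var = a @ Sigma_post @ a
    print(f"q ~ N({q_mean:.3f}, {q_var:.3f}); true value {a @ w_true + b:.3f}")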

    Estimation of white matter fiber parameters from compressed multiresolution diffusion MRI using sparse Bayesian learning

    We present a sparse Bayesian unmixing algorithm, BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of the sparse signals: the fiber orientations and volume fractions. The data are modeled using a parametric spherical deconvolution approach and represented using a dictionary created from exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in the data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, which moderate the strength of the priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with lower uncertainty in the fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates.
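
    The per-orientation hyperparameter learning described above is in the spirit of sparse Bayesian learning with automatic relevance determination. The sketch below is a generic textbook-style SBL loop on a dictionary model y = Phi @ w + noise, not the BusineX algorithm; the dictionary, sparsity pattern, and noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 40, 100
    Phi = rng.normal(size=(n, m))                # toy dictionary of diffusion "atoms"
    w_true = np.zeros(m)
    w_true[[5, 37, 62]] = [1.0, 0.7, 0.5]        # sparse volume fractions
    sigma2 = 0.01
    y = Phi @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

    alpha = np.ones(m)                           # per-component precision hyperparameters
    for _ in range(200):
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
        mu = Sigma @ Phi.T @ y / sigma2
        gamma = 1.0 - alpha * np.diag(Sigma)     # effective degrees of freedom per atom
        alpha = gamma / (mu**2 + 1e-12)          # evidence-based hyperparameter update
        alpha = np.clip(alpha, 1e-12, 1e12)      # large alpha effectively prunes an atom

    print("recovered support:", np.where(np.abs(mu) > 0.05)[0])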

    Estimation of Fiber Orientations Using Neighborhood Information

    Data from diffusion magnetic resonance imaging (dMRI) can be used to reconstruct fiber tracts, for example, in muscle and white matter. Estimation of fiber orientations (FOs) is a crucial step in the reconstruction process, and these estimates can be corrupted by noise. In this paper, a new method called Fiber Orientation Reconstruction using Neighborhood Information (FORNI) is described and shown to reduce the effects of noise and improve FO estimation performance by incorporating spatial consistency. FORNI uses a fixed tensor basis to model the diffusion-weighted signals, which has the advantage of providing an explicit relationship between the basis vectors and the FOs. FO spatial coherence is encouraged using weighted l1-norm regularization terms, which capture the interaction of directional information between neighboring voxels. Data fidelity is encouraged using a squared error between the observed and reconstructed diffusion-weighted signals. After appropriate weighting of these competing objectives, the resulting objective function is minimized using a block coordinate descent algorithm, and a straightforward parallelization strategy is used to speed up processing. Experiments were performed on a digital crossing phantom, ex vivo tongue dMRI data, and in vivo brain dMRI data for both qualitative and quantitative evaluation. The results demonstrate that FORNI improves the quality of FO estimation over other state-of-the-art algorithms. Comment: Journal paper accepted in Medical Image Analysis. 35 pages and 16 figures
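
    For intuition about the kind of subproblem such a formulation produces, here is a hedged sketch of a single-voxel l1-regularised non-negative least-squares fit over a fixed basis, solved with a plain ISTA loop. This is a generic solver sketch under assumed toy settings, not FORNI's weighted neighborhood terms or its block coordinate descent.

    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 30, 80
    Phi = np.abs(rng.normal(size=(n, m)))        # toy fixed basis (non-negative atoms)
    f_true = np.zeros(m)
    f_true[[10, 55]] = [0.8, 0.4]                # two fiber populations
    y = Phi @ f_true + 0.01 * rng.normal(size=n)

    lam = 0.05                                   # l1 weight balancing sparsity vs data fidelity
    L = np.linalg.eigvalsh(Phi.T @ Phi).max()    # Lipschitz constant of the gradient
    t = 1.0 / L
    f = np.zeros(m)
    for _ in range(500):
        grad = Phi.T @ (Phi @ f - y)             # gradient of the squared-error term
        f = np.maximum(0.0, f - t * grad - t * lam)   # non-negative soft threshold

    print("estimated support:", np.where(f > 0.05)[0])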