
    Data augmentation in Rician noise model and Bayesian Diffusion Tensor Imaging

    Mapping white matter tracts is an essential step towards understanding brain function. Diffusion Magnetic Resonance Imaging (dMRI) is the only noninvasive technique which can detect in vivo anisotropies in the 3-dimensional diffusion of water molecules, which correspond to nervous fibers in the living brain. In this process, spectral data from the displacement distribution of water molecules are collected by a magnetic resonance scanner. From the statistical point of view, inverting the Fourier transform from such sparse and noisy spectral measurements leads to a non-linear regression problem. Diffusion tensor imaging (DTI) is the simplest modeling approach, postulating a Gaussian displacement distribution at each volume element (voxel). Typically the inference is based on a linearized log-normal regression model that can fit the spectral data at low frequencies. However, such an approximation fails to fit the high-frequency measurements, which contain information about the details of the displacement distribution but have a low signal-to-noise ratio. In this paper, we work directly with the Rice noise model and cover the full range of b-values. Using data augmentation to represent the likelihood, we reduce the non-linear regression problem to the framework of generalized linear models. Then we construct a Bayesian hierarchical model in order to perform estimation and regularization of the tensor field simultaneously. Finally, the Bayesian paradigm is implemented using Markov chain Monte Carlo. Comment: 37 pages, 3 figures
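
    As a rough illustration of the noise model discussed in this abstract (not the authors' code), the following Python sketch simulates Rician-distributed dMRI measurements from a Gaussian (DTI) signal model; the tensor, b-values, gradient direction and noise level are hypothetical placeholders.

# A minimal sketch, assuming illustrative acquisition parameters, of how
# Rician-distributed dMRI measurements arise from a Gaussian (DTI) signal model.
import numpy as np

rng = np.random.default_rng(0)

S0 = 1.0                                   # signal without diffusion weighting
D = np.diag([1.7e-3, 0.4e-3, 0.4e-3])      # hypothetical diffusion tensor (mm^2/s)
bvals = np.array([0.0, 1000.0, 2000.0, 3000.0])   # b-values (s/mm^2)
g = np.array([1.0, 0.0, 0.0])              # a single gradient direction
sigma = 0.05                               # noise level in each quadrature channel

# Noise-free DTI signal: S(b) = S0 * exp(-b * g^T D g)
signal = S0 * np.exp(-bvals * (g @ D @ g))

# Rician measurement: magnitude of the complex signal corrupted by independent
# Gaussian noise in the real and imaginary channels.
noisy = np.sqrt((signal + sigma * rng.standard_normal(signal.shape)) ** 2
                + (sigma * rng.standard_normal(signal.shape)) ** 2)

print(noisy)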

    Tensor Regression with Applications in Neuroimaging Data Analysis

    Classical regression methods treat covariates as a vector and estimate a corresponding vector of regression coefficients. Modern applications in medical imaging generate covariates of more complex form, such as multidimensional arrays (tensors). Traditional statistical and computational methods are proving insufficient for the analysis of these high-throughput data due to their ultrahigh dimensionality as well as complex structure. In this article, we propose a new family of tensor regression models that efficiently exploit the special structure of tensor covariates. Under this framework, ultrahigh dimensionality is reduced to a manageable level, resulting in efficient estimation and prediction. A fast and highly scalable algorithm is proposed for maximum likelihood estimation, and its associated asymptotic properties are studied. The effectiveness of the new methods is demonstrated on both synthetic and real MRI data. Comment: 27 pages, 4 figures
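
    Below is a minimal sketch of the kind of low-rank (CP) coefficient structure such tensor regression models exploit; the dimensions, rank and simulated data are illustrative assumptions, not the authors' implementation.

# A minimal sketch of a rank-R CP coefficient tensor entering a GLM-style
# linear predictor; all sizes and data here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

p1, p2, p3, R = 16, 16, 16, 3              # image dimensions and assumed CP rank

# Instead of p1*p2*p3 free coefficients, the coefficient tensor is represented as
# B = sum_r b1_r (outer) b2_r (outer) b3_r, with only (p1 + p2 + p3) * R parameters.
B1 = rng.standard_normal((p1, R))
B2 = rng.standard_normal((p2, R))
B3 = rng.standard_normal((p3, R))
B = np.einsum('ir,jr,kr->ijk', B1, B2, B3)

X = rng.standard_normal((p1, p2, p3))          # one tensor-valued covariate (e.g. an MRI volume)
linear_predictor = np.tensordot(X, B, axes=3)  # inner product <X, B> entering the regression
print(linear_predictor)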

    Investigating microstructural variation in the human hippocampus using non-negative matrix factorization

    In this work, we use non-negative matrix factorization to identify patterns of microstructural variance in the human hippocampus. We utilize high-resolution structural and diffusion magnetic resonance imaging data from the Human Connectome Project to query hippocampus microstructure on a multivariate, voxelwise basis. Application of non-negative matrix factorization identifies spatial components (clusters of voxels sharing similar covariance patterns) as well as subject weightings (individual variance across hippocampus microstructure). By assessing the stability of the spatial components as well as the accuracy of the factorization, we identified 4 distinct microstructural components. Furthermore, we quantified the benefit of using multiple microstructural metrics by demonstrating that combining three metrics (T1-weighted/T2-weighted signal, mean diffusivity and fractional anisotropy) produced more stable spatial components than assessing each metric individually. Finally, we related individual subject weightings to demographic and behavioural measures using a partial least squares analysis. Through this approach we identified interpretable relationships between hippocampus microstructure and demographic and behavioural measures. Taken together, our work suggests non-negative matrix factorization as a spatially specific analytical approach for neuroimaging studies and advocates the use of multiple metrics for data-driven component analyses.
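
    The following sketch illustrates, with simulated data and assumed shapes (not the Human Connectome Project data), how a stacked voxel-by-metric matrix can be factorized with scikit-learn's NMF into spatial components and subject weightings in the spirit of the analysis described above.

# A minimal sketch of non-negative matrix factorization applied to a matrix of
# voxelwise microstructural metrics; the data here are random placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

n_subjects, n_voxels, n_metrics = 40, 500, 3   # e.g. T1w/T2w signal, MD, FA
# Stack the metrics voxelwise: rows are (voxel, metric) features, columns are subjects.
X = rng.random((n_voxels * n_metrics, n_subjects))

model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(X)    # spatial components: (voxels * metrics) x components
H = model.components_         # subject weightings: components x subjects
print(W.shape, H.shape)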

    Lognormal Distributions and Geometric Averages of Positive Definite Matrices

    This article gives a formal definition of a lognormal family of probability distributions on the set of symmetric positive definite (PD) matrices, seen as a matrix-variate extension of the univariate lognormal family of distributions. Two forms of this distribution are obtained as the large-sample limiting distributions, via the central limit theorem, of two types of geometric averages of i.i.d. PD matrices: the log-Euclidean average and the canonical geometric average. These averages correspond to two different geometries imposed on the set of PD matrices. The limiting distributions of these averages are used to provide large-sample confidence regions for the corresponding population means. The methods are illustrated on a voxelwise analysis of diffusion tensor imaging data, permitting a comparison between the various average types from the point of view of their sampling variability. Comment: 28 pages, 8 figures
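
    As a small illustration of the log-Euclidean average mentioned above, the sketch below exponentiates the arithmetic mean of matrix logarithms over a simulated sample of PD matrices; the sample generator is a hypothetical stand-in, not the article's diffusion tensor data.

# A minimal sketch of the log-Euclidean geometric average of symmetric
# positive definite matrices, computed on simulated data.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(3)

def random_spd(d=3):
    # Crude way to generate a symmetric positive definite matrix for the demo.
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

sample = [random_spd() for _ in range(20)]

# Log-Euclidean average: exponentiate the arithmetic mean of the matrix logarithms.
log_mean = np.mean([logm(S) for S in sample], axis=0)
geometric_average = expm(log_mean)
print(geometric_average)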