On noise, uncertainty and inference for computational diffusion MRI

Abstract

Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential for mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and uncertainty quantification are of great importance and are the topic of this thesis.

First, we look into approaches for reducing uncertainty by denoising the dMRI signal. Thermal noise can have detrimental effects in modalities where the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise: it both increases variance and introduces bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal, but do not address the bias introduced by the noise floor. Our framework also allows us to better characterise scenarios where denoising can be beneficial (e.g. when performed in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries.

Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon the univariate random-walk Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm, an extensively used method for sampling from the posterior distribution of model parameters given the data. We devise an efficient adaptive multivariate MCMC scheme, relying on the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases sampling efficiency while preserving the accuracy and precision of the estimates. We show results using both synthetic and in-vivo dMRI data.

In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the need to repeat the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures to perform conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of two to three orders of magnitude in the inference process while preserving accuracy in the estimates.
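The "dual effect of noise" mentioned above (increased variance plus a noise-floor bias in magnitude data) can be illustrated with a small simulation. The sketch below is not from the thesis; the signal level and noise parameters are illustrative assumptions, and the point is only that averaging magnitude (Rician-distributed) samples does not recover the true signal, whereas working in the complex domain does.

```python
# Minimal sketch: noise-floor bias of magnitude MR data at low SNR.
# Signal level, noise std and number of repeats are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                      # std of the complex Gaussian noise
true_signal = 0.5 * sigma        # heavily attenuated signal (SNR < 1)
n_repeats = 100_000

# Complex Gaussian noise added to the (real-valued) true signal
noise = rng.normal(0, sigma, n_repeats) + 1j * rng.normal(0, sigma, n_repeats)
complex_data = true_signal + noise

magnitude = np.abs(complex_data)          # Rician-distributed samples

print(f"true signal            : {true_signal:.3f}")
print(f"mean of magnitude data : {magnitude.mean():.3f}  (biased by the noise floor)")
print(f"mean of real component : {complex_data.real.mean():.3f}  (unbiased)")
```

Averaging (or denoising) the magnitude data reduces variance but converges to the Rician mean rather than the true attenuation, which is why magnitude-based denoising alone cannot remove the noise-floor bias.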
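As a sketch of the kind of scheme described for the first inference framework, the code below implements a generic adaptive multivariate random-walk Metropolis-Hastings sampler, in which a block of parameters is proposed jointly from a Gaussian whose covariance is adapted from the chain history. The target distribution and all tuning constants are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of an adaptive multivariate random-walk Metropolis-Hastings
# sampler: parameters are proposed jointly and the proposal covariance is
# adapted from the chain history after a burn-in period.
import numpy as np

def log_posterior(theta):
    # Stand-in log-posterior: a 2D correlated Gaussian (illustrative only).
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def adaptive_mh(n_samples=10_000, adapt_start=1_000, scale=2.38**2 / 2):
    rng = np.random.default_rng(1)
    dim = 2
    theta = np.zeros(dim)
    logp = log_posterior(theta)
    prop_cov = 0.1 * np.eye(dim)          # initial isotropic proposal
    chain = np.empty((n_samples, dim))
    accepted = 0
    for i in range(n_samples):
        # Jointly propose all parameters from a multivariate Gaussian
        proposal = rng.multivariate_normal(theta, prop_cov)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
            accepted += 1
        chain[i] = theta
        # After burn-in, adapt the proposal covariance to the sample covariance
        if i >= adapt_start:
            prop_cov = scale * np.cov(chain[:i + 1].T) + 1e-6 * np.eye(dim)
    return chain, accepted / n_samples

samples, acc_rate = adaptive_mh()
print(f"acceptance rate: {acc_rate:.2f}")
print(f"posterior covariance estimate:\n{np.cov(samples[len(samples) // 2:].T)}")
```

A joint proposal with a well-matched covariance lets correlated parameters move together, which is what improves sampling efficiency relative to updating one parameter at a time.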
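For the second inference framework, the sketch below shows what neural posterior estimation looks like for a toy mono-exponential diffusion decay, S(b) = S0 * exp(-b * D). It assumes the interface of the open-source `sbi` toolbox and is not the thesis implementation; the prior ranges, b-values and noise level are illustrative choices.

```python
# Minimal sketch of simulation-based inference for a toy signal-decay model
# using neural posterior estimation (SNPE) from the `sbi` toolbox.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

b_values = torch.tensor([0.0, 1.0, 2.0, 3.0])   # illustrative b-values

def simulator(theta):
    # theta[:, 0] = S0, theta[:, 1] = D (apparent diffusion coefficient)
    s0, d = theta[:, 0:1], theta[:, 1:2]
    signal = s0 * torch.exp(-b_values * d)
    return signal + 0.02 * torch.randn_like(signal)   # additive noise

prior = BoxUniform(low=torch.tensor([0.5, 0.1]), high=torch.tensor([1.5, 3.0]))

# Train a neural density estimator on (parameter, simulation) pairs ...
theta = prior.sample((20_000,))
x = simulator(theta)
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# ... then, amortised over new data, draw posterior samples without re-fitting.
x_obs = simulator(torch.tensor([[1.0, 1.2]]))
samples = posterior.sample((1_000,), x=x_obs[0])
print(samples.mean(dim=0))   # should be close to (1.0, 1.2)
```

Once trained, the same density estimator can be conditioned on any new measurement, which is the amortisation that yields the large speed-ups over per-dataset MCMC reported in the abstract.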
