Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution
We propose two strategies to improve the quality of tractography results
computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both
methods are based on the same PDE framework, defined in the coupled space of
positions and orientations, associated with a stochastic process describing the
enhancement of elongated structures while preserving crossing structures. In
the first method we use the enhancement PDE for contextual regularization of a
fiber orientation distribution (FOD) that is obtained on individual voxels from
high angular resolution diffusion imaging (HARDI) data via constrained
spherical deconvolution (CSD). Thereby we improve the FOD as input for
subsequent tractography. Second, we introduce the fiber-to-bundle coherence
(FBC), a measure for quantifying fiber alignment. The FBC is computed
from a tractography result using the same PDE framework and provides a
criterion for removing spurious fibers. We validate the proposed
combination of CSD and enhancement on phantom data and on human data, acquired
with different scanning protocols. On the phantom data we find that PDE
enhancements improve both local metrics and global metrics of tractography
results, compared to CSD without enhancements. On the human data we show that
the enhancements allow for a better reconstruction of crossing fiber bundles
and they reduce the variability of the tractography output with respect to the
acquisition parameters. Finally, we show that both the enhancement of the FODs
and the use of the FBC measure on the tractography improve the stability with
respect to different stochastic realizations of probabilistic tractography.
This is shown in a clinical application: the reconstruction of the optic
radiation for epilepsy surgery planning.
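The FBC in the paper is computed from the PDE kernel on the coupled space of positions and orientations. Purely as a loose, spatial-only illustration of the idea of scoring fibers by their coherence with the surrounding bundle and removing low-scoring ones, the sketch below ranks toy streamlines with a Gaussian kernel-density score; the kernel width and cutoff are hypothetical, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tractogram": 20 roughly parallel streamlines forming a bundle,
# plus 2 stray streamlines far away from it.  Each streamline is an
# (n_points, 3) array of positions.
bundle = [np.column_stack([np.linspace(0, 1, 30),
                           np.full(30, 0.1 * k),
                           np.zeros(30)])
          + 0.01 * rng.standard_normal((30, 3))
          for k in range(20)]
strays = [np.column_stack([np.linspace(0, 1, 30),
                           np.full(30, 3.0 + 2.0 * k),
                           np.zeros(30)])
          for k in range(2)]
fibers = bundle + strays

def coherence_scores(fibers, sigma=0.2):
    """Score each fiber by the mean Gaussian-kernel affinity between its
    points and the points of all *other* fibers -- a hypothetical spatial
    stand-in for the FBC, which uses the PDE kernel on positions and
    orientations."""
    scores = []
    for i, f in enumerate(fibers):
        others = np.vstack([g for j, g in enumerate(fibers) if j != i])
        d2 = ((f[:, None, :] - others[None, :, :]) ** 2).sum(axis=-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return np.array(scores)

scores = coherence_scores(fibers)
keep = scores > 0.1 * np.median(scores)   # hypothetical cutoff
```

Fibers well inside the bundle receive high scores from their many close neighbors, while the strays score orders of magnitude lower and are removed by the relative threshold.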
Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors
Sparsity has become a key concept for solving high-dimensional inverse
problems using variational regularization techniques. Recently, imposing
similar sparsity constraints in the Bayesian framework for inverse problems by
encoding them in the prior distribution has attracted attention. Important questions
about the relation between regularization theory and Bayesian inference still
need to be addressed when using sparsity promoting inversion. A practical
obstacle for these examinations is the lack of fast posterior sampling
algorithms for sparse, high-dimensional Bayesian inversion: Accessing the full
range of Bayesian inference methods requires being able to draw samples from
the posterior probability distribution in a fast and efficient way. This is
usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this
article, we develop and examine a new implementation of a single component
Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that
the efficiency of our Gibbs sampler increases when the level of sparsity or the
dimension of the unknowns is increased. This property is contrary to the
properties of the most commonly applied Metropolis-Hastings (MH) sampling
schemes: We demonstrate that the efficiency of MH schemes for L1-type priors
dramatically decreases when the level of sparsity or the dimension of the
unknowns is increased. Practically, Bayesian inversion for L1-type priors using
MH samplers is not feasible at all. As this is commonly believed to be an
intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also
challenges common beliefs about the applicability of sample based Bayesian
inference.

Comment: 33 pages, 14 figures
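A single-component Gibbs sweep for an L1-type prior can be carried out exactly, because each full conditional is a two-sided mixture of truncated Gaussians. The sketch below assumes a linear Gaussian likelihood y = Ax + noise with known noise level; the dimensions, hyperparameters, and sampler details are illustrative and not taken from the paper's experiments.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_l1(y, A, sigma, lam, n_iter, rng=None):
    """Single-component Gibbs sampler for
    p(x | y) proportional to exp(-||y - A x||^2 / (2 sigma^2) - lam ||x||_1).
    Each full conditional in x_i is a two-sided mixture of truncated
    Gaussians, so it can be sampled exactly (no accept/reject step)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)        # ||a_i||^2 for every column
    r = y - A @ x                        # running residual
    samples = np.empty((n_iter, n))
    for t in range(n_iter):
        for i in range(n):
            a_i = A[:, i]
            r_i = r + a_i * x[i]         # residual with component i removed
            prec = col_sq[i] / sigma ** 2
            lin = (a_i @ r_i) / sigma ** 2
            s = 1.0 / np.sqrt(prec)
            m_pos = (lin - lam) / prec   # mode of the x_i >= 0 branch
            m_neg = (lin + lam) / prec   # mode of the x_i <= 0 branch
            # Log-weight of each branch: Gaussian normalizer times the
            # probability mass on the corresponding half-line.
            lw_pos = 0.5 * prec * m_pos ** 2 + norm.logcdf(m_pos / s)
            lw_neg = 0.5 * prec * m_neg ** 2 + norm.logcdf(-m_neg / s)
            p_pos = np.exp(lw_pos - np.logaddexp(lw_pos, lw_neg))
            if rng.random() < p_pos:
                x_i = truncnorm.rvs(-m_pos / s, np.inf,
                                    loc=m_pos, scale=s, random_state=rng)
            else:
                x_i = truncnorm.rvs(-np.inf, -m_neg / s,
                                    loc=m_neg, scale=s, random_state=rng)
            r = r_i - a_i * x_i
            x[i] = x_i
        samples[t] = x
    return samples

# Small sparse demo: 2 active components out of 10.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 3.0, -2.0
y = A @ x_true + 0.1 * rng.standard_normal(40)
samples = gibbs_l1(y, A, sigma=0.1, lam=1.0, n_iter=300, rng=2)
x_mean = samples[100:].mean(axis=0)      # posterior mean after burn-in
```

Working in log-weights via `logaddexp` keeps the branch probabilities stable even when the conditional precision is large, which is exactly the sparse, high-dimensional regime the abstract targets.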
Edge-Preserving Tomographic Reconstruction with Nonlocal Regularization
Tomographic image reconstruction using statistical methods can provide more
accurate system modeling, statistical models, and physical constraints than the
conventional filtered backprojection (FBP) method. Because of the
ill-posedness of the reconstruction problem, a roughness penalty is often
imposed on the solution to control noise. To avoid smoothing of edges, which
are important image attributes, various edge-preserving regularization methods
have been proposed. Most of these schemes rely on information from local
neighborhoods to determine the presence of edges. In this paper, we propose a
cost function that incorporates nonlocal boundary information into the
regularization method. We use an alternating minimization algorithm with
deterministic annealing to minimize the proposed cost function, jointly
estimating region boundaries and object pixel values. We apply variational
techniques implemented using level-set methods to update the boundary
estimates; then, using the most recent boundary estimate, we minimize a
space-variant quadratic cost function to update the image estimate. For the
positron emission tomography transmission reconstruction application, we
compare the bias-variance tradeoff of this method with that of a
"conventional" penalized-likelihood algorithm with a local Huber roughness
penalty.

Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/85989/1/Fessler73.pd
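The "conventional" baseline mentioned at the end, a penalized likelihood with a local Huber roughness penalty, can be sketched in a few lines. The 1-D signal, step size, and hyperparameters below are illustrative choices, not the paper's settings, and a quadratic data-fit term stands in for the tomographic likelihood.

```python
import numpy as np

def huber(t, delta):
    """Huber potential: quadratic near zero, linear in the tails, so large
    jumps (edges) are penalized far less than by a quadratic penalty."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def huber_grad(t, delta):
    return np.clip(t, -delta, delta)     # derivative of the Huber potential

def denoise(y, beta=1.0, delta=0.1, step=0.2, n_iter=2000):
    """Penalized least squares with a local Huber roughness penalty on
    first differences:
        minimize 0.5 ||x - y||^2 + beta * sum_k huber(x[k+1] - x[k]),
    solved here by plain gradient descent."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = x - y                        # gradient of the data-fit term
        g[:-1] -= beta * huber_grad(d, delta)
        g[1:] += beta * huber_grad(d, delta)
        x = x - step * g
    return x

# Piecewise-constant signal with one edge, plus Gaussian noise.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])
y = truth + 0.1 * rng.standard_normal(100)
x = denoise(y)
```

Because the Huber derivative saturates at `delta`, the pull across the large jump is bounded, so noise is smoothed while the edge survives almost intact, which is precisely the edge-preserving behavior the abstract contrasts with its nonlocal scheme.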
A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations
Many scientific and engineering problems require performing Bayesian inference
in function spaces, where the unknowns are infinite-dimensional. In such
problems, choosing an appropriate prior distribution is an important task. In
particular, we consider problems where the function to infer is subject to
sharp jumps, which render the commonly used Gaussian measures unsuitable. On
the other hand, the so-called total variation (TV) prior can only be defined in
a finite dimensional setting, and does not lead to a well-defined posterior
measure in function spaces. In this work we present a TV-Gaussian (TG) prior to
address such problems, where the TV term is used to detect sharp jumps of the
function, and the Gaussian distribution is used as a reference measure so that
it results in a well-defined posterior measure in the function space. We also
present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples
from the posterior distribution of the TG prior. With numerical examples we
demonstrate the performance of the TG prior and the efficiency of the proposed
MCMC algorithm.
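One common way to obtain a dimension-robust sampler in this setting is preconditioned Crank-Nicolson (pCN): the Gaussian part of the TG prior serves as the reference measure, and the data misfit plus the TV term enter only through the accept/reject step. The abstract does not specify that this is the paper's algorithm, so the sketch below is a generic TG-style pCN sampler on a discretized 1-D problem with illustrative parameters.

```python
import numpy as np

def tv(x):
    return np.abs(np.diff(x)).sum()      # discrete total variation

def pcn_tg(y, A, C_sqrt, sigma, lam, beta=0.1, n_iter=4000, rng=None):
    """Preconditioned Crank-Nicolson sampler using the Gaussian part of a
    TG-style prior as reference measure; the data misfit and the TV term
    appear only in the accept/reject step, so the acceptance rule remains
    well defined under mesh refinement."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]

    def phi(x):                          # misfit + TV part of -log posterior
        return ((y - A @ x) ** 2).sum() / (2 * sigma ** 2) + lam * tv(x)

    x, px = np.zeros(n), phi(np.zeros(n))
    samples = np.empty((n_iter, n))
    for t in range(n_iter):
        xi = C_sqrt @ rng.standard_normal(n)             # draw from N(0, C)
        prop = np.sqrt(1.0 - beta ** 2) * x + beta * xi  # pCN proposal
        pp = phi(prop)
        if np.log(rng.random()) < px - pp:               # pCN acceptance rule
            x, px = prop, pp
        samples[t] = x
    return samples

# 1-D demo: recover a step function under a squared-exponential reference
# measure; the TV term lets the posterior keep the sharp jump.
n = 40
t = np.linspace(0, 1, n)
C = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.1 ** 2))
w, V = np.linalg.eigh(C)
C_sqrt = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
truth = (t > 0.5).astype(float)
data_rng = np.random.default_rng(0)
y = truth + 0.1 * data_rng.standard_normal(n)
samples = pcn_tg(y, np.eye(n), C_sqrt, sigma=0.2, lam=1.0, rng=1)
x_mean = samples[2000:].mean(axis=0)
```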
Unsupervised Bayesian convex deconvolution based on a field with an explicit partition function
This paper proposes a non-Gaussian Markov field with a special feature: an
explicit partition function. To the best of our knowledge, this is an original
contribution. Moreover, the explicit expression of the partition function
enables the development of an unsupervised edge-preserving convex deconvolution
method. The method is fully Bayesian, and produces an estimate in the sense of
the posterior mean, numerically calculated by means of a Markov chain Monte
Carlo (MCMC) technique. The approach is particularly effective, and its
computational practicability is demonstrated on a simple simulated example.
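The field proposed in the paper is non-Gaussian. Purely as a toy illustration of why an explicit partition function matters for unsupervised (hyperparameter) estimation, the sketch below uses a Gaussian circulant field whose partition function is available in closed form via the FFT, so the scale hyperparameter can be estimated by maximizing the exact log-density. Everything here (the field, the values) is an illustrative assumption, not the paper's model.

```python
import numpy as np

n = 256
eps = 1e-2
# First row of the circulant precision Q = D^T D + eps*I
# (D = periodic first-difference operator).
c = np.zeros(n)
c[0], c[1], c[-1] = 2 + eps, -1.0, -1.0
eig = np.fft.fft(c).real                 # eigenvalues of Q (all positive)

def quad_form(x):
    """x^T Q x computed spectrally: Q is circulant, so the DFT
    diagonalizes it."""
    X = np.fft.fft(x)
    return (eig * np.abs(X) ** 2).sum().real / n

def log_prior(x, gamma):
    """Exact log p(x | gamma) for p(x | gamma) prop. exp(-gamma/2 x^T Q x):
    the partition function Z(gamma) = (2 pi)^(n/2) gamma^(-n/2) det(Q)^(-1/2)
    is explicit, which is what makes gamma estimable without supervision."""
    log_Z = 0.5 * n * np.log(2 * np.pi) \
        - 0.5 * (n * np.log(gamma) + np.log(eig).sum())
    return -0.5 * gamma * quad_form(x) - log_Z

# Draw x ~ N(0, (gamma_true * Q)^-1) and recover gamma by maximizing
# log p(x | gamma); setting the derivative in gamma to zero gives the
# closed form gamma_hat = n / (x^T Q x).
rng = np.random.default_rng(0)
gamma_true = 4.0
Q = np.array([np.roll(c, i) for i in range(n)])
L = np.linalg.cholesky(gamma_true * Q)
x = np.linalg.solve(L.T, rng.standard_normal(n))
gamma_hat = n / quad_form(x)
```

Without the explicit `log_Z` term, `log_prior` would be maximized by letting `gamma` grow without bound; the normalizer is what turns hyperparameter estimation into a well-posed problem.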