Spatially regularized multi-exponential transverse relaxation times estimation from magnitude MRI images under Rician noise
Synopsis: This work aims at improving the estimation of multi-exponential transverse relaxation times from noisy magnitude MRI images. A spatially regularized maximum-likelihood estimator accounting for the Rician distribution of the noise is introduced. This approach is compared to a Rician-corrected least-squares criterion, also with spatial regularization. To deal with the large-scale optimization problem, a majorization-minimization approach was used, allowing the implementation of both the maximum-likelihood estimator and the spatial regularization. The importance of the regularization alongside the Rician noise incorporation is shown both visually and numerically on magnitude MRI images acquired on fruit samples.
Purpose: Multi-exponential relaxation times and their associated amplitudes in an MRI image provide very useful information for assessing the constituents of the imaged sample. Typical examples are the detection of water compartments in plant tissues and the quantification of the myelin water fraction for multiple sclerosis diagnosis. The estimation of the multi-exponential signal model from magnitude MRI images faces a relatively low signal-to-noise ratio (SNR), Rician-distributed noise, and a large-scale optimization problem when dealing with the entire image. In practice, relaxation-time maps are composed of coherent regions with smooth variations between neighboring voxels. This study proposes an efficient reconstruction method for relaxation times and amplitudes from magnitude images that incorporates this spatial information in order to reduce the noise effect. The main feature of the method is a regularized maximum-likelihood estimator derived from a Rician likelihood, together with a majorization-minimization approach coupled with the Levenberg-Marquardt algorithm to solve the large-scale optimization problem.
Tests were conducted on apples and numerical results are given to illustrate the relevance of this method and to discuss its performance. Methods: For each voxel of the MRI image, the measured signal at echo time t_j is represented by a multi-exponential model s(t_j) = sum_c A_c exp(-t_j / T_c), with amplitudes A_c and relaxation times T_c. The data are subject to additive Gaussian noise in the complex domain, and magnitude MRI data therefore follow a Rician distribution p(m) = (m / sigma^2) exp(-(m^2 + s^2) / (2 sigma^2)) I_0(m s / sigma^2), where I_0 is the first-kind modified Bessel function of order 0 and sigma is the standard deviation of the noise, usually estimated from the image background. For an MRI image with many voxels, the model parameters are usually estimated by minimizing the least-squares (LS) criterion under the assumption of Gaussian noise, using nonlinear LS solvers such as Levenberg-Marquardt (LM). However, this approach does not yield satisfying results when applied to magnitude data. Several proposed solutions overcome this issue by adding a correction term to the LS criterion. In this study, the retained correction replaces the model value by the expectation of the Rician-distributed data inside the sum of squares, since it outperforms the other correction strategies. We refer to this method as Rician-corrected LS (RCLS). A more direct way to solve this estimation problem is a maximum-likelihood (ML) estimator, which comes down to minimizing the negative Rician log-likelihood. To solve this optimization problem over the entire image, a majorization-minimization (MM) technique was adopted. The resulting MM-ML algorithm is summarized in figure 1; the LM algorithm used in this method minimizes a set of LS criteria derived from the quadratic majorization strategy. A spatial regularization term based on a cost function was also added to both criteria to ensure spatial smoothness of the estimated maps.
In order to reduce the numerical complexity while maintaining variable separability between each voxel and its neighboring voxels, the regularization function is majorized by a separable quadratic surrogate built at the current iterate, where the superscript denotes the iteration number of the iterative optimization algorithm.
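The Rician ML criterion described above can be sketched for a single voxel. The following is a minimal illustration, not the authors' MM-LM implementation: it assumes a bi-exponential model and a known noise level sigma, and uses a generic simplex minimizer (scipy's Nelder-Mead) in place of the majorization-minimization scheme; all echo times and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

# Hypothetical acquisition: 32 echoes, 9 ms spacing (illustrative values only).
TE = np.arange(1, 33) * 9.0e-3
sigma = 20.0  # noise standard deviation, assumed estimated from the background

def multiexp(theta, t):
    """Bi-exponential model: a1*exp(-t/T1) + a2*exp(-t/T2)."""
    a1, T1, a2, T2 = theta
    return a1 * np.exp(-t / T1) + a2 * np.exp(-t / T2)

def rician_nll(theta, y, t, sigma):
    """Negative Rician log-likelihood, dropping terms independent of theta.
    log I0(z) = z + log(i0e(z)) keeps the Bessel term numerically stable."""
    s = multiexp(theta, t)
    z = y * s / sigma**2
    return np.sum(s**2 / (2 * sigma**2) - z - np.log(i0e(z)))

rng = np.random.default_rng(0)
truth = np.array([600.0, 40e-3, 400.0, 150e-3])
clean = multiexp(truth, TE)
# Magnitude of complex Gaussian noise around the clean signal is Rician.
y = np.abs(clean + sigma * rng.standard_normal(TE.size)
           + 1j * sigma * rng.standard_normal(TE.size))

# A generic simplex minimizer stands in for the paper's MM + Levenberg-Marquardt.
fit = minimize(rician_nll, x0=[500.0, 30e-3, 500.0, 120e-3],
               args=(y, TE, sigma), method="Nelder-Mead")
```

In the paper's setting this per-voxel criterion is summed over the image and augmented with the spatial regularization term before the MM surrogates are minimized.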
Automatic, fast and robust characterization of noise distributions for diffusion MRI
Knowledge of the noise distribution in magnitude diffusion MRI images is the
centerpiece to quantify uncertainties arising from the acquisition process. The
use of parallel imaging methods, the number of receiver coils and imaging
filters applied by the scanner, amongst other factors, dictate the resulting
signal distribution. Accurate estimation beyond textbook Rician or noncentral
chi distributions often requires information about the acquisition process
(e.g. coils sensitivity maps or reconstruction coefficients), which is not
usually available. We introduce a new method where a change of variable
naturally gives rise to a particular form of the gamma distribution for
background signals. The first moments and maximum likelihood estimators of this
gamma distribution explicitly depend on the number of coils, making it possible
to estimate all unknown parameters using only the magnitude data. A rejection
step is used to make the method automatic and robust to artifacts. Experiments
on synthetic datasets show that the proposed method can reliably estimate both
the degrees of freedom and the standard deviation. The worst case errors range
from below 2% (spatially uniform noise) to approximately 10% (spatially
variable noise). Repeated acquisitions of in vivo datasets show that the
estimated parameters are stable and have lower variances than compared methods.
Comment: v2: added publisher DOI statement, fixed text typo in appendix A
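The moment identities behind this estimator can be illustrated on synthetic background data: if t = m^2 / (2 sigma^2) follows a Gamma(N, 1) distribution, then E[m^2] = 2 sigma^2 N and Var[m^2] = 4 sigma^4 N, so both N and sigma are recoverable from the first two moments of m^2 alone. A minimal sketch (the paper's rejection step for artifacts is omitted; all names and values are illustrative):

```python
import numpy as np

def estimate_ncchi_params(m):
    """Moment-based estimates of (N, sigma) from background magnitudes m,
    assuming t = m^2 / (2 sigma^2) ~ Gamma(N, 1), so that
    E[m^2] = 2 sigma^2 N and Var[m^2] = 4 sigma^4 N."""
    m2 = m**2
    mean, var = m2.mean(), m2.var()
    sigma2 = var / (2 * mean)   # Var / (2 E) = sigma^2
    N = mean**2 / var           # E^2 / Var = N
    return N, np.sqrt(sigma2)

# Synthetic background: the sum-of-squares magnitude over N coils is
# sigma * sqrt(chi-square with 2N degrees of freedom).
rng = np.random.default_rng(1)
true_N, true_sigma = 4, 5.0
m = true_sigma * np.sqrt(rng.chisquare(2 * true_N, size=100_000))
N_hat, sigma_hat = estimate_ncchi_params(m)
```

With enough background voxels the moment estimates are close to the simulated degrees of freedom and noise standard deviation.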
The Strehl Ratio in Adaptive Optics Images: Statistics and Estimation
Statistical properties of the intensity in adaptive optics images are usually
modeled with a Rician distribution. We study the central point of the image,
where this model is inappropriate for high to very high correction levels. The
central point is an important problem because it gives the Strehl ratio
distribution. We show that the central point distribution can be modeled using
a non-central Gamma distribution.
Comment: 8 pages, 5 figures
Data augmentation in Rician noise model and Bayesian Diffusion Tensor Imaging
Mapping white matter tracts is an essential step towards understanding brain
function. Diffusion Magnetic Resonance Imaging (dMRI) is the only noninvasive
technique which can detect in vivo anisotropies in the 3-dimensional diffusion
of water molecules, which correspond to nervous fibers in the living brain. In
this process, spectral data from the displacement distribution of water
molecules is collected by a magnetic resonance scanner. From the statistical
point of view, inverting the Fourier transform from such sparse and noisy
spectral measurements leads to a non-linear regression problem. Diffusion
tensor imaging (DTI) is the simplest modeling approach postulating a Gaussian
displacement distribution at each volume element (voxel). Typically the
inference is based on a linearized log-normal regression model that can fit the
spectral data at low frequencies. However such approximation fails to fit the
high frequency measurements which contain information about the details of the
displacement distribution but have a low signal to noise ratio. In this paper,
we work directly with the Rice noise model and cover the full range of
b-values. Using data augmentation to represent the likelihood, we reduce the
non-linear regression problem to the framework of generalized linear models.
Then we construct a Bayesian hierarchical model in order to perform
simultaneously estimation and regularization of the tensor field. Finally the
Bayesian paradigm is implemented using Markov chain Monte Carlo.
Comment: 37 pages, 3 figures
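A toy version of working directly with the Rice noise model in a Bayesian setting can be sketched for a single voxel: a random-walk Metropolis sampler for the signal mean under a Rician likelihood with a flat positive prior. This simplification drops the paper's data augmentation, tensor model, and hierarchical regularization; sigma is assumed known and all numbers are illustrative.

```python
import numpy as np
from scipy.special import i0e

def log_rice(y, mu, sigma):
    """Rician log-density in y, up to terms constant in mu.
    log I0(z) = z + log(i0e(z)) avoids overflow of the Bessel function."""
    z = y * mu / sigma**2
    return -mu**2 / (2 * sigma**2) + z + np.log(i0e(z))

# Hypothetical single-voxel data: mean signal 30, noise sigma 10, 50 repeats.
rng = np.random.default_rng(2)
true_mu, sigma = 30.0, 10.0
y = np.abs(true_mu + sigma * rng.standard_normal(50)
           + 1j * sigma * rng.standard_normal(50))

def log_post(mu):
    # Flat prior restricted to mu > 0.
    return -np.inf if mu <= 0 else np.sum(log_rice(y, mu, sigma))

# Random-walk Metropolis, standing in for the paper's augmented MCMC scheme.
mu, chain = 20.0, []
for _ in range(5000):
    prop = mu + rng.normal(0.0, 2.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)
post = np.array(chain[1000:])  # discard burn-in
```

The posterior samples concentrate near the simulated mean, without the downward bias a Gaussian likelihood would inherit from the magnitude data.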
Statistical Analysis of a Posteriori Channel and Noise Distribution Based on HARQ Feedback
In response to a comment on one of our manuscripts, this work studies the
posterior channel and noise distributions conditioned on the NACKs and ACKs of
all previous transmissions in a HARQ system, using statistical approaches. Our main
result is that, unless the coherence interval (time or frequency) is large as
in block-fading assumption, the posterior distribution of the channel and noise
either remains almost identical to the prior distribution, or it mostly follows
the same class of distribution as the prior one. In the latter case, the
difference between the posterior and prior distribution can be modeled as some
parameter mismatch, which has little impact on certain types of applications.
Comment: 15 pages, 2 figures, 4 tables
Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization
Spherical deconvolution (SD) methods are widely used to estimate the
intra-voxel white-matter fiber orientations from diffusion MRI data. However,
while some of these methods assume a zero-mean Gaussian distribution for the
underlying noise, its real distribution is known to be non-Gaussian and to
depend on the methodology used to combine multichannel signals. Indeed, the two
prevailing methods for multichannel signal combination lead to Rician and
noncentral Chi noise distributions. Here we develop a Robust and Unbiased
Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with
realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to
Rician and noncentral Chi likelihood models. To quantify the benefits of using
proper noise models, RUMBA-SD was compared with dRL-SD, a well-established
method based on the RL algorithm for Gaussian noise. Another aim of the study
was to quantify the impact of including a total variation (TV) spatial
regularization term in the estimation framework. To do this, we developed TV
spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The
evaluation was performed by comparing various quality metrics on 132
three-dimensional synthetic phantoms involving different inter-fiber angles and
volume fractions, which were contaminated with noise mimicking patterns
generated by data processing in multichannel scanners. The results demonstrate
that the inclusion of proper likelihood models leads to an increased ability to
resolve fiber crossings with smaller inter-fiber angles and to better detect
non-dominant fibers. The inclusion of TV regularization dramatically improved
the resolution power of both techniques. The above findings were also verified
in brain data.
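The multiplicative Richardson-Lucy iteration that both dRL-SD and RUMBA-SD build on can be sketched for a generic nonnegative linear model y ≈ H f. The version below is the classic baseline update on a made-up dictionary; the paper's contribution is to replace the likelihood terms with Rician/noncentral-chi ones and to add TV regularization, neither of which is shown here.

```python
import numpy as np

def richardson_lucy(y, H, n_iter=300):
    """Classic multiplicative Richardson-Lucy updates for y ≈ H f with f >= 0.
    RUMBA-SD (per the abstract) swaps the likelihood terms for Rician /
    noncentral-chi ones; the TV variants multiply in a regularization factor."""
    f = np.full(H.shape[1], y.mean() / H.shape[1])   # flat nonnegative start
    Ht1 = H.T @ np.ones_like(y)
    for _ in range(n_iter):
        f = f * (H.T @ (y / np.maximum(H @ f, 1e-12))) / np.maximum(Ht1, 1e-12)
    return f

rng = np.random.default_rng(4)
H = rng.uniform(0.1, 1.0, size=(60, 20))   # made-up response dictionary
f_true = np.zeros(20)
f_true[[3, 11]] = [1.0, 0.5]               # two "fiber" compartments
y = H @ f_true                             # noise-free signal for illustration
f_hat = richardson_lucy(y, H)
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is why RL-type schemes suit fiber-orientation estimation.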
Bayesian uncertainty quantification in linear models for diffusion MRI
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue
microstructure. By fitting a model to the dMRI signal it is possible to derive
various quantitative features. Several of the most popular dMRI signal models
are expansions in an appropriately chosen basis, where the coefficients are
determined using some variation of least-squares. However, such approaches lack
any notion of uncertainty, which could be valuable in e.g. group analyses. In
this work, we use a probabilistic interpretation of linear least-squares
methods to recast popular dMRI models as Bayesian ones. This makes it possible
to quantify the uncertainty of any derived quantity. In particular, for
quantities that are affine functions of the coefficients, the posterior
distribution can be expressed in closed-form. We simulated measurements from
single- and double-tensor models where the correct values of several quantities
are known, to validate that the theoretically derived quantiles agree with
those observed empirically. We included results from residual bootstrap for
comparison and found good agreement. The validation employed several different
models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI)
and Constrained Spherical Deconvolution (CSD). We also used in vivo data to
visualize maps of quantitative features and corresponding uncertainties, and to
show how our approach can be used in a group analysis to downweight subjects
with high uncertainty. In summary, we convert successful linear models for dMRI
signal estimation to probabilistic models, capable of accurate uncertainty
quantification.
Comment: Added results from a group analysis and a comparison with residual bootstrap
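The closed-form posterior referred to above follows from the standard conjugate Gaussian linear model. A minimal sketch, assuming an isotropic Gaussian prior on the coefficients and a known noise variance (the basis, prior scale, and data are synthetic placeholders, not any of the cited dMRI models):

```python
import numpy as np

def bayes_linear_posterior(X, y, sigma2, tau2):
    """Gaussian posterior for y = X w + eps, eps ~ N(0, sigma2 I),
    with prior w ~ N(0, tau2 I): a probabilistic reading of least-squares."""
    d = X.shape[1]
    S = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)  # posterior covariance
    m = S @ (X.T @ y) / sigma2                              # posterior mean
    return m, S

def affine_posterior(a, b, m, S):
    """For q = a @ w + b, the posterior is N(a @ m + b, a @ S @ a) in closed form."""
    return a @ m + b, a @ S @ a

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))          # placeholder design/basis matrix
w_true = np.array([1.0, -0.5, 2.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)
m, S = bayes_linear_posterior(X, y, sigma2=0.01, tau2=10.0)
q_mean, q_var = affine_posterior(np.array([1.0, 1.0, 0.0]), 0.0, m, S)
```

Any quantity that is affine in the coefficients thus carries an exact Gaussian uncertainty, which is what enables the downweighting of high-uncertainty subjects in the group analysis.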
Short Packets over Block-Memoryless Fading Channels: Pilot-Assisted or Noncoherent Transmission?
We present nonasymptotic upper and lower bounds on the maximum coding rate
achievable when transmitting short packets over a Rician memoryless
block-fading channel for a given requirement on the packet error probability.
We focus on the practically relevant scenario in which there is no a priori
channel state information available at the transmitter and at the
receiver. An upper bound built upon the min-max converse is compared to two
lower bounds: the first one relies on a noncoherent transmission strategy in
which the fading channel is not estimated explicitly at the receiver; the
second one employs pilot-assisted transmission (PAT) followed by
maximum-likelihood channel estimation and scaled mismatched nearest-neighbor
decoding at the receiver. Our bounds are tight enough to unveil the optimum
number of diversity branches that a packet should span so that the energy per
bit required to achieve a target packet error probability is minimized, for a
given constraint on the code rate and the packet size. Furthermore, the bounds
reveal that noncoherent transmission is more energy efficient than PAT, even
when the number of pilot symbols and their power is optimized. For example, for
the case when a coded packet of symbols is transmitted using a channel
code of rate bits/channel use, over a block-fading channel with block
size equal to symbols, PAT requires an additional dB of energy per
information bit to achieve a packet error probability of compared to
a suitably designed noncoherent transmission scheme. Finally, we devise a PAT
scheme based on punctured tail-biting quasi-cyclic codes and ordered statistics
decoding, whose performance is close ( dB gap at packet error
probability) to the one predicted by our PAT lower bound. This shows that the
PAT lower bound provides useful guidelines on the design of actual PAT schemes.
Comment: 30 pages, 5 figures, journal
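The pilot-based ML channel estimation step of a PAT scheme reduces, for known pilots, to a least-squares projection. A minimal single-block sketch with illustrative Rician-fading parameters (the K-factor, pilot count, and noise level are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative Rician block-fading parameters (not taken from the paper).
K = 10.0          # Rician K-factor
n_pilots = 4      # pilot symbols spent in the coherence block
noise_std = 0.1   # std per complex noise sample

# One channel realization: line-of-sight term plus diffuse component.
h = (np.sqrt(K / (K + 1))
     + np.sqrt(1.0 / (2 * (K + 1))) * (rng.standard_normal()
                                       + 1j * rng.standard_normal()))

p = np.ones(n_pilots, dtype=complex)  # known unit-power pilots
w = noise_std / np.sqrt(2) * (rng.standard_normal(n_pilots)
                              + 1j * rng.standard_normal(n_pilots))
y = h * p + w

# ML channel estimate from the pilots = least-squares projection onto p.
h_hat = (p.conj() @ y) / (p.conj() @ p)
```

Every pilot symbol spent on this estimate is a symbol not carrying data, which is the trade-off the paper's bounds quantify against noncoherent transmission.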