
    Spatially regularized multi-exponential transverse relaxation times estimation from magnitude MRI images under Rician noise

    Synopsis: This work aims at improving the estimation of multi-exponential transverse relaxation times from noisy magnitude MRI images. A spatially regularized maximum-likelihood (ML) estimator accounting for the Rician distribution of the noise is introduced and compared to a Rician-corrected least-squares criterion, also with spatial regularization. To deal with the resulting large-scale optimization problem, a majorization-minimization approach is used, allowing the implementation of both the ML estimator and the spatial regularization. The importance of the regularization alongside the Rician noise modeling is shown both visually and numerically on magnitude MRI images acquired on fruit samples.

    Purpose: Multi-exponential relaxation times and their associated amplitudes in an MRI image provide very useful information for assessing the constituents of the imaged sample. Typical examples are the detection of water compartments in plant tissues and the quantification of the myelin water fraction for multiple sclerosis diagnosis. The estimation of the multi-exponential signal model from magnitude MRI images faces a relatively low signal-to-noise ratio (SNR), Rician-distributed noise, and a large-scale optimization problem when dealing with the entire image. T2 maps are composed of coherent regions with smooth variations between neighboring voxels. This study proposes an efficient method for reconstructing T2 values and amplitudes from magnitude images that incorporates this spatial information in order to reduce the noise effect. The main feature of the method is a regularized maximum-likelihood estimator derived from the Rician likelihood, combined with a majorization-minimization approach coupled with the Levenberg-Marquardt algorithm to solve the large-scale optimization problem.
    Tests were conducted on apples, and numerical results are given to illustrate the relevance of this method and to discuss its performance.

    Methods: For each voxel of the MRI image, the measured signal at echo time $t_n$ is represented by a multi-exponential model:

    $s(t_n; \theta) = \sum_{c=1}^{C} a_c \, e^{-t_n / T_{2,c}}$, with $\theta = (a_1, T_{2,1}, \ldots, a_C, T_{2,C})$.

    The data are subject to additive Gaussian noise in the complex domain, and the magnitude data $m_n$ therefore follow a Rician distribution:

    $p(m_n \mid s_n, \sigma) = \dfrac{m_n}{\sigma^2} \exp\!\left(-\dfrac{m_n^2 + s_n^2}{2\sigma^2}\right) I_0\!\left(\dfrac{m_n s_n}{\sigma^2}\right)$,

    where $I_0$ is the first-kind modified Bessel function of order 0 and $\sigma$ is the standard deviation of the noise, which is usually estimated from the image background. For an MRI image with $N$ voxels, the model parameters are usually estimated by minimizing the least-squares (LS) criterion, under the assumption of Gaussian noise, using nonlinear LS solvers such as Levenberg-Marquardt (LM). However, this approach does not yield satisfying results when applied to magnitude data. Several solutions to this issue add a correction term to the LS criterion. In this study, the retained correction uses the expectation of the data model under the hypothesis of a Rician distribution, since it outperforms the other correction strategies:

    $J_{RCLS}(\theta) = \sum_{n} \left( m_n - \mathbb{E}[m_n \mid s_n(\theta), \sigma] \right)^2$,

    where the sum of squares runs over the echo times. We refer to this method as Rician-corrected LS (RCLS). A more direct way of solving this estimation problem is to use a maximum-likelihood (ML) estimator, which comes down to minimizing the negative log-likelihood (up to terms independent of $\theta$):

    $J_{ML}(\theta) = \sum_{n} \left[ \dfrac{s_n(\theta)^2}{2\sigma^2} - \log I_0\!\left(\dfrac{m_n s_n(\theta)}{\sigma^2}\right) \right]$.

    To solve this optimization problem over the entire image, a majorization-minimization (MM) technique was adopted. The resulting MM-ML algorithm is summarized in Figure 1; the LM algorithm used in this method minimizes a set of LS criteria derived from the quadratic majorization strategy. A spatial regularization term based on a cost function was also added to both criteria ($J_{RCLS}$ and $J_{ML}$) to ensure spatial smoothness of the estimated maps.
    In order to reduce the numerical complexity while maintaining variable separability between each voxel and its neighboring voxels, the regularization function is majorized at iteration $i$ by a quadratic surrogate that is separable across voxels, where $i$ stands for the iteration number of the iterative optimization algorithm.
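As an illustration of the Rician maximum-likelihood idea described above, the single-voxel negative log-likelihood can be minimized with an off-the-shelf solver. This is a minimal sketch with simulated data and hypothetical parameter values; the paper itself uses an MM scheme coupled with Levenberg-Marquardt over the whole image, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import ive  # exponentially scaled modified Bessel function I_v


def multiexp(theta, t):
    """Multi-exponential decay s(t) = sum_c a_c * exp(-t / T2_c);
    theta = [a_1, T2_1, a_2, T2_2, ...]."""
    a, T2 = theta[0::2], theta[1::2]
    return (a[:, None] * np.exp(-t[None, :] / T2[:, None])).sum(axis=0)


def rician_nll(theta, t, m, sigma):
    """Rician negative log-likelihood of magnitude data m, up to terms
    independent of theta.  log I0(z) is evaluated stably as log(ive(0, z)) + z."""
    s = multiexp(theta, t)
    z = m * s / sigma**2
    return np.sum(s**2 / (2 * sigma**2) - (np.log(ive(0, z)) + z))


# Simulated single-voxel bi-exponential decay (hypothetical values).
rng = np.random.default_rng(1)
t = np.arange(1, 65) * 5.0                   # 64 echo times, 5 ms spacing
truth = np.array([50.0, 30.0, 30.0, 120.0])  # a_1, T2_1, a_2, T2_2
sigma = 1.0
s_true = multiexp(truth, t)
# Rician data: magnitude of a complex signal with Gaussian noise on each channel
m = np.abs(s_true + rng.normal(0, sigma, t.size) + 1j * rng.normal(0, sigma, t.size))

res = minimize(rician_nll, x0=[40.0, 40.0, 40.0, 100.0], args=(t, m, sigma),
               method="L-BFGS-B", bounds=[(1e-3, None)] * 4)
a_hat, T2_hat = res.x[0::2], res.x[1::2]
```

At this SNR the recovered T2 values land close to the simulated ones; with real data, noisier voxels are exactly where the spatial regularization of the paper becomes important.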

    Iterative Image Reconstruction in MRI with Separate Magnitude and Phase Regularization

    Iterative methods for image reconstruction in MRI are useful in several applications, including reconstruction from non-Cartesian k-space samples, compensation for magnetic field inhomogeneities, and imaging with multiple receive coils. Existing iterative MR image reconstruction methods are either unregularized, and therefore sensitive to noise, or have used regularization methods that smooth the complex-valued image. These existing methods regularize the real and imaginary components of the image equally. In many MRI applications, including T2*-weighted imaging as used in fMRI BOLD imaging, one expects most of the signal information of interest to be contained in the magnitude of the voxel value, whereas the phase values are expected to vary smoothly spatially. This paper proposes separate regularization of the magnitude and phase components, preserving the spatial resolution of the magnitude component while strongly regularizing the phase component. This leads to a non-convex regularized least-squares cost function. We describe a new iterative algorithm that monotonically decreases this cost function. The resulting images have reduced noise relative to conventional regularization methods. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85802/1/Fessler194.pd
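The separately regularized cost function described in this abstract can be sketched as follows; the notation here is assumed for illustration, not taken from the paper:

```latex
\hat{m},\,\hat{\phi} \;=\; \arg\min_{m,\,\phi}\;
\tfrac{1}{2}\,\bigl\| y - A\,\bigl( m \odot e^{\imath\phi} \bigr) \bigr\|_2^2
\;+\; \beta_m\, R_m(m) \;+\; \beta_\phi\, R_\phi(\phi)
```

Here $y$ is the k-space data, $A$ the system matrix, and $\odot$ elementwise product. Choosing $\beta_\phi \gg \beta_m$ smooths the phase strongly while preserving magnitude resolution; the coupling through $m \odot e^{\imath\phi}$ is what makes the problem non-convex even when both regularizers are convex.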

    Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation

    Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated. (Comment: 5 pages, 5 figures)
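As a concrete instance of the credible regions mentioned above: for a posterior $\pi(x \mid y) \propto \exp(-f(x) - g(x))$, where $f$ is a data-fidelity term and $g$ a (possibly non-smooth) prior term, a highest posterior density (HPD) credible region can be written as (notation assumed here):

```latex
C_\alpha \;=\; \bigl\{\, x \;:\; f(x) + g(x) \,\le\, \gamma_\alpha \,\bigr\},
\qquad
\mathbb{P}\bigl( x \in C_\alpha \mid y \bigr) \;=\; 1 - \alpha
```

When $f$ and $g$ are convex, $C_\alpha$ is a convex set, so checking whether a candidate image lies inside it is a convex feasibility problem; this is what allows uncertainty quantification to scale to high dimensions without MCMC sampling.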

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated {\em structured} sparsity models, which describe the interdependency between the nonzero components of a signal; this increases the interpretability of the results and leads to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. (Comment: 30 pages, 18 figures)
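For the group sparse model mentioned above, the standard convex relaxation replaces the discrete group-support constraint with a sum of group $\ell_2$ norms, whose proximal operator is block soft-thresholding. A minimal sketch for non-overlapping groups (function name and example values are illustrative):

```python
import numpy as np


def prox_group_l2(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 over non-overlapping groups:
    each group is either set to zero (norm <= lam) or shrunk toward zero
    by lam in Euclidean norm (block soft-thresholding)."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out


# Example: one strong group survives (rescaled), one weak group is zeroed out.
x = np.array([3.0, 4.0, 0.1, 0.1])
shrunk = prox_group_l2(x, [[0, 1], [2, 3]], lam=1.0)
# first group has norm 5, so it is scaled by 1 - 1/5 = 0.8;
# second group has norm ~0.14 <= 1, so it is set to zero
```

Iterating this operator inside a proximal-gradient loop yields group-sparse solutions: entire groups are switched off together, which is exactly the interdependency between nonzero components that the chapter's discrete models encode.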

    Challenges of Big Data Analysis

    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and of how these features impose a paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and, consequently, wrong scientific conclusions.
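The spurious-correlation phenomenon mentioned above is easy to demonstrate: with many more variables than observations, some pairs of completely independent variables will show large sample correlations purely by chance. A small simulation (dimensions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1000                       # few observations, many variables
X = rng.standard_normal((n, p))       # all columns mutually independent

C = np.corrcoef(X, rowvar=False)      # p x p sample correlation matrix
np.fill_diagonal(C, 0.0)              # ignore trivial self-correlations
max_spurious = np.abs(C).max()
# Although every pair of columns is independent by construction, the largest
# sample correlation among the ~500,000 pairs is far from zero.
```

This is why variable selection based on marginal correlations becomes unreliable in high dimensions: the maximum spurious correlation grows with the number of variables even when the sample size is fixed.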

    An efficient k-means-type algorithm for clustering datasets with incomplete records

    The k-means algorithm is arguably the most popular nonparametric clustering method but cannot generally be applied to datasets with incomplete records. The usual practice then is either to impute missing values under an assumed missing-completely-at-random mechanism or to ignore the incomplete records, and to apply the algorithm on the resulting dataset. We develop an efficient version of the k-means algorithm that allows for clustering in the presence of incomplete records. Our extension is called k_m-means and reduces to the k-means algorithm when all records are complete. We also provide initialization strategies for our algorithm and methods to estimate the number of groups in the dataset. Illustrations and simulations demonstrate the efficacy of our approach in a variety of settings and patterns of missing data. Our methods are also applied to the analysis of activation images obtained from a functional Magnetic Resonance Imaging experiment. (Comment: 21 pages, 12 figures, 3 tables; in press, Statistical Analysis and Data Mining -- The ASA Data Science Journal, 201)
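The core idea of clustering with incomplete records can be sketched as a Lloyd-style loop in which distances are computed over observed coordinates only and centers are updated with per-coordinate means of observed values. This is a simplified illustration of the idea, not the authors' exact k_m-means algorithm (initialization and the handling of missingness patterns differ):

```python
import numpy as np


def kmeans_missing(X, k, centers, n_iter=50):
    """Lloyd-style k-means for data with missing entries encoded as NaN.
    Assignment uses partial squared distances over observed coordinates;
    center updates take per-coordinate means of observed values."""
    X = np.asarray(X, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        # nansum skips NaN terms, i.e. missing coordinates are ignored
        d = np.stack([np.nansum((X - c) ** 2, axis=1) for c in centers])
        labels = d.argmin(axis=0)
        new = np.array([np.nanmean(X[labels == j], axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers


# Example: two well-separated blobs with some entries masked out.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 3)), rng.normal(10.0, 0.5, (20, 3))])
X[::3, 0] = np.nan     # delete one coordinate in every third record
X[1::3, 1] = np.nan
labels, centers = kmeans_missing(X, 2, centers=np.array([[0.0, 0.0, 0.0],
                                                         [10.0, 10.0, 10.0]]))
```

With all records complete, the loop reduces to ordinary Lloyd iterations, mirroring the property that k_m-means reduces to k-means on complete data. Note the simplification: ignoring missing coordinates in the distance slightly favors records with more missing entries, which a full treatment would correct.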