59 research outputs found

    Acceleration Methods for MRI

    Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques are parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (called the B1-Based, Adaptive Restart, Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution extends the BARISTA ideas to non-Cartesian trajectories and also yields a 2-3 times acceleration over previous methods. The third contribution is a new model for functional MRI that enables a factor of 3-4 acceleration in effective temporal resolution for functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network.
    PhD, Biomedical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
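
    For readers unfamiliar with this algorithm family, the sketch below illustrates the adaptive-restart idea in its simplest form: FISTA-style momentum applied to a generic l1-regularized least-squares problem, with the momentum reset whenever the cost increases. The encoding matrix A, data y, weight lam, and Lipschitz estimate L are placeholder inputs, and BARISTA's B1-based (coil-sensitivity-based) diagonal majorizer is not reproduced here; this is an assumption-laden illustration, not the dissertation's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # entrywise prox of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_with_restart(A, y, lam, L, n_iter=200):
    """FISTA with function-value adaptive restart for
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (generic sketch, not BARISTA)."""
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    prev_cost = np.inf
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                       # gradient of data term at z
        x_new = soft_threshold(z - grad / L, lam / L)  # proximal gradient step
        cost = 0.5 * np.linalg.norm(A @ x_new - y) ** 2 + lam * np.abs(x_new).sum()
        if cost > prev_cost:                           # adaptive restart: reset momentum
            t = 1.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum step
        x, t, prev_cost = x_new, t_new, cost
    return x
```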

    Restoration of images based on subspace optimization accelerating augmented Lagrangian approach

    We propose a new fast algorithm for solving a TV-based image restoration problem. Our approach is based on merging subspace optimization methods into an augmented Lagrangian method. The proposed algorithm can be seen as a variant of the ALM (Augmented Lagrangian Method), and its convergence properties are analyzed from a DRS (Douglas–Rachford splitting) viewpoint. Experiments on a set of image restoration benchmark problems show that the proposed algorithm is a strong contender among current state-of-the-art methods.
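
    As a rough illustration of the augmented Lagrangian structure referred to above (without the subspace-optimization acceleration that is the paper's actual contribution), the following sketch applies a standard ALM/ADMM splitting to a 1-D anisotropic TV denoising problem. The difference operator D, penalty rho, and weight lam are assumed for the example.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tv_denoise_admm(y, lam, rho=1.0, n_iter=200):
    """ADMM / augmented Lagrangian sketch for min_x 0.5*||x - y||^2 + lam*||D x||_1
    with D the 1-D first-difference operator (illustration only)."""
    n = y.size
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # (n-1) x n first differences
    H = np.eye(n) + rho * (D.T @ D)                    # x-subproblem system matrix
    z = np.zeros(n - 1)                                # split variable z ~ D x
    u = np.zeros(n - 1)                                # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(H, y + rho * D.T @ (z - u))   # quadratic x-update
        z = soft_threshold(D @ x + u, lam / rho)          # prox of (lam/rho)*||.||_1
        u = u + D @ x - z                                 # multiplier (dual) update
    return x
```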

    Efficient Model-Based Reconstruction for Dynamic MRI

    Dynamic magnetic resonance imaging (MRI) has important clinical and neuroscience applications (e.g., cardiac disease diagnosis, neurological behavior studies). It captures an object in motion by acquiring data across time, then reconstructing a sequence of images from them. This dissertation considers efficient dynamic MRI reconstruction using handcrafted models, to achieve fast imaging with high spatial and temporal resolution. Our modeling framework accounts for the data acquisition process, image properties, and artifact correction. The reconstruction model, expressed as a large-scale inverse problem, requires optimization algorithms to solve, and we consider efficient implementations that exploit the underlying problem structure.

    In the context of dynamic MRI reconstruction, we investigate efficient updates in two frameworks of algorithms for solving a nonsmooth composite convex optimization problem for the low-rank plus sparse (L+S) model. In the proximal gradient framework, current algorithms for the L+S model involve the classical iterative soft thresholding algorithm (ISTA); we consider two accelerated alternatives, one based on the fast iterative shrinkage-thresholding algorithm (FISTA) and the other on the recent proximal optimized gradient method (POGM). In the augmented Lagrangian (AL) framework, we propose an efficient variable splitting scheme based on the form of the data acquisition operator, leading to simpler computation than the conjugate gradient (CG) approach required by existing AL methods. Numerical results suggest faster convergence of our efficient implementations in both frameworks, with POGM providing the fastest convergence overall and the practical benefit of being free of algorithm tuning parameters.

    In the context of magnetic field inhomogeneity correction, we present an efficient algorithm for a regularized field inhomogeneity estimation problem. Most existing minimization techniques are computationally or memory intensive for 3D datasets, and are designed for single-coil MRI. We consider 3D MRI with optional consideration of coil sensitivity and a generalized expression that addresses both multi-echo field map estimation and water-fat imaging. Our efficient algorithm uses a preconditioned nonlinear conjugate gradient method based on an incomplete Cholesky factorization of the Hessian of the cost function, along with a monotonic line search. Numerical experiments show the computational advantage of the proposed algorithm over state-of-the-art methods with similar memory requirements.

    In the context of task-based functional MRI (fMRI) reconstruction, we introduce a space-time model that represents an fMRI timeseries as a sum of task-correlated signal and non-task background. Our model consists of a spatiotemporal decomposition based on assumptions about the activation waveform shape, with spatial and temporal smoothness regularization on the magnitude and phase of the timeseries. Compared with two contemporary task fMRI decomposition models, our proposed model yields better timeseries and activation maps on simulated and human subject fMRI datasets with multiple tasks.

    The above examples are part of a larger framework for model-based dynamic MRI reconstruction. This dissertation concludes by presenting a general framework with flexibility in model assumptions and artifact compensation options (e.g., field inhomogeneity, head motion), and by proposing future work on both the framework and its connection to data acquisition.
    PhD, Applied and Interdisciplinary Mathematics. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168081/1/yilinlin_1.pd
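
    To make the L+S discussion concrete, here is a hedged sketch of the baseline proximal gradient (ISTA-style) loop for a low-rank plus sparse model, the kind of iteration the dissertation accelerates with FISTA and POGM. The dense encoding matrix E, data d, matrix shape, regularization weights, and step size are placeholder inputs, and the temporal sparsifying transform is taken as the identity for simplicity; this is not the dissertation's implementation.

```python
import numpy as np

def svt(X, t):
    """Singular value thresholding: prox of t*||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def lps_ista(E, d, shape, lam_L, lam_S, step, n_iter=100):
    """ISTA-style proximal gradient for the L+S model
    min_{L,S} 0.5*||E(L+S) - d||^2 + lam_L*||L||_* + lam_S*||S||_1
    (baseline sketch; FISTA/POGM acceleration omitted)."""
    L = np.zeros(shape)                      # low-rank component (space x time)
    S = np.zeros(shape)                      # sparse component
    for _ in range(n_iter):
        r = E @ (L + S).ravel() - d          # data residual
        g = (E.T @ r).reshape(shape)         # gradient w.r.t. both L and S
        L = svt(L - step * g, step * lam_L)  # nuclear-norm prox step
        S = soft_threshold(S - step * g, step * lam_S)  # l1 prox step
    return L, S
```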

    Local monotone operator learning using non-monotone operators: MnM-MOL

    The recovery of magnetic resonance (MR) images from undersampled measurements is a key problem that has seen extensive research in recent years. Unrolled approaches, which rely on end-to-end training of convolutional neural network (CNN) blocks within iterative reconstruction algorithms, offer state-of-the-art performance. These algorithms require a large amount of memory during training, making them difficult to employ in high-dimensional applications. Deep equilibrium (DEQ) models and the recent monotone operator learning (MOL) approach were introduced to eliminate the need for unrolling, thus reducing the memory demand during training. Both approaches require a Lipschitz constraint on the network to ensure that the forward and backpropagation iterations converge. Unfortunately, the constraint often results in reduced performance compared to unrolled methods. The main focus of this work is to relax the constraint on the CNN block in two different ways. Inspired by convex-non-convex regularization strategies, we now impose the monotone constraint on the sum of the gradient of the data term and the CNN block, rather than constrain the CNN itself to be a monotone operator. This approach enables the CNN to learn possibly non-monotone score functions, which can translate to improved performance. In addition, we only restrict the operator to be monotone in a local neighborhood around the image manifold. Our theoretical results show that the proposed algorithm is guaranteed to converge to the fixed point and that the solution is robust to input perturbations, provided that it is initialized close to the true solution. Our empirical results show that the relaxed constraints translate to improved performance and that the approach enjoys robustness to input perturbations similar to MOL.
    Comment: 10 pages, 7 figures
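
    Stated as a formula (with notation assumed here rather than taken from the paper), the relaxed constraint asks that the sum of the data-term gradient \(\nabla f\) and the learned CNN block \(\mathcal{H}_\theta\) be monotone only on a neighborhood \(\mathcal{N}(\mathcal{M})\) of the image manifold:

    \[
    \big\langle (\nabla f + \mathcal{H}_\theta)(x) - (\nabla f + \mathcal{H}_\theta)(y),\; x - y \big\rangle \;\ge\; m\,\lVert x - y \rVert^2
    \quad \text{for all } x, y \in \mathcal{N}(\mathcal{M}),\ m > 0,
    \]

    so the CNN block itself may learn a non-monotone score function while, per the abstract, fixed-point convergence and robustness to perturbations still hold for initializations close to the true solution.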