
    X-ray CT Image Reconstruction on Highly-Parallel Architectures.

    Model-based image reconstruction (MBIR) methods for X-ray CT use accurate models of the CT acquisition process, the statistics of the noisy measurements, and noise-reducing regularization to produce potentially higher quality images than conventional methods, even at reduced X-ray doses. They do this by minimizing a statistically motivated high-dimensional cost function; the high computational cost of numerically minimizing this function has prevented MBIR methods from reaching ubiquity in the clinic. Modern highly-parallel hardware like graphics processing units (GPUs) may offer the computational resources to solve these reconstruction problems quickly, but simply "translating" existing algorithms designed for conventional processors to the GPU may not fully exploit the hardware's capabilities. This thesis proposes GPU-specialized image denoising and image reconstruction algorithms. The proposed image denoising algorithm uses group coordinate descent with carefully structured groups. The algorithm converges very rapidly: in one experiment, it denoises a 65 megapixel image in about 1.5 seconds, while the popular Chambolle-Pock primal-dual algorithm running on the same hardware takes over a minute to reach the same level of accuracy. For X-ray CT reconstruction, this thesis uses duality and group coordinate ascent to propose an alternative to the popular ordered subsets (OS) method. Like OS, the proposed method can use a subset of the data to update the image; unlike OS, the proposed method is convergent. In one helical CT reconstruction experiment, an implementation of the proposed algorithm using one GPU converges more quickly than a state-of-the-art algorithm using four GPUs. Using four GPUs, the proposed algorithm reaches near convergence on a wide-cone axial reconstruction problem with over 220 million voxels in only 11 minutes.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113551/1/mcgaffin_1.pd
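
    As a toy illustration of the group coordinate descent idea, the following NumPy sketch denoises an image under a quadratic smoothness penalty using a checkerboard grouping: pixels of one color have no same-color neighbors, so every pixel in a group can be updated exactly and in parallel. The quadratic penalty, periodic boundaries, and iteration count are illustrative assumptions; the thesis's GPU denoiser uses its own group structure and regularizer.

```python
import numpy as np

def gcd_denoise(y, beta=1.0, n_iters=50):
    """Checkerboard group coordinate descent for the cost
    0.5*||x - y||^2 + 0.5*beta * sum of squared differences between
    4-connected neighbors (periodic boundaries for brevity)."""
    x = np.asarray(y, dtype=float).copy()
    ii, jj = np.indices(x.shape)
    groups = [((ii + jj) % 2 == c) for c in (0, 1)]  # "red"/"black" pixels
    for _ in range(n_iters):
        for m in groups:
            nbr_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                       np.roll(x, 1, 1) + np.roll(x, -1, 1))
            # exact 1D minimizer for every pixel in the group, in parallel
            x[m] = (y[m] + beta * nbr_sum[m]) / (1.0 + 4.0 * beta)
    return x
```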

    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they must be estimated online from the observed signals. For batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g., a sparsity-promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data fidelity term corresponds to a least squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm, associated with a memory gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
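
    A minimal sketch of the online MM subspace idea, under simplifying assumptions: a quadratic (ridge) penalty so the majorizer is exact, running averages standing in for the unknown second-order statistics, and a memory-gradient subspace spanned by the current gradient and the previous direction. Names like `online_mm_memory_gradient` and the parameter `mu` are illustrative, not the paper's notation.

```python
import numpy as np

def online_mm_memory_gradient(stream, dim, mu=1e-3):
    """Online penalized least squares via memory-gradient subspace steps.

    `stream` yields (A_t, y_t) mini-batches. R and r are running
    estimates of E[A'A] and E[A'y]; each iteration exactly minimizes
    the current criterion 0.5*x'Rx - r'x + 0.5*mu*||x||^2 over the
    subspace span{gradient, previous direction}."""
    x = np.zeros(dim)
    d_prev = None
    R, r, n = np.zeros((dim, dim)), np.zeros(dim), 0
    for A_t, y_t in stream:
        n += 1
        R += (A_t.T @ A_t - R) / n        # running average of A'A
        r += (A_t.T @ y_t - r) / n        # running average of A'y
        H = R + mu * np.eye(dim)          # curvature of current criterion
        g = H @ x - r                     # gradient at current iterate
        D = g[:, None] if d_prev is None else np.column_stack([g, d_prev])
        # exact subspace minimizer: u = -(D'HD)^+ D'g
        u = np.linalg.lstsq(D.T @ H @ D, -D.T @ g, rcond=None)[0]
        d_prev = D @ u                    # memory-gradient direction
        x = x + d_prev
    return x
```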

    Acceleration Methods for MRI

    Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques involve parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (the B1-based Adaptive Restart Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution is the extension of BARISTA ideas to non-Cartesian trajectories, which also yields a 2-3 times acceleration over previous methods. The third contribution is the development of a new model for functional MRI that enables a 3-4 times improvement in effective temporal resolution in functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network.
    PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
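
    BARISTA builds on FISTA-style proximal gradient iterations with adaptive restart; a generic sketch of that backbone for the problem 0.5*||Ax - y||^2 + lam*||x||_1 follows, using a scalar Lipschitz constant where BARISTA uses a B1-map-based diagonal majorizer. The restart test is the standard gradient-based rule; everything here is a simplified stand-in, not the dissertation's implementation.

```python
import numpy as np

def fista_adaptive_restart(A, y, lam, n_iters=200):
    """FISTA with gradient-based adaptive restart for
    0.5*||Ax - y||^2 + lam*||x||_1 (scalar majorizer for simplicity)."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of A'A
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ z - y)
        x_new = soft(z - grad / L, lam / L)      # proximal gradient step
        if grad @ (x_new - x) > 0:               # momentum points uphill:
            t, z = 1.0, x_new                    # restart the momentum
        else:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x
```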

    Accelerated Statistical Image Reconstruction Algorithms and Simplified Cost Functions for X-ray Computed Tomography.

    Statistical image reconstruction methods are poised to replace traditional methods like filtered back-projection (FBP) in commercial X-ray computed tomography (CT) scanners. Statistical methods offer many advantages over FBP, including incorporation of physical effects and physical constraints, modeling of complex imaging geometries, and imaging at lower X-ray doses. However, the use of statistical methods is limited by several practical problems. This thesis proposes methods to improve four aspects of statistical methods: reconstruction time, beam hardening, non-negativity constraints, and organ motion. To reduce the reconstruction time, several novel iterative algorithms adapted to multi-core computing are proposed, including a hybrid ordered subsets (OS) / iterative coordinate descent (ICD) approach. This approach reduces reconstruction time and also makes the ICD algorithm robust to the initial guess image. Statistical methods have accounted for beam hardening by using more information than traditional FBP-based methods like the Joseph-Spital (JS) method need. This thesis proposes a statistical method that uses exactly the same beam hardening information as the JS method while suppressing beam hardening artifacts. Directly imposing non-negativity constraints can increase the computation time of algorithms such as the preconditioned conjugate gradient (PCG) method. This thesis proposes a modification of the penalized-likelihood cost function for monoenergetic transmission tomography, and a corresponding PCG algorithm, that reduce reconstruction time when enforcing non-negativity. Organ motion during a scan causes image artifacts, and in some cases these artifacts are more apparent when standard statistical methods are used. A preliminary simulation study of a new approach to remove motion artifacts is presented; the distinguishing feature of this approach is that it requires no new information from the scanner. The target applications of this research effort are 3-D volume reconstructions for axial cone-beam and helical cone-beam scanning geometries of multislice CT (MSCT) scanners.
    Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60749/1/someshs_1.pd
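
    The ICD half of the hybrid approach updates one voxel at a time with an exact 1D minimization; for a weighted least-squares cost the update has a closed form, and the non-negativity constraint reduces to a per-coordinate clip. A dense-matrix sketch of one ICD pass is below; the matrix A stands in for the CT system model, and all names are illustrative.

```python
import numpy as np

def icd_pass(A, w, y, x):
    """One ICD sweep for 0.5*(y - Ax)' diag(w) (y - Ax) with x >= 0.

    Each coordinate update is the exact 1D minimizer, clipped at zero
    (optimal per-coordinate for a quadratic); the residual is updated
    incrementally instead of recomputing y - Ax."""
    x = np.asarray(x, dtype=float).copy()
    r = y - A @ x                               # running residual
    for j in range(x.size):
        a_j = A[:, j]
        denom = a_j @ (w * a_j)                 # 1D curvature
        if denom == 0.0:
            continue
        x_new = max(x[j] + (a_j @ (w * r)) / denom, 0.0)
        r -= a_j * (x_new - x[j])               # keep residual consistent
        x[j] = x_new
    return x
```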

    Model-based X-ray CT Image and Light Field Reconstruction Using Variable Splitting Methods.

    Model-based image reconstruction (MBIR) is a powerful technique for solving ill-posed inverse problems. Compared with direct methods, it can provide better estimates from noisy measurements and from incomplete data, at the cost of much longer computation time. In this work, we focus on accelerating and applying MBIR to reconstruction problems, including X-ray computed tomography (CT) image reconstruction and light field reconstruction, using variable splitting based on augmented Lagrangian (AL) methods. For X-ray CT image reconstruction, we combine the AL method with ordered subsets (OS), a well-known technique in the medical imaging literature for accelerating tomographic reconstruction, by considering a linearized variant of the AL method, and propose a fast splitting-based ordered-subset algorithm, OS-LALM, for solving X-ray CT image reconstruction problems with a penalized weighted least-squares (PWLS) criterion. Practical issues such as the non-trivial parameter selection of AL methods and the considerable memory overhead of the finite-difference image variable splitting are carefully studied, and several variants of the proposed algorithm are investigated for solving practical model-based X-ray CT image reconstruction problems. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and greatly reduces the noise-like OS artifacts in the reconstructed image when many subsets are used for OS acceleration. For light field reconstruction, we decompose the camera imaging process into a linear convolution and a non-linear slicing operation for faster forward projection, and propose to reconstruct the light field from a sequence of photos taken with different focus settings, i.e., a focal stack, using the alternating direction method of multipliers (ADMM). To improve the quality of the reconstructed light field, we also propose a signal-independent sparsifying transform that exploits the elongated structure of light fields. Flatland simulation results show that our proposed sparse light field prior produces high-resolution light fields with fine details compared with other existing sparse priors for natural images.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108981/1/hungnien_1.pd
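
    The variable-splitting structure that underlies both applications can be shown on the generic problem 0.5*||Ax - y||^2 + lam*||Dx||_1 with the split z = Dx. The dense-matrix ADMM sketch below solves the x-update exactly, which is only practical for small problems; OS-LALM exists precisely to avoid that inner solve, so treat this as a structural illustration with assumed names and parameters (rho, n_iters).

```python
import numpy as np

def admm_split(A, D, y, lam, rho=1.0, n_iters=100):
    """ADMM for 0.5*||Ax - y||^2 + lam*||Dx||_1 via the split z = Dx."""
    x = np.zeros(A.shape[1])
    z = np.zeros(D.shape[0]); u = np.zeros_like(z)   # split var + scaled dual
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    H = A.T @ A + rho * (D.T @ D)        # x-update system (small problems only)
    b = A.T @ y
    for _ in range(n_iters):
        x = np.linalg.solve(H, b + rho * D.T @ (z - u))  # exact x-update
        z = soft(D @ x + u, lam / rho)   # prox of the l1 term
        u += D @ x - z                   # scaled dual ascent
    return x
```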

    Convolutional Analysis Operator Learning: Acceleration and Convergence

    Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets, particularly with multi-layered structures such as convolutional neural networks, or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches, and thus overcomes the memory problems, particularly with careful algorithmic design; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer from the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computed tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
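
    Two of CAOL's building blocks have simple closed forms that a sketch can show: the exact proximal map of an l0 sparsity penalty (hard thresholding, used for the sparse codes) and the Frobenius-norm projection onto the tight-frame filter constraint {D : DD' = c*I} via an SVD. Function names and the scaling c are assumptions; BPEG-M wraps such blocks in extrapolated, majorized updates.

```python
import numpy as np

def hard_threshold(z, alpha):
    """Exact prox of alpha*||z||_0: keep entries with |z| > sqrt(2*alpha)."""
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * alpha)] = 0.0
    return out

def project_tight_frame(D, c=1.0):
    """Closest (Frobenius) K x R filter matrix satisfying D D' = c*I.

    From the SVD D = U S V', the projection is sqrt(c) * U V'
    (assumes K <= R so the tight-frame constraint is feasible)."""
    U, _, Vt = np.linalg.svd(D, full_matrices=False)
    return np.sqrt(c) * (U @ Vt)
```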

    Accelerated Optimization Algorithms for Statistical 3D X-ray Computed Tomography Image Reconstruction.

    X-ray computed tomography (CT) has been widely celebrated for its ability to visualize patient anatomy, but increasing radiation exposure to patients is a concern. Statistical image reconstruction algorithms in X-ray CT can provide improved image quality at reduced dose levels in contrast to conventional filtered back-projection (FBP) methods. However, the statistical approach requires substantial computation time, so this dissertation focuses on developing fast iterative algorithms for statistical reconstruction. Ordered subsets (OS) methods are used widely in tomography problems because they reduce the computational cost by using only a subset of the measurement data per iteration, and they are already used in commercial PET and SPECT products. However, OS methods still require too long a reconstruction time in X-ray CT to be used routinely for every clinical CT scan. This dissertation proposes two main approaches for accelerating OS algorithms: new optimization transfer approaches, and combination with momentum algorithms. First, the separable quadratic surrogates (SQS) method, a widely used optimization transfer method within OS algorithms, is accelerated in three different ways; among them, a nonuniform (NU) SQS method that encourages larger step sizes for the voxels expected to change the most provides substantial acceleration. Second, combining OS methods with momentum approaches (OS-momentum), in a way that reuses previous updates at almost negligible added computation, yields a very fast convergence rate. This version builds on Nesterov's celebrated momentum methods. Because OS-momentum algorithms sometimes encounter instability, a diminishing step size rule is adopted that improves stability while preserving the fast convergence rate. To further accelerate OS-momentum algorithms, this dissertation proposes novel momentum methods that are twice as fast yet have remarkably simple implementations comparable to Nesterov's methods. In addition to OS-type algorithms, a variant of the block coordinate descent (BCD) algorithm called Axial BCD (ABCD) is investigated, which is specifically designed for 3D CT geometry. Simulated and real patient 3D CT scans are used to examine the acceleration of the proposed algorithms.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/109007/1/kimdongh_1.pd
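
    A sketch of the OS-momentum structure: an SQS diagonal majorizer, subset gradients scaled up to approximate the full gradient, and Nesterov-style extrapolation between subset updates. Dense matrices stand in for the CT projector, the plain combination is heuristic without the relaxation the dissertation adds, and all names are illustrative.

```python
import numpy as np

def os_sqs_momentum(A, w, y, x0, n_subsets=8, n_epochs=10):
    """OS-SQS with Nesterov momentum for 0.5*(y - Ax)' diag(w) (y - Ax).

    d is the classic SQS diagonal A' diag(w) A 1 (valid for A >= 0, as
    in CT); each subset gradient is scaled by the number of subsets."""
    M, N = A.shape
    d = A.T @ (w * (A @ np.ones(N))) + 1e-12     # SQS diagonal majorizer
    subsets = np.array_split(np.arange(M), n_subsets)
    x = np.asarray(x0, dtype=float).copy(); z = x.copy(); t = 1.0
    for _ in range(n_epochs):
        for s in subsets:
            As = A[s]
            g = n_subsets * (As.T @ (w[s] * (As @ z - y[s])))
            x_new = np.maximum(z - g / d, 0.0)   # SQS step + nonnegativity
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation
            x, t = x_new, t_new
    return x
```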