798 research outputs found

    Statistical X-Ray-Computed Tomography Image Reconstruction with Beam-Hardening Correction

    This paper describes two statistical iterative reconstruction methods for X-ray CT. The first method assumes a mono-energetic model for X-ray attenuation. We approximate the transmission Poisson likelihood by a quadratic cost function and exploit its convexity to derive a separable quadratic surrogate function that is easily minimized using parallelizable algorithms. Ordered subsets are used to accelerate convergence. We apply this mono-energetic algorithm (with edge-preserving regularization) to simulated thorax X-ray CT scans. A few iterations produce reconstructed images with lower noise than conventional FBP images at equivalent resolutions. The second method generalizes the physical model and accounts for the poly-energetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume the object consists of a given number of nonoverlapping tissue types. The attenuation coefficient of each tissue is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown densities in each voxel. Applying this method to simulated X-ray CT measurements of a phantom containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85939/1/Fessler165.pd
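As a concrete illustration of the first method, an ordered-subsets pass with a separable quadratic surrogate for the mono-energetic Poisson transmission model might look like the sketch below. This is my own simplification, not the paper's code: the system matrix is dense, the regularizer is omitted, and all names are invented.

```python
import numpy as np

def os_sps_iteration(mu, A, y, b, n_subsets):
    """One ordered-subsets pass of a separable-quadratic-surrogate update for
    the mono-energetic Poisson transmission model (illustrative sketch only;
    edge-preserving regularization from the paper is omitted).
    mu: current attenuation image, shape (n_voxels,)
    A : system matrix, shape (n_rays, n_voxels), dense here for simplicity
    y : measured transmission counts; b : blank-scan counts
    """
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for idx in subsets:
        Am, ym, bm = A[idx], y[idx], b[idx]
        ybar = bm * np.exp(-(Am @ mu))        # predicted mean counts
        grad = Am.T @ (ym - ybar)             # gradient of the neg. log-likelihood
        # separable surrogate curvature: d_j = sum_i a_ij * (sum_k a_ik) * ybar_i
        curv = Am.T @ (Am.sum(axis=1) * ybar) + 1e-12
        mu = np.maximum(mu - grad / curv, 0)  # parallelizable voxel-wise update
    return mu
```

Because the surrogate is separable, every voxel update in the last line is independent, which is what makes the algorithm parallelizable.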

    Statistical Image Reconstruction for Polyenergetic X-Ray Computed Tomography

    This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85895/1/Fessler74.pd
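The polyenergetic forward model described in this abstract can be sketched as follows: the expected counts for each ray are the spectrum-weighted sum, over energy bins, of the exponential attenuation from each material's density line integral. The function below is my own minimal rendering of that model (names and the dense-matrix simplification are assumptions, not the paper's notation).

```python
import numpy as np

def polyenergetic_mean(A, rho, mass_atten, spectrum):
    """Expected detector counts under the polyenergetic model sketched above.
    A          : (n_rays, n_voxels) system matrix
    rho        : (n_materials, n_voxels) unknown material densities
    mass_atten : (n_materials, n_energies) known mass attenuation coefficients
    spectrum   : (n_energies,) source intensity in each energy bin
    """
    line_integrals = rho @ A.T             # (n_materials, n_rays) density line integrals
    atten = mass_atten.T @ line_integrals  # (n_energies, n_rays) total attenuation
    return spectrum @ np.exp(-atten)       # (n_rays,) mean counts, summed over energy
```

With a single energy bin this reduces to the usual mono-energetic Beer's-law model, which is a handy sanity check.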

    Convergent Incremental Optimization Transfer Algorithms: Application to Tomography

    No convergent ordered-subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography there are two known families of convergent OS algorithms: methods that use relaxation parameters, and methods based on the incremental expectation-maximization (EM) approach. This paper generalizes the incremental EM approach by introducing a general framework, "incremental optimization transfer". The proposed algorithms accelerate convergence and ensure global convergence without requiring relaxation parameters. The general optimization transfer framework allows the use of a very broad family of surrogate functions, enabling the development of new algorithms. This paper provides the first convergent OS-type algorithm for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS), which yield closed-form maximization steps. We found that fast convergence is achieved effectively by starting with an OS algorithm using a large number of subsets and then switching to the new "transmission incremental optimization transfer" (TRIOT) algorithm. Results show that TRIOT increases the PL objective faster than nonincremental ordinary SPS and even OS-SPS, yet is convergent.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85980/1/Fessler46.pd
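The incremental optimization transfer idea can be illustrated generically: keep one separable quadratic surrogate per subset, anchored at the iterate where that subset was last visited, and after refreshing one surrogate, minimize the running sum of all of them in closed form. The sketch below is a simplified, generic version of that scheme under my own naming, not the paper's TRIOT implementation.

```python
import numpy as np

def incremental_opt_transfer(x0, grads, curvs, n_iters=10):
    """Generic incremental optimization transfer with separable quadratic
    surrogates (illustrative sketch of the framework, not the paper's code).
    grads[m](x) -> gradient of sub-objective m at x
    curvs[m](x) -> separable (diagonal) surrogate curvature of sub-objective m
    """
    M = len(grads)
    x = np.array(x0, dtype=float)
    # per-subset surrogate state: anchor point, gradient, curvature
    anchors = [x.copy() for _ in range(M)]
    g = [grads[m](x) for m in range(M)]
    d = [curvs[m](x) for m in range(M)]
    num = sum(d[m] * anchors[m] - g[m] for m in range(M))
    den = sum(d)
    for _ in range(n_iters):
        for m in range(M):
            # remove subset m's stale surrogate, refresh it at the current x
            num -= d[m] * anchors[m] - g[m]
            den -= d[m]
            anchors[m], g[m], d[m] = x.copy(), grads[m](x), curvs[m](x)
            num += d[m] * anchors[m] - g[m]
            den += d[m]
            x = num / den  # minimizer of the summed quadratic surrogates
    return x
```

No relaxation parameter appears: the summed-surrogate minimization itself supplies the damping that makes the iteration stable.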

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and the associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to the straightforward implementation of the Alternating Minimization (AM) algorithm of O’Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of three NVIDIA TITAN X graphics processing unit (GPU) devices with CUDA programming tools, achieving an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU, voxel-driven multislice analytical reconstruction algorithm, Feldkamp-Davis-Kress (FDK) [2], and achieve an average overall speedup of 1382X over the baseline CPU implementation using three TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function-based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep convolutional neural network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with the proposed deep CNN-based algorithm compared to widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel, fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
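The voxel-driven parallelization strategy mentioned above assigns one thread per voxel, each interpolating its own detector sample per view. The sketch below shows that access pattern for a simple 2-D parallel-beam geometry, with numpy vectorization standing in for GPU threads; the thesis itself targets helical cone-beam geometry and FDK weighting, which are considerably more involved, and all names here are mine.

```python
import numpy as np

def voxel_driven_backprojection(sino, angles, n):
    """Voxel-driven 2-D parallel-beam backprojection, vectorized over all
    voxels (a CPU stand-in for the per-voxel GPU threads described above).
    sino: (n_angles, n_det) sinogram; detector spacing = 1 pixel, centered.
    """
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    img = np.zeros((n, n))
    n_det = sino.shape[1]
    det_center = (n_det - 1) / 2.0
    for a, theta in enumerate(angles):
        # detector coordinate of every voxel for this view
        t = X * np.cos(theta) + Y * np.sin(theta) + det_center
        i0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)  # clamp at edges
        w = t - i0
        # linear interpolation between the two nearest detector bins
        img += (1 - w) * sino[a, i0] + w * sino[a, i0 + 1]
    return img * (np.pi / len(angles))
```

Because each voxel reads the sinogram independently, there are no write conflicts, which is why the voxel-driven formulation maps so naturally onto GPU hardware.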

    X-ray CT Image Reconstruction on Highly-Parallel Architectures.

    Model-based image reconstruction (MBIR) methods for X-ray CT use accurate models of the CT acquisition process, the statistics of the noisy measurements, and noise-reducing regularization to produce potentially higher quality images than conventional methods, even at reduced X-ray doses. They do this by minimizing a statistically motivated high-dimensional cost function; the high computational cost of numerically minimizing this function has prevented MBIR methods from reaching ubiquity in the clinic. Modern highly parallel hardware like graphics processing units (GPUs) may offer the computational resources to solve these reconstruction problems quickly, but simply "translating" existing algorithms designed for conventional processors to the GPU may not fully exploit the hardware's capabilities. This thesis proposes GPU-specialized image denoising and image reconstruction algorithms. The proposed image denoising algorithm uses group coordinate descent with carefully structured groups. The algorithm converges very rapidly: in one experiment, it denoises a 65 megapixel image in about 1.5 seconds, while the popular Chambolle-Pock primal-dual algorithm running on the same hardware takes over a minute to reach the same level of accuracy. For X-ray CT reconstruction, this thesis uses duality and group coordinate ascent to propose an alternative to the popular ordered subsets (OS) method. Similar to OS, the proposed method can use a subset of the data to update the image. Unlike OS, the proposed method is convergent. In one helical CT reconstruction experiment, an implementation of the proposed algorithm using one GPU converges more quickly than a state-of-the-art algorithm using four GPUs. Using four GPUs, the proposed algorithm brings a wide-cone axial reconstruction problem with over 220 million voxels to near convergence in only 11 minutes.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113551/1/mcgaffin_1.pd
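Group coordinate descent with structured groups can be illustrated with a checkerboard (red/black) grouping: voxels of one parity have no neighbors of the same parity, so an entire group can be updated in parallel in closed form. The sketch below applies this to a simple quadratically regularized denoising cost; it is my own toy stand-in, since the thesis uses edge-preserving regularizers, its own group structure, and GPU kernels.

```python
import numpy as np

def gcd_denoise(y, beta=1.0, n_iters=50):
    """Denoising by group coordinate descent with checkerboard groups.
    Minimizes 0.5*||x - y||^2 + 0.5*beta * sum of squared 4-neighbor
    differences (borders use edge replication, a small simplification).
    """
    x = y.copy()
    H, W = y.shape
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    for _ in range(n_iters):
        for parity in (0, 1):
            mask = (ii + jj) % 2 == parity
            padded = np.pad(x, 1, mode="edge")
            # sum of the 4 neighbors for every voxel
            nsum = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                    + padded[1:-1, :-2] + padded[1:-1, 2:])
            # closed-form coordinate minimizer; the whole group updates at once
            x[mask] = (y[mask] + beta * nsum[mask]) / (1 + 4 * beta)
    return x
```

Each half of the checkerboard is one "group": within a group the coordinate updates are mutually independent, so on a GPU every voxel of that parity could be handled by its own thread.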