
    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to the straightforward implementation of the Alternating Minimization (AM) algorithm of O'Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of 3 NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based voxel-driven multislice analytical reconstruction algorithm called Feldkamp-Davis-Kress (FDK) [2] and achieve an average overall speedup of 1382X over the baseline CPU implementation by using 3 TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration.
On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with our proposed deep CNN-based algorithm compared with widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel, fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
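The projection and backprojection operators the abstract parallelizes are GPU-friendly because every output voxel can be computed independently. A minimal CPU-only NumPy sketch of a toy 2-D parallel-beam backprojection (not the dissertation's CUDA code; the geometry and all names are illustrative) makes that per-pixel independence visible:

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Toy parallel-beam backprojection: every output pixel is
    computed independently of all others, which is what makes the
    operation map well onto massively parallel GPU hardware."""
    n_det = sinogram.shape[1]
    # Pixel-center coordinates on a grid centered at the origin.
    xs = np.arange(size) - (size - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((size, size))
    for a, theta in enumerate(angles):
        # Detector coordinate hit by each pixel for this view angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += sinogram[a, idx]  # accumulate contribution of this view
    return image * np.pi / len(angles)
```

On a GPU, the vectorized accumulation over pixels maps naturally onto one thread per voxel, which is the structure a multi-GPU implementation like the one described above exploits.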

    Multi-dimensional extension of the alternating minimization algorithm in x-ray computed tomography

    X-ray computed tomography (CT) is an important and effective tool in medical and industrial imaging applications. The state-of-the-art methods to reconstruct CT images have seen great development but still face challenges. This dissertation derives novel algorithms to reduce bias and metal artifacts in a wide variety of imaging modalities and to increase performance in low-dose scenarios. The most widely available CT systems still use single-energy CT (SECT), which is good at showing the anatomic structure of the patient body. However, in SECT image reconstruction, energy-related information is lost. In applications like radiation treatment planning and dose prediction, accurate energy-related information is needed. Spectral CT has shown the potential to extract energy-related information. Dual-energy CT (DECT) is the first successful implementation of spectral CT. By using two different spectra, the energy-related information can be extracted by reconstructing basis-material images. A sinogram-based decomposition method has shown good performance in clinical applications. However, when the x-ray dose level is low, sinogram-based decomposition methods generate biased estimates, and the bias increases rapidly as the dose level decreases. The bias comes from the ill-posed statistical model in the sinogram-decomposition method. To eliminate the bias in low-dose cases, a joint statistical image reconstruction (JSIR) method using the dual-energy alternating minimization (DEAM) algorithm is proposed. By correcting the ill-posed statistical model, a relative error as high as 15% in the sinogram-based decomposition method can be reduced to less than 1% with DEAM, which is an approximately unbiased estimate. Photon counting CT (PCCT) is an emerging CT technique that can also resolve energy information. By using photon-counting detectors (PCD), PCCT keeps track of the energy of every photon received.
Though PCDs have an entirely different physical performance from the energy-integrating detectors used in DECT, the problem of biased estimation with the sinogram-decomposition method remains. Based on DEAM, a multi-energy alternating minimization (MEAM) algorithm for PCCT is proposed. In the simulation experiments, MEAM can effectively reduce bias by more than 90%. Metal artifacts have been a concern since x-ray CT came into medical imaging. When dense or metallic materials are present in the scanned object, the image may suffer severe artifacts. The auxiliary sinogram alternating minimization (ASAM) algorithm is proposed to take advantage of two major categories of methods for dealing with metal artifacts: pre-processing methods and statistical image reconstruction. A phantom experiment shows that ASAM has better metal-artifact reduction performance than current methods. A significant challenge in security imaging is that, due to the large geometry and power consumption, low photon statistics are detected; the detected photons suffer high noise and heavy artifacts. Image-domain regularized iterative reconstruction algorithms can reduce the noise but also result in biased reconstruction. A wavelet-domain penalty is introduced that does not introduce bias and can effectively eliminate streaking artifacts. By combining the image-domain and wavelet-domain penalties, the image quality can be further improved. When the wavelet penalty is used, a concern is that, unlike for the image-domain penalty, no empirical way is available to determine the penalty weight. The Laplace variational automatic relevance determination (Lap-VARD) method is proposed to reconstruct the image and make the optimal penalty weight choice at the same time.
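The sinogram-domain decomposition step that DEAM improves on can be illustrated with a linearized two-material toy model: per detector bin, two measured line integrals at two effective energies are inverted for two basis-material thicknesses. The attenuation coefficients below are made-up placeholder values, not clinical data, and this is a sketch of the generic decomposition idea, not the dissertation's algorithm:

```python
import numpy as np

# Made-up effective mass-attenuation coefficients of two basis
# materials (water, bone) at a low and a high effective energy.
MU = np.array([[0.25, 0.45],    # low-kVp spectrum:  [water, bone]
               [0.18, 0.25]])   # high-kVp spectrum: [water, bone]

def decompose(lines_low, lines_high):
    """Linearized sinogram-domain decomposition: for each detector
    bin, solve the 2x2 system MU @ a = measurements for the two
    basis-material line integrals a."""
    rhs = np.stack([lines_low, lines_high])
    return np.linalg.solve(MU, rhs)

# Forward-simulate one bin: 10 (water) + 2 (bone) equivalent lengths.
truth = np.array([10.0, 2.0])
measured = MU @ truth
water, bone = decompose(measured[0:1], measured[1:2])
```

In this noiseless, linearized setting the inversion is exact. The abstract's point is that the real, nonlinear version of this step becomes ill-posed under low-dose Poisson noise and yields biased estimates, which the joint statistical model in DEAM is designed to avoid.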

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
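A concrete instance of the splitting idea is forward-backward (proximal-gradient) splitting for the special case $\mathcal{D}(Ku) = \frac{1}{2}\|Ku - f\|^2$ with $\mathcal{R}(u) = \|u\|_1$: a gradient step on the smooth data term is alternated with the closed-form proximal map of the nonsmooth penalty. This is a minimal sketch of one of the simplest members of the algorithm family such surveys cover, not code from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (closed form for the l1 penalty)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(K, f, alpha, step, iters=200):
    """Forward-backward splitting for 0.5*||K u - f||^2 + alpha*||u||_1:
    a gradient step on the smooth data-fidelity term followed by a
    prox step on the nonsmooth regularizer."""
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ u - f)                      # gradient of data term
        u = soft_threshold(u - step * grad, step * alpha)  # prox of alpha*l1
    return u
```

The step size must satisfy step <= 1/||K||^2 for convergence. For total-variation regularization the prox no longer has a closed form, which is exactly where the operator-splitting and augmented-Lagrangian techniques surveyed above come into play.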

    Blind Ptychographic Phase Retrieval via Convergent Alternating Direction Method of Multipliers

    Ptychography has risen as a reference X-ray imaging technique: it achieves resolutions of one billionth of a meter, a macroscopic field of view, and the capability to retrieve chemical or magnetic contrast, among other features. A ptychographic reconstruction is normally formulated as a blind phase retrieval problem, where both the image (sample) and the probe (illumination) have to be recovered from phaseless measured data. In this article we address a nonlinear least squares model for the blind ptychography problem with constraints on the image and the probe, derived by maximum likelihood estimation under the Poisson noise model. We formulate a variant model that incorporates the information of phaseless measurements of the probe to eliminate possible artifacts. Next, we propose a generalized alternating direction method of multipliers designed for the proposed nonconvex models, with a convergence guarantee under mild conditions, whose subproblems can be solved by fast element-wise operations. Numerically, the proposed algorithm outperforms state-of-the-art algorithms in both speed and image quality.
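The blind alternating structure (fix the probe, update the object; fix the object, update the probe) can be sketched in a drastically simplified real-valued toy model that drops the phaseless Fourier measurements entirely. Everything below is illustrative; it shows only the alternating least-squares skeleton, not the paper's ADMM algorithm:

```python
import numpy as np

def blind_alternating(views, shifts, n, iters=50):
    """Toy blind alternating minimization: each measurement is
    views[j] = probe * obj[shifts[j] : shifts[j]+m] (elementwise,
    real-valued). Probe and object are recovered by alternating
    closed-form least-squares updates. Real ptychography replaces
    this with phaseless Fourier magnitudes and ADMM subproblems."""
    m = views.shape[1]
    probe = np.ones(m)
    obj = np.ones(n)
    for _ in range(iters):
        # Object update: each entry is a probe-weighted average of
        # all observations that cover it.
        num = np.zeros(n)
        den = np.zeros(n)
        for v, s in zip(views, shifts):
            num[s:s + m] += probe * v
            den[s:s + m] += probe ** 2
        obj = num / np.maximum(den, 1e-12)
        # Probe update: least squares with the object held fixed.
        num = np.zeros(m)
        den = np.zeros(m)
        for v, s in zip(views, shifts):
            num += obj[s:s + m] * v
            den += obj[s:s + m] ** 2
        probe = num / np.maximum(den, 1e-12)
        probe /= probe[0]  # fix the probe/object scale ambiguity
    return probe, obj
```

The normalization probe[0] = 1 fixes the inherent scale ambiguity (scaling the probe by c and the object by 1/c leaves the data unchanged); in the actual method, the nonconvex phaseless constraints are handled through ADMM subproblems solvable by element-wise operations.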