3,825 research outputs found

    Fast tomographic inspection of cylindrical objects

    This paper presents a method for improved analysis of objects with an axial symmetry using X-ray Computed Tomography (CT). Cylindrical coordinates about an axis fixed to the object form the most natural basis for checking certain characteristics of objects that contain such symmetry, as often occurs with industrial parts. The sampling grid corresponds with the object, allowing for down-sampling and hence reducing the reconstruction time, which is necessary for in-line applications and fast quality inspection. With algebraic reconstruction, it also permits the use of a pre-computed initial volume, well suited to a series of scans in which same-type objects can have different positions and orientations, as often encountered in an industrial setting. Weighted back-projection can be included as well, to improve stability when some regions are more likely to change. Building on a Cartesian grid reconstruction code, the feasibility of reusing the existing ray-tracers is checked against other research in the same field. Comment: 13 pages, 13 figures. Submitted to the Journal of Nondestructive Evaluation (https://www.springer.com/journal/10921).
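
    As a hedged illustration of the algebraic part of the approach described above, the sketch below shows a single weighted, relaxed SART-style update that starts from a precomputed template volume and uses a per-voxel weight map to confine updates to regions expected to change between scans. It is not the authors' code: the projector A (rays x voxels) on the cylindrical (r, theta, z) grid, the template volume, and the weight map are all assumed to be supplied by the caller.

        import numpy as np

        def weighted_sart_step(A, x, sinogram, voxel_weights, relax=0.5):
            """One relaxed, weighted back-projection update.

            A             : (n_rays, n_voxels) projector (dense or scipy.sparse)
                            defined on the cylindrical sampling grid
            x             : current volume estimate, flattened to n_voxels
            sinogram      : measured line integrals, length n_rays
            voxel_weights : values in [0, 1]; near 1 where the object may vary
                            between scans, near 0 where the template is trusted
            """
            residual = sinogram - A @ x                      # data mismatch per ray
            row_sums = np.asarray(A.sum(axis=1)).ravel()     # per-ray normalization
            col_sums = np.asarray(A.sum(axis=0)).ravel()     # per-voxel normalization
            bp = A.T @ (residual / np.maximum(row_sums, 1e-12))
            return x + relax * voxel_weights * bp / np.maximum(col_sums, 1e-12)

        # Usage: start from the precomputed, pre-aligned template volume and iterate,
        # e.g. x = template.ravel().copy(); x = weighted_sart_step(A, x, y, w.ravel())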

    3D Forward and Back-Projection for X-Ray CT Using Separable Footprints

    Iterative methods for 3D image reconstruction have the potential to improve image quality over conventional filtered back projection (FBP) in X-ray computed tomography (CT). However, the computational burden of 3D cone-beam forward and back-projectors is one of the greatest challenges facing practical adoption of iterative methods for X-ray CT. Moreover, projector accuracy is also important for iterative methods. This paper describes two new separable footprint (SF) projector methods that approximate the voxel footprint functions as 2D separable functions. Because of the separability of these footprint functions, calculating their integrals over a detector cell is greatly simplified and can be implemented efficiently. The SF-TR projector uses trapezoid functions in the transaxial direction and rectangular functions in the axial direction, whereas the SF-TT projector uses trapezoid functions in both directions. Simulations and experiments showed that both SF projector methods are more accurate than the distance-driven (DD) projector, which is a current state-of-the-art method in the field. The SF-TT projector is more accurate than the SF-TR projector for rays associated with large cone angles. The SF-TR projector has computation speed similar to that of the DD projector, and the SF-TT projector is about two times slower. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85876/1/Fessler5.pd
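
    As a small sketch of the separable-footprint idea (not the paper's implementation), the snippet below evaluates the exact integral of a unit-height trapezoid over a detector-cell interval and combines it with a rectangular overlap in the axial direction, which is the kind of product an SF-TR-style projector accumulates per detector cell. The breakpoints and cell extents in the example are made-up numbers.

        import numpy as np

        def trapezoid_integral(a, b, t0, t1, t2, t3):
            """Exact integral over [a, b] of a unit-height trapezoid that rises
            linearly on [t0, t1], equals 1 on [t1, t2], and falls on [t2, t3].
            Assumes t0 < t1 <= t2 < t3."""
            def antideriv(t):
                u = np.clip(t, t0, t1)                              # rising ramp
                f = 0.5 * (u - t0) ** 2 / (t1 - t0)
                f += np.clip(t, t1, t2) - t1                        # flat top
                v = np.clip(t, t2, t3)                              # falling ramp
                f += (v - t2) - 0.5 * (v - t2) ** 2 / (t3 - t2)
                return f
            return antideriv(b) - antideriv(a)

        def rect_overlap(a, b, z0, z1):
            """Integral over [a, b] of a unit-height rect supported on [z0, z1]."""
            return max(0.0, min(b, z1) - max(a, z0))

        # SF-TR-style footprint value for one detector cell (toy numbers):
        transaxial = trapezoid_integral(0.2, 1.0, t0=0.0, t1=0.3, t2=0.7, t3=1.0)
        axial = rect_overlap(0.0, 1.0, z0=0.4, z1=1.6)
        print(transaxial * axial)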

    Fast imaging in non-standard X-ray computed tomography geometries


    Fast GPU-Based Approach to Branchless Distance-Driven Projection and Back-Projection in Cone Beam CT

    Modern CT image reconstruction algorithms rely on projection and back-projection operations to refine an image estimate in iterative image reconstruction. A widely used, state-of-the-art technique is distance-driven projection and back-projection. While the distance-driven technique yields superior image quality in iterative algorithms, it is a computationally demanding process, which has a detrimental effect on the relevance of these algorithms in clinical settings. A few methods have been proposed for enhancing the distance-driven technique in order to take advantage of modern computer hardware. This study explores a two-dimensional extension of the branchless method, a technique that does not compromise image quality. The extension of the branchless method is named “pre-projection integration” because it gains a performance boost by integrating the data before the projection and back-projection operations. It was written with Nvidia’s CUDA framework and carefully designed for massively parallel graphics processing units (GPUs). The performance and the image quality of the pre-projection integration method were analyzed. Both projection and back-projection are significantly faster with pre-projection integration. The image quality was analyzed using cone-beam CT image reconstruction algorithms within Jeffrey Fessler’s Image Reconstruction Toolbox. Images produced from regularized, iterative image reconstruction algorithms using the pre-projection integration method show no significant artifacts.
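
    The snippet below is a minimal one-dimensional sketch of the pre-projection-integration idea behind the branchless distance-driven kernel: the detector row is integrated once (a cumulative sum), after which the overlap-weighted average between any pair of projected voxel boundaries reduces to two interpolated lookups and a difference, with no per-cell branching. The real method operates on two-dimensional integral images on the GPU; the numbers here are illustrative only.

        import numpy as np

        def integrate_detector(row, pitch=1.0):
            """Cumulative integral of a detector row; cum[k] = integral up to edge k."""
            return np.concatenate(([0.0], np.cumsum(row) * pitch))

        def overlap_mean(cum, edges, left, right):
            """Branch-free mean of the detector signal between the projected
            voxel boundaries [left, right], via interpolation of the integral."""
            hi = np.interp(right, edges, cum)
            lo = np.interp(left, edges, cum)
            return (hi - lo) / np.maximum(right - left, 1e-12)

        # Toy usage: a 4-cell detector row with unit pitch.
        row = np.array([1.0, 2.0, 3.0, 4.0])
        edges = np.arange(5.0)                      # detector cell edge coordinates
        cum = integrate_detector(row)
        print(overlap_mean(cum, edges, left=0.5, right=2.5))   # -> 2.0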

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to the straightforward implementation of the Alternating Minimization (AM) algorithm of O’Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of 3 NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based voxel-driven multislice analytical reconstruction algorithm called Feldkamp-Davis-Kress (FDK) [2] and achieve an average overall speedup of 1382X over the baseline CPU implementation by using 3 TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with our proposed deep CNN-based algorithm over some widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
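
    As a rough, CPU-side sketch of the work division used for the projection and backprojection operators (not the dissertation's CUDA code), the snippet below splits the view angles into contiguous chunks, hands one chunk to each worker, and sums the partial back-projections; in the actual system the chunks would go to the 3 TITAN X GPUs. The backproject(views, angles) callable is an assumed placeholder for a real per-device backprojector.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def parallel_backproject(backproject, views, angles, n_workers=3):
            """Split the views/angles into n_workers chunks and sum the partial
            back-projected volumes returned by each worker."""
            view_chunks = np.array_split(views, n_workers)
            angle_chunks = np.array_split(angles, n_workers)
            with ThreadPoolExecutor(max_workers=n_workers) as pool:
                partials = list(pool.map(backproject, view_chunks, angle_chunks))
            return sum(partials)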

    Iterative Reconstruction of Cone-Beam Micro-CT Data

    The use of x-ray computed tomography (CT) scanners has become widespread in both clinical and preclinical contexts. CT scanners can be used to noninvasively test for anatomical anomalies as well as to diagnose and monitor disease progression. However, the data acquired by a CT scanner must be reconstructed prior to use and interpretation. A reconstruction algorithm processes the data and outputs a three-dimensional image representing the x-ray attenuation properties of the scanned object. The algorithms in most widespread use today are based on filtered backprojection (FBP) methods. These algorithms are relatively fast and work well on high-quality data, but cannot easily handle data with missing projections or considerable amounts of noise. On the other hand, iterative reconstruction algorithms may offer benefits in such cases, but the computational burden associated with iterative reconstructions is prohibitive. In this work, we address this computational burden and present methods that make iterative reconstruction of high-resolution CT data possible in a reasonable amount of time. Our proposed techniques include parallelization, ordered subsets, reconstruction region restriction, and a modified version of the SIRT algorithm that reduces the overall run-time. When combining all of these techniques, we can reconstruct a 512 × 512 × 1022 image from acquired micro-CT data in less than thirty minutes.
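
    As a generic illustration of the ordered-subsets idea mentioned above (not the thesis' modified SIRT, and without its region restriction or parallelization), the sketch below cycles through subsets of rays and applies a SIRT-style normalized update per subset, assuming the projector is available as a matrix A.

        import numpy as np

        def os_sirt(A, sinogram, n_iters=10, n_subsets=8, relax=1.0, seed=0):
            """Ordered-subsets SIRT-style reconstruction with a matrix projector A."""
            n_rays, n_voxels = A.shape
            x = np.zeros(n_voxels)
            rng = np.random.default_rng(seed)
            subsets = np.array_split(rng.permutation(n_rays), n_subsets)
            for _ in range(n_iters):
                for rows in subsets:
                    As = A[rows]
                    row_norm = np.asarray(As.sum(axis=1)).ravel()   # per-ray weights
                    col_norm = np.asarray(As.sum(axis=0)).ravel()   # per-voxel weights
                    residual = (sinogram[rows] - As @ x) / np.maximum(row_norm, 1e-12)
                    x = x + relax * (As.T @ residual) / np.maximum(col_norm, 1e-12)
            return x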

    Fast Variance Predictions for 3D Cone-Beam CT with Quadratic Regularization

    Fast and accurate variance/covariance predictions are useful for analyzing the statistical characteristics of reconstructed images and may aid regularization parameter selection. The existing methods, the matrix-based method and its DFT approximations, are impractical for realistic data sizes in X-ray CT. We have previously addressed this problem in 2D fan-beam CT by proposing “analytical” approaches, the simplest of which requires computation equivalent to one backprojection and some summations. This paper extends these approaches to 3D step-and-shoot “cylindrical” cone-beam CT. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85926/1/Fessler225.pd
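
    For a sense of what the fast predictions approximate, the toy snippet below evaluates the exact matrix-based covariance of a quadratically penalized weighted least-squares estimate, cov(x) = H^{-1} A'WA H^{-1} with H = A'WA + beta*R, assuming the ray weights equal the inverse ray variances. This direct computation is exactly what the abstract calls impractical at realistic CT sizes. The projector, weights, and penalty here are random toy quantities, not from the paper.

        import numpy as np

        def pwls_covariance(A, w, R, beta):
            """Exact covariance of the quadratically penalized WLS estimate."""
            AWA = A.T @ (w[:, None] * A)       # Fisher information A' W A
            H = AWA + beta * R                 # Hessian of the penalized cost
            H_inv = np.linalg.inv(H)
            return H_inv @ AWA @ H_inv         # "sandwich" covariance

        # Toy usage: 6 rays, 4 voxels, first-difference roughness penalty.
        rng = np.random.default_rng(0)
        A = rng.random((6, 4))
        w = rng.uniform(0.5, 1.5, size=6)
        D = np.diff(np.eye(4), axis=0)         # finite-difference operator
        cov = pwls_covariance(A, w, D.T @ D, beta=0.1)
        print(np.diag(cov))                    # per-voxel variance predictions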

    Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-Ring PET Insert System

    X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their significantly high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Both of the imaging modalities investigated in this work have their own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and taking advantage of the specific symmetry found in helical CT. Some basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing reconstruction methods in PET. We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically sized datasets.
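
    As a minimal, generic sketch of the kind of regularized statistical objective described above (not the dissertation's alternating-minimization algorithm, its helical-symmetry parallelization, or its PET system model), the snippet below takes one gradient step on a penalized weighted least-squares cost with a quadratic 3D roughness penalty. The matrix projector A, ray weights w, and step size are assumed inputs.

        import numpy as np

        def roughness_grad(vol):
            """Gradient of 0.5 * sum of squared first differences along each axis."""
            g = np.zeros_like(vol)
            for axis in range(vol.ndim):
                d = np.diff(vol, axis=axis)
                lo = [slice(None)] * vol.ndim
                hi = [slice(None)] * vol.ndim
                lo[axis] = slice(0, -1)
                hi[axis] = slice(1, None)
                g[tuple(lo)] -= d                  # accumulate D'(D x)
                g[tuple(hi)] += d
            return g

        def pwls_gradient_step(A, x, sinogram, w, beta, step, shape):
            """One descent step on 0.5*||y - Ax||_W^2 + beta * roughness(x)."""
            data_grad = A.T @ (w * (A @ x - sinogram))
            reg_grad = roughness_grad(x.reshape(shape)).ravel()
            return x - step * (data_grad + beta * reg_grad)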

    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces one or more specific problems that prevent it from being employed effectively or efficiently. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For these imaging modalities, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements achieves a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we describe modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability to produce high-quality reconstructed volumetric images at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
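
    As a hedged, linearized illustration of the total-projection-length idea in the third part (not the dissertation's actual decomposition method), the snippet below solves for two basis-material path lengths from two log-attenuation measurements, adding the known total path length as a third, weighted equation so that the least-squares solution is better conditioned. The effective attenuation coefficients and measurement values are made-up numbers.

        import numpy as np

        def decompose(p_low, p_high, total_length, mu, lam=1.0):
            """Dual-energy decomposition into two material path lengths (t1, t2).

            p_low, p_high : log-attenuation measurements at the two energies
            total_length  : known total path length through the object (t1 + t2)
            mu            : 2x2 effective attenuation coefficients
                            (rows = energies, columns = basis materials)
            lam           : weight on the path-length constraint equation
            """
            A = np.vstack([mu, lam * np.ones((1, 2))])
            b = np.array([p_low, p_high, lam * total_length])
            t, *_ = np.linalg.lstsq(A, b, rcond=None)   # stabilized LS solve
            return t

        # Hypothetical water/bone-like coefficients at two energies:
        mu = np.array([[0.20, 0.50],
                       [0.15, 0.30]])
        print(decompose(p_low=2.0, p_high=1.4, total_length=7.0, mu=mu))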