
    Iterative CT reconstruction from few projections for the nondestructive post irradiation examination of nuclear fuel assemblies

    The core components (e.g. fuel assemblies, spacer grids, control rods) of nuclear reactors encounter a harsh environment of high temperature, physical stress, and tremendous levels of radiation. The integrity of these elements is crucial for the safe operation of nuclear power plants. Post Irradiation Examination (PIE) can reveal information about the integrity of the elements during normal operation and off-normal events. Computed tomography (CT) is a tool for evaluating the structural integrity of elements non-destructively. CT requires many projections acquired from different view angles, after which a mathematical algorithm is applied for reconstruction. Obtaining many projections is laborious and expensive in the nuclear industry, so reconstruction from a small number of projections is explored to achieve faster and more cost-efficient PIE. Classical reconstruction algorithms (e.g. filtered back projection) cannot offer stable reconstructions from few projections and create severe streaking artifacts. In this thesis, conventional algorithms are reviewed and new algorithms are developed for reconstructing nuclear fuel assemblies from few projections. CT reconstruction from few projections falls into two categories: sparse-view CT and limited-angle CT (tomosynthesis). Iterative reconstruction algorithms are developed for both cases within the framework of compressed sensing (CS). The performance of the algorithms is assessed using simulated projections and validated on real projections. The thesis also describes a systematic strategy for establishing the conditions of reconstruction and finds the optimal imaging parameters for reconstructing the fuel assemblies from few projections. --Abstract, page iii
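    The sparse-view methods described above pair an iterative data-consistency update with a sparsity-promoting regularizer such as total variation (TV). The thesis's own algorithms are not reproduced here; the following is a minimal numpy sketch of that generic pattern, a SIRT-style update followed by a smoothed-TV gradient step, with `A`, `b`, and all parameter values as illustrative assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation penalty on a 2-D image."""
    dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences, replicated edge
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # negative divergence of the normalised gradient field (periodic boundary for brevity)
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def sirt_tv(A, b, shape, n_iter=200, lam=0.05):
    """SIRT-style iterations with a TV gradient step for few-view data b = A x."""
    x = np.zeros(A.shape[1])
    row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)      # inverse row sums (ray lengths)
    col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)      # inverse column sums
    for _ in range(n_iter):
        x += col * (A.T @ (row * (b - A @ x)))        # SIRT data-consistency update
        x -= lam * tv_grad(x.reshape(shape)).ravel()  # TV regularisation step
        x = np.clip(x, 0, None)                       # attenuation is non-negative
    return x.reshape(shape)
```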

    Development and implementation of efficient noise suppression methods for emission computed tomography

    In PET and SPECT imaging, iterative reconstruction is now widely used due to its capability of incorporating into the reconstruction process a physics model and the Bayesian statistics of photon detection. Iterative reconstruction methods rely on regularization terms to suppress image noise and render the radiotracer distribution with good image quality. The choice of regularization method substantially affects the appearance of reconstructed images and is thus a critical aspect of the reconstruction process. Major contributions of this work include the implementation and evaluation of several new regularization methods. Previously, our group developed a preconditioned alternating projection algorithm (PAPA) to optimize the emission computed tomography (ECT) objective function with the non-differentiable total variation (TV) regularizer. The algorithm was modified to optimize the proposed reconstruction objective functions. First, two novel TV-based regularizers, high-order total variation (HOTV) and infimal convolution total variation (ICTV), were proposed as alternatives to the customary TV regularizer in SPECT reconstruction, to reduce the "staircase" artifacts produced by TV. We evaluated both proposed reconstruction methods (HOTV-PAPA and ICTV-PAPA) and compared them with TV-regularized reconstruction (TV-PAPA) and the clinical standard, Gaussian post-filtered expectation-maximization reconstruction (GPF-EM), using both Monte Carlo-simulated data and anonymized clinical data. Model-observer studies using Monte Carlo-simulated data indicate that ICTV-PAPA reconstructs images with similar or better lesion detectability than the clinical standard GPF-EM method, but at lower detected count levels. This implies that switching from GPF-EM to ICTV-PAPA can reduce patient dose while maintaining image quality for diagnostic use. Second, the ℓ1 norm of a discrete cosine transform (DCT)-induced framelet regularization was studied. We decomposed the image into high and low spatial-frequency components and then preferentially penalized the high spatial-frequency components. The DCT-induced framelet transform of a natural radiotracer distribution image is sparse; using this property, we were able to suppress image noise effectively without overly compromising spatial resolution or image contrast. Finally, the fractional norm of the first-order spatial gradient was introduced as a regularizer. We implemented the ℓ2/3 and ℓ1/2 norms to suppress image spatial variability. Due to their strong penalty on small differences between neighboring pixels, fractional-norm regularizers suffer from cartoon-like artifacts similar to those of the TV regularizer. However, when penalty weights are properly selected, fractional-norm regularizers outperform TV in terms of noise suppression and contrast recovery.
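    To make the regularizers concrete, here is a small numpy sketch of the two penalty families being compared: an isotropic TV penalty and a fractional p-norm of the first-order gradient (shown in an anisotropic discretisation for simplicity). The PAPA optimizer and the framelet transform are not reproduced; function names and the smoothing constant are illustrative.

```python
import numpy as np

def tv_penalty(x):
    """Isotropic total variation of a 2-D image."""
    dx = np.diff(x, axis=0)
    dy = np.diff(x, axis=1)
    return np.sqrt(dx[:, :-1]**2 + dy[:-1, :]**2).sum()

def fractional_penalty(x, p=2/3, eps=1e-8):
    """Fractional p-norm (p < 1) of the first-order spatial gradient.

    Small neighbour differences are penalised more strongly than under TV,
    which is why such regularizers can over-smooth into cartoon-like patches
    unless the penalty weight is chosen carefully."""
    dx = np.abs(np.diff(x, axis=0))
    dy = np.abs(np.diff(x, axis=1))
    return ((dx + eps)**p).sum() + ((dy + eps)**p).sum()
```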

    Stochastic Optimisation Methods Applied to PET Image Reconstruction

    Positron Emission Tomography (PET) is a medical imaging technique used to provide functional information about physiological processes. Statistical PET reconstruction attempts to estimate the distribution of radiotracer in the body, but this methodology is generally computationally demanding because of the use of iterative algorithms. These algorithms are often accelerated by the use of data subsets, which may result in convergence to a limit set rather than to the unique solution. Methods exist to relax the update step sizes of subset algorithms, but they introduce additional heuristic parameters that may result in extended reconstruction times. This work investigates novel methods to modify subset algorithms so that they converge to the unique solution while maintaining the acceleration benefits of subset methods. The work begins with a study of an automatic method for increasing subset sizes, called AutoSubsets. This algorithm measures the divergence between two distinct subset update directions and, if it is significant, increases the subset size for future updates. The algorithm is evaluated using both projection and list-mode data. Its use of small initial subsets benefits early reconstruction but, unfortunately, at later updates the subset size increases too early, which impedes the convergence rate. The main part of this work investigates the application of stochastic variance reduction optimisation algorithms to PET image reconstruction. These algorithms reduce the variance due to the use of subsets by incorporating previously computed subset gradients into the update direction. The algorithms are adapted for application to PET reconstruction. This study evaluates the reconstruction performance of these algorithms when applied to various 3D non-TOF PET simulated, phantom and patient data sets. The impact of a number of algorithm parameters is explored, including subset selection methodology, the number of subsets, step-size methodology and preconditioners. The results indicate that these stochastic variance reduction algorithms demonstrate superior performance after only a few epochs when compared to a standard PET reconstruction algorithm.
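    As a concrete illustration of the variance-reduction idea, here is a minimal SVRG-style loop over data subsets, assuming a user-supplied `grad_subset(x, s)` that returns the gradient of subset s's objective (a hypothetical helper, not from the thesis); the PET-specific adaptations studied in the work, such as preconditioners and step-size rules, are omitted.

```python
import numpy as np

def svrg_reconstruct(grad_subset, x0, n_subsets, n_epochs=10, step=1e-3):
    """Stochastic variance reduced gradient (SVRG) loop over data subsets.

    The full objective is assumed to be the sum of the subset objectives.
    An anchor image and its full gradient are recomputed once per epoch, and
    each subset update is corrected by the stored anchor subset gradient, so
    the update direction stays an unbiased, low-variance estimate of the
    full gradient.
    """
    x = x0.copy()
    for _ in range(n_epochs):
        anchor = x.copy()
        anchor_grads = [grad_subset(anchor, s) for s in range(n_subsets)]
        full_grad = np.sum(anchor_grads, axis=0)
        for s in np.random.permutation(n_subsets):
            direction = n_subsets * (grad_subset(x, s) - anchor_grads[s]) + full_grad
            x = np.clip(x - step * direction, 0, None)  # keep activity non-negative
    return x
```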

    Acceleration of a Regularized Reconstruction Approach in X-ray Tomography with Metal Artifact Reduction

    This thesis is concerned with X-ray tomography of peripheral vessels that have undergone angioplasty with implantation of an endovascular metal stent. We seek to detect the onset of restenosis by measuring the lumen of the imaged blood vessel. This application requires the reconstruction of high-resolution images. In addition, the presence of a metal stent causes streak artifacts that complicate the lumen measurements in images obtained with the usual algorithms, such as those implemented in clinical scanners. A regularized statistical reconstruction algorithm, hinged on maximization of the penalized conditional log-likelihood of the image, is preferable in this case. We choose a variant derived from a data-formation model that accounts for the nonlinear variation of X-ray photon attenuation with photon energy, as well as the polychromatic character of the X-ray beam. This algorithm effectively reduces the artifacts specifically caused by the metal structures. Moreover, the algorithm may be tuned to achieve a good compromise between image resolution and variance, according to the data noise level. This reconstruction method is thus known to yield images of excellent quality. However, the runtime to convergence is excessively long. The goal of this work is therefore to reduce the reconstruction runtime, through a critique of the problem formulation and the reconstruction method, and through the implementation of alternative approaches.
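    The key modelling ingredient is the polychromatic forward model. The following numpy sketch, with illustrative array shapes and names, shows how expected detector counts can be computed by attenuating each energy bin separately before summing; this per-energy exponential is exactly the nonlinearity that produces beam hardening and metal artifacts under a monochromatic assumption.

```python
import numpy as np

def expected_counts(A, mu_of_E, spectrum):
    """Expected detector counts under a polychromatic X-ray beam.

    A        : (n_rays, n_voxels) intersection-length matrix
    mu_of_E  : (n_energies, n_voxels) attenuation map at each energy bin
    spectrum : (n_energies,) photons emitted per energy bin

    Because the exponential is applied per energy bin before summing,
    the measured attenuation is no longer linear in the line integrals.
    """
    line_integrals = A @ mu_of_E.T             # (n_rays, n_energies)
    return np.exp(-line_integrals) @ spectrum  # (n_rays,)
```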

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle to the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction is their slow convergence rate and the associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to a straightforward implementation of the Alternating Minimization (AM) algorithm of O'Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of three NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU voxel-driven multislice analytical reconstruction algorithm, Feldkamp-Davis-Kress (FDK) [2], and achieve an average overall speedup of 1382X over the baseline CPU implementation using the three TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with the proposed deep CNN-based algorithm compared against widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel, fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
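    The multi-GPU scheme partitions the projection and backprojection work across devices. As a rough sketch of that data-parallel idea (emulated here with CPU threads and a dense system matrix rather than the dissertation's CUDA kernels), each worker backprojects its own block of rays and the partial images are summed:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_backprojection(A, residual, n_devices=3):
    """Backprojection with projection rays split across n_devices workers.

    Each worker computes A[block].T @ residual[block] for its own block of
    rays; the partial images are then summed. numpy releases the GIL inside
    the matrix products, so the threads genuinely run in parallel.
    """
    blocks = np.array_split(np.arange(A.shape[0]), n_devices)
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        partials = pool.map(lambda idx: A[idx].T @ residual[idx], blocks)
    return sum(partials)
```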

    Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-Ring PET Insert System

    X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their significantly higher computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical X-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Both of the imaging modalities investigated in this work have their own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and exploiting the specific symmetry found in helical CT. Basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing PET reconstruction methods. We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically sized datasets.
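    The statistical baseline that such PET reconstructions build on is the Poisson likelihood model, for which the classic unregularized ML-EM iteration is a compact reference point. A generic numpy sketch with a dense system matrix (not the insert-specific algorithm developed in the work):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization (ML-EM) for emission data.

    A : (n_bins, n_voxels) system matrix; y : (n_bins,) measured counts.
    Each iteration multiplies the current image by the backprojected ratio
    of measured to predicted counts, normalised by the sensitivity image.
    """
    sens = A.sum(axis=0)                    # sensitivity image A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        pred = np.maximum(A @ x, 1e-12)     # predicted counts, avoid divide-by-zero
        x *= (A.T @ (y / pred)) / np.maximum(sens, 1e-12)
    return x
```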

    Compressed Sensing for Few-View Multi-Pinhole Spect with Applications to Preclinical Imaging

    Single Photon Emission Computed Tomography (SPECT) can be used to identify and quantify changes in molecular and cellular targets involved in disease. A radiopharmaceutical that targets a specific metabolic function is administered to a subject, and planar projections are formed by imaging emissions at different view angles around the subject. The reconstruction task is to determine the distribution of radioactivity within the subject from the projections. We present a reconstruction approach that utilizes only a few view angles, resulting in a highly underdetermined system, which could be used for dynamic imaging applications designed to quantify physiologic processes altered with disease. Our approach to solving the underdetermined problem incorporates a fast matrix-based multi-pinhole projection model into a primal-dual algorithm (Chambolle-Pock), tailored to perform penalized data-fidelity minimization using the reconstruction's total variation as a sparse regularizer. The resulting algorithm was implemented on a Graphics Processing Unit (GPU) and validated by solving an idealized quadratic problem. Simulated noisy data from a digital rat thorax phantom were reconstructed using a range of regularization parameters and primal-dual scale factors, which control the smoothness of the reconstruction and the step size of the iterative algorithm, respectively. The approach was characterized by evaluating data fidelity, convergence, and noise properties. The proposed approach was then applied to few-view experimental data obtained in a preclinical SPECT study. 99mTc-labeled macroaggregated albumin (MAA), which accumulates in the lung, was administered to a rat, and multi-pinhole image data were acquired and reconstructed. The results demonstrate that MAA uptake in the lungs is accurately quantified over a wide range of activity levels using as few as three view angles. In a pilot experiment using 15 and 60 view angles, we also showed that uptake of 99mTc-hexamethylpropyleneamine oxime in hyperoxia-exposed rats is higher than that in control rats, consistent with previous studies in our laboratory. Overall, these experiments demonstrate the potential utility of the proposed method for few-view three-dimensional reconstruction of SPECT data in dynamic preclinical studies.
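    The Chambolle-Pock primal-dual mechanics are easiest to see on the simpler TV-denoising subproblem min_x 0.5||x - b||^2 + lam*TV(x); the study's algorithm composes the same updates with the matrix-based multi-pinhole projector and its penalized data-fidelity term. A minimal numpy sketch with illustrative step sizes:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2-D image, zero at the far edge."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px); dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy = np.zeros_like(py); dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_pock_tv(b, lam=0.1, n_iter=300):
    """Chambolle-Pock iterations for min_x 0.5*||x - b||^2 + lam*TV(x)."""
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau*sigma*||grad||^2 <= 1
    x = b.copy(); x_bar = x.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)  # project onto dual TV ball
        px, py = px / norm, py / norm
        x_old = x
        x = (x + tau * div(px, py) + tau * b) / (1.0 + tau)   # prox of the data term
        x_bar = 2 * x - x_old                                 # over-relaxation step
    return x
```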

    Regularization Parameter Selection Methods for Ill-Posed Poisson Imaging Problems

    A common problem in imaging science is to estimate some underlying true image given noisy measurements of image intensity. When image intensity is measured by counting incident photons emitted by the object of interest, the data noise is accurately modeled by a Poisson distribution, which motivates the use of Poisson maximum likelihood estimation. When the underlying model equation is ill-posed, regularization must be employed. I will present a computational framework for solving such problems, including statistically motivated methods for choosing the regularization parameter. Numerical examples will be included.
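    One statistically motivated rule of the kind referred to above is a Poisson analogue of the discrepancy principle: twice the Kullback-Leibler data discrepancy at the true intensity is approximately chi-squared distributed with one degree of freedom per measurement, so one can pick the strongest regularization whose fit stays below that expected level. A sketch under those assumptions, with a hypothetical `reconstruct(lam)` solver (the abstract's exact selection rules may differ):

```python
import numpy as np

def select_lambda(reconstruct, A, y, lambdas):
    """Pick a regularization parameter by a Poisson discrepancy principle.

    reconstruct(lam) returns the regularized estimate for parameter lam.
    The KL discrepancy grows with lam (stronger smoothing fits the data
    less well), so we keep the largest lam whose discrepancy stays below
    the expected value n/2.
    """
    n = y.size
    best = lambdas[0]
    for lam in sorted(lambdas):
        x = reconstruct(lam)
        pred = np.maximum(A @ x, 1e-12)
        mask = y > 0
        # Kullback-Leibler (Poisson) discrepancy between y and the prediction
        disc = np.sum(pred - y) + np.sum(y[mask] * np.log(y[mask] / pred[mask]))
        if disc <= n / 2:
            best = lam
    return best
```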

    PET Reconstruction With an Anatomical MRI Prior Using Parallel Level Sets.

    The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) offers unique possibilities. In this paper we aim to exploit the high spatial resolution of MRI to enhance the reconstruction of simultaneously acquired PET data. We propose a new prior to incorporate structural side information into a maximum a posteriori reconstruction. The new prior combines the strengths of previously proposed priors for the same problem: it is very efficient in guiding the reconstruction at edges available from the side information, and it reduces locally to edge-preserving total variation in the degenerate case when no structural information is available. In addition, this prior is segmentation-free and convex, and no a priori assumptions are made about the correlation of edge directions in the PET and MRI images. We present results for a simulated brain phantom and for real data acquired by the Siemens Biograph mMR for a hardware phantom and a clinical scan. The results from simulations show that the new prior has a better trade-off between enhancing common anatomical boundaries and preserving unique features than several other priors. Moreover, it has a better mean absolute bias-to-mean standard deviation trade-off and yields reconstructions with superior relative ℓ2 error and structural similarity index. These findings are underpinned by the real data results from a hardware phantom and a clinical patient, confirming that the new prior is capable of promoting well-defined anatomical boundaries.

    This research was funded by the EPSRC (EP/K005278/1 and EP/H046410/1) and supported by the National Institute for Health Research University College London Hospitals Biomedical Research Centre. M.J.E. was supported by an IMPACT studentship funded jointly by Siemens and the UCL Faculty of Engineering Sciences. K.T. and D.A. are partially supported by EPSRC grant EP/M022587/1. This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/TMI.2016.254960
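    Loosely following the construction described above, a parallel-level-sets-type prior discounts the component of the PET gradient that is aligned with a normalised MRI gradient field, and reduces to smoothed TV where the MRI is flat. A numpy sketch with illustrative smoothing parameters, not the paper's exact weighting:

```python
import numpy as np

def pls_penalty(u, v, eta=1e-2, eps=1e-8):
    """Parallel-level-sets-type penalty on PET image u with MRI side image v.

    The MRI gradient is normalised into a direction field xi with |xi| < 1;
    the component of the PET gradient along xi is subtracted, so edges
    aligned with MRI edges are cheap while unique PET features still pay
    roughly the TV cost. Where v is flat, xi ~ 0 and the penalty reduces
    to smoothed TV of u.
    """
    def grad2d(img):
        gx = np.zeros_like(img); gx[:-1] = img[1:] - img[:-1]
        gy = np.zeros_like(img); gy[:, :-1] = img[:, 1:] - img[:, :-1]
        return gx, gy

    ux, uy = grad2d(u)
    vx, vy = grad2d(v)
    vnorm = np.sqrt(vx**2 + vy**2 + eta**2)   # eta controls edge sensitivity
    xix, xiy = vx / vnorm, vy / vnorm
    inner = ux * xix + uy * xiy               # PET gradient along MRI edges
    return np.sum(np.sqrt(ux**2 + uy**2 - inner**2 + eps))
```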