19 research outputs found

    Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-Ring PET Insert System

    X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Both of the imaging modalities investigated in this work have their own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and exploiting the specific symmetries found in helical CT. Basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing PET reconstruction methods.
We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically-sized datasets.
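The statistical approach described above builds on a Poisson model for the measured counts. A minimal sketch of the classical MLEM update for such a model is shown below on a hypothetical two-pixel, three-ray toy system; the dissertation's actual algorithm is a regularized, parallelized, fully 3D method and is far more elaborate.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction under a Poisson data model.

    A : system matrix (rays x pixels), y : measured projection counts.
    """
    x = np.ones(A.shape[1])           # uniform initial image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ x                  # forward projection of current estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative EM update
    return x

# Toy example: 2 pixels observed by 3 rays, noiseless data for illustration
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The multiplicative form keeps the image nonnegative automatically, which is one reason EM-type updates are popular for emission and transmission tomography.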

    Incorporating accurate statistical modeling in PET: reconstruction for whole-body imaging

    Doctoral thesis in Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2007. The thesis is devoted to image reconstruction in 3D whole-body PET imaging. OSEM (Ordered Subsets Expectation Maximization) is a statistical algorithm that assumes Poisson data. However, corrections for physical effects (attenuation, scattered and random coincidences) and detector efficiency remove the Poisson characteristics of these data. Fourier rebinning (FORE), which combines 3D imaging with fast 2D reconstructions, requires corrected data. Thus, whenever FORE is used, or whenever data are corrected prior to OSEM, the Poisson-like characteristics need to be restored. Restoring Poisson-like data, i.e., making the variance equal to the mean, was achieved through the use of weighted OSEM algorithms. One of them is NECOSEM, relying on the NEC weighting transformation. The distinctive feature of this algorithm is the NEC multiplicative factor, defined as the ratio between the mean and the variance. With real clinical data this is critical, since there is only one value collected for each bin: the data value itself. For simulated data, if we keep track of the values of these two statistical moments, the exact values of the NEC weights can be calculated. We have compared the performance of five different weighted algorithms (FORE+AWOSEM, FORE+NECOSEM, ANWOSEM3D, SPOSEM3D and NECOSEM3D) on the basis of tumor detectability. The comparison was done for simulated and clinical data. In the former case an analytical simulator was used. This is the ideal situation, since all the weighting factors can be exactly determined. To compare the performance of the algorithms, we used the Non-Prewhitening Matched Filter (NPWMF) numerical observer. With the knowledge obtained from the simulation study we proceeded to the reconstruction of clinical data. In that case, it was necessary to devise a strategy for estimating the NEC weighting factors.
The comparison between reconstructed images was done by a physician highly experienced in whole-body PET imaging.
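The NEC weighting transformation mentioned above has a simple algebraic core: scaling a data bin by w = mean/variance yields a variable whose mean equals its variance, the defining Poisson property. A sketch on synthetic over-dispersed data (a Gaussian stand-in for corrected PET counts, purely illustrative):

```python
import numpy as np

# Over-dispersed "corrected" data: variance (40) exceeds the mean (10),
# so the Poisson property variance == mean no longer holds.
rng = np.random.default_rng(0)
mean, var = 10.0, 40.0
y = rng.normal(mean, np.sqrt(var), size=100_000)

# NEC weight: ratio of mean to variance. Then for z = w * y,
#   E[z]   = w * mean      = mean^2 / var
#   Var(z) = w^2 * var     = mean^2 / var
# so the weighted data are Poisson-like in the second-moment sense.
w = mean / var
z = w * y
```

In practice the difficulty flagged in the abstract is exactly here: with clinical data each bin provides a single sample, so the mean and variance entering w must be estimated rather than computed exactly.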

    Geometric reconstruction methods for electron tomography

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by missing-wedge artefacts and by nonlinear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which in principle also considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire.
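The homogeneity prior exploited by discrete tomography can be illustrated on the smallest possible case: a binary (two-level) image reconstructed from just its row and column sums. The sketch below uses a greedy variant of Ryser's classical algorithm, adequate for this toy example; the thesis's algorithms handle general tilt geometries and convexity priors, which this does not attempt.

```python
import numpy as np

def greedy_binary_reconstruction(row_sums, col_sums):
    """Greedy binary-matrix reconstruction from its marginals.

    For each row, place its ones in the columns with the largest
    remaining column sums (a simplified, Ryser-style heuristic).
    """
    n = len(col_sums)
    cols = list(col_sums)
    img = np.zeros((len(row_sums), n), dtype=int)
    for i, r in enumerate(row_sums):
        order = sorted(range(n), key=lambda j: -cols[j])[:r]
        for j in order:
            img[i, j] = 1
            cols[j] -= 1
    return img

# Toy binary object and its two orthogonal projections
target = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 1]])
rec = greedy_binary_reconstruction(target.sum(axis=1), target.sum(axis=0))
# the reconstruction reproduces both measured projections exactly
```

The point of the prior is visible even here: restricting pixels to two values lets two projections pin down a structure that continuous-valued reconstruction from two angles never could.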

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to a straightforward implementation of the Alternating Minimization (AM) algorithm of O'Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of three NVIDIA TITAN X graphics processing unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based voxel-driven multislice analytical reconstruction algorithm, Feldkamp-Davis-Kress (FDK) [2], and achieve an average overall speedup of 1382X over the baseline CPU implementation by using three TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration.
On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep convolutional neural network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with our proposed deep CNN-based algorithm compared with some widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically-sized datasets.
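A recurring acceleration idea in this line of work is ordered subsets: instead of one update per pass over all projection rays, the image is updated once per subset of rays, multiplying the effective number of updates per pass. The sketch below applies the idea to a toy least-squares cost with a hand-picked step size; the dissertation's AM algorithm uses a different cost function and surrogate, so this is only the subset-splitting pattern, not the actual method.

```python
import numpy as np

def os_gradient_descent(A, y, n_subsets=4, n_passes=20, step=0.1):
    """Ordered-subsets gradient descent on 0.5 * ||A x - y||^2 (toy)."""
    x = np.zeros(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_passes):
        for s in subsets:                     # one update per data subset
            As, ys = A[s], y[s]
            grad = As.T @ (As @ x - ys)       # gradient from this subset only
            x -= step * n_subsets * grad      # scaled to approximate full gradient
    return x

# Toy system: 8 rays over 2 pixels, consistent (noiseless) data
A = np.vstack([np.eye(2)] * 4)
x_true = np.array([1.0, 2.0])
y = A @ x_true
x_hat = os_gradient_descent(A, y)
```

With consistent data every subset gradient vanishes at the solution, so the iteration converges exactly; with noisy data, ordered-subsets methods typically settle into a small limit cycle, which is the usual price for the speedup.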

    Optimization of novel developments in Positron Emission Tomography (PET) imaging

    Positron Emission Tomography (PET) is a widely used imaging modality for diagnosing patients with cancer. Recently, there have been three novel developments in PET imaging aiming to increase PET image quality and quantification. This thesis focuses on the optimization of PET image quality across these three developments. The first development is fully 3D PET data acquisition and reconstruction. 3D acquisitions are not constrained to collecting events within single 2D planes; events can span different planes. 3D acquisition provides better detection sensitivity since it can accept more events. It can also result in a lower radiation dose to the patient and shorter imaging times. With the application of 3D acquisition, a fully 3D iterative reconstruction algorithm was also developed. The aim of the first project in this thesis is to evaluate PET image and raw-data quality when this fully 3D iterative reconstruction algorithm is applied. The second development in PET imaging is time-of-flight (TOF) PET data acquisition and reconstruction. TOF imaging measures the difference between the detection times of the two annihilation photons, thus localizing the event more accurately and increasing image quality. The second project in this thesis focuses on optimizing the TOF reconstruction parameters on a newly developed TOF PET scanner. The improvement in image quality due to TOF information is then assessed using the derived optimal parameters. Finally, the effect of scan duration is evaluated to determine whether similar image quality could be obtained with TOF in less scan time than without it. The third development is the interest in building PET/magnetic resonance (MR) multi-modality scanners. MR imaging shows high soft-tissue contrast and can assess physiological processes, which cannot be achieved with PET images.
One problem in developing a PET/MR system is that it is not possible with current MR acquisition schemes to translate the MR image into an attenuation map to correct for PET attenuation. The third project in this thesis proposed and assessed an approach for the attenuation correction of PET data in potential PET/MR systems to improve PET image quality and quantification.
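The TOF principle invoked above reduces to one relation: the difference in arrival times of the two annihilation photons fixes the event's offset from the midpoint of the line of response, d = c(t1 - t2)/2. A minimal numeric sketch (illustrative values only):

```python
# Speed of light in m/s
C = 299_792_458.0

def tof_offset(t1, t2):
    """Offset (m) of the annihilation point from the midpoint of the
    line of response, given the two photon detection times (s)."""
    return C * (t1 - t2) / 2.0

# A 400 ps timing difference localizes the event to about 6 cm
# from the midpoint of the line of response:
d = tof_offset(400e-12, 0.0)
```

This is why timing resolution translates directly into image quality: halving the timing uncertainty halves the segment of the line over which each event's probability must be spread during reconstruction.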

    Respiratory motion correction techniques in positron emission tomography/computed tomography (PET/CT) imaging

    The aim of this thesis is to design, implement, and evaluate respiratory motion correction techniques that can overcome respiratory motion artifacts in PET/CT imaging. The thesis is composed of three main sections. The first section introduces a novel approach, the free-breathing amplitude gating (FBAG) technique, to correct for respiratory motion artifacts. This approach is based on sorting the acquired PET data into multiple amplitude bins, which is currently not possible on any commercial PET/CT scanner. The second section focuses on the hardware/software design of an in-house respiratory gating device that is necessary to facilitate the implementation of the FBAG technique. Currently there are no commercially available respiratory gating systems that can generate the triggers required for the FBAG technique. The third section focuses on developing a joint correction technique that can simultaneously suppress respiratory motion artifacts as well as partial volume effects (PVE), another source of image degradation in PET/CT imaging. Computer simulations, phantom studies, and patient studies are conducted to test the performance of the proposed techniques, and their results are presented in this thesis.
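The core of amplitude gating, as opposed to phase gating, is that each event is assigned to a bin by the value of the respiratory trace at its time stamp, not by its position in the breathing cycle. A sketch on a synthetic trace (the bin layout and function name are illustrative, not the thesis's implementation):

```python
import numpy as np

def amplitude_bins(trace, n_bins):
    """Assign each sample of a respiratory trace to an amplitude bin.

    Bins divide the trace's amplitude range into n_bins equal intervals;
    returns, per sample, a bin index in 0..n_bins-1.
    """
    edges = np.linspace(trace.min(), trace.max(), n_bins + 1)
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)

# Synthetic respiratory signal: 0.25 Hz breathing over 10 s
t = np.linspace(0, 10, 1000)
trace = np.sin(2 * np.pi * 0.25 * t)
bins = amplitude_bins(trace, n_bins=5)
# PET events falling in the same amplitude bin share a motion state
# and can be reconstructed together with reduced motion blur.
```

Irregular breathing is handled naturally here: a deep breath simply populates the extreme bins more, whereas phase gating would smear it across the cycle.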

    Data-driven methods for respiratory signal detection in positron emission tomography

    Positron Emission Tomography (PET) is a nuclear medicine imaging technique which allows quantitative assessment of functional processes by determining the distribution of radioactive tracers inside the patient's body. It is mainly used in oncology. Respiration during PET data acquisition of the chest leads to blurring and other artefacts in the images, lowering their quantitative accuracy. If a respiratory signal is available, these issues can be overcome by splitting the data into different motion states. In current clinical practice this signal is obtained using external devices. However, these are expensive, require prior setup and can cause patient discomfort. This thesis develops and evaluates Data-Driven (DD) techniques based on Principal Component Analysis (PCA) to generate the signal directly from the PET data. Firstly, the arbitrary relation between the sign of the PCA signal and the respiratory motion is addressed: a maximum in the signal could refer either to end-inspiration or end-expiration, possibly causing inaccurate motion correction. A new correction method is proposed and compared with two existing methods. Subsequently, the methods are extended to Time-of-Flight (TOF) PET data, proposing a data processing step prior to using PCA, in order to benefit from the increased spatial information provided by TOF. The proposed methods are then extensively tested on lower lung patient data (non-TOF and TOF). The obtained respiratory signal is compared with that of an external device and with internal motion observed with Magnetic Resonance Imaging (MRI). Lastly, to investigate the performance of PCA where respiratory motion is minimal, the methods are applied to patient and simulation data of the upper lung, showing that they could potentially be utilised for detecting respiratory-induced density variations in the upper lung.
This study shows that the presented methods could replace external devices for obtaining a respiratory signal, providing a simple and cost-effective tool for motion management in PET.
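The PCA pipeline and the sign ambiguity it creates can both be shown in a few lines: time frames of the data are stacked as rows, the first principal component's score over time serves as the respiratory signal, and the component's sign is then fixed by some convention. The convention used below (flip so that higher signal means higher total counts) is our illustrative choice, not necessarily the thesis's correction method, and the data are synthetic.

```python
import numpy as np

def pca_resp_signal(frames):
    """Extract a respiratory signal from stacked data frames (time x bins)."""
    X = frames - frames.mean(axis=0)       # centre each bin's time series
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    signal = X @ vt[0]                     # first principal component scores
    # PCA leaves the component's sign arbitrary; adopt one convention:
    # positive correlation with the total count rate.
    if np.corrcoef(signal, frames.sum(axis=1))[0, 1] < 0:
        signal = -signal
    return signal

# Synthetic frames: 50 bins modulated by a common 0.3 Hz breathing waveform
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 300)
breath = np.sin(2 * np.pi * 0.3 * t)
frames = np.outer(breath, rng.uniform(0.5, 1.5, 50)) + 5.0
sig = pca_resp_signal(frames)
```

Without the sign step, a run of the same pipeline could return the negated waveform, which is exactly the end-inspiration/end-expiration confusion the abstract describes.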

    Modeling and Development of Iterative Reconstruction Algorithms in Emerging X-ray Imaging Technologies

    Many promising X-ray-based biomedical imaging technologies have emerged over the last two decades. Five novel X-ray-based imaging technologies are discussed in this dissertation: differential phase-contrast tomography (DPCT), grating-based phase-contrast tomography (GB-PCT), spectral CT (K-edge imaging), cone-beam computed tomography (CBCT), and in-line X-ray phase-contrast (XPC) tomosynthesis. For each imaging modality, one or more specific problems that prevent it from being effectively or efficiently employed in clinical applications are discussed. Firstly, to mitigate the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods in DPCT, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction. Secondly, to improve image quality in grating-based phase-contrast tomography, we incorporate second-order statistical properties of the object-property sinograms, including correlations between them, into the formulation of an advanced multi-channel (MC) image reconstruction algorithm, which reconstructs three object properties simultaneously. We developed an advanced algorithm based on the proximal point algorithm and the augmented Lagrangian method to rapidly solve the MC reconstruction problem. Thirdly, to mitigate image artifacts that arise from reduced-view and/or noisy decomposed sinogram data in K-edge imaging, we exploited the inherent sparseness of typical K-edge objects and incorporated the statistical properties of the decomposed sinograms to formulate two penalized weighted least-squares (PWLS) problems: one with a total variation (TV) penalty, and one with a weighted sum of a TV penalty and an l1-norm penalty with a wavelet sparsifying transform. We employed a fast iterative shrinkage/thresholding algorithm (FISTA) and a splitting-based FISTA to solve these two PWLS problems.
Fourthly, to enable advanced iterative algorithms to obtain, within a few minutes, better diagnostic images and accurate patient-positioning information for CBCT in image-guided radiation therapy, two accelerated variants of FISTA for PLS-based image reconstruction are proposed. The acceleration is obtained by replacing the original gradient-descent step with a sub-problem that is solved by use of the ordered-subsets concept (OS-SART). In addition, we present efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units (GPUs). Finally, we employed our accelerated version of FISTA to deal with the incomplete (and often noisy) data inherent to in-line XPC tomosynthesis, which combines the concepts of tomosynthesis and in-line XPC imaging to exploit the advantages of both for biological imaging applications. We also investigate the depth-resolution properties of XPC tomosynthesis and demonstrate that its z-resolution is superior to that of conventional absorption-based tomosynthesis. To investigate all these proposed strategies and new algorithms across these imaging modalities, we conducted computer-simulation studies and real experimental-data studies. The proposed reconstruction methods will facilitate the clinical or preclinical translation of these emerging imaging methods.
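FISTA, the workhorse named throughout this abstract, is compact enough to sketch in full. The version below solves an l1-penalized least-squares problem (the abstract's reconstructions use penalized *weighted* least squares with TV terms, so this is the plain accelerated scheme on a simpler penalty, with a random toy problem standing in for sinogram data):

```python
import numpy as np

def fista(A, y, lam, n_iter=500):
    """FISTA for  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        u = z - grad / L                   # gradient step at the momentum point
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x

# Sparse toy problem: only coefficients 3 and 7 are nonzero
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]
y = A @ x_true
x_hat = fista(A, y, lam=0.05)
```

The momentum sequence t is what upgrades the O(1/k) rate of plain proximal gradient descent to O(1/k²), which is the same lever the proposed ordered-subsets variants pull on further.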