44 research outputs found

    Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction

    Full text link
    Training end-to-end unrolled iterative neural networks for SPECT image reconstruction requires a memory-efficient forward-backward projector for efficient backpropagation. This paper describes an open-source, high-performance Julia implementation of a SPECT forward-backward projector that supports memory-efficient backpropagation with an exact adjoint. Our Julia projector uses only ~5% of the memory of an existing Matlab-based projector. We compare unrolling a CNN-regularized expectation-maximization (EM) algorithm with end-to-end training using our Julia projector against other training methods such as gradient truncation (ignoring gradients involving the projector) and sequential training, using XCAT phantoms and virtual patient (VP) phantoms generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two different radionuclides (90Y and 177Lu) show that for 177Lu XCAT phantoms and 90Y VP phantoms, training the unrolled EM algorithm end-to-end with our Julia projector yields the best reconstruction quality compared to other training methods and OSEM, both qualitatively and quantitatively. For VP phantoms with the 177Lu radionuclide, the reconstructed images using end-to-end training are of higher quality than those using sequential training and OSEM, but are comparable with those using gradient truncation. We also find that there is a trade-off between computational cost and reconstruction accuracy for different training methods: end-to-end training has the highest accuracy because the correct gradient is used in backpropagation; sequential training yields worse reconstruction accuracy but is significantly faster and uses much less memory. Comment: submitted to IEEE TRPM
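The unrolled CNN-regularized EM scheme described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's Julia implementation: a dense random matrix stands in for the SPECT projector, and a fixed smoother stands in for the learned CNN denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: a dense random matrix stands in for the SPECT projector.
n_pix, n_det = 16, 24
A = rng.uniform(0.1, 1.0, size=(n_det, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = rng.poisson(A @ x_true).astype(float)       # noisy projection data

def em_update(x, A, y, eps=1e-12):
    """One Poisson ML-EM step: x <- x * A^T(y / Ax) / A^T 1."""
    ratio = y / np.maximum(A @ x, eps)
    return x * (A.T @ ratio) / (A.T @ np.ones_like(y))

def cnn_denoise(x):
    """Stand-in for the learned CNN regularizer (here just a fixed smoother)."""
    return 0.8 * x + 0.2 * x.mean()

def unrolled_em(y, A, n_unroll=5, beta=0.5):
    """Unrolled iterations alternating a data-fit EM step with a denoising step.
    End-to-end training backpropagates through em_update (i.e. through A);
    gradient truncation ignores those projector gradients; sequential training
    trains the denoiser separately from the reconstruction."""
    x = np.ones(A.shape[1])
    for _ in range(n_unroll):
        x = em_update(x, A, y)                        # data-consistency step
        x = (1 - beta) * x + beta * cnn_denoise(x)    # regularization step
    return x

x_hat = unrolled_em(y, A)
```

The memory question in the paper arises because naive backpropagation through `em_update` stores every intermediate projection; an exact adjoint lets the gradient be recomputed instead of stored.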

    Free Software for PET Imaging

    Get PDF

    3-D Monte Carlo-Based Scatter Compensation in Quantitative I-131 SPECT Reconstruction

    Full text link
    We have implemented highly accurate Monte Carlo-based scatter modeling (MCS) with 3-D ordered subsets expectation maximization (OSEM) reconstruction for I-131 single photon emission computed tomography (SPECT). The scatter is included in the statistical model as an additive term, and attenuation and detector response are included in the forward/backprojector. In the present implementation of MCS, a simple multiple-window-based estimate is used for the initial iterations, and in the later iterations the Monte Carlo estimate is used for several iterations before it is updated. For I-131, MCS was evaluated and compared with triple energy window (TEW) scatter compensation using simulation studies of a mathematical phantom and a clinically realistic voxel phantom. Even after just two Monte Carlo updates, excellent agreement was found between the MCS estimate and the true scatter distribution. Accuracy and noise of the reconstructed images were superior with MCS compared to TEW. However, the improvement was not large, and in some cases may not justify the large computational requirements of MCS. Furthermore, it was shown that the TEW correction could be improved for most of the targets investigated here by applying a suitably chosen scaling factor to the scatter estimate. Finally, clinical application of MCS was demonstrated by applying the method to an I-131 radioimmunotherapy (RIT) patient study. (Peer reviewed; http://deepblue.lib.umich.edu/bitstream/2027.42/85854/1/Fessler47.pd)
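The TEW estimate that MCS is compared against is simple to state: the scatter under the main window is approximated by a trapezoid spanned by two narrow flanking windows. A small numpy sketch, with hypothetical counts and window widths loosely modelled on a 20% main window around I-131's 364 keV peak with 6 keV sub-windows:

```python
import numpy as np

def tew_scatter(c_low, c_up, w_low, w_up, w_main):
    """Triple-energy-window (TEW) estimate: approximate the scatter in the
    main window by a trapezoid spanned by two narrow flanking windows."""
    return (c_low / w_low + c_up / w_up) * w_main / 2.0

# Hypothetical per-pixel counts in the three windows.
c_main = np.array([500.0, 300.0])
c_low = np.array([12.0, 8.0])
c_up = np.array([4.0, 2.0])
scatter = tew_scatter(c_low, c_up, w_low=6.0, w_up=6.0, w_main=73.0)
primary = np.maximum(c_main - scatter, 0.0)   # scatter-corrected counts
```

The scaling-factor improvement mentioned in the abstract would correspond to multiplying `scatter` by a tuned constant before subtraction.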

    Iterative Reconstruction Framework for High-Resolution X-ray CT Data

    Get PDF
    Small animal medical imaging has become an important tool for researchers as it allows noninvasive screening of animal models for pathologies as well as monitoring of disease progression and therapy response. Currently, clinical CT scanners typically use a Filtered Backprojection (FBP) based method for image reconstruction. This algorithm is fast and generally produces acceptable results, but has several drawbacks. Firstly, it is based upon line integrals, which do not accurately describe the process of X-ray attenuation. Secondly, noise in the projection data is not properly modeled with FBP. On the other hand, iterative algorithms allow the integration of more complicated system models as well as robust scatter and noise correction techniques. Unfortunately, the iterative algorithms also have much greater computational demands than their FBP counterparts. In this thesis, we develop a framework to support iterative reconstructions of high-resolution X-ray CT data. This includes exploring various system models and algorithms as well as developing techniques to manage the significant computational and system storage requirements of the iterative algorithms. Issues related to the development of this framework as well as preliminary results are presented
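As a toy illustration of the iterative alternative to FBP discussed above, a SIRT-style update (one common iterative scheme, not necessarily the one adopted in the thesis) fits in a few lines; the dense random matrix is only a stand-in for a real CT system model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy CT system: 40 ray sums of a 25-pixel image.
n_rays, n_pix = 40, 25
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))   # stand-in system matrix
x_true = rng.uniform(0.0, 1.0, size=n_pix)
y = A @ x_true                                     # noiseless projections

# SIRT update: x += C A^T R (y - A x), with R and C the inverse row-sum
# and column-sum normalizations of A.
row_sums = A.sum(axis=1)
col_sums = A.sum(axis=0)
x = np.zeros(n_pix)
for _ in range(200):
    x += (A.T @ ((y - A @ x) / row_sums)) / col_sums
    x = np.maximum(x, 0.0)                         # enforce nonnegativity
```

The computational burden the thesis addresses is visible here: each iteration needs a full forward and back projection, which at realistic resolutions makes the system matrix far too large to store densely.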

    Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography

    Get PDF
    Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, it increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts that appear in the image due to the reduced number of projections visibly degrade the image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms to suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem, and the Douglas-Rachford splitting and randomized Kaczmarz methods are used to solve the resulting optimization problem. In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, reduced-view inconsistent real ex-vivo synchrotron absorption contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are used to solve it.
A wavelet image denoising algorithm is used as a post-processing step to attenuate the unwanted staircase artifact generated by the reconstruction algorithm. Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are used to solve the resulting optimization problem. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image. It may lead to an unwanted staircase artifact when applied to noisy and textured images, so the wavelet regularization is used to attenuate the staircase artifact generated by the prior image constrained compressed sensing reconstruction algorithm. Visual and quantitative performance assessments with reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms produce fewer artifacts and smaller reconstruction errors than conventional reconstruction algorithms at the same x-ray dose
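The compressed-sensing principle underlying the thesis can be demonstrated with a much simpler solver than Douglas-Rachford splitting: a proximal gradient (ISTA) iteration on an l1-regularized least-squares problem recovers a sparse signal from far fewer measurements than unknowns. This sketch is only an illustration of the principle, not the thesis's TV/wavelet formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical undersampled system: 30 measurements of a 60-sample,
# 5-sparse signal (fewer measurements than unknowns).
m, n, k = 30, 60, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0 * rng.normal(size=k)
y = A @ x_true

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(500):            # ISTA: gradient step, then proximal step
    x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
```

Replacing the l1 penalty with a total-variation or wavelet penalty (and the proximal step accordingly) gives the kind of formulations the thesis solves with Douglas-Rachford splitting.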

    Optimization of the Parameters of the YAP-(S)PETII Scanner for SPECT Acquisition

    Get PDF
    Single Photon Emission Computed Tomography (SPECT) is a milestone among biomedical imaging techniques: it visualizes functional processes in vivo, based on the emission of gamma rays produced within the body. The most distinctive feature of SPECT compared with other imaging modalities is that it is based on the tracer principle, discovered by George Charles de Hevesy in the first decade of the twentieth century: an atom in a molecule of metabolic interest can be replaced by one of its radioactive isotopes, so that the labelled molecule can be followed through the body by detecting the emitted photons. SPECT produces images with a gamma camera, which consists of two major functional components: the collimator and the radiation detector. The collimator is a thick sheet of a heavy metal such as lead, tungsten, or gold, perforated with densely packed small holes, placed just in front of the photon detector. The radiation detector converts the gamma rays into scintillation light photons. Conventional SPECT scanners use a parallel-hole collimator: each hole defines a small solid angle, and only photons travelling along the corresponding direction can pass through to the detector. In this way we can create projection images of the radioisotope distribution. The number of photons that reach the radiation detector through the collimator holes determines the image quality in terms of signal-to-noise ratio, so collimator design is one of the crucial parts of all SPECT scanners. The main part of this dissertation investigates the performance characteristics of the YAP-(S)PETII scanner collimator and obtains collimator characteristic curves for optimization purposes.
Before starting the collimator performance investigation of the YAP-(S)PETII scanner, we first simulated it in SPECT mode with a Tc-99m point source to measure collimator and system efficiency using GATE, the Geant4 Application for Tomographic Emission. GATE is an advanced, flexible, precise, open-source Monte Carlo toolkit developed by the international OpenGATE collaboration and dedicated to numerical simulations in medical imaging. We obtained collimator and system efficiency as functions of collimator length, hole radius, and septal thickness using GATE_v4, and then compared our results with analytical formulations of efficiency and resolution. For these simulation experiments, we found that the difference between the simulated results and H. Anger's approximate geometrical collimator efficiency formula is within 20%. We then wrote a new ASCII sorter algorithm that reads the ASCII output of GATE_v4, creates a sinogram, and reconstructs it to inspect the final simulation results. Initially we used the analytical reconstruction method, filtered back projection (FBP), but it produces severely blurred images. To mitigate this and increase image quality, we tried different mathematical filters, such as ramp, Shepp-Logan, and low-pass cosine filters. From these studies we learned that GATE_v4 is not practical for measuring collimator efficiency and resolution; moreover, its output does not directly give the septal-penetration photon ratio. In light of these findings, we decided to develop a new user-friendly ray-tracing program for the optimization of low energy general purpose (LEGP) parallel-hole collimators. In addition, we evaluated the image quality and quantified the impact of high-energy contamination in I-123 isotope imaging. Due to its promising chemical characteristics, Iodine-123 is increasingly used in SPECT studies.
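The analytical formulas against which the GATE results were compared are straightforward to evaluate in Anger's approximation. The parameter values below are hypothetical LEGP-like numbers for illustration, not the actual YAP-(S)PETII geometry:

```python
def collimator_resolution(d, l_eff, b):
    """Geometric resolution (FWHM) of a parallel-hole collimator:
    R_c = d * (l_eff + b) / l_eff, with d the hole diameter, l_eff the
    effective hole length, and b the source-to-collimator distance."""
    return d * (l_eff + b) / l_eff

def collimator_efficiency(d, l_eff, t, K=0.26):
    """Anger's approximate geometric efficiency:
    g = (K * d / l_eff)^2 * (d / (d + t))^2, with t the septal thickness
    and K ~ 0.26 for hexagonal holes."""
    return (K * d / l_eff) ** 2 * (d / (d + t)) ** 2

# Hypothetical LEGP-like parameters (mm): hole diameter, effective length, septa.
d, l_eff, t = 1.5, 25.0, 0.2
r_at_10cm = collimator_resolution(d, l_eff, b=100.0)  # degrades with distance
g = collimator_efficiency(d, l_eff, t)                # independent of distance
```

The characteristic trade-off the dissertation optimizes is visible here: widening the holes or shortening the collimator raises efficiency but degrades resolution.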
The 159 keV photons are used for imaging; however, the high-energy photons introduce errors into the projection data, primarily through penetration of the collimator and scattering inside the crystal at energies close to those of the imaging photons. One way to minimize this effect is to use the double energy window (DEW) method, because it decreases the noise in the main (sensitive) energy window. Using this method, we determined the difference between simulated and experimental projection results and the scattered photon ratio (Sk) of the YAP-(S)PETII scanner for I-123 measurements. The main drawback of GATE simulations is that they are CPU-intensive. To address this problem, this dissertation presents a feasibility study of a fully Monte Carlo-based derivation of the system matrix of the YAP-(S)PETII scanner using the XtreemOS platform. To manage the lifecycle of the simulations on top of XtreemOS, we developed a set of scripts. The main purpose of our study is to integrate a distributed platform like XtreemOS to reduce the overall simulation completion time, to increase the feasibility of SPECT simulations in a research environment, and to establish an accurate and fast method for deriving the system matrix of the YAP-(S)PETII scanner with a Monte Carlo simulation approach. We also implemented the ML-EM algorithm to reconstruct our GATE simulation results and to derive the system matrix directly from the GATE output. In addition to accuracy considerations, we intend to develop a flexible matrix derivation method and a GATE output reconstruction tool
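The Monte Carlo system-matrix derivation described above amounts to simulating many photons per source voxel and histogramming which detector bin records each one. A toy numpy sketch, with hypothetical detection probabilities standing in for the GATE physics (the real tallies would be read from the GATE output):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_vox = 10, 6

# Hypothetical "true" physics: entry (j, i) is the probability that a photon
# emitted in voxel i is recorded in detector bin j.
p_true = rng.uniform(0.01, 0.1, size=(n_bins, n_vox))

# Monte Carlo estimate of the system matrix: simulate many photons per
# source voxel and tally detections, as one would histogram GATE events.
n_per_vox = 50_000
hits = rng.binomial(n_per_vox, p_true)
A_mc = hits / n_per_vox
```

The CPU cost the dissertation distributes over XtreemOS comes from the fact that the statistical error of each matrix entry shrinks only as the square root of the number of simulated photons.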

    SPECT imaging with rotating slat collimator

    Get PDF

    Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    Get PDF
    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements, and clinical studies. Accurate Monte Carlo modelling remains the gold standard, since it allows one to separate scattered and unscattered events and to compare the estimated and true unscattered components. Results: In this study, our modified version of Monte Carlo-based scatter correction (MCBSC2) provides good contrast recovery on the simulated Utah phantom, while the DEW method was clearly superior for the experimental phantom studies in terms of quantitative accuracy, at the expense of a significant deterioration of the signal-to-noise ratio. On the other hand, the immunity of statistical reconstruction-based scatter correction methods to noise in emission data makes them particularly applicable to low-count emission studies. All scatter correction methods give very good activity recovery values for the simulated 3D Hoffman brain phantom, agreeing within 3% on average.
The CVS and MCBSC1 techniques tend to overcorrect, while SRBSC undercorrects for scatter in most regions of this phantom. Conclusion: All correction methods significantly improved the image quality and contrast compared to the case where no correction is applied. Generally, the differences in the estimated scatter distributions did not have a significant impact on the final quantitative results. The DEW method showed the best compromise between ease of implementation and quantitative accuracy, but significantly deteriorates the signal-to-noise ratio
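Of the five methods, convolution-subtraction (CVS) is the easiest to sketch: scatter is modelled as a scaled, blurred copy of the current unscattered estimate and subtracted iteratively. The profile, kernel width, and scatter fraction below are hypothetical toy values:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian blur kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def cvs_correct(proj, kernel, sf, n_iter=3):
    """Iterative convolution-subtraction: estimate scatter as a scaled
    convolution of the current unscattered estimate, then subtract it."""
    unscattered = proj.astype(float).copy()
    for _ in range(n_iter):
        scatter = sf * np.convolve(unscattered, kernel, mode="same")
        unscattered = np.maximum(proj - scatter, 0.0)
    return unscattered

# Hypothetical 1-D projection profile: an object plus a broad background.
proj = np.zeros(64)
proj[24:40] = 100.0
proj += 10.0
corrected = cvs_correct(proj, gaussian_kernel(sigma=4.0, radius=12), sf=0.3)
```

The overcorrection tendency the paper reports for CVS corresponds here to choosing a scatter fraction `sf` or kernel that removes more counts than the true scatter contributes.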