
    Direct estimation of kinetic parametric images for dynamic PET.

    Dynamic positron emission tomography (PET) can monitor the spatiotemporal distribution of a radiotracer in vivo. This spatiotemporal information can be used to estimate parametric images of radiotracer kinetics that are of physiological and biochemical interest. Direct estimation of parametric images from raw projection data allows accurate noise modeling and has been shown to offer better image quality than conventional indirect methods, which first reconstruct a sequence of PET images and then perform tracer kinetic modeling pixel by pixel. Direct reconstruction of parametric images has gained increasing interest with advances in computing hardware, and many direct reconstruction algorithms have been developed for different kinetic models. In this paper we review recent progress in the development of direct reconstruction algorithms for parametric image estimation. Algorithms for linear and nonlinear kinetic models are described and their properties are discussed.
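    To make the indirect-versus-direct contrast above concrete, here is a minimal, hypothetical toy in Python. The 1D geometry, the count scale, and the choice of a Patlak (linear) kinetic model are my own illustrative assumptions and are not taken from the review; it is a sketch of the idea, not any paper's implementation.

        # Toy comparison of indirect vs. direct estimation of Patlak parameters.
        # Everything here (geometry, scale, model) is an illustrative assumption.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_pix, n_det, n_frames = 16, 24, 6
        scale = 50.0                                       # arbitrary count scale
        A = rng.uniform(0.0, 1.0, (n_det, n_pix))          # toy system (projection) matrix
        t = np.linspace(1.0, 30.0, n_frames)               # frame mid-times (min)
        Cp = np.exp(-0.1 * t) + 0.2                        # assumed plasma input function
        intCp = np.cumsum(Cp) * (t[1] - t[0])              # crude running integral of Cp

        K_true = rng.uniform(0.01, 0.05, n_pix)            # Patlak slope per pixel
        V_true = rng.uniform(0.2, 0.6, n_pix)              # Patlak intercept per pixel
        C_true = np.outer(K_true, intCp) + np.outer(V_true, Cp)   # pixel TACs
        y = rng.poisson(A @ C_true * scale)                # one noisy sinogram per frame

        # Indirect: reconstruct each frame by unregularized least squares,
        # then fit the Patlak model pixel by pixel to the reconstructed TACs.
        X = np.column_stack([intCp, Cp])
        recon = np.linalg.lstsq(A * scale, y, rcond=None)[0]
        theta_ind = np.linalg.lstsq(X, recon.T, rcond=None)[0].T   # columns: [K, V]

        # Direct: maximize the Poisson log-likelihood of the sinograms with
        # respect to the kinetic parameters (K, V) themselves.
        def neg_loglik(theta):
            K, V = theta[:n_pix], theta[n_pix:]
            lam = A @ (np.outer(K, intCp) + np.outer(V, Cp)) * scale + 1e-9
            return np.sum(lam - y * np.log(lam))

        theta0 = np.clip(np.concatenate([theta_ind[:, 0], theta_ind[:, 1]]), 0.0, None)
        res = minimize(neg_loglik, theta0, method="L-BFGS-B",
                       bounds=[(0.0, None)] * (2 * n_pix))
        K_dir = res.x[:n_pix]

        print("indirect K RMSE:", np.sqrt(np.mean((theta_ind[:, 0] - K_true) ** 2)))
        print("direct   K RMSE:", np.sqrt(np.mean((K_dir - K_true) ** 2)))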

    Covariance of Kinetic Parameter Estimators Based on Time Activity Curve Reconstructions: Preliminary Study on 1D Dynamic Imaging

    We provide approximate expressions for the covariance matrix of kinetic parameter estimators based on time activity curve (TAC) reconstructions when TACs are modeled as a linear combination of temporal basis functions such as B-splines. The approximations are useful tools for assessing and optimizing the basis functions for TACs and the temporal bins for data in terms of computation and efficiency. In this paper we analyze a 1D temporal problem for simplicity, and we consider a scenario where TACs are reconstructed by penalized-likelihood (PL) estimation incorporating temporal regularization, and kinetic parameters are obtained by maximum likelihood (ML) estimation. We derive approximate formulas for the covariance of the kinetic parameter estimators using 1) the mean and variance approximations for PL estimators in (Fessler, 1996) and 2) Cramer-Rao bounds. The approximations apply to list-mode data as well as bin-mode data.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85981/1/Fessler193.pd
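    As a rough sketch of the setting described above (the notation below is my own shorthand, not the paper's exact expressions): a TAC is modeled as a linear combination of temporal basis functions,

        \[
            x(t) \;=\; \sum_{k=1}^{K} c_k \, B_k(t),
        \]

    and the coefficients are estimated by penalized likelihood with a temporal roughness penalty. A first-order, sandwich-type covariance approximation in the spirit of the cited Fessler (1996) results would take the form

        \[
            \mathrm{Cov}(\hat{c}) \;\approx\;
            \bigl[\, B^{\top} F B + \beta R \,\bigr]^{-1}
            \, B^{\top} F B \,
            \bigl[\, B^{\top} F B + \beta R \,\bigr]^{-1},
        \]

    where B collects the basis functions evaluated on the temporal bins, F is the Fisher information of the (Poisson) data, R is the Hessian of the temporal penalty, and \beta its weight. The covariance of the kinetic parameters subsequently fitted to \hat{x} = B\hat{c} by ML then follows by first-order propagation through that fit, or is bounded from below via the Cramer-Rao bound.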

    Improving the Accuracy of CT-derived Attenuation Correction in Respiratory-Gated PET/CT Imaging

    The effect of respiratory motion on attenuation correction in Fludeoxyglucose (18F) positron emission tomography (FDG-PET) was investigated. Improvements to the accuracy of computed tomography (CT) derived attenuation correction were obtained through the alignment of the attenuation map to each emission image in a respiratory-gated PET scan. Attenuation misalignment leads to artefacts in the reconstructed PET image, and several methods were devised for evaluating the attenuation inaccuracies caused by this. These methods of evaluation were extended to finding the frame in the respiratory-gated PET which best matched the CT. This frame was then used as a reference frame in mono-modality compensation for misalignment. Attenuation correction was found to affect the quantification of tumour volumes; thus a regional analysis was used to evaluate the impact of mismatch and the benefits of compensating for misalignment. Deformable image registration was used to compensate for misalignment; however, there were inaccuracies caused by the poor signal-to-noise ratio (SNR) in PET images. Two models were therefore developed that were robust to a poor SNR, allowing deformation to be estimated from very noisy images. Firstly, a cross-population model was developed by statistically analysing the respiratory motion in ten 4DCT scans. Secondly, a 1D model of respiration was developed based on the physiological function of respiration. The 1D approach correctly modelled the expansion and contraction of the lungs and the differences in the compressibility of the lungs and surrounding tissues. Several additional models were considered but were ruled out based on their poor goodness of fit to 4DCT scans. Approaches to evaluating the developed models were also used to assist with optimising for the most accurate attenuation correction. It was found that multimodality registration of the CT image to the PET image was the most accurate approach to compensating for attenuation correction mismatch. Mono-modality image registration was found to be the least accurate approach; however, incorporating a motion model improved the accuracy of image registration. The significance of these findings is twofold: firstly, motion models are required to improve the accuracy of compensation for attenuation correction mismatch, and secondly, a validation method was found for comparing approaches to compensating for attenuation mismatch.
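    As an illustration of the reference-frame selection step mentioned above, the following Python sketch picks the respiratory gate whose PET frame best matches the CT-derived attenuation map. The similarity metric (normalized mutual information), the array shapes, and the function names are assumptions made for illustration, not the thesis' actual implementation.

        # Hypothetical sketch: choose the gated PET frame most similar to the CT mu-map
        # and use it as the reference frame for mono-modality motion compensation.
        import numpy as np

        def normalized_mutual_information(a, b, bins=32):
            """NMI = (H(a) + H(b)) / H(a, b), from a joint intensity histogram."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
            return (hx + hy) / hxy

        def best_matching_gate(gated_pet_frames, mu_map_ct):
            """Return the index of the gate most similar to the CT attenuation map."""
            scores = [normalized_mutual_information(frame, mu_map_ct)
                      for frame in gated_pet_frames]
            return int(np.argmax(scores)), scores

        # Toy usage: random volumes standing in for 8 gates and a CT-derived mu-map.
        rng = np.random.default_rng(1)
        gates = rng.random((8, 32, 32, 32))
        mu_map = rng.random((32, 32, 32))
        ref_idx, _ = best_matching_gate(gates, mu_map)
        print("reference gate:", ref_idx)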

    Dynamic PET image reconstruction utilizing intrinsic data-driven HYPR4D denoising kernel

    Purpose: Reconstructed PET images are typically noisy, especially in dynamic imaging, where the acquired data are divided into several short temporal frames. High noise in the reconstructed images translates to poor precision/reproducibility of image features. One important role of “denoising” is therefore to improve the precision of image features. However, typical denoising methods achieve noise reduction at the expense of accuracy. In this work, we present a novel four-dimensional (4D) denoised image reconstruction framework, which we validate using 4D simulations, experimental phantom data, and clinical patient data, to achieve 4D noise reduction while preserving spatiotemporal patterns and minimizing the error introduced by denoising. Methods: Our proposed 4D denoising operator/kernel is based on HighlY constrained backPRojection (HYPR), which is applied either after each update of the OSEM reconstruction of dynamic 4D PET data or within the recently proposed kernelized reconstruction framework inspired by kernel methods in machine learning. Our HYPR4D kernel makes use of the spatiotemporal high-frequency features extracted from a 4D composite, generated within the reconstruction, to preserve the spatiotemporal patterns and constrain the 4D noise increment of the image estimate. Results: Results from simulations, the experimental phantom, and patient data showed that the HYPR4D kernel with our proposed 4D composite outperformed other denoising methods (the standard OSEM with a spatial filter, OSEM with a 4D filter, and the HYPR kernel method with the conventional 3D composite in conjunction with the recently proposed High Temporal Resolution kernel, HYPRC3D-HTR) in terms of 4D noise reduction while preserving the spatiotemporal patterns, or 4D resolution, within the 4D image estimate. Consequently, the error in outcome measures obtained from the HYPR4D method was less dependent on the region size, contrast, and uniformity/functional patterns within the target structures than for the other methods. For outcome measures that depend on spatiotemporal tracer uptake patterns, such as the nondisplaceable binding potential (BPND), the root mean squared error in the regional mean of voxel BPND values was reduced from ~8% (OSEM with a spatial or 4D filter) to ~3% using HYPRC3D-HTR, and was further reduced to ~2% using our proposed HYPR4D method for relatively small target structures (~10 mm in diameter). At the voxel level, HYPR4D produced two to four times lower mean absolute error in BPND relative to HYPRC3D-HTR. Conclusion: Compared to conventional methods, our proposed HYPR4D method can produce more robust and accurate image features without requiring any prior information.
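    For intuition, a HYPR-style constrained-backprojection denoising step applied frame by frame to a dynamic series might look like the sketch below. The composite definition (a simple temporal mean), the Gaussian low-pass filter, the 3D-only form, and the toy data are my simplifying assumptions; the paper's HYPR4D kernel instead builds a 4D composite within the reconstruction and constrains the 4D noise increment at each update.

        # Minimal HYPR-style denoising sketch (assumptions noted in the text above).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hypr_denoise(frame, composite, sigma=2.0, eps=1e-6):
            """HYPR operator: composite modulated by a low-pass frame/composite ratio."""
            weight = gaussian_filter(frame, sigma) / (gaussian_filter(composite, sigma) + eps)
            return composite * weight

        # Toy dynamic series (frames, z, y, x); the composite is the temporal mean.
        rng = np.random.default_rng(2)
        truth = np.ones((6, 16, 16, 16)) * np.linspace(1.0, 2.0, 6)[:, None, None, None]
        noisy = rng.poisson(truth * 20.0) / 20.0
        composite = noisy.mean(axis=0)
        denoised = np.stack([hypr_denoise(f, composite) for f in noisy])
        print("RMSE before:", float(np.sqrt(((noisy - truth) ** 2).mean())),
              "after:", float(np.sqrt(((denoised - truth) ** 2).mean())))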

    Joint Image Reconstruction and Motion Estimation for Spatiotemporal Imaging

    We propose a variational model for joint image reconstruction and motion estimation applicable to spatiotemporal imaging. This model consists of two parts: one that conducts image reconstruction in a static setting, and another that estimates the motion by solving a sequence of coupled indirect image registration problems, each formulated within the large deformation diffeomorphic metric mapping (LDDMM) framework. The proposed model is compared against alternative approaches (an optical-flow-based model and diffeomorphic motion models). Next, we derive efficient algorithms for a time-discretized setting and show that the optimal solution of the time-discretized formulation is consistent with that of the time-continuous one. The complexity of the algorithm is characterized, and we conclude by giving some numerical examples in 2D space + time tomography with very sparse and/or highly noisy data.
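    Schematically, and with notation of my own choosing rather than the paper's exact functional, the joint model described above can be written as a minimization over the template image f and the time-dependent velocity field \nu that generates the diffeomorphic deformations \varphi_{\nu,t}:

        \[
            \min_{f,\;\nu}\;
            \sum_{t} \mathcal{D}\!\Bigl( \mathcal{T}\bigl( f \circ \varphi_{\nu,t}^{-1} \bigr),\, g_t \Bigr)
            \;+\; \mu\, \mathcal{S}(f)
            \;+\; \gamma \int_{0}^{1} \bigl\| \nu(\tau, \cdot) \bigr\|_{V}^{2} \, d\tau,
        \]

    where \mathcal{T} is the tomographic forward operator, g_t the data acquired at time t, \mathcal{D} a data-fidelity term, \mathcal{S} a spatial regularizer, and the last term the LDDMM energy of the velocity field in the admissible space V. Alternating minimization then yields the two coupled subproblems described in the abstract: a static reconstruction step in f and a sequence of indirect registration problems in \nu.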