Generative Modeling in Sinogram Domain for Sparse-view CT Reconstruction
The radiation dose in computed tomography (CT) examinations is harmful to patients but can be significantly reduced by simply decreasing the number of projection views. Reducing the number of projection views, however, usually leads to severe aliasing artifacts in the reconstructed images. Previous deep learning (DL) techniques for sparse-view data require sparse-view/full-view CT image pairs to train the network in a supervised manner, and when the number of projection views changes, the DL network must be retrained with updated sparse-view/full-view CT image pairs. To relieve this limitation, we present a fully unsupervised score-based generative model in the sinogram domain for sparse-view CT reconstruction. Specifically, we first train a score-based generative model on full-view sinogram data, using a multi-channel strategy to form a high-dimensional tensor as the network input and capture the prior distribution of the sinograms. Then, at the inference stage, a stochastic differential equation (SDE) solver and a data-consistency step are performed iteratively to recover the full-view sinogram. The filtered back-projection (FBP) algorithm is then used to obtain the final image reconstruction. Qualitative and quantitative studies were conducted to evaluate the presented method on several CT datasets. Experimental results demonstrated that our method achieves comparable or better performance than its supervised learning counterparts.
Comment: 11 pages, 12 figures
Dynamic positron emission tomography image restoration via a kinetics-induced bilateral filter.
Dynamic positron emission tomography (PET) imaging is a powerful tool that provides useful quantitative information on physiological and biochemical processes. However, the low signal-to-noise ratio in short dynamic frames makes accurate kinetic parameter estimation from noisy voxel-wise time activity curves (TACs) a challenging task. To address this problem, several spatial filters have been investigated to reduce the noise of each frame, with noticeable gains; these include the Gaussian filter, the bilateral filter, and wavelet-based filters. Such filters usually consider only the local properties of each frame without exploiting the kinetic information available across the entire set of frames. Thus, in this work, to improve PET parametric imaging accuracy, we present a kinetics-induced bilateral filter (KIBF) that reduces the noise of dynamic image frames by incorporating the similarity between voxel-wise TACs within the bilateral filter framework. The aim of the proposed KIBF algorithm is to reduce noise in homogeneous areas while preserving the distinct kinetics of regions of interest. Experimental results on a digital brain phantom and an in vivo rat study with typical (18)F-FDG kinetics have shown that the KIBF algorithm achieves notable gains over other existing algorithms in terms of quantitative accuracy measures and visual inspection.
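The core idea, replacing the single-frame range weight of a standard bilateral filter with a weight computed from the distance between whole voxel-wise TACs, can be sketched as follows. This is an illustrative reimplementation under assumed parameter names (half_window, sigma_s, sigma_k), not the published code, and it omits practical details such as TAC normalization.

    import numpy as np

    def kibf_frame(frames, frame_idx, half_window=2, sigma_s=1.5, sigma_k=0.1):
        # frames: (T, H, W) dynamic image series; returns filtered frame `frame_idx`
        T, H, W = frames.shape
        out = np.zeros((H, W))
        for i in range(H):
            for j in range(W):
                tac_c = frames[:, i, j]              # center voxel TAC
                num, den = 0.0, 0.0
                for di in range(-half_window, half_window + 1):
                    for dj in range(-half_window, half_window + 1):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            tac_n = frames[:, ni, nj]
                            # spatial closeness weight
                            w_s = np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))
                            # kinetic similarity weight over the whole TAC
                            w_k = np.exp(-np.sum((tac_c - tac_n)**2) / (2 * sigma_k**2))
                            w = w_s * w_k
                            num += w * frames[frame_idx, ni, nj]
                            den += w
                out[i, j] = num / den
        return out

Because the kinetic weight is driven by the full TAC rather than a single frame's intensity, voxels with similar kinetics are averaged together while kinetically distinct regions (e.g., a small tumor) retain their edges.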
MSE plots for parameter selections of the neighbor window (A) and the controlling parameters (B) of the KIBF algorithm.
MSE plots for parameter selections of the standard deviation of the GF algorithm (A) and the controlling parameters of the BF algorithm (B).
The parametric images estimated by different algorithms.
(A) is the result from the direct OSEM reconstruction; (B) is the result from the OSEM image filtered by the GF algorithm ( voxel); (C) is the result from the OSEM image filtered by the BF algorithm ( voxel, ); and (D) is the result from the OSEM image filtered by the KIBF algorithm ( voxel, ). All images use the same display window.
The ground truth and the activity images reconstructed by different algorithms at frames #6, #16, and #26.
(A) shows the ground truth; (B) shows the results from the direct FBP reconstruction; (C) shows the results from the FBP images filtered by the GF algorithm ( voxel); (D) shows the results from the FBP images filtered by the BF algorithm ( voxel, ); and (E) shows the results from the FBP images filtered by the present KIBF algorithm ( voxel, ). All images use the same display window.
Box plots of the mean value of with standard deviations in the gray matter, white matter and small tumor regions from different algorithms.