
    Tomographic Image Reconstruction Based on Minimization of Symmetrized Kullback-Leibler Divergence

    Iterative reconstruction (IR) algorithms based on the principle of optimization are known for producing better reconstructed images in computed tomography. In this paper, we present an IR algorithm based on minimizing a symmetrized Kullback-Leibler divergence (SKLD), also called Jeffreys’ J-divergence. For consistent inverse problems, the SKLD is guaranteed to decrease monotonically over the iterative steps by means of a continuous dynamical method. Specifically, we construct an autonomous differential equation for which the proposed iterative formula gives a first-order numerical discretization, and we demonstrate the stability of a desired solution using Lyapunov’s theorem. We describe a hybrid Euler method that combines additive and multiplicative calculus to construct an effective and robust discretization, thereby enabling us to obtain an approximate solution to the differential equation. We performed experiments and found that the IR algorithm derived from the hybrid discretization achieved high performance.
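    A minimal sketch of the kind of multiplicative (geometric) update that a discretized continuous-time reconstruction flow produces. The generic MLEM-style ratio update below is an illustrative stand-in, not the paper's exact SKLD formula, and all names and the toy system are hypothetical.

```python
# Sketch only: a multiplicative update of the MLEM/geometric-Euler type, the kind
# of iteration a hybrid additive/multiplicative discretization of a continuous
# dynamical system yields. Not the paper's SKLD-specific formula.
import numpy as np

def mlem_like_update(x, A, p, eps=1e-12):
    """One multiplicative step: x <- x * A^T(p / Ax) / A^T 1 (elementwise)."""
    forward = A @ x                       # forward projection of current image
    ratio = p / np.maximum(forward, eps)  # measured / estimated projections
    backproj = A.T @ ratio                # backproject the ratio
    sens = A.T @ np.ones_like(p)          # sensitivity image (column sums of A)
    return x * backproj / np.maximum(sens, eps)

# Toy usage: 4-pixel image, 6 random projection rays (hypothetical data).
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([1.0, 2.0, 0.5, 1.5])
p = A @ x_true                            # consistent, noiseless projections
x = np.ones(4)
for _ in range(200):
    x = mlem_like_update(x, A, p)
print(np.round(x, 3))                     # should approach x_true for consistent data
```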

    Block-Iterative Reconstruction from Dynamically Selected Sparse Projection Views Using Extended Power-Divergence Measure

    Iterative reconstruction of density pixel images from measured projections in computed tomography has attracted considerable attention. The ordered-subsets algorithm is an acceleration scheme that uses subsets of projections in a previously decided order. Several methods have been proposed to improve the convergence rate by permuting the order of the projections. However, they do not incorporate object information, such as shape, into the selection process. We propose a block-iterative reconstruction from sparse projection views with dynamic selection of subsets, based on an estimating function constructed from an extended power-divergence measure, so as to decrease the objective function as much as possible. We give a unified proposition for the inequality related to the difference between objective functions caused by one iteration as the theoretical basis of the proposed optimization strategy. Through theory and numerical experiments, we show that nonuniform and sparse use of projection views leads to reconstruction of higher-quality images and that an ordered subset is not the most effective choice for block-iterative reconstruction. The two-parameter class of extended power-divergence measures is the key to estimating an effective decrease in the objective function and plays a significant role in constructing an algorithm that is robust against noise.
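    A rough illustration of dynamic subset selection, under assumptions: the score below is the ordinary one-parameter Cressie-Read power divergence per view rather than the paper's two-parameter extended measure, and the update on the chosen block is a plain OS-EM step.

```python
# Sketch only: score every projection view by a power divergence between measured
# and forward projections, pick the worst-matched view, and update on that block.
import numpy as np

def power_divergence(p, q, lam=1.0, eps=1e-12):
    """Cressie-Read power divergence (extended to unnormalized data); lam not in {0, -1}."""
    p = np.maximum(p, eps); q = np.maximum(q, eps)
    if abs(lam) < 1e-8:                   # lam -> 0 limit gives the extended KL divergence
        return 2.0 * np.sum(p * np.log(p / q) - p + q)
    return 2.0 * np.sum(p * ((p / q) ** lam - 1.0) - lam * (p - q)) / (lam * (lam + 1.0))

def dynamic_block_update(x, A_views, p_views, lam=1.0, eps=1e-12):
    """Pick the view with the largest divergence, then apply one OS-EM step on it."""
    scores = [power_divergence(p, A @ x, lam) for A, p in zip(A_views, p_views)]
    k = int(np.argmax(scores))            # dynamically selected view/block
    A, p = A_views[k], p_views[k]
    ratio = p / np.maximum(A @ x, eps)
    sens = np.maximum(A.T @ np.ones_like(p), eps)
    return x * (A.T @ ratio) / sens, k
```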

    Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules

    Dentin is a hierarchically structured biomineralized composite material, and dentin’s tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections, typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone from a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV-minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to filtered backprojection (FBP) reconstruction, our method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.
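    A compact sketch of the general TV-minimization structure, alternating a data-consistency step with total-variation descent in the spirit of POCS/ASD-POCS-style schemes. This is not the authors' exact algorithm; the SART-style update, step sizes, and smoothing constant are illustrative assumptions.

```python
# Sketch only: one cycle of a data-consistency (SART-like) step followed by a few
# gradient-descent steps on a smoothed isotropic total variation.
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the smoothed isotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # negative divergence of the normalized gradient field
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def pocs_tv_cycle(x, A, p, shape, sart_relax=0.5, tv_step=0.02, tv_iters=10, eps=1e-12):
    """One SART-like data-consistency step, then tv_iters TV descent steps."""
    row_sums = np.maximum(A @ np.ones(x.size), eps)    # per-ray normalization
    col_sums = np.maximum(A.T @ np.ones(p.size), eps)  # per-pixel normalization
    x = x + sart_relax * (A.T @ ((p - A @ x) / row_sums)) / col_sums
    x = np.maximum(x, 0.0)                             # nonnegativity constraint
    img = x.reshape(shape)
    for _ in range(tv_iters):
        img = img - tv_step * tv_gradient(img)
    return img.ravel()
```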

    Noise-Robust Image Reconstruction Based on Minimizing Extended Class of Power-Divergence Measures

    The problem of tomographic image reconstruction can be reduced to an optimization problem of finding unknown pixel values that minimize the difference between the measured and forward projections. Iterative image reconstruction algorithms provide significant improvements over transform methods in computed tomography. In this paper, we present an extended class of power-divergence measures (PDMs), which includes a large set of distance and relative entropy measures, and propose an iterative reconstruction algorithm based on the extended PDM (EPDM) as an objective function for the optimization strategy. For this purpose, we introduce a system of nonlinear differential equations whose Lyapunov function is equivalent to the EPDM. Then, we derive an iterative formula by multiplicative discretization of the continuous-time system. Since the parameterized EPDM family includes the Kullback–Leibler divergence, the resulting iterative algorithm is a natural extension of the maximum-likelihood expectation-maximization (MLEM) method. We conducted image reconstruction experiments using noisy projection data and found that, with properly selected parameters, the proposed algorithm outperformed MLEM and could reconstruct high-quality images that are robust to measurement noise.
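    One way to make the continuous-time picture concrete (a schematic form, not necessarily the paper's exact system): take the gradient-like flow below, for which the divergence D itself serves as a Lyapunov function, and recover the MLEM update by a first-order discretization when D is the Kullback-Leibler divergence.

```latex
% Schematic continuous-time system; D(x) is a divergence between measured
% projections p and the forward projection Ax, e.g. D(x) = KL(p \| Ax).
\frac{dx_j}{dt} = -\, x_j \, \frac{\partial D(x)}{\partial x_j}, \qquad x_j(0) > 0 .

% D decreases along trajectories, so it acts as a Lyapunov function:
\frac{dD}{dt} = \sum_j \frac{\partial D}{\partial x_j}\,\frac{dx_j}{dt}
             = -\sum_j x_j \left(\frac{\partial D}{\partial x_j}\right)^{\!2} \le 0 .

% For D = KL(p \| Ax) one has \partial D / \partial x_j = [A^T 1]_j - [A^T (p/Ax)]_j,
% and a first-order step with step size 1/[A^T 1]_j recovers the MLEM update:
x_j^{(k+1)} = x_j^{(k)} \, \frac{\bigl[A^T \bigl(p / A x^{(k)}\bigr)\bigr]_j}{[A^T 1]_j}.
```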

    Study of a convergent subsetized list-mode EM reconstruction algorithm

    We have implemented a convergent subsetized (CS) list-mode reconstruction algorithm, based on previous work [1]-[3] on complete-data OS-EM reconstruction. The first step of the convergent algorithm is exactly equivalent (unlike the histogram-mode case) to the regular subsetized list-mode EM algorithm, while the second and final step takes the form of additive updates in image space. A hybrid algorithm based on the ordinary and the convergent algorithms is also proposed and is shown to combine the advantages of the two: it reaches higher image quality in fewer iterations while maintaining convergent behavior, making the hybrid approach a good alternative to the ordinary subsetized list-mode EM algorithm. Reconstructions using various LOR-driven projection techniques (the Siddon method, trilinear interpolation, and bilinear interpolation) were considered, and it was demonstrated that, in terms of FWHM, the Siddon technique is inferior to the other two, with bilinear interpolation performing nearly as well as trilinear interpolation while being considerably faster.
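    A minimal sketch of the ordinary subsetized list-mode EM update, which the abstract identifies as the first step of the convergent algorithm. The additive image-space correction step and the LOR-driven projectors are not reproduced here; the interleaved subset choice and dense event rows are simplifying assumptions.

```python
# Sketch only: subsetized list-mode EM. Each detected event contributes its
# system-matrix row; the sensitivity image is computed over all possible LORs.
import numpy as np

def listmode_em_subset_pass(x, event_rows, sens_image, n_subsets, eps=1e-12):
    """One pass over event subsets; event_rows[e] is the system-matrix row of event e."""
    for s in range(n_subsets):
        subset = event_rows[s::n_subsets]        # simple interleaved event subsets
        backproj = np.zeros_like(x)
        for a_e in subset:                       # each detected event/coincidence
            backproj += a_e / max(float(a_e @ x), eps)
        # scale by n_subsets so one subset approximates the full event sum
        x = x * (n_subsets * backproj) / np.maximum(sens_image, eps)
    return x
```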

    Theoretical and numerical study of MLEM and OSEM reconstruction algorithms for motion correction in emission tomography

    Patient body motion and respiratory motion impact the image quality of cardiac SPECT and PET perfusion images. Several algorithms exist in the literature to correct for motion within the iterative maximum-likelihood reconstruction framework. In this work, three algorithms are derived starting from Poisson statistics to correct for patient motion. The first is a motion-compensated MLEM algorithm (MC-MLEM). The next two algorithms, called MGEM-1 and MGEM-2 (short for Motion Gated OSEM 1 and 2), use the motion states as subsets in two different ways. Experiments were performed with NCAT phantoms (with exactly known motion) as the source and attenuation distributions. Experiments were also performed on an anthropomorphic phantom and a patient study. The SIMIND Monte Carlo simulation software was used to create SPECT projection images of the NCAT phantoms. The projection images were then modified to have Poisson noise levels equivalent to those of clinical acquisitions. We investigated application of these algorithms to correction of (1) a large body motion of 2 cm in the superior-inferior (SI) and anterior-posterior (AP) directions and (2) respiratory motion of 2 cm in SI and 0.6 cm in AP. We determined the bias with respect to the NCAT phantom activity for noiseless reconstructions as well as the bias-variance for noisy reconstructions. The MGEM-1 advanced along the bias-variance curve faster than the MC-MLEM with iterations. The MGEM-1 also lowered the noiseless bias (with respect to the NCAT truth) faster with iterations than the MC-MLEM algorithm, as expected for subset algorithms. For the body-motion correction with two motion states, the bias after the 9th iteration was close to that of MC-MLEM at iteration 17, reducing the number of iterations by a factor of 1.89. For the respiratory-motion correction with 9 motion states, based on the noiseless bias, the iteration reduction factor was approximately 7. For the MGEM-2, however, the bias and bias-variance plots saturated with iterations because of accumulated interpolation error. SPECT data simulating respiratory motion of 2 cm amplitude were acquired with an anthropomorphic phantom, and a patient study in which body motion occurred during a second rest acquisition was also acquired. The motion correction was applied to the anthropomorphic-phantom and patient acquisitions, showing marked improvement in image quality with the estimated motion correction. © 2009 IEEE
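    A minimal sketch of a motion-compensated MLEM update of the MC-MLEM type, under the assumption that each motion state has gated projection data and a known warp operator, represented here simply as a dense matrix. This mirrors the general MC-MLEM structure rather than the paper's exact derivation, and the gated MGEM-1/MGEM-2 subset variants are not shown.

```python
# Sketch only: motion-compensated MLEM. Each motion state m has gated data p[m]
# and a warp operator W[m] mapping the reference image into that state.
import numpy as np

def mc_mlem_update(x, A, warps, gated_proj, eps=1e-12):
    """x: reference-state image; warps: list of warp matrices W_m; gated_proj: list p_m."""
    numer = np.zeros_like(x)
    denom = np.zeros_like(x)
    for W, p in zip(warps, gated_proj):
        forward = A @ (W @ x)                   # project the warped image
        ratio = p / np.maximum(forward, eps)
        numer += W.T @ (A.T @ ratio)            # warp-adjoint backprojection of the ratio
        denom += W.T @ (A.T @ np.ones_like(p))  # motion-summed sensitivity
    return x * numer / np.maximum(denom, eps)
```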

    LROC Investigation of Three Strategies for Reducing the Impact of Respiratory Motion on the Detection of Solitary Pulmonary Nodules in SPECT

    The objective of this investigation was to determine the effectiveness of three motion-reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging, in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of the 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these phantoms. Normal and single-lesion-containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each of the sets of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 lesions, the reconstructed volumes of the respiratory bins were shifted so as to superimpose the location of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection. The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of no respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below (P < 0.05) that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in a detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions that use fewer than all of the potentially available counts may be poorer despite the reduced lesion motion. The Reconstruct and Shift method results in tumor detection that is equivalent to ideal motion correction.
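    A small sketch of the Reconstruct and Shift idea under simplifying assumptions: each respiratory bin is reconstructed separately, rigidly shifted by the known lesion displacement so the lesion aligns with its location in the first bin, and the aligned bins are combined. Integer-voxel shifts via np.roll and the sign convention are assumptions for illustration; the study's known displacements need not be integer-valued.

```python
# Sketch only: align separately reconstructed respiratory-bin volumes using known
# per-bin lesion displacements, then combine them.
import numpy as np

def reconstruct_and_shift(bin_volumes, displacements_vox):
    """bin_volumes: list of 3-D arrays; displacements_vox: per-bin (dz, dy, dx) integers
    giving each bin's lesion displacement relative to the first bin."""
    aligned = []
    for vol, d in zip(bin_volumes, displacements_vox):
        # shift by the negative displacement to move the lesion back to bin-1 position
        aligned.append(np.roll(vol, shift=tuple(-np.asarray(d, dtype=int)), axis=(0, 1, 2)))
    return np.sum(aligned, axis=0)      # combine the motion-aligned bins
```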

    Convergent Incremental Optimization Transfer Algorithms: Application to Tomography

    No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters, and methods based on the incremental expectation-maximization (EM) approach. This paper generalizes the incremental EM approach by introducing a general framework, "incremental optimization transfer". The proposed algorithms accelerate convergence and ensure global convergence without requiring relaxation parameters. The general optimization transfer framework allows the use of a very broad family of surrogate functions, enabling the development of new algorithms. This paper provides the first convergent OS-type algorithm for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS), which yield closed-form maximization steps. We found that fast convergence rates can be achieved effectively by starting with an OS algorithm using a large number of subsets and then switching to the new "transmission incremental optimization transfer" (TRIOT) algorithm. Results show that TRIOT increases the PL objective faster than nonincremental ordinary SPS and even OS-SPS, yet is convergent.
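    A schematic sketch of the incremental optimization transfer loop with separable quadratic (paraboloidal) surrogates: each subset stores the gradient and curvature of its surrogate, only one subset's surrogate is refreshed per sub-iteration, and the sum of all stored surrogates is maximized in closed form. The surrogate construction is left abstract and this is not the exact SPS/TRIOT algorithm of the paper.

```python
# Sketch only: incremental optimization transfer with separable quadratic surrogates.
# subset_grad_curv[m](x) is assumed to return (g_m, c_m), the gradient and positive
# curvature of subset m's separable surrogate expanded at x.
import numpy as np

def incremental_quadratic_transfer(x0, subset_grad_curv, n_iters):
    M = len(subset_grad_curv)
    x = x0.copy()
    anchors = [x0.copy() for _ in range(M)]      # expansion points of stored surrogates
    grads, curvs = [None] * M, [None] * M
    for m in range(M):                           # initialize every subset's surrogate at x0
        grads[m], curvs[m] = subset_grad_curv[m](x0)
    for _ in range(n_iters):
        for m in range(M):
            grads[m], curvs[m] = subset_grad_curv[m](x)   # refresh one subset only
            anchors[m] = x.copy()
            # maximize the sum of quadratic surrogates: closed-form separable step
            num = sum(g + c * (a - x) for g, c, a in zip(grads, curvs, anchors))
            den = np.maximum(sum(curvs), 1e-12)
            x = np.maximum(x + num / den, 0.0)            # nonnegativity constraint
    return x
```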