196 research outputs found

    Penalized Weighted Least-Squares Image Reconstruction for Positron Emission Tomography

    Full text link
    Presents an image reconstruction method for positron-emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, the author argues statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. The author proposes a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated by the PWLS+SOR method, and indicate that the proposed method for weighting the measurements is a significant factor in the improvement over FBP.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85851/1/Fessler105.pd
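
    The PWLS objective is a quadratic form in the image, so coordinate-wise updates with a nonnegativity clip are easy to sketch. Below is a minimal illustrative Python sketch of a nonnegativity-constrained SOR sweep, assuming simple data-based weights w_i = 1/max(y_i, 1) and an identity penalty matrix; the paper's weights additionally account for attenuation and detector efficiency, and its penalty is a roughness penalty over neighboring pixels.

```python
import numpy as np

def pwls_plus_sor(A, y, beta, n_iter=50, omega=1.0):
    """Minimal sketch of minimizing a PWLS objective by nonnegativity-
    constrained successive over-relaxation (coordinate-wise updates).
    Assumptions (not from the paper): weights w_i = 1/max(y_i, 1) and an
    identity penalty matrix R = I."""
    m, n = A.shape
    w = 1.0 / np.maximum(y, 1.0)                     # data-based weights ~ 1/variance
    H = A.T @ (w[:, None] * A) + beta * np.eye(n)    # Hessian of the quadratic objective
    b = A.T @ (w * y)                                # linear term
    x = np.zeros(n)
    for _ in range(n_iter):
        for j in range(n):                           # one SOR sweep over pixels
            r_j = b[j] - H[j] @ x + H[j, j] * x[j]   # residual excluding pixel j
            x_new = r_j / H[j, j]                    # exact coordinate minimizer
            x[j] = max(0.0, (1 - omega) * x[j] + omega * x_new)  # relax and clip at 0
    return x
```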

    First order algorithms in variational image processing

    Get PDF
    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is a (often linear) forward operator modeling the dependence of data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques uses nonsmooth and convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
    Comment: 60 pages, 33 figures
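
    As an illustration of the splitting idea, the sketch below applies a basic proximal-gradient (forward-backward) iteration to the canonical instance $\mathcal{D}(Ku) = \frac{1}{2}\|Ku - f\|^2$ with $\mathcal{R}(u) = \|u\|_1$; this is an assumed toy setup, not one of the specific methods compared in the paper.

```python
import numpy as np

def ista(K, f, alpha, n_iter=200):
    """Minimal sketch of forward-backward splitting for the generic problem
    D(Ku) + alpha*R(u) -> min_u, here with quadratic fidelity and an l1
    regularizer (soft thresholding as its prox). K is a plain matrix for
    simplicity; in imaging it would be a blur or projection operator."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the smooth gradient
    tau = 1.0 / L                          # step size
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)           # gradient of the data fidelity
        v = u - tau * grad                 # forward (gradient) step
        u = np.sign(v) * np.maximum(np.abs(v) - tau * alpha, 0.0)  # backward (prox) step
    return u
```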

    Development and implementation of efficient noise suppression methods for emission computed tomography

    Get PDF
    In PET and SPECT imaging, iterative reconstruction is now widely used due to its capability of incorporating into the reconstruction process a physics model and Bayesian statistics involved in photon detection. Iterative reconstruction methods rely on regularization terms to suppress image noise and render radiotracer distribution with good image quality. The choice of regularization method substantially affects the appearances of reconstructed images, and is thus a critical aspect of the reconstruction process. Major contributions of this work include implementation and evaluation of various new regularization methods. Previously, our group developed a preconditioned alternating projection algorithm (PAPA) to optimize the emission computed tomography (ECT) objective function with the non-differentiable total variation (TV) regularizer. The algorithm was modified to optimize the proposed reconstruction objective functions. First, two novel TV-based regularizers—high-order total variation (HOTV) and infimal convolution total variation (ICTV)—were proposed as alternative choices to the customary TV regularizer in SPECT reconstruction, to reduce “staircase” artifacts produced by TV. We have evaluated both proposed reconstruction methods (HOTV-PAPA and ICTV-PAPA), and compared them with the TV regularized reconstruction (TV-PAPA) and the clinical standard, Gaussian post-filtered, expectation-maximization reconstruction method (GPF-EM) using both Monte Carlo-simulated data and anonymized clinical data. Model-observer studies using Monte Carlo-simulated data indicate that ICTV-PAPA is able to reconstruct images with similar or better lesion detectability, compared with clinical standard GPF-EM methods, but at lower detected count levels. This implies that switching from GPF-EM to ICTV-PAPA can reduce patient dose while maintaining image quality for diagnostic use. Second, the ℓ1 norm of discrete cosine transform (DCT)-induced framelet regularization was studied. We decomposed the image into high and low spatial-frequency components, and then preferentially penalized the high spatial-frequency components. The DCT-induced framelet transform of the natural radiotracer distribution image is sparse. By using this property, we were able to effectively suppress image noise without overly compromising spatial resolution or image contrast. Finally, the fractional norm of the first-order spatial gradient was introduced as a regularizer. We implemented ℓ2/3 and ℓ1/2 norms to suppress image spatial variability. Due to the strong penalty of small differences between neighboring pixels, fractional-norm regularizers suffer from similar cartoon-like artifacts as with the TV regularizer. However, when penalty weights are properly selected, fractional-norm regularizers outperform TV in terms of noise suppression and contrast recovery.
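
    For illustration, the sketch below evaluates (but does not optimize) three of the penalty families discussed above on a 2-D image: plain anisotropic TV, a fractional-norm penalty on first-order differences (e.g. p = 1/2 or 2/3), and a simple second-difference term standing in for a high-order TV penalty. The HOTV and ICTV functionals used in the thesis are more elaborate, and the PAPA optimizer itself is not reproduced here.

```python
import numpy as np

def gradient_penalties(img, p=0.5, eps=1e-8):
    """Illustrative penalty values on a 2-D image (assumed simplifications,
    not the thesis regularizers): anisotropic TV, a fractional p-norm of the
    first differences, and a second-difference ("high-order") term.
    eps avoids 0**p issues at exactly flat pixels."""
    dx = np.diff(img, axis=1)              # first-order horizontal differences
    dy = np.diff(img, axis=0)              # first-order vertical differences
    tv = np.abs(dx).sum() + np.abs(dy).sum()
    frac = ((np.abs(dx) + eps) ** p).sum() + ((np.abs(dy) + eps) ** p).sum()
    dxx = np.diff(img, n=2, axis=1)        # second-order differences
    dyy = np.diff(img, n=2, axis=0)
    hotv = np.abs(dxx).sum() + np.abs(dyy).sum()
    return tv, frac, hotv
```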

    An invitation to quantum tomography (II)

    Get PDF
    The quantum state of a light beam can be represented as an infinite dimensional density matrix or equivalently as a density on the plane called the Wigner function. We describe quantum tomography as an inverse statistical problem in which the state is the unknown parameter and the data is given by results of measurements performed on identical quantum systems. We present consistency results for Pattern Function Projection Estimators as well as for Sieve Maximum Likelihood Estimators for both the density matrix of the quantum state and its Wigner function. Finally we illustrate via simulated data the performance of the estimators. An EM algorithm is proposed for practical implementation. There remain many open problems, e.g. rates of convergence, adaptation, studying other estimators, etc., and a main purpose of the paper is to bring these to the attention of the statistical community.
    Comment: An earlier version of this paper with more mathematical background but less applied statistical content can be found on arXiv as quant-ph/0303020. An electronic version of the paper with high resolution figures (postscript instead of bitmaps) is available from the authors. v2: added cross-validation results, references.
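
    The iterative maximum-likelihood idea can be illustrated on a finite-dimensional toy system with the classic R·ρ·R fixed-point update; this is an assumed discrete analogue for illustration only, not the continuous homodyne setting (pattern functions, Wigner function) treated in the paper.

```python
import numpy as np

def ml_state_estimate(povm, freqs, dim, n_iter=500):
    """Minimal sketch of iterative maximum-likelihood state estimation
    (the standard R*rho*R iteration) for a finite-dimensional toy system.
    povm: list of positive operators E_i summing to the identity;
    freqs: observed relative frequencies f_i of the outcomes."""
    rho = np.eye(dim, dtype=complex) / dim            # start from the maximally mixed state
    for _ in range(n_iter):
        probs = np.array([np.real(np.trace(E @ rho)) for E in povm])
        R = sum(f / max(p, 1e-12) * E for f, p, E in zip(freqs, probs, povm))
        rho = R @ rho @ R                             # R * rho * R update
        rho = (rho + rho.conj().T) / 2                # keep Hermitian against round-off
        rho /= np.real(np.trace(rho))                 # renormalize to unit trace
    return rho
```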

    PET Reconstruction With an Anatomical MRI Prior Using Parallel Level Sets.

    Get PDF
    The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) offers unique possibilities. In this paper we aim to exploit the high spatial resolution of MRI to enhance the reconstruction of simultaneously acquired PET data. We propose a new prior to incorporate structural side information into a maximum a posteriori reconstruction. The new prior combines the strengths of previously proposed priors for the same problem: it is very efficient in guiding the reconstruction at edges available from the side information and it reduces locally to edge-preserving total variation in the degenerate case when no structural information is available. In addition, this prior is segmentation-free, convex and no a priori assumptions are made on the correlation of edge directions of the PET and MRI images. We present results for a simulated brain phantom and for real data acquired by the Siemens Biograph mMR for a hardware phantom and a clinical scan. The results from simulations show that the new prior has a better trade-off between enhancing common anatomical boundaries and preserving unique features than several other priors. Moreover, it has a better mean absolute bias-to-mean standard deviation trade-off and yields reconstructions with superior relative ℓ2-error and structural similarity index. These findings are underpinned by the real data results from a hardware phantom and a clinical patient confirming that the new prior is capable of promoting well-defined anatomical boundaries.
    This research was funded by the EPSRC (EP/K005278/1) and EP/H046410/1 and supported by the National Institute for Health Research University College London Hospitals Biomedical Research Centre. M.J.E. was supported by an IMPACT studentship funded jointly by Siemens and the UCL Faculty of Engineering Sciences. K.T. and D.A. are partially supported by the EPSRC grant EP/M022587/1. This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/TMI.2016.254960
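
    A minimal sketch of a structure-guided prior in this spirit is a directional-TV value that discounts PET gradients aligned with MRI edges and falls back to plain TV where the MRI gradient vanishes; this assumed form is related to, but not identical with, the parallel level sets prior proposed in the paper.

```python
import numpy as np

def directional_tv(u, v, eta=1e-2, eps=1e-12):
    """Illustrative structure-guided prior (an assumption, not the paper's
    exact functional). u: PET image, v: registered MRI image. PET gradient
    components parallel to MRI edges are removed before taking the TV-like
    magnitude; where grad(v) ~ 0 the value reduces to ordinary TV."""
    ux, uy = np.gradient(u)
    vx, vy = np.gradient(v)
    vnorm = np.sqrt(vx**2 + vy**2 + eta**2)    # smoothed MRI gradient magnitude
    xix, xiy = vx / vnorm, vy / vnorm          # MRI edge direction field, |xi| < 1
    dot = ux * xix + uy * xiy                  # component of grad(u) along MRI edges
    val = np.sqrt(np.maximum(ux**2 + uy**2 - dot**2, 0.0) + eps)
    return val.sum()
```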

    Space-Alternating Generalized Expectation-Maximization Algorithm

    Full text link
    The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. The paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. The authors prove that the sequence of estimates monotonically increases the penalized-likelihood objective, derive asymptotic convergence rates, and provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, the SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85886/1/Fessler103.pd
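
    The flavor of sequential per-component updates can be illustrated on the superimposed-signals application: the toy sketch below cyclically re-estimates one complex sinusoid at a time from the residual left by the others. This is an assumed simplification for illustration, not the paper's SAGE derivation with its hidden-data spaces and convergence analysis.

```python
import numpy as np

def sequential_sinusoid_fit(y, n_comp, n_sweeps=5, grid=4096):
    """Toy sketch of sequential per-component updates for superimposed
    complex sinusoids in Gaussian noise. Each pass re-estimates one
    component's frequency and amplitude from the residual of the others."""
    n = len(y)
    t = np.arange(n)
    omegas = np.zeros(n_comp)
    amps = np.zeros(n_comp, dtype=complex)
    w_grid = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    E = np.exp(-1j * np.outer(w_grid, t))           # candidate sinusoids on a grid
    for _ in range(n_sweeps):
        for k in range(n_comp):
            others = sum(amps[j] * np.exp(1j * omegas[j] * t)
                         for j in range(n_comp) if j != k)
            r = y - others                           # residual seen by component k
            scores = np.abs(E @ r)                   # periodogram of the residual
            omegas[k] = w_grid[np.argmax(scores)]    # pick the dominant frequency
            amps[k] = np.mean(r * np.exp(-1j * omegas[k] * t))  # LS amplitude
    return omegas, amps
```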

    Reduction of Limited Angle Artifacts in Medical Tomography via Image Reconstruction

    Get PDF
    Artifacts are unwanted effects in tomographic images that do not reflect the nature of the object. Their widespread occurrence makes their reduction, and if possible removal, an important subject in the development of tomographic image reconstruction algorithms. Limited angle artifacts are caused by limited angular measurements, which constrain the available tomographic information. This thesis focuses on reducing these artifacts via image reconstruction in two cases of incomplete measurements: (1) the gaps left after the removal of high-density objects such as dental fillings, screws and implants in computed tomography (CT), and (2) partial ring scanner configurations in positron emission tomography (PET). In order to include knowledge about the measurement and noise, prior terms were used within the reconstruction methods. Careful consideration was given to the trade-off between image blurring and noise reduction upon reconstruction of low-dose measurements. Development of reconstruction methods is an incremental process starting with testing on simple phantoms and moving towards more clinically relevant ones by modeling the respective physical processes involved. In this work, phantoms were constructed to ensure that the proposed reconstruction methods addressed the limited angle problem. The reconstructed images were assessed qualitatively and quantitatively in terms of noise reduction, edge sharpness and contrast recovery. Maximum a posteriori (MAP) estimation with the median root prior (MRP) was selected for the reconstruction of limited angle measurements. MAP with MRP successfully reduced the artifacts caused by limited angle data in various datasets, tested with the reconstruction of both list-mode and projection data. In all cases, its performance was found to be superior to that of other reconstruction methods, such as MAP with a total-variation (TV) prior, maximum-likelihood expectation maximization (MLEM) and filtered backprojection (FBP). MAP with MRP was also more robust with respect to parameter selection than MAP with the TV prior. This thesis demonstrates the wide-range applicability of MAP with MRP in medical tomography, especially in low-dose imaging. Furthermore, we emphasize the importance of developing and testing reconstruction methods with application-specific phantoms, together with the properties and limitations of the measurements in mind.
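
    A common way to realize MAP with a median root prior is a one-step-late, EM-style multiplicative update in which the prior contribution is the relative deviation of each pixel from its local median; the sketch below assumes that formulation for illustration, while the thesis may use a different optimization scheme.

```python
import numpy as np
from scipy.ndimage import median_filter

def mlem_mrp(A, y, shape, beta=0.3, n_iter=50):
    """Illustrative one-step-late MAP-EM update with a median root prior.
    A: system matrix mapping a flattened image to projections,
    y: measured counts, shape: image shape (assumed example inputs)."""
    n_pix = A.shape[1]
    lam = np.ones(n_pix)                               # initial uniform image
    sens = A.sum(axis=0)                               # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ lam
        ratio = y / np.maximum(proj, 1e-12)            # measured / estimated projections
        back = A.T @ ratio                             # backprojected ratio
        med = median_filter(lam.reshape(shape), size=3).ravel()   # local median image
        prior_grad = (lam - med) / np.maximum(med, 1e-12)         # MRP penalty derivative
        lam = lam * back / np.maximum(sens + beta * prior_grad, 1e-12)
    return lam.reshape(shape)
```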

    Numerical methods for low-dose EDS tomography

    Get PDF
    Energy-dispersive X-ray spectroscopic (EDS) tomography is a powerful three-dimensional (3D) imaging technique for characterizing the chemical composition and structure of nanomaterials. However, the accuracy and resolution are typically hampered by the limited number of tilt images that can be measured and the low signal-to-noise ratios (SNRs) of the energy-resolved tilt images. Various sophisticated reconstruction algorithms have been proposed for specific types of samples and imaging conditions, yet deciding on which algorithm to use for each new case remains a complex problem. In this paper, we propose to tailor the reconstruction algorithm for EDS tomography in three aspects: (1) model the reconstruction problem based on an accurate assumption of the data statistics; (2) regularize the reconstruction to incorporate prior knowledge; (3) apply bimodal tomography to augment the EDS data with a high-SNR modality. Methods for the three aspects can be combined in one reconstruction procedure as three modules. Therefore, a reconstruction algorithm can be constructed as a ‘recipe’. We also provide guidelines for preparing the recipe based on conditions and assumptions for the data. We investigate the effects of different recipes on both simulated data and real experimental data. The results show that the preferred recipe depends on both acquisition conditions and sample properties, and that the image quality can be enhanced using a properly tailored recipe.
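
    One way to picture the ‘recipe’ is as a single composite objective with one term per module. The sketch below assembles such an objective under illustrative assumptions (approximate Poisson weights for the EDS data term, a smoothed-TV regularizer, and a simple image-space coupling to a high-SNR reconstruction); it does not claim to match the paper's exact formulations.

```python
import numpy as np

def recipe_objective(x, A, y_eds, haadf_img, alpha=0.1, gamma=0.05, eps=1e-6):
    """Illustrative composite objective for the three-module 'recipe' idea.
    x: flattened image, A: projection operator, y_eds: low-count EDS sinogram,
    haadf_img: high-SNR reconstruction used as the bimodal reference
    (all assumed example inputs)."""
    x_img = x.reshape(haadf_img.shape)
    w = 1.0 / np.maximum(y_eds, 1.0)                        # approximate 1/variance weights
    data = 0.5 * np.sum(w * (A @ x - y_eds) ** 2)           # module 1: data statistics
    gx, gy = np.gradient(x_img)
    reg = np.sum(np.sqrt(gx**2 + gy**2 + eps))              # module 2: smoothed-TV prior
    coupling = 0.5 * np.sum((x_img - haadf_img) ** 2)       # module 3: bimodal term
    return data + alpha * reg + gamma * coupling
```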

    Accurate PET Reconstruction from Reduced Set of Measurements based on GMM

    Full text link
    In this paper, we provide a novel method for the estimation of unknown parameters of the Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). The vast majority of PET imaging methods are based on a reconstruction model that is defined by values on some pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM model parameters from some set of point-wise measurements. The challenge of PET reconstruction is that the measurement is represented by so-called lines of response (LoRs), instead of points. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoRs. Estimation of the unknown parameters relies on two facts: the marginal distribution theorem of the multivariate normal distribution, and the properties of the marginal distribution of LoRs. We propose an iterative algorithm that resembles the maximum-likelihood method to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with great accuracy. The result is promising, since a high-quality parametric reconstruction model can be obtained from lower-dose measurements, and is directly suitable for further processing.
    Comment: 23 pages, 10 figures, submitted to "Signal Processing" by Elsevier
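
    The marginal-distribution ingredient can be sketched directly: for a LoR {x : n·x = s} with unit normal n, the projection n·X of a Gaussian component N(μ, Σ) is a one-dimensional Gaussian with mean n·μ and variance nᵀΣn, so each LoR contributes a 1-D mixture density evaluated at its offset s. The sketch below evaluates that log-likelihood under assumed inputs; the paper's full estimation algorithm is not reproduced.

```python
import numpy as np

def lor_log_likelihood(normals, offsets, weights, means, covs):
    """Log-likelihood of lines of response under a 2-D Gaussian mixture,
    using the marginal distribution along each LoR's unit normal.
    normals: list of unit normal vectors n; offsets: list of scalars s;
    weights/means/covs: GMM parameters (assumed example inputs)."""
    loglik = 0.0
    for n, s in zip(normals, offsets):
        dens = 0.0
        for w, mu, cov in zip(weights, means, covs):
            m = n @ mu                     # projected mean along the normal
            var = n @ cov @ n              # projected variance along the normal
            dens += w * np.exp(-0.5 * (s - m) ** 2 / var) / np.sqrt(2 * np.pi * var)
        loglik += np.log(max(dens, 1e-300))
    return loglik
```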