
    Expectation Propagation for Poisson Data

    The Poisson distribution arises naturally when dealing with data involving counts, and it has found many applications in inverse problems and imaging. In this work, we develop an approximate Bayesian inference technique based on expectation propagation for approximating the posterior distribution formed from the Poisson likelihood function and a Laplace-type prior distribution, e.g., the anisotropic total variation prior. The approach iteratively yields a Gaussian approximation, and at each iteration, it updates the Gaussian approximation to one factor of the posterior distribution by moment matching. We derive explicit update formulas in terms of one-dimensional integrals, and also discuss stable and efficient quadrature rules for evaluating these integrals. The method is showcased on two-dimensional PET images.
    Comment: 25 pages, to be published in Inverse Problems
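    The update formulas the abstract refers to reduce to one-dimensional moment computations. Below is a minimal, generic sketch (not the paper's exact formulas or quadrature rules) of moment matching for a single Poisson site: a cavity Gaussian N(mu, v) is multiplied by the Poisson likelihood, and the mean and variance of the resulting tilted density are computed by naive fixed-grid quadrature on the positive axis. Function and parameter names are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def poisson_moment_match(y, mu, v, n_grid=2000):
    """Mean and variance of the tilted density N(x; mu, v) * Poisson(y | x), restricted to x > 0."""
    # quadrature grid covering the bulk of the cavity Gaussian and the Poisson mass
    lo = max(1e-8, mu - 8.0 * np.sqrt(v))
    hi = mu + 8.0 * np.sqrt(v) + 8.0 * np.sqrt(y + 1.0)
    x = np.linspace(lo, hi, n_grid)
    # log of the unnormalised tilted density, kept in log form for numerical stability
    log_t = -0.5 * (x - mu) ** 2 / v - x + y * np.log(x) - gammaln(y + 1)
    w = np.exp(log_t - log_t.max())          # rescale to avoid underflow
    dx = x[1] - x[0]
    z0 = np.sum(w) * dx                      # zeroth moment (unnormalised mass)
    m1 = np.sum(w * x) * dx / z0             # first moment
    m2 = np.sum(w * x * x) * dx / z0         # second moment
    return m1, m2 - m1 ** 2

# toy usage: one site update for an observed count y = 3 and a cavity N(2, 4)
print(poisson_moment_match(y=3, mu=2.0, v=4.0))
```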

    Post-Reconstruction Deconvolution of PET Images by Total Generalized Variation Regularization

    Improving the quality of positron emission tomography (PET) images, affected by low resolution and a high level of noise, is a challenging task in nuclear medicine and radiotherapy. This work proposes a restoration method, applied after tomographic reconstruction of the images and targeting clinical situations where raw data are often not accessible. Based on inverse problem methods, our contribution introduces the recently developed total generalized variation (TGV) norm to regularize PET image deconvolution. Moreover, we stabilize this procedure with additional image constraints such as positivity and photometry invariance. A criterion for automatically updating and adjusting the regularization parameter in the case of Poisson noise is also presented. Experiments are conducted on both synthetic data and real patient images.
    Comment: First published in the Proceedings of the 23rd European Signal Processing Conference (EUSIPCO-2015) in 2015, published by EURASIP
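    To make the constraint step concrete, here is a minimal sketch of one plausible way to enforce the two image constraints named in the abstract, positivity and photometry invariance: negative voxels are clipped to zero and the estimate is rescaled to preserve the total intensity of the observed image. This clip-then-rescale step is an illustration under those assumptions, not necessarily the exact projection used in the paper; all names are illustrative.

```python
import numpy as np

def project_constraints(u, observed_flux):
    """Enforce positivity and total-flux (photometry) preservation on the current estimate u."""
    u = np.clip(u, 0.0, None)          # positivity: set negative values to zero
    s = u.sum()
    if s > 0:
        u = u * (observed_flux / s)    # photometry: rescale to the observed total flux
    return u

# toy usage on a random "estimate" and a random "observed" image
rng = np.random.default_rng(0)
u = rng.normal(1.0, 0.5, size=(64, 64))                 # current deconvolution iterate
g = np.abs(rng.normal(1.0, 0.5, size=(64, 64)))         # observed (blurred, noisy) image
u = project_constraints(u, g.sum())
print(u.min(), u.sum(), g.sum())
```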

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
    Comment: 60 pages, 33 figures
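    As a minimal sketch of one canonical splitting scheme of the kind surveyed here, the snippet below runs proximal gradient (forward-backward splitting) on $\frac{1}{2}\|Ku - f\|^2 + \alpha \|u\|_1$, i.e., a smooth data term $\mathcal{D}$ combined with a nonsmooth regularizer $\mathcal{R}$ through its proximal map (soft-thresholding). It is a generic toy instance, not drawn from the survey's experiments; all names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(K, f, alpha, n_iter=500):
    """Forward-backward splitting for (1/2)||K u - f||^2 + alpha ||u||_1."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2       # 1 / Lipschitz constant of the data gradient
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)                 # gradient of the smooth fidelity term
        u = soft_threshold(u - step * grad, step * alpha)
    return u

# toy usage: recover a sparse vector from an underdetermined random operator
rng = np.random.default_rng(0)
K = rng.normal(size=(40, 100))
u_true = np.zeros(100)
u_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
f = K @ u_true + 0.01 * rng.normal(size=40)
u_hat = proximal_gradient(K, f, alpha=0.1)
print(np.round(u_hat[[5, 40, 77]], 2))
```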

    Accurate PET Reconstruction from Reduced Set of Measurements based on GMM

    In this paper, we provide a novel method for the estimation of the unknown parameters of a Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). The vast majority of PET imaging methods are based on a reconstruction model defined by values on some pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM parameters from a set of point-wise measurements. The challenge of PET reconstruction is that each measurement is represented by a so-called line of response (LoR) rather than a point. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoRs. Estimation of the unknown parameters relies on two facts: the marginal distribution theorem of the multivariate normal distribution, and the properties of the marginal distribution of LoRs. We propose an iterative algorithm that resembles the maximum-likelihood method to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with great accuracy. The result is promising, since a high-quality parametric reconstruction model can be obtained from lower-dose measurements and is directly suitable for further processing.
    Comment: 23 pages, 10 figures, submitted to "Signal Processing" by Elsevier
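    The marginal distribution fact the abstract leans on can be illustrated in two dimensions: if the emission point follows N(mu, Sigma), its projection onto the unit normal n of a line of response is a one-dimensional Gaussian with mean n·mu and variance nᵀ Sigma n, so a LoR given by {x : n·x = c} contributes a 1D Gaussian likelihood evaluated at c. The sketch below is a generic illustration under that parameterisation, not the paper's algorithm; names are illustrative.

```python
import numpy as np

def lor_log_likelihood(n, c, mu, Sigma):
    """Log-density of the offset c of the line {x : n.x = c} under the component N(mu, Sigma)."""
    n = n / np.linalg.norm(n)                 # unit normal to the LoR
    m = n @ mu                                # mean of the projected 1D Gaussian
    v = n @ Sigma @ n                         # variance of the projected 1D Gaussian
    return -0.5 * ((c - m) ** 2 / v + np.log(2.0 * np.pi * v))

# toy usage: likelihood of one LoR under a single Gaussian component
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
print(lor_log_likelihood(np.array([1.0, 1.0]), c=0.4, mu=mu, Sigma=Sigma))
```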

    Stochastic Optimisation Methods Applied to PET Image Reconstruction

    Positron Emission Tomography (PET) is a medical imaging technique that is used to provide functional information regarding physiological processes. Statistical PET reconstruction attempts to estimate the distribution of radiotracer in the body, but this methodology is generally computationally demanding because of the use of iterative algorithms. These algorithms are often accelerated by the utilisation of data subsets, which may result in convergence to a limit set rather than the unique solution. Methods exist to relax the update step sizes of subset algorithms, but they introduce additional heuristic parameters that may result in extended reconstruction times. This work investigates novel methods to modify subset algorithms to converge to the unique solution while maintaining the acceleration benefits of subset methods. This work begins with a study of an automatic method for increasing subset sizes, called AutoSubsets. This algorithm measures the divergence between two distinct data subset update directions and, if it is significant, the subset size is increased for future updates. The algorithm is evaluated using both projection and list mode data. The algorithm's use of small initial subsets benefits early reconstruction, but unfortunately, at later updates, the subset size increases too early, which impedes convergence rates. The main part of this work investigates the application of stochastic variance reduction optimisation algorithms to PET image reconstruction. These algorithms reduce the variance due to the use of subsets by incorporating previously computed subset gradients into the update direction. The algorithms are adapted for application to PET reconstruction. This study evaluates the reconstruction performance of these algorithms when applied to various 3D non-TOF PET simulated, phantom and patient data sets. The impact of a number of algorithm parameters is explored, including: subset selection methodologies, the number of subsets, step size methodologies and preconditioners. The results indicate that these stochastic variance reduction algorithms demonstrate superior performance after only a few epochs when compared to a standard PET reconstruction algorithm.
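    The variance reduction idea described above can be sketched generically: the gradient of one data subset is corrected by the stored subset gradients computed at a full-gradient "anchor" point, so the update direction is unbiased with reduced variance (an SVRG-style update). The snippet below applies it to a toy least-squares problem, not the thesis' PET objective; all names and parameter choices are illustrative.

```python
import numpy as np

def svrg_least_squares(A, b, n_subsets=5, step=None, n_epochs=20, seed=0):
    """SVRG-style iterations for (1/(2n))||A x - b||^2 split into fixed data subsets."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    idx = np.array_split(rng.permutation(n), n_subsets)    # fixed data subsets
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_anchor = x.copy()
        # subset gradients at the anchor point and their average (the full gradient)
        g_anchor = [A[s].T @ (A[s] @ x_anchor - b[s]) / len(s) for s in idx]
        g_mean = np.mean(g_anchor, axis=0)
        for _ in range(n_subsets):
            k = rng.integers(n_subsets)
            s = idx[k]
            g_k = A[s].T @ (A[s] @ x - b[s]) / len(s)       # current subset gradient
            x = x - step * (g_k - g_anchor[k] + g_mean)     # variance-reduced step
    return x

# toy usage: recover x_true from noisy linear measurements
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 30))
x_true = rng.normal(size=30)
b = A @ x_true + 0.01 * rng.normal(size=200)
print(np.linalg.norm(svrg_least_squares(A, b) - x_true))
```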

    Scalable Bayesian inversion with Poisson data

    Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to various extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates, but fail to provide the important uncertainty information about the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality. In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging (1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-1 projection form of factors in the posterior distribution, and (2) efficient feed-forward processes of neural networks, with training time further reduced by dimensional flexibility when incorporating forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive an explicit functional form and analyse the convergence of the algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate nonnegativity constraints and how to design stable moment evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational auto-encoders (CVAEs), we investigate how to apply them to uncertainty quantification of inverse problems and develop flexible and novel frameworks for general Bayesian inversion. Finally, we justify these contributions with numerical experiments and show the competitiveness of our proposed methods by comparing with state-of-the-art benchmarks.
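    One way the low-rank structure of a forward operator can be exploited, sketched here in a generic linear-Gaussian setting rather than the thesis' Poisson model, is through the Woodbury identity: if A = U Vᵀ with rank r much smaller than the image dimension, a Gaussian posterior covariance of the form (I/sigma2 + AᵀA/tau2)⁻¹ can be applied to a vector using only r x r solves. Names and the specific covariance form are illustrative assumptions.

```python
import numpy as np

def apply_posterior_cov(U, V, sigma2, tau2, x):
    """Apply (I/sigma2 + A^T A / tau2)^{-1} to x for A = U @ V.T without forming d x d matrices."""
    c = 1.0 / sigma2                       # prior precision (scalar)
    r = U.shape[1]
    M = (U.T @ U) / tau2                   # r x r core, so A^T A / tau2 = V @ M @ V.T
    # Woodbury identity: (c I + V M V^T)^{-1} x = x/c - V y / c^2
    y = np.linalg.solve(np.eye(r) + (M @ (V.T @ V)) / c, M @ (V.T @ x))
    return x / c - (V @ y) / (c * c)

# toy check against the dense solve for a small problem
rng = np.random.default_rng(0)
m, d, r = 50, 200, 10
U = rng.normal(size=(m, r))
V = rng.normal(size=(d, r))
A = U @ V.T
x = rng.normal(size=d)
sigma2, tau2 = 0.5, 0.1
dense = np.linalg.solve(np.eye(d) / sigma2 + A.T @ A / tau2, x)
print(np.allclose(dense, apply_posterior_cov(U, V, sigma2, tau2, x)))
```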