
    Topics in image reconstruction for high resolution positron emission tomography

    Ill-posed problems are a topic of interdisciplinary interest arising in remote sensing and non-invasive imaging. However, issues remain that are crucial for the successful application of the theory to a given imaging modality. Positron emission tomography (PET) is a non-invasive imaging technique that allows assessing biochemical processes taking place in an organism in vivo. PET is a valuable tool in the investigation of normal human or animal physiology, in diagnosing and staging cancer, and in studying heart and brain disorders. PET is similar to other tomographic imaging techniques in many ways, but to reach its full potential and to extract maximum information from projection data, PET has to use accurate, yet practical, image reconstruction algorithms. Several topics related to PET image reconstruction have been explored in the present dissertation.
The following contributions have been made: (1) A system matrix model has been developed using an analytic detector response function based on linear attenuation of [gamma]-rays in a detector array. It has been demonstrated that the use of an oversimplified system model for the computation of the system matrix results in image artefacts. (IEEE Trans. Nucl. Sci., 2000); (2) An analytical model of the dependence on total counts was used to simplify utilisation of the cross-validation (CV) stopping rule and to accelerate statistical iterative reconstruction. It can be used instead of the original CV procedure for high-count projection data, where the CV rule yields reasonably accurate images. (IEEE Trans. Nucl. Sci., 2001); (3) A regularisation methodology employing singular value decomposition (SVD) of the system matrix was proposed, based on spatial resolution analysis. A characteristic property of the singular value spectrum was found that reveals a relationship between the optimal truncation level for truncated-SVD reconstruction and the optimal reconstructed image resolution. (IEEE Trans. Nucl. Sci., 2001); (4) A novel event-by-event linear image reconstruction technique based on a regularised pseudo-inverse of the system matrix was proposed. The algorithm provides a fast way to update an image, potentially in real time, and allows, in principle, instant visualisation of the radioactivity distribution while the object is still being scanned. The computed image estimate is the minimum-norm least-squares solution of the regularised inverse problem.
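
    Contribution (4) can be illustrated with a small numerical sketch (the matrix sizes, data, and truncation level below are arbitrary, not those of the dissertation): the regularised pseudo-inverse P of the system matrix is precomputed, and each detected event then updates the image by adding one column of P.

```python
import numpy as np

def regularised_pinv(A, k):
    """Regularised pseudo-inverse via truncated SVD: keep the k largest
    singular values and discard the rest to stabilise the inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

rng = np.random.default_rng(0)
A = rng.random((8, 4))            # toy system matrix: 8 projection bins, 4 pixels
P = regularised_pinv(A, k=3)      # k is an illustrative truncation level

# Event-by-event update: each event detected in bin b adds column b of P,
# so the image can be refreshed while data are still being acquired.
x = np.zeros(4)
for b in [0, 3, 3, 7, 1]:         # detector bins of five hypothetical events
    x += P[:, b]

# By linearity, the accumulated image equals P @ y for the event histogram y.
y = np.bincount([0, 3, 3, 7, 1], minlength=8).astype(float)
assert np.allclose(x, P @ y)
```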

    Squared Extrapolation Methods (SQUAREM): A New Class of Simple and Efficient Numerical Schemes for Accelerating the Convergence of the EM Algorithm

    We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm, by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme is simply applying it twice within each cycle of the extrapolation method. Here we focus on the first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity, and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1, avoiding both near breakdown and stagnation. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood and its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
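
    The squaring idea can be sketched as follows. This toy version uses the simple norm-based steplength (the RRE1 and MPE1 steplengths in the paper differ slightly), and a plain contraction mapping stands in for an EM update; only the basic fixed-point map F is required, as noted above.

```python
import numpy as np

def squarem_step(F, x):
    """One SQUAREM cycle: apply the fixed-point map F twice, then extrapolate."""
    x1 = F(x)
    x2 = F(x1)
    r = x1 - x                       # first difference
    v = (x2 - x1) - r                # second difference
    if np.linalg.norm(v) == 0:       # already (numerically) at the fixed point
        return x2
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)
    return x - 2 * alpha * r + alpha ** 2 * v

# Toy fixed-point map standing in for an EM update: x -> cos(x), x* ~ 0.7391
F = np.cos
x_plain = x_sq = 1.0
for _ in range(5):
    x_plain = F(F(x_plain))          # two basic steps per cycle, for a fair count
    x_sq = squarem_step(F, x_sq)     # same two F-evaluations, plus extrapolation

x_star = 0.7390851332151607
assert abs(x_sq - x_star) < abs(x_plain - x_star)   # squared scheme is closer
```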

    Statistical Inference in Positron Emission Tomography

    In this report, we investigate mathematical algorithms for image reconstruction in the context of positron emission tomography (a medical diagnostic technique). We first take inspiration from the physics of PET to design a mathematical model tailored to the problem. We think of positron emissions as the output of an indirectly observed Poisson process and formulate the link between the emissions and the scanner records through the Radon transform. This model allows us to express image reconstruction as a standard problem in statistical estimation from incomplete data. We then investigate different algorithms, as well as stopping criteria, and compare their relative efficiency.
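
    For this Poisson/incomplete-data model, the EM algorithm reduces to the classic ML-EM multiplicative update; a self-contained sketch, with a random nonnegative matrix standing in as a toy surrogate for the discretised Radon transform (not the report's actual system):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM for y ~ Poisson(A @ x):
    x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A @ x)_i, with s_j = sum_i A_ij."""
    x = np.ones(A.shape[1])
    s = A.sum(axis=0)                         # pixel sensitivities
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # data over current forward projection
        x *= (A.T @ ratio) / s
    return x

rng = np.random.default_rng(1)
A = rng.random((30, 5))                       # toy "Radon transform"
x_true = np.array([1.0, 4.0, 2.0, 0.5, 3.0])
y = A @ x_true                                # noise-free projections, as a sanity check
x_hat = mlem(A, y)
assert np.linalg.norm(A @ x_hat - y) < 1e-2 * np.linalg.norm(y)
```

The update is multiplicative, so a nonnegative starting image stays nonnegative throughout, one of the reasons EM is natural for emission data.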

    Level Set Method for Positron Emission Tomography

    In positron emission tomography (PET), a radioactive compound is injected into the body to promote a tissue-dependent emission rate. Expectation maximization (EM) reconstruction algorithms are iterative techniques that estimate the concentration coefficients providing the best-fitted solution, for example, a maximum likelihood estimate. In this paper, we combine the EM algorithm with a level set approach. The level set method is used to capture the coarse-scale information and the discontinuities of the concentration coefficients. An intrinsic advantage of the level set formulation is that anatomical information can be efficiently incorporated and used in an easy and natural way. We utilize a multiple level set formulation to represent the geometry of the objects in the scene. The proposed algorithm can be applied to any PET configuration without major modifications.
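
    The multiple level set representation can be sketched as follows: with two level set functions, the four sign combinations partition the image into up to four regions, each carrying one concentration coefficient, so discontinuities of the image coincide with the zero level sets (grid size and coefficient values below are arbitrary):

```python
import numpy as np

def image_from_level_sets(phi1, phi2, c):
    """Piecewise-constant image from two level-set functions: the sign
    pattern (phi1 > 0, phi2 > 0) selects one of four coefficients c[0..3]."""
    h1 = (phi1 > 0).astype(int)
    h2 = (phi2 > 0).astype(int)
    region = 2 * h1 + h2             # 0: outside both, 1: in phi2 only,
    return np.asarray(c)[region]     # 2: in phi1 only, 3: overlap

# Two overlapping circles as signed-distance-like functions on a 64x64 grid
yy, xx = np.mgrid[0:64, 0:64]
phi1 = 20.0 - np.hypot(xx - 24, yy - 32)   # >0 inside the left circle
phi2 = 20.0 - np.hypot(xx - 40, yy - 32)   # >0 inside the right circle
u = image_from_level_sets(phi1, phi2, c=[0.0, 1.0, 2.0, 5.0])
assert set(np.unique(u)) <= {0.0, 1.0, 2.0, 5.0}
```

In an EM/level-set scheme, the coefficients and the level set functions would be updated alternately; the sketch only shows the geometric representation itself.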

    Robust Framework for PET Image Reconstruction Incorporating System and Measurement Uncertainties

    In positron emission tomography (PET), an optimal estimate of the radioactivity concentration is obtained from the measured emission data under certain criteria. So far, all the well-known statistical reconstruction algorithms require an exactly known system probability matrix a priori, and the quality of such a system model largely determines the quality of the reconstructed images. In this paper, we propose an algorithm for PET image reconstruction in the real-world case where the PET system model is subject to uncertainties. The method casts PET reconstruction as a regularization problem, and the image estimate is obtained by means of an uncertainty-weighted least squares framework. The performance of our method is evaluated with Shepp-Logan simulated and real phantom data, and demonstrates significant improvements in image quality over least squares reconstruction.
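
    A minimal sketch of an uncertainty-weighted least squares estimate (this toy version weights only the measurement uncertainties, using inverse variances, and adds a simple Tikhonov term; the paper's framework also accounts for system-matrix uncertainty, which is not modelled here):

```python
import numpy as np

def uw_ls(A, y, var_y, lam=1e-3):
    """Minimise (y - A x)' W (y - A x) + lam * ||x||^2 with W = diag(1/var_y),
    so noisier measurements are down-weighted. Solved via the normal equations."""
    W = np.diag(1.0 / var_y)
    H = A.T @ W @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ y)

rng = np.random.default_rng(2)
A = rng.random((40, 6))                       # toy system matrix
x_true = rng.random(6)
var_y = np.full(40, 0.01)                     # per-measurement noise variances
y = A @ x_true + rng.normal(0.0, np.sqrt(var_y))
x_hat = uw_ls(A, y, var_y)
assert np.linalg.norm(x_hat - x_true) < 0.5   # loose sanity check
```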

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in the Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.

    Spatial Resolution Properties of Penalized-Likelihood Image Reconstruction: Space-Invariant Tomographs

    This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction, the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/85890/1/Fessler97.pd
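
    The linearized local impulse response can be computed directly for a toy 1-D system. In the sketch below, `W` holds statistical weights proportional to inverse data variance (roughly 1/counts for emission data), and the paradox described above shows up as a lower response peak, i.e. more smoothing, at the high-count pixel. All sizes and values are illustrative, not taken from the paper.

```python
import numpy as np

def local_impulse_response(A, W, R, beta, j):
    """Linearised local impulse response at pixel j for a quadratic penalty:
    l_j = (A' W A + beta R)^{-1} A' W A e_j."""
    F = A.T @ (W[:, None] * A)        # Fisher-information-like term A' diag(W) A
    e = np.zeros(A.shape[1])
    e[j] = 1.0
    return np.linalg.solve(F + beta * R, F @ e)

# Toy 1-D "tomograph": a blurring system matrix and a first-difference penalty
n = 16
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
D = np.diff(np.eye(n), axis=0)        # first-difference operator
R = D.T @ D
# Weights ~ 1/counts: the high-count half of the object gets small weights
counts = np.where(np.arange(n) < n // 2, 100.0, 10.0)
W = 1.0 / counts
lo = local_impulse_response(A, W, R, beta=1.0, j=4)    # high-count pixel
hi = local_impulse_response(A, W, R, beta=1.0, j=12)   # low-count pixel
assert lo[4] < hi[12]   # lower peak = more smoothing in the high-count region
```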

    Scalable Bayesian inversion with Poisson data

    Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to various extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates, but fail to provide the important uncertainty information about the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality. In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging (1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-one projection form of factors in the posterior distribution, and (2) efficient feed-forward processes of neural networks, with training time further reduced by the flexibility of dimensions afforded by incorporating forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive explicit functional forms and analyse the convergence of algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate non-negativity constraints and how to design stable moment evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational auto-encoders (CVAEs), we investigate how to apply them for uncertainty quantification of inverse problems and develop flexible and novel frameworks for general Bayesian inversion. Finally, we justify these contributions with numerical experiments and show the competitiveness of our proposed methods by comparison with state-of-the-art benchmarks.
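
    To fix notation, the sketch below evaluates the unnormalised negative log-posterior for Poisson data with a Gaussian prior and finds a MAP point by projected gradient descent. This only produces the point estimate criticised above; the VI, EP and CVAE methods of the thesis approximate the full posterior around such a model. All sizes and hyperparameters are illustrative.

```python
import numpy as np

def neg_log_posterior(x, A, y, lam):
    """Unnormalised negative log-posterior for y_i ~ Poisson((A x)_i)
    with an isotropic Gaussian prior on x (precision lam)."""
    Ax = A @ x
    return np.sum(Ax - y * np.log(Ax)) + 0.5 * lam * np.sum(x ** 2)

def grad(x, A, y, lam):
    Ax = A @ x
    return A.T @ (1.0 - y / Ax) + lam * x

rng = np.random.default_rng(3)
A = rng.random((25, 4)) + 0.1            # strictly positive toy forward operator
x_true = np.array([2.0, 1.0, 3.0, 0.5])
y = rng.poisson(A @ x_true).astype(float)

# Projected gradient descent: project onto x > 0 after each step
x = np.ones(4)
for _ in range(2000):
    x = np.maximum(x - 0.01 * grad(x, A, y, lam=0.1), 1e-6)

assert neg_log_posterior(x, A, y, 0.1) <= neg_log_posterior(np.ones(4), A, y, 0.1)
```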

    Regularization for Uniform Spatial Resolution Properties in Penalized-Likelihood Image Reconstruction

    Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space variant, asymmetric, and object-dependent even for space invariant imaging systems. The authors propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. The authors have developed computationally efficient methods for PET systems with shift-invariant geometric responses. They demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/85867/1/Fessler79.pd

    Incorporating accurate statistical modeling in PET: reconstruction for whole-body imaging

    Doctoral thesis in Biophysics, presented to the Universidade de Lisboa through the Faculdade de CiĂȘncias, 2007. The thesis is devoted to image reconstruction in 3D whole-body PET imaging. OSEM (Ordered Subsets Expectation Maximization) is a statistical algorithm that assumes Poisson data. However, corrections for physical effects (attenuation, scattered and random coincidences) and detector efficiency remove the Poisson characteristics of these data. Fourier Rebinning (FORE), which combines 3D imaging with fast 2D reconstructions, requires corrected data. Thus, if it is to be used, or whenever data are corrected prior to OSEM, the Poisson-like characteristics need to be restored. Restoring Poisson-like data, i.e., making the variance equal to the mean, was achieved through the use of weighted OSEM algorithms. One of them is NECOSEM, which relies on the NEC weighting transformation. The distinctive feature of this algorithm is the NEC multiplicative factor, defined as the ratio between the mean and the variance. With real clinical data this is critical, since there is only one value collected for each bin: the data value itself. For simulated data, if we keep track of the values of these two statistical moments, the exact values of the NEC weights can be calculated. We have compared the performance of five different weighted algorithms (FORE+AWOSEM, FORE+NECOSEM, ANWOSEM3D, SPOSEM3D and NECOSEM3D) on the basis of tumor detectability. The comparison was done for simulated and clinical data. In the former case an analytical simulator was used. This is the ideal situation, since all the weighting factors can be exactly determined. For comparing the performance of the algorithms, we used the Non-Prewhitening Matched Filter (NPWMF) numerical observer. With some knowledge obtained from the simulation study, we proceeded to the reconstruction of clinical data. In that case, it was necessary to devise a strategy for estimating the NEC weighting factors. The comparison between reconstructed images was done by a physician highly familiar with whole-body PET imaging.
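
    The NEC weighting transformation can be sketched numerically: scaling a bin by the ratio mean/variance restores the Poisson property that the variance equals the mean. Here the true moments are assumed known, as in the simulated-data case described above; the values are arbitrary.

```python
import numpy as np

def nec_transform(data, mean, var):
    """NEC weighting: scale each bin by mean/variance so the transformed
    data are Poisson-like again (variance equal to the mean)."""
    return (mean / var) * data

# Corrected PET data are often over-dispersed (variance > mean), so the
# Poisson character is lost; emulate that with a scaled Poisson variable.
rng = np.random.default_rng(4)
data = 4.0 * rng.poisson(12.5, size=200_000)   # mean ~50, variance ~200

t = nec_transform(data, mean=50.0, var=200.0)  # NEC factor = 50/200 = 0.25
assert abs(t.mean() - t.var()) < 1.0           # variance ~ mean is restored
```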
    • 
