6 research outputs found

    Analytical calculation of volumes-of-intersection for iterative, fully 3-D PET reconstruction

    Use of iterative algorithms to reconstruct three-dimensional (3-D) positron emission tomography (PET) data requires the computation of the system probability matrix. The purely geometrical contribution can easily be approximated by the length-of-intersection (LOI) between lines-of-response (LOR) and individual voxels; however, more accurate geometrical projectors are desirable. We have therefore developed a fast method for the analytical calculation of the 3-D shape and volume of volumes-of-intersection (VOI). This method provides an alternative, robust projector with uniformly continuous sampling of the image space. The increased computational effort is made tractable by several speed-up techniques: exploiting intrinsic symmetry relations and the sparseness of the system matrix makes it possible to create an efficiently compressed matrix that can be precomputed and stored entirely in memory. In addition, a new voxel-addressing scheme has been implemented, which avoids time-consuming symmetry transformations of voxel addresses by using an octant-wise, symmetrically ordered field of voxels. These methods have been applied to fully 3-D iterative reconstruction of 3-D sinograms recorded with a Siemens/CTI ECAT HR+ PET scanner. A comparison of the performance of the reconstruction using LOI weighting and VOI weighting is presented.
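    As a point of reference for the LOI baseline that the VOI projector improves upon, below is a minimal ray-tracing sketch in the spirit of Siddon's algorithm; the grid parametrisation and function names are illustrative, not taken from the paper, and the paper's VOI projector computes full 3-D intersection volumes rather than these line integrals.

```python
# Minimal sketch of a length-of-intersection (LOI) projector in the spirit of
# Siddon's algorithm; geometry and names are illustrative, not from the paper.
import numpy as np

def loi_weights(p0, p1, n_vox, vox_size, origin):
    """Return (voxel_index, intersection_length) pairs for the ray p0 -> p1.

    p0, p1   : ray endpoints (detector positions of the LOR), shape (3,)
    n_vox    : number of voxels per axis, shape (3,)
    vox_size : voxel edge lengths, shape (3,)
    origin   : coordinates of the grid corner, shape (3,)
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n_vox = np.asarray(n_vox)
    vox_size = np.asarray(vox_size, float)
    origin = np.asarray(origin, float)
    d = p1 - p0
    # Parametric values where the ray crosses each grid plane, per axis.
    alphas = [np.array([0.0, 1.0])]
    for ax in range(3):
        if d[ax] != 0.0:
            planes = origin[ax] + vox_size[ax] * np.arange(n_vox[ax] + 1)
            alphas.append((planes - p0[ax]) / d[ax])
    alpha = np.unique(np.concatenate(alphas))
    alpha = alpha[(alpha >= 0.0) & (alpha <= 1.0)]
    ray_len = np.linalg.norm(d)
    out = []
    for a0, a1 in zip(alpha[:-1], alpha[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d          # midpoint identifies the voxel
        idx = np.floor((mid - origin) / vox_size).astype(int)
        if np.all(idx >= 0) and np.all(idx < n_vox):
            out.append((tuple(idx), (a1 - a0) * ray_len))
    return out
```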

    A hardware projector/backprojector pair for 3D PET reconstruction


    3D Forward and Back-Projection for X-Ray CT Using Separable Footprints

    Iterative methods for 3D image reconstruction have the potential to improve image quality over conventional filtered back-projection (FBP) in X-ray computed tomography (CT). However, the computational burden of 3D cone-beam forward and back-projectors is one of the greatest challenges facing practical adoption of iterative methods for X-ray CT, and projector accuracy is also important for iterative methods. This paper describes two new separable footprint (SF) projector methods that approximate the voxel footprint functions as 2D separable functions. Because of this separability, calculating their integrals over a detector cell is greatly simplified and can be implemented efficiently. The SF-TR projector uses trapezoid functions in the transaxial direction and rectangular functions in the axial direction, whereas the SF-TT projector uses trapezoid functions in both directions. Simulations and experiments showed that both SF projector methods are more accurate than the distance-driven (DD) projector, a current state-of-the-art method in the field. The SF-TT projector is more accurate than the SF-TR projector for rays associated with large cone angles. The SF-TR projector has a computation speed similar to that of the DD projector, while the SF-TT projector is about two times slower.
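    To make the "greatly simplified" detector-cell integrals concrete, here is a minimal sketch of integrating a unit-height trapezoid footprint over a detector cell in closed form; the breakpoints t0..t3 are placeholders for the geometry-dependent values the paper derives, and the 2D separable footprint integral is simply the product of the transaxial and axial 1D integrals.

```python
# Illustrative sketch: exact integral of a unit-height trapezoid footprint
# over a detector cell [a, b]. Breakpoints t0 < t1 <= t2 < t3 are hypothetical
# stand-ins for the voxel/ray-dependent values derived in the paper.
def trapezoid_integral(a, b, t0, t1, t2, t3):
    """Integral over [a, b] of the trapezoid rising on [t0, t1], flat on
    [t1, t2], falling on [t2, t3], and zero elsewhere."""
    def antideriv(x):
        x = min(max(x, t0), t3)            # footprint is zero outside [t0, t3]
        s = 0.0
        u = min(x, t1)                     # rising edge: f(t) = (t - t0)/(t1 - t0)
        if u > t0:
            s += (u - t0) ** 2 / (2.0 * (t1 - t0))
        u = min(x, t2)                     # flat top: f(t) = 1
        if u > t1:
            s += (u - t1)
        if x > t2:                         # falling edge: f(t) = (t3 - t)/(t3 - t2)
            s += (t3 - t2) / 2.0 - (t3 - x) ** 2 / (2.0 * (t3 - t2))
        return s
    return antideriv(b) - antideriv(a)
```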

    Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered-subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range and the interaction of incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for the modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail; the remaining system matrix elements can be obtained from axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared, in terms of image quality, to a fast 2D implementation of the OSEM algorithm combined with Fourier rebinning. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction compared to conventional 2D approaches based on rebinning schemes. At the same time, it demonstrates that fully 3D methodologies can be applied efficiently to the image reconstruction problem for high-resolution rotational PET cameras by using accurate precalculated system models and taking advantage of the system's symmetries.
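    A minimal sketch of the kind of in-plane eight-fold symmetry bookkeeping described above, assuming a square transaxial slice: each voxel is reduced to a canonical representative in one octant plus a transform identifier, so only that octant's system-matrix rows need to be stored and the matching transform is applied to the LOR coordinates on lookup. The names and the 3-bit encoding are hypothetical; the actual scheme also exploits axial symmetries and redundancies.

```python
# Hypothetical sketch of in-plane eight-fold symmetry bookkeeping: map any
# voxel (x, y) on an N x N slice to a canonical representative in one octant
# plus the reflection that produced it.
def canonical_voxel(x, y, n):
    """Return (x', y', transform_id) with (x', y') in the canonical octant."""
    c = (n - 1) / 2.0                      # slice centre
    dx, dy = x - c, y - c
    flip_x = dx < 0                        # reflect across the vertical axis
    flip_y = dy < 0                        # reflect across the horizontal axis
    if flip_x: dx = -dx
    if flip_y: dy = -dy
    swap = dx < dy                         # reflect across the diagonal
    if swap: dx, dy = dy, dx
    tid = (flip_x << 2) | (flip_y << 1) | swap   # 8 transforms -> 3 bits
    return int(dx + c), int(dy + c), tid
```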

    Attenuation correction of myocardial perfusion scintigraphy images without transmission scanning

    Attenuation correction is essential for reliable interpretation of emission tomography; however, the use of transmission measurements to generate attenuation maps is limited by the availability of equipment and by potential mismatches between the transmission and emission measurements. This work investigates the possibility of estimating an attenuation map from measured scatter data without a transmission scan. A scatter model has been developed that predicts the distribution of photons which have been scattered once. This model forms the basis of a scatter maximum-likelihood gradient ascent (SMLGA) method for estimating an attenuation map from measured scatter data. The SMLGA algorithm has been combined with an existing algorithm that estimates an attenuation map from photopeak data (MLAA) in order to obtain a more accurate attenuation map than either algorithm alone. Iterations of the SMLGA-MLAA algorithm are alternated with iterations of the MLEM algorithm to estimate the activity distribution. Initial tests of the algorithm were performed in two dimensions using idealised data before extension to three dimensions, where the basic algorithm has been tested using projection data simulated with a Monte Carlo simulator and software phantoms. All soft tissues within the body have similar attenuation characteristics, so only a small number of distinct attenuation values are normally present; a level-set technique that restricts the attenuation map to a piecewise-constant function has therefore been investigated as a potential way to improve the quality of the reconstructed attenuation map. The basic SMLGA-MLAA algorithm contains a number of assumptions whose effects have been investigated, and the model has been extended to include photons scattered more than once and scatter correction of the photopeak. The effect of different phantom shapes and activity distributions has been assessed, and the final algorithm was tested using data acquired with a physical phantom.
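    For context, the activity-update half of the alternation is ordinary MLEM; a minimal dense-matrix sketch is shown below. The SMLGA attenuation update itself is not reproduced here, and smlga_mlaa_step and system_matrix are hypothetical placeholders.

```python
# Sketch of the activity-update half of the alternation: one MLEM step for
# the emission image, given an (illustrative, dense) system matrix A.
import numpy as np

def mlem_step(lam, A, y, eps=1e-12):
    """One MLEM iteration: lam <- lam / sens * A^T (y / (A lam))."""
    proj = A @ lam                      # forward-project current estimate
    ratio = y / np.maximum(proj, eps)   # measured / estimated counts
    sens = A.T @ np.ones_like(y)        # sensitivity image (column sums of A);
    return lam * (A.T @ ratio) / np.maximum(sens, eps)  # precompute in practice

# Alternation skeleton: attenuation map mu updated from scatter + photopeak
# data (SMLGA-MLAA, not shown), then activity lam updated with MLEM.
# for it in range(n_iter):
#     mu  = smlga_mlaa_step(mu, lam, scatter_data, photopeak_data)  # hypothetical
#     lam = mlem_step(lam, system_matrix(mu), measured_counts)      # hypothetical
```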

    High-resolution algorithms in positron emission tomography: development and acceleration on graphics cards

    Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labelled with positron-emitting isotopes to quantify and probe biological and physiological processes. It is currently used mainly in oncology, but increasingly also in cardiology, neurology and pharmacology, as it is intrinsically able to provide functional information on cellular metabolism with high sensitivity. Its main limitations are its limited spatial resolution and the limited accuracy of its quantification. To overcome these limitations, recent PET systems use a large number of small detectors with better detection performance, and images are reconstructed with iterative stochastic algorithms better suited to low-count acquisitions. As a consequence, reconstruction times have become too long for clinical use; in practice, the acquired data are compressed and accelerated, generally less accurate, versions of the iterative algorithms are used, so the performance gained by increasing the number of detectors is limited by computation-time constraints. To break out of this loop and enable the use of robust reconstruction algorithms, much effort has gone into accelerating these algorithms on high-performance GPU (Graphics Processing Units) devices.

    In this thesis we joined this effort of the scientific community to develop, and introduce into clinical use, powerful reconstruction algorithms that improve spatial resolution and quantification accuracy in PET. We first developed strategies for accelerating on GPUs the reconstruction of PET images from list-mode acquisition data. List mode offers many advantages over sinogram-based reconstruction: it allows motion correction and time-of-flight (TOF) information to be implemented easily and accurately to improve quantification, and it allows spatio-temporal basis functions to be used for 4D reconstruction to estimate the kinetic parameters of metabolism with better accuracy directly from the acquired data. However, list mode is still rarely used clinically, where PET is mostly used to estimate the standardized uptake value (SUV), a semi-quantitative measure that limits the functional character of the modality; the main obstacle is the relatively long reconstruction time. Our contributions are as follows:

    - The development of a new strategy to accelerate on GPUs the fully 3D LM-OSEM (list-mode ordered-subset expectation-maximization) algorithm, including computation of the sensitivity matrix that accounts for patient attenuation factors and detector normalisation coefficients. The reconstruction times obtained are not only compatible with clinical use of 3D LM-OSEM, but also make fast reconstruction conceivable for advanced PET applications such as real-time dynamic studies and parametric image reconstruction directly from the acquired data.

    - The development and GPU implementation of a multigrid/multiframe approach (MGMF-LMEM) to accelerate the LMEM (list-mode expectation-maximization) algorithm, a convergent and powerful reference algorithm whose drawback is very slow convergence. The GPU-based MGMF-LMEM algorithm processed data at a rate close to one million events per second per iteration, enabling near real-time reconstruction both for examinations with large numbers of acquired events and for gated, low-count acquisitions.

    - Moreover, clinical quantification is often performed from sinogram data that are generally compressed to accelerate reconstruction, even though previous work has shown that this compression reduces quantification accuracy and degrades spatial resolution. We therefore parallelised and implemented on GPU the AW-LOR-OSEM (attenuation-weighted line-of-response OSEM) algorithm, a version of 3D OSEM that reconstructs from uncompressed sinograms and incorporates the attenuation and normalisation corrections into the sensitivity matrices as weight factors. We compared two implementation strategies: in the first, the system matrix (SM) is computed on the fly during reconstruction, while the second uses a more accurate precomputed SM. The results show that the on-the-fly implementation is about twice as computationally efficient as the one using a precomputed SM, but the reported reconstruction times are compatible with clinical use of both strategies.
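    As an illustration of the list-mode update the thesis accelerates on GPU, here is a CPU sketch of one LM-OSEM subset step, assuming each event carries the sparse system-matrix row of its LOR; the data layout and names are illustrative, and the GPU implementation batches events and folds attenuation, normalisation and TOF into the model.

```python
# CPU sketch of one LM-OSEM subset update; each list-mode event carries the
# sparse system-matrix row of its LOR as (voxel_indices, weights). Layout is
# illustrative, not the thesis's GPU data structures.
import numpy as np

def lm_osem_subset(lam, events, sens, eps=1e-12):
    """One subset update. events : list of (idx, w) pairs, one per event;
    sens : precomputed (attenuation/normalisation-weighted) sensitivity image,
    scaled for the subset."""
    back = np.zeros_like(lam)
    for idx, w in events:                   # idx: voxels intersected by the LOR
        fp = np.dot(w, lam[idx])            # forward projection along the LOR
        back[idx] += w / max(fp, eps)       # backproject reciprocal (count = 1)
    return lam * back / np.maximum(sens, eps)

# One full iteration cycles through the subsets:
# for subset in subsets:
#     lam = lm_osem_subset(lam, subset, sens)
```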