
    Improved correction for the tissue fraction effect in lung PET/CT imaging

    Recently, there has been increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this 'tissue fraction effect' (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best- and worst-case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters, which were then applied to the patient images. The kinetic parameters K1 and Ki, along with the static parameter SUV, were all found to be affected by the TFE, with both air and blood contributing significantly to the errors. Without corrections, errors range from 34-80% in the best case and 29-96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have higher uptake than regions of lower density (normal-appearing tissue); however, this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy; without these corrections, images may be misinterpreted.
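
    The core of the air and blood correction can be pictured as a simple voxel-wise rescaling. The sketch below assumes a correction of the form SUV divided by (1 - air fraction - blood fraction); the function name and the clipping threshold are illustrative, and the exact formulation used in the paper for SUV, K1 and Ki may differ.

```python
import numpy as np

def tissue_fraction_correct(suv, v_air, v_blood):
    """Voxel-wise tissue fraction correction (illustrative sketch).

    Divides the measured SUV by the fractional tissue volume
    (1 - air fraction - blood fraction). The exact formulation used
    for SUV, K1 and Ki in the paper may differ.
    """
    tissue_fraction = 1.0 - v_air - v_blood
    # Guard against near-zero tissue fractions (e.g. large airways).
    tissue_fraction = np.clip(tissue_fraction, 0.05, 1.0)
    return suv / tissue_fraction

# Illustrative values only: a lung voxel that is 60% air and 15% blood.
print(tissue_fraction_correct(np.array([1.2]), 0.60, 0.15))
```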

    Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements, and clinical studies. Accurate Monte Carlo modelling is still the gold standard since it allows scattered and unscattered events to be separated and the estimated and true unscattered components to be compared. Results: In this study, our modified version of the Monte Carlo-based scatter correction (MCBSC2) seems to provide good contrast recovery on the simulated Utah phantom, while the DEW method was found to be clearly superior for the experimental phantom studies in terms of quantitative accuracy, at the expense of a significant deterioration of the signal-to-noise ratio. On the other hand, the immunity to noise in emission data of statistical reconstruction-based scatter correction methods makes them particularly applicable to low-count emission studies. All scatter correction methods give very good activity recovery values for the simulated 3D Hoffman brain phantom, agreeing on average to within 3%. The CVS and MCBSC1 techniques tend to overcorrect, while SRBSC undercorrects for scatter in most regions of this phantom. Conclusion: All correction methods significantly improved the image quality and contrast compared to the case where no correction is applied. Generally, the differences in the estimated scatter distributions did not have a significant impact on the final quantitative results. The DEW method showed the best compromise between ease of implementation and quantitative accuracy, but significantly deteriorates the signal-to-noise ratio.
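
    As a rough illustration of how the DEW approach differs from the model-based methods, the sketch below estimates the scatter in the photopeak window as a scaled copy of the counts recorded in a lower energy window. The scaling factor would normally be calibrated from phantom data; the value used here is a placeholder, not one taken from the paper.

```python
import numpy as np

def dew_scatter_correct(photopeak, lower_window, scatter_ratio=0.5):
    """Dual-energy-window style scatter correction (illustrative sketch).

    The scatter component in the photopeak window is approximated as a
    scaled copy of the counts in a lower energy window. The scaling
    factor is normally calibrated from phantom measurements; 0.5 is a
    placeholder, not a value from the paper.
    """
    corrected = np.asarray(photopeak) - scatter_ratio * np.asarray(lower_window)
    return np.clip(corrected, 0.0, None)  # keep sinogram bins non-negative
```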

    Simulation of Clinical PET Studies for the Assessment of Quantification Methods

    In this PhD thesis we developed a methodology for evaluating the robustness of SUV measurements based on MC simulations and the generation of novel databases of simulated studies based on digital anthropomorphic phantoms. This methodology has been applied to different problems related to quantification that were not previously addressed. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using MC simulations. We studied the impact of noise and low count statistics on the accuracy and repeatability of three commonly used SUV metrics (SUVmax, SUVmean and SUV50). The same model was used to study the effect of physiological muscular uptake variations on the quantification of FDG-PET studies. Finally, our MC models were applied to simulate 18F-fluorocholine (FCH) studies. The aim was to study the effect of spill-in counts from neighbouring regions on the quantification of small regions close to high-activity extended sources.
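
    The three SUV metrics studied can be summarised with a short sketch. The definition of SUV50 used here, the mean of voxels above 50% of SUVmax, is an assumption and may differ from the one adopted in the thesis.

```python
import numpy as np

def suv_metrics(voi_values):
    """Compute SUVmax, SUVmean and SUV50 for a volume of interest.

    SUV50 is taken here as the mean of voxels above 50% of SUVmax,
    which is one common definition; the thesis may use another.
    """
    voi = np.asarray(voi_values, dtype=float)
    suv_max = voi.max()
    suv_mean = voi.mean()
    suv50 = voi[voi >= 0.5 * suv_max].mean()
    return suv_max, suv_mean, suv50

print(suv_metrics([2.1, 3.4, 5.0, 4.8, 1.2]))  # approx (5.0, 3.3, 4.4)
```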

    Positron Emission Tomography: Current Challenges and Opportunities for Technological Advances in Clinical and Preclinical Imaging Systems

    Positron emission tomography (PET) imaging is based on detecting two time-coincident high-energy photons from the emission of a positron-emitting radioisotope. The physics of the emission, and the detection of the coincident photons, give PET imaging unique capabilities for both very high sensitivity and accurate estimation of the in vivo concentration of the radiotracer. PET imaging has been widely adopted as an important clinical modality for oncological, cardiovascular, and neurological applications. PET imaging has also become an important tool in preclinical studies, particularly for investigating murine models of disease and other small-animal models. However, there are several challenges to using PET imaging systems. These include the fundamental trade-offs between resolution and noise, the quantitative accuracy of the measurements, and integration with X-ray computed tomography and magnetic resonance imaging. In this article, we review how researchers and industry are addressing these challenges. This work was supported in part by National Institutes of Health grants R01-CA042593, U01-CA148131, R01CA160253, R01CA169072, and R01CA164371; by Human Frontier Science Program grant RGP0004/2013; and by the Innovative Medicines Initiative under grant agreement 115337, which comprises financial contributions from the European Union’s Seventh Framework Program (FP7/2007–2013).
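
    The coincidence principle described above can be pictured with a toy sketch. This example only pairs consecutive single events from different detectors within a fixed timing window; it is an assumption-laden illustration, not how any real scanner's electronics work, and it ignores energy qualification and randoms handling.

```python
def pair_coincidences(singles, window_ns=4.0):
    """Toy coincidence sorter (illustrative sketch only).

    'singles' is a time-ordered list of (timestamp_ns, detector_id)
    single events; two consecutive singles from different detectors
    arriving within the coincidence window are paired as a prompt.
    Real scanners do this in hardware with energy qualification,
    multiple-coincidence handling and randoms estimation.
    """
    prompts = []
    for (t1, d1), (t2, d2) in zip(singles, singles[1:]):
        if (t2 - t1) <= window_ns and d1 != d2:
            prompts.append((d1, d2))
    return prompts
```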

    Developments in PET-MRI for Radiotherapy Planning Applications

    The hybridization of magnetic resonance imaging (MRI) and positron emission tomography (PET) provides the benefit of soft-tissue contrast and specific molecular information in a simultaneous acquisition. The applications of PET-MRI in radiotherapy are only starting to be realised. However, the quantitative accuracy of PET relies on accurate attenuation correction (AC) of not only the patient anatomy but also the MRI hardware, and current methods are prone to artefacts caused by dense materials. Quantitative accuracy of PET also relies on full characterization of patient motion during the scan. The simultaneity of PET-MRI makes it especially suited for motion correction. However, quality assurance (QA) procedures for such corrections are lacking. Therefore, a dynamic phantom that is PET and MR compatible is required. Additionally, respiratory motion characterization is needed for conformal radiotherapy of the lung. 4D-CT can provide 3D motion characterization but suffers from poor soft-tissue contrast. In this thesis, I examine these problems and present solutions in the form of improved MR-hardware AC techniques, a PET/MRI/CT-compatible tumour respiratory motion phantom for QA measurements, and a retrospective 4D-PET-MRI technique to characterise respiratory motion. Chapter 2 presents two techniques to improve upon current AC methods that use a standard helical CT scan for MRI hardware in PET-MRI. One technique uses a dual-energy computed tomography (DECT) scan to construct virtual monoenergetic image volumes and the other uses a tomotherapy linear accelerator to create CT images of the RF coil at megavoltage energies (1.0 MV). The DECT-based technique reduced artefacts in the images, translating to improved ÎŒ-maps. The MVCT-based technique provided further improvements in artefact reduction, resulting in artefact-free ÎŒ-maps. This led to more accurate AC of the breast coil. In chapter 3, I present a PET-MR-CT motion phantom for QA of motion-correction protocols. This phantom is used to evaluate a clinically available real-time dynamic MR imaging sequence and a respiratory-triggered PET-MRI protocol. The results show the protocol to perform well under motion conditions. Additionally, the phantom provided a good model for performing QA of respiratory-triggered PET-MRI. Chapter 4 presents a 4D-PET/MRI technique, using MR sequences and PET acquisition methods currently available on hybrid PET/MRI systems. This technique is validated using the motion phantom presented in chapter 3 with three motion profiles. I conclude that our 4D-PET-MRI technique provides information to characterise tumour respiratory motion while using a clinically available pulse sequence and PET acquisition method.
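
    As a simplified illustration of the DECT-based idea, virtual monoenergetic images are sometimes approximated by an image-domain blend of the low- and high-kVp acquisitions. Clinical DECT software derives the blend from basis-material decomposition at the requested energy, so the fixed weight in this sketch is a placeholder, not the method used in the thesis.

```python
import numpy as np

def virtual_monoenergetic(low_kvp_hu, high_kvp_hu, weight=0.6):
    """Image-domain virtual monoenergetic image (illustrative sketch).

    Blends the low- and high-kVp CT volumes with a fixed weight.
    Clinical DECT software derives the weight from basis-material
    decomposition at the requested energy, so 0.6 is a placeholder
    rather than the value used in the thesis.
    """
    low = np.asarray(low_kvp_hu, dtype=float)
    high = np.asarray(high_kvp_hu, dtype=float)
    return weight * high + (1.0 - weight) * low
```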

    Incorporating accurate statistical modeling in PET: reconstruction for whole-body imaging

    Doctoral thesis in Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2007. The thesis is devoted to image reconstruction in 3D whole-body PET imaging. OSEM (Ordered Subsets Expectation Maximization) is a statistical algorithm that assumes Poisson data. However, corrections for physical effects (attenuation, scattered and random coincidences) and detector efficiency remove the Poisson characteristics of these data. Fourier rebinning (FORE), which combines 3D imaging with fast 2D reconstructions, requires corrected data. Thus, if FORE is used, or whenever data are corrected prior to OSEM, the Poisson-like characteristics need to be restored. Restoring Poisson-like data, i.e., making the variance equal to the mean, was achieved through the use of weighted OSEM algorithms. One of them is the NECOSEM, relying on the NEC weighting transformation. The distinctive feature of this algorithm is the NEC multiplicative factor, defined as the ratio between the mean and the variance. With real clinical data this is critical, since there is only one value collected for each bin: the data value itself. For simulated data, if we keep track of the values for these two statistical moments, the exact values for the NEC weights can be calculated. We have compared the performance of five different weighted algorithms (FORE+AWOSEM, FORE+NECOSEM, ANWOSEM3D, SPOSEM3D and NECOSEM3D) on the basis of tumor detectability. The comparison was done for simulated and clinical data. In the former case an analytical simulator was used. This is the ideal situation, since all the weighting factors can be exactly determined. For comparing the performance of the algorithms, we used the Non-Prewhitening Matched Filter (NPWMF) numerical observer. With some knowledge obtained from the simulation study, we proceeded to the reconstruction of clinical data. In that case, it was necessary to devise a strategy for estimating the NEC weighting factors. The comparison between reconstructed images was done by a physician largely familiar with whole-body PET imaging.
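
    The NEC weighting reduces to one ratio per sinogram bin. The sketch below assumes that estimates of both statistical moments are available (exact for simulated data, estimated for clinical data); names and interfaces are illustrative.

```python
import numpy as np

def nec_weights(mean_est, var_est, eps=1e-6):
    """NEC weighting factors (illustrative sketch).

    The weight of each sinogram bin is the ratio of its mean to its
    variance; multiplying the corrected data by this factor restores
    the Poisson-like property variance == mean. For simulated data both
    moments are known exactly; for clinical data they must be estimated.
    """
    mean_est = np.asarray(mean_est, dtype=float)
    var_est = np.asarray(var_est, dtype=float)
    return mean_est / np.maximum(var_est, eps)

def nec_transform(corrected_data, weights):
    # Scaled data whose variance approximately equals its mean.
    return np.asarray(corrected_data, dtype=float) * weights
```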

    Sensitivity correction of images obtained with the prototype Clear-PEM in pre-clinical environment

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain a Master Degree in Biomedical Engineering. Nuclear medicine has, when compared to anatomical imaging techniques, the great advantage of identifying the metabolic activity of the cells, hence becoming a great option for tumour identification. A new technology in this area is Positron Emission Mammography (PEM), which follows the same physical principles as Positron Emission Tomography (PET). The Clear-PEM project, a Portuguese research project, uses this technology; as an alternative to the whole-body exam, only the breast is examined, using two detector plates that rotate around the breast to detect radiation. The prototype has the ability to perform a complementary exam of the axillary region. This scanner is designed to detect small lesions or tumours in early stages, with high resolution and high sensitivity. After the acquisition, the data undergo a process of reconstruction and corrections. Our task is to study which parameters should be adjusted in order to obtain the best contrast between lesions and the breast background, while meeting the high-resolution standards we set out to achieve. This work consisted of correcting some characteristics that might influence image quality. The first correction was the elimination of the effect of the gaps between the detector crystals, resulting in an enhanced image signal-to-noise ratio (SNR). By varying the energy window of the image acquisitions, it was possible to minimize the effect of scattered photons, and varying the timing window minimized the effect of random coincidences.
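
    The energy- and timing-window adjustments can be pictured as a simple filter over list-mode coincidences, as in the sketch below. The window limits are placeholders and not the Clear-PEM acquisition settings.

```python
def qualify_coincidences(coincidences, e_window=(400.0, 650.0), t_window_ns=4.0):
    """Energy- and timing-window qualification (illustrative sketch).

    Each coincidence is a tuple (e1_keV, e2_keV, dt_ns). Narrowing the
    energy window suppresses scattered photons; narrowing the timing
    window suppresses random coincidences. The limits shown are
    placeholders, not the Clear-PEM acquisition settings.
    """
    lo, hi = e_window
    return [c for c in coincidences
            if lo <= c[0] <= hi and lo <= c[1] <= hi and abs(c[2]) <= t_window_ns]
```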

    Performance and Methodological Aspects in Positron Emission Tomography

    Performance standards for positron emission tomography (PET) were developed to be able to compare systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, the NEMA NU 2-2001 standard is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve it. The influence of the effect varies in different clinical settings and it is likely that new methods are needed to solve this problem. Most of the clinical PET work is done in the field of oncology. Whole-body PET combined with CT is the standard investigation in oncology today. Despite the progress in PET imaging techniques, visualization and especially quantification of small lesions remain a challenge. In addition to partial volume, movement of the object is a significant source of error. The main causes of movement are respiratory and cardiac motions. Most of the new commercial scanners are, in addition to cardiac gating, also capable of respiratory gating, and this technique has been used in patients with cancer of the thoracic region and in patients being studied for the planning of radiation therapy. For routine cardiac applications, such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque, or molecular imaging of new therapies, require better control of cardiac motion, including that caused by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed. In this study we investigated the physical performance of a new whole-body PET/CT scanner with the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but there is no best method among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm generated is shown to handle dual-gated data, preserving quantification and clearly eliminating the majority of contraction and respiration movement.
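
    The dual gating idea amounts to sorting list-mode events into a two-dimensional grid of cardiac and respiratory phase bins, as in the sketch below. The bin counts and the phase representation are assumptions for illustration, not the algorithm developed in this study.

```python
def dual_gate(events, cardiac_bins=8, resp_bins=4):
    """Dual cardiac-respiratory gating (illustrative sketch).

    Each list-mode event is (cardiac_phase, resp_phase, payload) with
    phases in [0, 1); events are sorted into a 2-D grid of gates so
    that each gate is reconstructed with both motions largely frozen.
    """
    gates = {}
    for cardiac_phase, resp_phase, payload in events:
        c = min(int(cardiac_phase * cardiac_bins), cardiac_bins - 1)
        r = min(int(resp_phase * resp_bins), resp_bins - 1)
        gates.setdefault((c, r), []).append(payload)
    return gates
```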

    A hybrid 3-D reconstruction/registration algorithm for correction of head motion in emission tomography

    Even with head restraint, small head movements can occur during data acquisition in emission tomography that are sufficiently large to result in detectable artifacts in the final reconstruction. Direct measurement of motion can be cumbersome and difficult to implement, whereas previous attempts to use the measured projection data for correction have been limited to simple translation orthogonal to the projection. A fully three-dimensional (3-D) algorithm is proposed that estimates the patient orientation based on the projections of motion-corrupted data, with incorporation of motion information within subsequent ordered-subset expectation-maximization subiterations. Preliminary studies have been performed using a digital version of the Hoffman brain phantom. Movement was simulated by constructing a mixed set of projections in discrete positions of the phantom. The algorithm determined the phantom orientation that best matched each constructed projection with its corresponding measured projection. In the case of a simulated single movement in 24 of 64 projections, all misaligned projections were correctly identified. Incorporating data at the determined object orientation reduced the mean square difference (MSD) between motion-corrected and motion-free reconstructions, compared to the MSD between uncorrected and motion-free reconstructions, by a factor of 1.9.
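
    A minimal sketch of the per-projection pose search is given below, assuming a forward projector is available from the reconstruction toolkit in use. The candidate-pose loop and the MSD criterion mirror the description above but are not the authors' exact implementation.

```python
import numpy as np

def estimate_orientation(measured_proj, reconstruction, candidate_poses, forward_project):
    """Per-projection rigid pose search (illustrative sketch).

    For each candidate rigid pose, the current reconstruction is
    forward-projected and compared with the measured projection; the
    pose minimising the mean square difference (MSD) is returned.
    'forward_project(volume, pose)' is an assumed interface, not a
    specific library call.
    """
    best_pose, best_msd = None, np.inf
    for pose in candidate_poses:
        predicted = forward_project(reconstruction, pose)
        msd = np.mean((np.asarray(predicted) - np.asarray(measured_proj)) ** 2)
        if msd < best_msd:
            best_pose, best_msd = pose, msd
    return best_pose
```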
    • 

    corecore