
    Tomographic image quality of rotating slat versus parallel hole-collimated SPECT

    Parallel and converging hole collimators are most frequently used in nuclear medicine. Less common is the use of rotating slat collimators for single photon emission computed tomography (SPECT). The higher photon collection efficiency, inherent to the geometry of rotating slat collimators, results in much lower noise in the data. However, plane integrals contain spatial information in only one direction, whereas line integrals provide two-dimensional information. It is not a trivial question whether the initial gain in efficiency will compensate for the lower information content in the plane integrals. Therefore, a comparison of the performance of parallel hole and rotating slat collimation is needed. This study compares SPECT with rotating slat and parallel hole collimation in combination with MLEM reconstruction with accurate system modeling and correction for scatter and attenuation. A contrast-to-noise study revealed an improvement by a factor of 2-3 for hot lesions and by more than a factor of 4 for cold lesions. Furthermore, a clinically relevant case of heart lesion detection is simulated for rotating slat and parallel hole collimators. In this case, rotating slat collimators outperform the traditional parallel hole collimators. We conclude that rotating slat collimators are a valuable alternative to parallel hole collimators.
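    As a minimal illustration of the MLEM reconstruction used in this comparison, the sketch below runs the standard MLEM multiplicative update on a toy system; the system matrix, data, and iteration count are assumptions for illustration only, not the study's actual system model.

```python
import numpy as np

def mlem(A, y, n_iters=100):
    """Standard MLEM update: x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)."""
    x = np.ones(A.shape[1])               # uniform initial activity estimate
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image (column sums of A)
    for _ in range(n_iters):
        proj = A @ x                                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)           # guard against divide-by-zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

# Toy system: 3 measurement bins viewing 2 voxels (hypothetical geometry).
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
x_true = np.array([2.0, 4.0])
y = A @ x_true                  # noiseless "measured" projections
x_hat = mlem(A, y)
```

    With consistent, noiseless data the iterates converge to the true activity; with noisy data, MLEM is typically stopped early or regularised.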

    Attenuation correction of myocardial perfusion scintigraphy images without transmission scanning

    Attenuation correction is essential for reliable interpretation of emission tomography; however, the use of transmission measurements to generate attenuation maps is limited by the availability of equipment and potential mismatches between the transmission and emission measurements. This work investigates the possibility of estimating an attenuation map from measured scatter data without a transmission scan. A scatter model has been developed that predicts the distribution of photons which have been scattered once. This model has been used as the basis of a maximum likelihood gradient ascent method (SMLGA) to estimate an attenuation map from measured scatter data. The SMLGA algorithm has been combined with an existing algorithm that uses photopeak data to estimate an attenuation map (MLAA) in order to obtain a more accurate attenuation map than either algorithm yields alone. Iterations of the SMLGA-MLAA algorithm are alternated with iterations of the MLEM algorithm to estimate the activity distribution. Initial tests of the algorithm were performed in two dimensions using idealised data before extension to three dimensions. The basic algorithm has been tested in three dimensions using projection data simulated with a Monte Carlo simulator and software phantoms. All soft tissues within the body have similar attenuation characteristics, so only a small number of different values are normally present. A level-set technique that restricts the attenuation map to a piecewise-constant function has therefore been investigated as a potential way to improve the quality of the reconstructed attenuation map. The basic SMLGA-MLAA algorithm contains a number of assumptions; the effect of these has been investigated, and the model has been extended to include photons which are scattered more than once and scatter correction of the photopeak. The effect of different phantom shapes and activity distributions has been assessed, and the final algorithm was tested using data acquired with a physical phantom.
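    The alternating structure described above (an activity update interleaved with an attenuation update driven by the likelihood gradient) can be sketched on a deliberately tiny toy problem. This is not the SMLGA-MLAA algorithm itself: it assumes a single emitting voxel seen through two known path lengths, a closed-form Poisson ML activity update, and a plain gradient-ascent step on the attenuation coefficient. All numbers and the step size are illustrative assumptions.

```python
import numpy as np

# One emitting voxel (activity lam) behind an attenuator of coefficient mu,
# seen along two path lengths d_i: expected counts y_i = lam * exp(-mu * d_i).
d = np.array([1.0, 3.0])                  # path lengths (cm), hypothetical
lam_true, mu_true = 100.0, 0.2
y = lam_true * np.exp(-mu_true * d)       # noiseless "measured" counts

lam, mu, step = 1.0, 0.0, 1e-3
for _ in range(300):
    atten = np.exp(-mu * d)
    lam = y.sum() / atten.sum()           # ML activity update, mu held fixed
    grad = np.sum(d * (lam * atten - y))  # d(log-likelihood)/d(mu), lam fixed
    mu += step * grad                     # gradient-ascent step on mu
```

    Both unknowns are recovered here only because the two path lengths make the toy problem identifiable; in real data, the photopeak and scatter measurements carry that information.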

    MRI-Based Attenuation Correction in Emission Computed Tomography

    The hybridization of magnetic resonance imaging (MRI) with positron emission tomography (PET) or single photon emission computed tomography (SPECT) enables the collection of an assortment of biological data in spatial and temporal register. However, both PET and SPECT are subject to photon attenuation, a process that degrades image quality and precludes quantification. To correct for the effects of attenuation, the spatial distribution of linear attenuation coefficients (μ-coefficients) within and about the patient must be available. Unfortunately, extracting μ-coefficients from MRI is non-trivial. In this thesis, I explore the problem of MRI-based attenuation correction (AC) in emission tomography. In particular, I began by asking whether MRI-based AC would be more reliable in PET or in SPECT. To this end, I implemented an MRI-based AC algorithm relying on image segmentation and applied it to phantom and canine emission data. The subsequent analysis revealed that MRI-based AC performed better in SPECT than PET, which is interesting since AC is more challenging in SPECT than PET. Given this result, I endeavoured to improve MRI-based AC in PET. One problem that required addressing was that the lungs yield very little signal in MRI, making it difficult to infer their μ-coefficients. By using a pulse sequence capable of visualizing lung parenchyma, I established a linear relationship between MRI signal and the lungs' μ-coefficients. I showed that applying this mapping on a voxel-by-voxel basis improved quantification in PET reconstructions compared to conventional MRI-based AC techniques. Finally, I envisaged that a framework for MRI-based AC methods would potentiate further improvements. Accordingly, I identified three ways an MRI can be converted to μ-coefficients: 1) segmentation, wherein the MRI is divided into tissue types and each is assigned a μ-coefficient, 2) registration, wherein a template of μ-coefficients is aligned with the MRI, and 3) mapping, wherein a function maps MRI voxels to μ-coefficients. I constructed an algorithm for each method and catalogued their strengths and weaknesses. I concluded that a combination of approaches is desirable for MRI-based AC. Specifically, segmentation is appropriate for air, fat, and water, mapping is appropriate for lung, and registration is appropriate for bone.
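    Of the three conversion routes above, the segmentation approach is the simplest to sketch: threshold the MR image into tissue classes and assign each class one linear attenuation coefficient. Both the thresholds and the μ values below (cm^-1, roughly in the range of 511 keV coefficients) are illustrative assumptions, not a calibrated pipeline.

```python
import numpy as np

# Illustrative class-to-mu lookup; values are representative, not calibrated.
MU = {"air": 0.0, "lung": 0.030, "fat": 0.086, "soft_tissue": 0.096}

def segment_to_mu(mri, air_thr=50, lung_thr=200, fat_thr=600):
    """Map MR intensities to mu-coefficients via hypothetical thresholds."""
    mu = np.empty(mri.shape, dtype=float)
    mu[mri < air_thr] = MU["air"]
    mu[(mri >= air_thr) & (mri < lung_thr)] = MU["lung"]
    mu[(mri >= lung_thr) & (mri < fat_thr)] = MU["fat"]
    mu[mri >= fat_thr] = MU["soft_tissue"]
    return mu

mri = np.array([[10, 120], [700, 900]])   # toy MR intensities
mu_map = segment_to_mu(mri)
```

    The lung mapping described in the abstract would replace the single lung value with a linear function of the MR signal, and the registration route would instead align a μ-coefficient template with the MRI.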

    Positron-Emission Tomography

    We review positron-emission tomography (PET), which has inherent advantages that avoid the shortcomings of other nuclear medicine imaging methods. PET image reconstruction methods with origins in signal and image processing are discussed, including the potential problems of these methods. A summary of statistical image reconstruction methods, which can yield improved image quality, is also presented.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85853/1/Fessler95.pd

    Characterization and Compensation of Hysteretic Cardiac Respiratory Motion in Myocardial Perfusion Studies Through MRI Investigations

    Respiratory motion causes artifacts and blurring of cardiac structures in reconstructed images of SPECT and PET cardiac studies. Hysteresis in respiratory motion causes the organs to move along distinct paths during inspiration and expiration. Current respiratory motion correction methods use a signal generated by tracking the motion of the abdomen during respiration to bin list-mode data as a function of the magnitude of this respiratory signal. They thereby fail to account for hysteretic motion. The goal of this research was to demonstrate the effects of hysteretic respiratory motion and the importance of its correction for different medical imaging techniques, particularly SPECT and PET. This study describes a novel approach for detecting and correcting hysteresis in clinical SPECT and PET studies. From the combined use of MRI and a synchronized Visual Tracking System (VTS) in volunteers, we developed a hysteretic model based on the Bouc-Wen model with inputs from measurements of both chest and abdomen respiratory motion. With the MRI-determined heart motion as the truth in the volunteer studies, we determined that the Bouc-Wen model could match the behavior over a range of hysteretic cycles. The proposed approach was validated through phantom simulations and applied to clinical SPECT studies.
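    The hysteresis the Bouc-Wen model captures can be shown with a minimal numeric sketch: driven by a periodic "abdomen" signal, the model's internal state traces different paths on inhalation and exhalation. The parameter values and the breathing waveform are illustrative assumptions, not fitted volunteer data.

```python
import numpy as np

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Euler integration of the Bouc-Wen hysteresis state z driven by x:
    dz/dt = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n."""
    z = np.zeros_like(x)
    for k in range(1, len(x)):
        dx = (x[k] - x[k - 1]) / dt                   # drive velocity x'
        dz = (A * dx
              - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
              - gamma * dx * abs(z[k - 1]) ** n)
        z[k] = z[k - 1] + dz * dt
    return z

t = np.linspace(0, 4 * np.pi, 1000)
x = np.sin(t)                    # idealised periodic breathing signal
z = bouc_wen(x, t[1] - t[0])     # plotting z against x traces a hysteresis loop
```

    Because z at a given x differs depending on the direction of motion, binning data by the magnitude of x alone (as in the correction methods criticised above) mixes the two branches of the loop.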

    Quantification of dopaminergic neurotransmission SPECT studies with 123I-labelled radioligands

    Dopaminergic neurotransmission SPECT studies with 123I-labelled radioligands can help in the diagnosis of neurological and psychiatric disorders such as Parkinson's disease and schizophrenia. Nowadays, interpretation of SPECT images is based mainly on visual assessment by experienced observers. However, a quantitative evaluation of the images is recommended in current clinical guidelines. Quantitative information can help diagnose the disease at the early pre-clinical stages, follow its progression and assess the effects of treatment strategies. SPECT images are affected by a number of effects that are inherent in the image formation: attenuation and scattering of photons, system response and partial volume effect. These effects degrade the contrast and resolution of the images and, as a consequence, the real activity distribution of the radiotracer is distorted. Whilst the photon emission of 123I is dominated by a low-energy line of 159 keV, it also emits several high-energy lines. When 123I-labelled radioligands are used, a non-negligible fraction of high-energy photons undergoes backscattering in the detector and the gantry and reaches the detector within the energy window. In this work, a complete methodology for the compensation of all the degrading effects involved in dopaminergic neurotransmission SPECT imaging with 123I is presented. The proposed method uses Monte Carlo simulation to estimate the scattered photons detected in the projections. For this purpose, the SimSET Monte Carlo code was modified so as to adapt it to the more complex simulation of high-energy photons emitted by 123I. Once validated, the modified SimSET code was used to simulate 123I SPECT studies of an anthropomorphic striatal phantom using different imaging systems. The projections obtained showed that scatter is strongly dependent on the imaging system and comprises at least 40% of the detected photons. Applying the new methodology demonstrated that absolute quantification can be achieved when the method includes accurate compensations for all the degrading effects. When the method did not include correction for all degradations, calculated values depended on the imaging system, although a linear relationship was found between calculated and true values. It was also found that partial volume effect and scatter corrections play a major role in the recovery of nominal values. Despite the advantages of absolute quantification, its computational and methodological requirements severely limit its applicability in clinical routine. Thus, for the time being, absolute quantification is limited to academic studies and research trials. In a clinical context, reliable, simple and rapid methods are needed; thus, semi-quantitative methods are used. Diagnosis also requires the establishment of robust reference values for healthy controls. These values are usually derived from a large data pool obtained in multicentre clinical trials. The comparison between the semi-quantitative values obtained from a patient and the reference is only feasible if the quantitative values have been previously standardised, i.e. they are independent of the gamma camera, acquisition protocol, reconstruction parameters and quantification procedure applied. Thus, standardisation requires that the calculated values are compensated somehow for all the image-degrading phenomena. In this thesis, a methodology for the standardisation of the quantitative values extracted from dopaminergic neurotransmission SPECT studies with 123I is evaluated using Monte Carlo simulation. This methodology is based on the linear relationship found between calculated and true values for a group of studies corresponding to different subjects with non-negligible anatomical and tracer uptake differences. Reconstruction and quantification methods were found to have a high impact on the linearity of the relationship and on the accuracy of the standardised results.
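    The linear relationship between calculated and true values that underpins the standardisation can be sketched as a simple calibration: fit the line on phantom data with known truth, then invert it for patient measurements. The uptake values, slope, and intercept below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical calibration: semi-quantitative values measured on a phantom
# (degraded by scatter, attenuation, partial volume) vs. the known truth.
true_sbr = np.array([1.0, 2.0, 4.0, 6.0, 8.0])   # ground-truth binding ratios
measured_sbr = 0.55 * true_sbr + 0.30            # camera-specific measurements

slope, intercept = np.polyfit(true_sbr, measured_sbr, 1)  # fit the line

def standardise(measured):
    """Map a camera-specific value back to the standardised (true) scale."""
    return (measured - intercept) / slope

standardised = standardise(2.5)    # (2.5 - 0.30) / 0.55 = 4.0
```

    In practice the slope and intercept are camera- and protocol-specific, which is why each system must be calibrated before patient values can be compared against multicentre reference ranges.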

    Relevance of accurate Monte Carlo modeling in nuclear medical imaging

    Monte Carlo techniques have become popular in different areas of medical physics with the advent of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurements. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal candidates for Monte Carlo modeling techniques because of the stochastic nature of radiation emission, transport and detection processes. Factors which have contributed to their wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes, and the improved speed of computers. This paper presents the derivation and methodological basis for this approach and critically reviews its areas of application in nuclear imaging. An overview of existing simulation programs is provided and illustrated with examples of some useful features of such sophisticated tools in connection with common computing facilities and more powerful multiple-processor parallel processing systems. Current and future trends in the field are also discussed.
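    The stochastic transport such codes model can be illustrated by the smallest possible example: sampling exponential free paths in a uniform slab and counting the photons that cross it without interacting, which estimates the Beer-Lambert transmission exp(-mu*d). The attenuation coefficient, thickness, and history count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, d, n = 0.15, 10.0, 200_000           # cm^-1, cm, photon histories

# Distance to first interaction is exponentially distributed with mean 1/mu;
# a photon traverses the slab unscattered iff its free path exceeds d.
free_paths = rng.exponential(scale=1.0 / mu, size=n)
transmitted = np.mean(free_paths > d)    # Monte Carlo estimate
analytic = np.exp(-mu * d)               # Beer-Lambert reference
```

    Full simulators extend exactly this sampling idea to scattering angles, energy loss, and detector response, which is why variance-reduction and acceleration schemes matter so much for their practicality.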