    Assessment of the impact of modeling axial compression on PET image reconstruction

    The file contains phantoms, sinograms and system matrices used in the following work: Martin A. Belzunce and Andrew J. Reader, "Assessment of the Impact of Modelling Axial Compression on PET Image Reconstruction", Medical Physics, 2017. This work is supported by the Engineering and Physical Sciences Research Council (EPSRC) under grant EP/M020142/1.

    Quantitative imaging of coronary blood flow

    Positron emission tomography (PET) is a nuclear medicine imaging modality based on the administration of a positron-emitting radiotracer, the imaging of the distribution and kinetics of the tracer, and the interpretation of the physiological events and their meaning with respect to health and disease. PET imaging was introduced in the 1970s, and numerous advances in radiotracers and detection systems have enabled this modality to address a wide variety of clinical tasks, such as the detection of cancer, staging of Alzheimer's disease, and assessment of coronary artery disease (CAD). This review provides a description of the logic and the logistics of the processes required for PET imaging and a discussion of its use in guiding the treatment of CAD. Finally, we outline prospects and limitations of nanoparticles as agents for PET imaging.

    Photo-Detectors for Time of Flight Positron Emission Tomography (ToF-PET)

    We present the most recent advances in photo-detector design employed in time-of-flight positron emission tomography (ToF-PET). PET is a molecular imaging modality that collects pairs of coincident (temporally correlated) annihilation photons emitted from the patient body. The annihilation photon detector typically comprises a scintillation crystal coupled to a fast photo-detector. ToF information provides better localization of the annihilation event along the line formed by each detector pair, resulting in an overall improvement in the signal-to-noise ratio (SNR) of the reconstructed image. Apart from the demand for high luminosity and fast decay time of the scintillation crystal, proper design and selection of the photo-detector and of the methods for arrival-time pick-off are prerequisites for achieving the excellent time resolution required for ToF-PET. We review the two types of photo-detectors used in ToF-PET, photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs), with a special focus on SiPMs.
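
    As a rough illustration of the localization argument above (the constant and function names are ours, not the review's), a coincidence time difference Δt shifts the most likely annihilation point by c·Δt/2 along the LOR, and the coincidence time resolution sets the spatial width of the ToF kernel:

    # Minimal sketch, not from the paper: mapping a ToF time difference to a
    # position estimate along the line-of-response (LOR).
    C_MM_PER_PS = 0.2998  # speed of light in mm per picosecond

    def tof_offset_mm(t1_ps, t2_ps):
        """Offset of the annihilation point from the LOR midpoint, toward detector 1."""
        return 0.5 * C_MM_PER_PS * (t2_ps - t1_ps)

    def tof_localization_fwhm_mm(ctr_fwhm_ps):
        """Spatial FWHM of the ToF kernel for a given coincidence time resolution."""
        return 0.5 * C_MM_PER_PS * ctr_fwhm_ps

    # e.g. a 250 ps coincidence time resolution localizes events to ~37 mm FWHM
    print(tof_localization_fwhm_mm(250.0))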

    Recovery and normalization of triple coincidences in PET

    Purpose: Triple coincidences in positron emission tomography (PET) are events in which three γ-rays are detected simultaneously. These events, though potentially useful for enhancing the sensitivity of PET scanners, are discarded or processed without special consideration in current systems, because there is no clear criterion for assigning them to a unique line-of-response (LOR). Methods proposed for recovering such events usually rely on highly specialized detection systems, hampering general adoption, and/or are based on Compton-scatter kinematics and, consequently, are limited in accuracy by the energy resolution of standard PET detectors. In this work, the authors propose a simple and general solution for recovering triple coincidences that requires neither specialized detectors nor additional energy resolution.

    Methods: To recover triple coincidences, the authors' method distributes such events among their possible LORs using the relative proportions of double coincidences in these LORs. The authors show analytically that this assignment scheme represents the maximum-likelihood solution for the triple-coincidence distribution problem. The PET component of a preclinical PET/CT scanner was adapted to enable the acquisition and processing of triple coincidences. Since the efficiencies for detecting double and triple events were found to be different throughout the scanner field-of-view, a normalization procedure specific to triple coincidences was also developed. The effect of including triple coincidences using their method was compared against the cases of equally weighting the triples among their possible LORs and of discarding all the triple events. As figures of merit for this comparison, the authors used sensitivity, noise-equivalent count (NEC) rates, and image quality calculated as described in the NEMA NU-4 protocol for the assessment of preclinical PET scanners.

    Results: The addition of triple-coincidence events with the authors' method increased the peak NEC rates of the scanner by 26.6% and 32% for mouse- and rat-sized objects, respectively. This increase in NEC-rate performance was also reflected in the image-quality metrics. Images reconstructed using double and triple coincidences recovered with their method had better signal-to-noise ratio than those obtained using only double coincidences, while preserving spatial resolution and contrast. Distributing triple coincidences using an equal-weighting scheme increased apparent system sensitivity but degraded image quality. The performance boost provided by the inclusion of triple coincidences using their method allowed the acquisition time of standard imaging procedures to be reduced by up to ∼25%.

    Conclusions: Recovering triple coincidences with the proposed method can effectively increase the sensitivity of current clinical and preclinical PET systems without compromising other parameters such as spatial resolution or contrast.

    This work was funded by Consejería de Educación, Juventud y Deporte de la Comunidad de Madrid (Spain) through the Madrid-MIT M + Visión Consortium. The authors also acknowledge the company Sedecal S.A. (Madrid, Spain) and the M + Visión Faculty for their support during this work.
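
    A minimal sketch of the assignment rule summarized above, with invented data and our own function names: each triple coincidence is split among its candidate LORs in proportion to the double-coincidence counts already recorded in those LORs, falling back to equal weighting when no doubles information is available.

    import numpy as np

    def distribute_triple(doubles_sinogram, candidate_lors):
        """Fractional weight added to each candidate LOR for one triple event."""
        counts = np.array([doubles_sinogram[lor] for lor in candidate_lors], float)
        if counts.sum() == 0:
            # no doubles information: fall back to equal weighting
            return {lor: 1.0 / len(candidate_lors) for lor in candidate_lors}
        return dict(zip(candidate_lors, counts / counts.sum()))

    doubles = {("A", "B"): 80.0, ("A", "C"): 20.0}
    print(distribute_triple(doubles, [("A", "B"), ("A", "C")]))  # weights 0.8 and 0.2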

    Triple-gated motion and blood pool clearance corrections improve reproducibility of coronary 18F-NaF PET

    Purpose: To improve the test-retest reproducibility of coronary plaque 18F-sodium fluoride (18F-NaF) positron emission tomography (PET) uptake measurements.

    Methods: We recruited 20 patients with coronary artery disease who underwent repeated hybrid PET/CT angiography (CTA) imaging within 3 weeks. All patients had a 30-min PET acquisition and CTA during a single imaging session. Five PET image-sets with progressive motion correction were reconstructed: (i) a static dataset (no-MC), (ii) end-diastolic PET (standard), (iii) cardiac motion corrected (MC), (iv) combined cardiac and gross patient motion corrected (2 × MC) and (v) cardiorespiratory and gross patient motion corrected (3 × MC). In addition to motion correction, all datasets were corrected for variations in the background activities introduced by variations in the injection-to-scan delays (background blood pool clearance correction, BC). Test-retest reproducibility of the PET target-to-background ratio (TBR) was assessed by Bland-Altman analysis and the coefficient of reproducibility.

    Results: A total of 47 unique coronary lesions were identified on CTA. Motion correction in combination with BC improved the PET TBR test-retest reproducibility for all lesions (coefficient of reproducibility: standard = 0.437, no-MC = 0.345 (27% improvement), standard + BC = 0.365 (20% improvement), no-MC + BC = 0.341 (27% improvement), MC + BC = 0.288 (52% improvement), 2 × MC + BC = 0.278 (57% improvement) and 3 × MC + BC = 0.254 (72% improvement), all p < 0.001). Importantly, in a sub-analysis of 18F-NaF-avid lesions with gross patient motion > 10 mm, reproducibility following corrections was improved by 133% (coefficient of reproducibility: standard = 0.745, 3 × MC = 0.320).

    Conclusion: Joint corrections for cardiac, respiratory, and gross patient motion in combination with background blood pool corrections markedly improve the test-retest reproducibility of coronary 18F-NaF PET.
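
    For readers unfamiliar with the metric quoted above, a hedged sketch (the sample TBR values are invented): the Bland-Altman coefficient of reproducibility is commonly computed as 1.96 times the standard deviation of the test-retest differences.

    import numpy as np

    def coefficient_of_reproducibility(test, retest):
        """1.96 x sample standard deviation of the paired test-retest differences."""
        diffs = np.asarray(test, float) - np.asarray(retest, float)
        return 1.96 * diffs.std(ddof=1)

    tbr_test = [1.8, 2.1, 1.5, 2.4]
    tbr_retest = [1.7, 2.3, 1.6, 2.2]
    print(coefficient_of_reproducibility(tbr_test, tbr_retest))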

    Validation of a small-animal PET simulation using GAMOS: a Geant4-based framework

    Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics, from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics and high-energy physics. Our simulation model includes the relevant characteristics of the ClearPET system, namely the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals and the storage of single events for off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions, including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values of 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ by less than 1% for a 250–750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences for the mouse-sized phantom was 250.8 kcps, reached at 0.95 MBq mL−1, and the simulated peak was 247.1 kcps at 0.87 MBq mL−1). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom, obtaining images of similar quality using iterative reconstruction methods. We conclude that the overall performance of the simulation shows good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommend it as an excellent tool to optimize design features or image reconstruction techniques.
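
    Two of the NEMA NU 4-2008 count-rate quantities compared above follow standard definitions; a brief sketch with our own variable names and illustrative inputs (all rates in kcps):

    def scatter_fraction(scattered, trues):
        """NEMA scatter fraction, SF = S / (S + T)."""
        return scattered / (scattered + trues)

    def nec_rate(trues, scattered, randoms):
        """Noise-equivalent count rate, NEC = T^2 / (T + S + R)."""
        return trues ** 2 / (trues + scattered + randoms)

    print(scatter_fraction(12.0, 88.0))  # 0.12
    print(nec_rate(88.0, 12.0, 5.0))     # ~73.8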

    A comparison of rotation- and blob-based system models for 3D SPECT with depth-dependent detector response

    We compare two different implementations of a 3D SPECT system model for iterative reconstruction, both of which compensate for non-uniform photon attenuation and depth-dependent system response. One implementation performs fast rotation of images represented using a basis of rectangular voxels, whereas the other represents images using a basis of rotationally symmetric volume elements. In our simulations the blob-based approach was found to slightly outperform the rotation-based one in terms of the bias-variance trade-off in the reconstructed images. Their difference can be significant, however, in terms of computational load. The rotation-based method is faster for many typical SPECT reconstruction problems, but the blob-based one can be better suited to cases where the reconstruction algorithm needs to process one volume element at a time.
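
    The rotationally symmetric volume elements compared here are typically Kaiser-Bessel "blobs"; as a sketch, their radial profile can be evaluated as below (the parameter values are typical defaults, not taken from the paper):

    import numpy as np
    from scipy.special import iv  # modified Bessel function of the first kind

    def kaiser_bessel_blob(r, a=2.0, alpha=10.4, m=2):
        """Radial profile b(r) = (s**m) * I_m(alpha*s) / I_m(alpha), s = sqrt(1-(r/a)^2)."""
        r = np.asarray(r, float)
        s = np.sqrt(np.clip(1.0 - (r / a) ** 2, 0.0, 1.0))
        return np.where(r <= a, (s ** m) * iv(m, alpha * s) / iv(m, alpha), 0.0)

    print(kaiser_bessel_blob([0.0, 1.0, 2.0]))  # 1 at the center, 0 at the support radius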

    Differences in topological progression profile among neurodegenerative diseases from imaging data

    The spatial distribution of atrophy in neurodegenerative diseases suggests that brain connectivity mediates disease propagation. Different descriptors of the connectivity graph potentially relate to different underlying mechanisms of propagation. Previous approaches for evaluating the influence of connectivity on neurodegeneration consider each descriptor in isolation and match predictions against late-stage atrophy patterns. We introduce the notion of a topological profile - a characteristic combination of topological descriptors that best describes the propagation of pathology in a particular disease. By drawing on recent advances in disease progression modeling, we estimate topological profiles from the full course of pathology accumulation, at both cohort and individual levels. Experimental results comparing topological profiles for Alzheimer's disease, multiple sclerosis and normal ageing show that topological profiles explain the observed data better than single descriptors. Within each condition, most individual profiles cluster around the cohort-level profile, and individuals whose profiles align more closely with other cohort-level profiles show features of that cohort. The cohort-level profiles suggest new insights into the biological mechanisms underlying pathology propagation in each disease.
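
    A schematic sketch of the topological-profile idea, under our own simplifying assumption (not the paper's code) that per-region graph descriptors are combined linearly with weights fitted to progression data; the graph, descriptors and weights below are purely illustrative.

    import numpy as np
    import networkx as nx

    def topological_profile_scores(graph, weights):
        """Combine several per-node connectivity descriptors into one score per region."""
        descriptors = np.column_stack([
            list(nx.degree_centrality(graph).values()),
            list(nx.clustering(graph).values()),
            list(nx.betweenness_centrality(graph).values()),
        ])
        return descriptors @ np.asarray(weights, float)

    g = nx.erdos_renyi_graph(10, 0.4, seed=0)
    print(topological_profile_scores(g, weights=[0.5, 0.3, 0.2]))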