
    RIGOROUS TASK-BASED OPTIMIZATION OF INSTRUMENTATION, ACQUISITION PARAMETERS AND RECONSTRUCTION METHODS FOR MYOCARDIAL PERFUSION SPECT

    Coronary artery disease (CAD) is the most common type of heart disease and a major cause of death in the United States. Myocardial perfusion SPECT (MPS) is a well-established noninvasive diagnostic imaging technique for the detection and functional characterization of CAD. MPS involves intravenous injection of a radiopharmaceutical (e.g. Tc-99m sestamibi), followed by acquisition of planar images of the 3-D distribution of the radiolabeled agent using one or more gamma cameras rotated around the patient to obtain different projection views. Transaxial images are then reconstructed from these projections using tomographic image reconstruction methods. The quality of SPECT images is affected by the instrumentation, acquisition parameters, and reconstruction/compensation methods used. The overall goal of this dissertation was to perform rigorous optimization of MPS using task-based image quality assessment methods and metrics, in which image quality is evaluated by the performance of an observer on diagnostic tasks relevant to MPS. In this work, we used three model observers: the Ideal Observer (IO); its extension, the Ideal Observer with Model Mismatch (IO-MM); and an anthropomorphic observer, the Channelized Hotelling Observer (CHO). The IO makes optimal use of the available information in the image data. However, because of its implicit perfect knowledge of the image formation process, using the IO to optimize imaging systems can yield optimal parameters that differ from those obtained when humans (or the CHO) interpret images reconstructed with imperfect compensation for image-degrading factors. To address this, we developed the IO-MM, which allows acquisition and instrumentation parameters to be optimized in the absence of compensation, or in the presence of non-ideal compensation methods, while evaluating them in terms of IO performance. To perform clinically relevant optimization of MPS, and because radiation concerns limit system evaluation using patient studies, we designed and developed a population of digital phantoms based on the 3-D eXtended CArdiac Torso (XCAT) phantom, which provides a highly realistic model of human anatomy. To make simulation of the population computationally feasible, we developed and used methods to efficiently simulate a database of Tc-99m and Tl-201 MPS projections using full Monte Carlo (MC) simulations. We used the phantom population and the projection database to optimize and evaluate the major acquisition and instrumentation parameters for MPS. An important acquisition parameter is the width of the acquisition energy window, which controls the tradeoff between scatter and noise. We used the IO, IO-MM and CHO to find the optimal acquisition energy window width and to evaluate various scatter modeling and compensation methods, including the dual energy window, triple energy window, and Effective Source Scatter Estimation (ESSE) methods. Results indicated that the ESSE scatter estimation method provided performance very similar to that of the perfect scatter model implicit in the IO. Collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. We sought the optimal collimator with respect to IO performance on two tasks related to MPS: binary detection, and joint detection and localization. The results of this study suggested that collimators with higher sensitivity than those currently used clinically are optimal for both diagnostic tasks.
In a different study, we evaluated and compared various collimator-detector response (CDR) modeling and compensation methods using the IO (i.e. an observer that implicitly used the true CDR model), the IO-MM (using an approximate or no model of the CDR), and the CHO operating on images reconstructed with the same compensation methods. Results from the collimator and acquisition energy window optimization studies indicated that the IO-MM agreed well with the CHO in terms of the range of optimal Tc-99m acquisition energy window widths, the optimal collimators, and the ranking of scatter and CDR compensation methods. The IO agreed with the CHO when the model mismatch was small. Dual isotope simultaneous acquisition (DISA) rest Tl-201/stress Tc-99m MPS has the potential to provide reduced acquisition time, increased patient comfort, and perfectly registered images compared to separate acquisition protocols, the current clinical protocols of choice. However, crosstalk contamination, in which photons emitted by one radionuclide contribute to the image of the other, degrades image quality. In this work, we optimized, compared, and evaluated dual isotope MPS imaging with separate and simultaneous acquisition using the IO in the context of a 3-class defect detection task. The optimal acquisition parameters were different for the two protocols. Results suggested that DISA methods, when used with accurate crosstalk compensation, could potentially provide image quality as good as that obtained with separate acquisition protocols.
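For reference, the anthropomorphic observer used throughout these studies can be summarized with the standard channelized Hotelling formulation below. The notation (channel matrix U, average intra-class scatter matrix S_v, classes 0/1 for defect absent/present) follows textbook convention and is not drawn from the dissertation itself.

```latex
% Channelized Hotelling Observer and its AUC figure of merit
% (standard textbook forms; notation is generic, not the dissertation's)
% g: image vector, U: matrix of anthropomorphic channels
\begin{align*}
  v &= U^{\mathsf{T}} g, \\
  w_{\mathrm{CHO}} &= \bar{S}_v^{-1}\,(\bar{v}_1 - \bar{v}_0),
      \qquad \bar{S}_v = \tfrac{1}{2}\,(S_{v,0} + S_{v,1}), \\
  t(g) &= w_{\mathrm{CHO}}^{\mathsf{T}}\, v, \\
  \mathrm{AUC} &= \Phi\!\left(\frac{d_a}{\sqrt{2}}\right),
      \qquad d_a = \frac{\langle t \rangle_1 - \langle t \rangle_0}
                        {\sqrt{\tfrac{1}{2}\,(\sigma_{t,0}^{2} + \sigma_{t,1}^{2})}} .
\end{align*}
```

The last relation is exact only when the test statistic is Gaussian under both hypotheses; in practice the AUC is usually estimated nonparametrically from the sampled test statistics.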

    TOWARDS FURTHER OPTIMIZATION OF RECONSTRUCTION METHODS FOR DUAL-RADIONUCLIDE MYOCARDIAL PERFUSION SPECT

    Coronary artery disease (CAD) is the most prevalent type of heart disease and a leading cause of death both in the United States and worldwide. Myocardial perfusion SPECT (MPS) is a well-established and widely used non-invasive imaging technique for diagnosing CAD. MPS images the distribution of a radioactive perfusion agent in the myocardium to assess myocardial perfusion at rest and under stress, allowing diagnosis of CAD and differentiation of CAD from previous myocardial infarction. The overall goal of this dissertation was to optimize image reconstruction methods for MPS through patient-specific optimization of two advanced iterative reconstruction methods, based on simulations of a realistic patient population modeling existing hardware and previously optimized dual-isotope simultaneous-acquisition imaging protocols. After optimization, the two algorithms were compared to determine the optimal reconstruction method for MPS. First, we developed a model observer strategy to evaluate image quality and allow optimization of the reconstruction methods using a population of phantoms modeling the variability seen in human populations. The Hotelling Observer (HO) is widely used to evaluate image quality, often in conjunction with anthropomorphic channels to model human observer performance. However, applying the HO to data that are not multivariate-normally (MVN) distributed, such as the output of a channel model applied to images with variable signals and backgrounds, is not optimal. In this work, we proposed a novel model observer strategy to evaluate the image quality of such data. First, the entire data ensemble is divided into sub-ensembles that are exactly or approximately MVN and homoscedastic. Next, a Linear Discriminant (LD) is applied to estimate test statistics for each sub-ensemble, and a single area under the receiver operating characteristic curve (AUC) is calculated from the pooled test statistics of all the sub-ensembles. The AUC serves as the figure of merit for performance on the defect detection task. The proposed multi-template LD was compared to other model observer strategies and was shown to be practical and theoretically justified, and it produced higher AUC values for non-MVN data such as that arising from the clinically realistic signal-known-statistically (SKS) task used in the remainder of this work. We then optimized two regularized statistical reconstruction algorithms. One is the widely used post-filtered ordered subsets-expectation maximization (OS-EM) algorithm. The other is a maximum a posteriori (MAP) algorithm with a dual-tracer prior (DTMAP) that was proposed for dual-isotope MPS studies and was expected to outperform the post-filtered OS-EM algorithm. Importantly, we proposed to investigate patient-specific optimization of the reconstruction parameters. To accomplish this, the phantom population was divided into three anatomy groups based on metrics expected to affect image noise and resolution and thus the optimal reconstruction parameters. In particular, these metrics were the distance from the center of the heart to the face of the collimator (which is directly related to image resolution), the heart size, and the counts from the myocardium (which are expected to determine image noise). Reconstruction parameters were optimized for each of these groups using the proposed model observer strategy. Parameters for the rest and stress images were optimized separately, and the parameters that achieved the highest AUC were deemed optimal.
The results showed that the proposed group-wise optimization method offered slightly better task performance than using a single set of parameters for all the phantoms. For DTMAP, we also applied the group-wise optimization approach. The additional challenges in optimizing DTMAP are that it has three parameters to be optimized simultaneously and that it is substantially more computationally expensive than OS-EM. Thus, we adopted optimization strategies to reduce the size of the parameter search space. In particular, we searched two parameter ranges expected to give good image quality. We also reduced the computational burden by exploiting the limiting behavior of the penalty function to reduce the number of parameters that needed to be optimized. Despite these efforts, the optimized DTMAP had poorer task performance than the optimized OS-EM algorithm. As a result, we studied the limitations of the DTMAP algorithm and suggest reasons for its poorer performance on the task investigated. The results of this study indicate that there is a benefit from patient-specific optimization. The methods and optimal patient-specific parameters may be applicable to clinical MPS studies. In addition, the model observer strategy and the group-wise optimization approach may be applicable both to future work in MPS and to other related fields.
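A minimal sketch of the multi-template linear discriminant strategy described above is given below, assuming the channel outputs have already been computed and the data have already been grouped into (approximately) MVN, homoscedastic sub-ensembles; the function and variable names are illustrative and are not taken from the dissertation's code.

```python
import numpy as np
from scipy.stats import mannwhitneyu


def ld_test_statistics(v0, v1):
    """Linear discriminant test statistics for one MVN, homoscedastic sub-ensemble.

    v0, v1: (n_samples, n_channels) channel outputs for defect-absent / defect-present.
    """
    m0, m1 = v0.mean(axis=0), v1.mean(axis=0)
    # Average intra-class covariance of the channel outputs
    s = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(s, m1 - m0)        # LD template for this sub-ensemble
    return v0 @ w, v1 @ w                  # test statistics for each class


def multi_template_ld_auc(sub_ensembles):
    """Pool LD test statistics over all sub-ensembles and compute a single AUC.

    sub_ensembles: iterable of (v0, v1) pairs, one per sub-ensemble.
    """
    t0_all, t1_all = [], []
    for v0, v1 in sub_ensembles:
        t0, t1 = ld_test_statistics(v0, v1)
        t0_all.append(t0)
        t1_all.append(t1)
    t0_all, t1_all = np.concatenate(t0_all), np.concatenate(t1_all)
    # The Wilcoxon-Mann-Whitney U statistic is a nonparametric estimate of the AUC
    u, _ = mannwhitneyu(t1_all, t0_all, alternative="greater")
    return u / (len(t0_all) * len(t1_all))
```

In this sketch each sub-ensemble gets its own LD template, and only the resulting scalar test statistics are pooled before the AUC is computed, which is the essence of the multi-template strategy.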

    Coronary Angiography

    In the intervening 10 years, tremendous advances in the field of cardiac computed tomography have occurred. We can now legitimately claim that computed tomography angiography (CTA) of the coronary arteries is available. In the evaluation of patients with suspected coronary artery disease (CAD), many guidelines today consider CTA an alternative to stress testing. The use of CTA in primary prevention patients is more controversial, given the issues of diagnostic test interpretation in populations with a low prevalence of disease. However, the nuclear technique most frequently used by cardiologists is myocardial perfusion imaging (MPI). The combination of a nuclear camera with CTA allows coronary anatomy, cardiac function, and MPI to be obtained from one piece of equipment. PET/SPECT cameras can now assess perfusion, function, and metabolism. Assessing cardiac viability is now fairly routine with these enhancements to cardiac imaging. This issue is full of important information that every cardiologist needs to know.

    [18F]fluorination of biorelevant arylboronic acid pinacol ester scaffolds synthesized by convergence techniques

    Aim: The development of small molecules through convergent multicomponent reactions (MCR) has been boosted during the last decade by the ability to synthesize, virtually without any side-products, numerous small drug-like molecules with several degrees of structural diversity.(1) Combining positron emission tomography (PET) labeling techniques with the “one-pot” development of biologically active compounds has the potential to become relevant not only for the evaluation and characterization of those MCR products through molecular imaging, but also for increasing the library of available radiotracers. Therefore, since the [18F]fluorination of arylboronic acid pinacol ester derivatives tolerates electron-poor and electron-rich arenes and various functional groups,(2) the main goal of this work was to achieve 18F-radiolabeling of several different molecules synthesized through MCR. Materials and Methods: [18F]Fluorination of boronic acid pinacol esters was first extensively optimized using a benzaldehyde derivative with respect to the ideal amounts of Cu(II) catalyst and precursor, as well as the reaction solvent. Radiochemical conversion (RCC) yields were assessed by TLC-SG. The optimized radiolabeling conditions were subsequently applied to several structurally different MCR scaffolds comprising biologically relevant pharmacophores (e.g. β-lactam, morpholine, tetrazole, oxazole) that were synthesized to specifically contain a boronic acid pinacol ester group. Results: Radiolabeling with fluorine-18 was achieved with volumes (800 μl) and activities (≤ 2 GBq) compatible with most radiochemistry techniques and modules. In summary, increasing the quantity of precursor or Cu(II) catalyst led to higher conversion yields. Optimal amounts of precursor (0.06 mmol) and Cu(OTf)2(py)4 (0.04 mmol) were defined for further reactions, with DMA being the preferred solvent over DMF. RCC yields from 15% to 76%, depending on the scaffold, were reproducibly achieved. Interestingly, the structure of the scaffold beyond the arylboronic acid exerts some influence on the final RCC, with electron-withdrawing groups in the para position apparently enhancing the radiolabeling yield. Conclusion: The developed method, with high RCC and reproducibility, has the potential to be applied in line with MCR and could also be incorporated at a later stage of this convergent “one-pot” synthesis strategy. Further studies are currently ongoing to apply this radiolabeling concept to fluorine-containing approved drugs whose boronic acid pinacol ester precursors can be synthesized through MCR (e.g. atorvastatin).

    Acceleration of GATE Monte Carlo simulations

    Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are forms of medical imaging that produce functional images reflecting biological processes. They are based on the tracer principle: a biologically active substance, a pharmaceutical, is selected so that its spatial and temporal distribution in the body reflects a certain body function or metabolism. In order to form images of this distribution, the pharmaceutical is labeled with gamma-ray-emitting or positron-emitting radionuclides (radiopharmaceuticals or tracers). After administration of the tracer to a patient, an external position-sensitive gamma-ray camera detects the emitted radiation, and a stack of images of the radionuclide distribution is formed by a reconstruction process. Monte Carlo methods are numerical methods that use random numbers to compute quantities of interest. This is normally done by creating a random variable whose expected value is the desired quantity; one then simulates and tabulates the random variable and uses its sample mean and variance to construct probabilistic estimates. It represents an attempt to model nature through direct simulation of the essential dynamics of the system in question. Monte Carlo modeling is the method of choice for all applications where measurements are not feasible or where analytic models are not available due to the complex nature of the problem. In addition, such modeling is a practical approach in several important application fields of nuclear medicine imaging: detector design, quantification, correction methods for image degradations, detection tasks, etc. Several powerful dedicated Monte Carlo simulators for PET and/or SPECT are available. However, they are often neither detailed nor flexible enough to enable realistic simulation of emission tomography detector geometries while also modeling time-dependent processes such as decay, tracer kinetics, patient and bed motion, dead time, or detector orbits. Our Monte Carlo simulator of choice, the GEANT4 Application for Tomographic Emission (GATE), was specifically designed to address all of these issues. The flexibility of GATE comes at a price, however. The simulation of a simple prototype SPECT detector may be feasible within hours in GATE, but an acquisition with a realistic phantom may take years to complete on a single CPU. In this dissertation we therefore focus on the Achilles’ heel of GATE: efficiency. Acceleration of GATE simulations can only be achieved through a combination of efficient data analysis, dedicated variance reduction techniques, fast navigation algorithms, and parallelization. In the first part of this dissertation we consider the improvement of the analysis capabilities of GATE. The static analysis module in GATE is both inflexible and incapable of storing more detail without introducing a large computational overhead. However, the design and validation of the acceleration techniques in this dissertation require a flexible, detailed, and computationally efficient analysis module. To this end, we develop a new analysis framework capable of analyzing any process, from the decay of isotopes to particle interactions and detections, in any detector element and for any type of phantom. The evaluation of our framework consists of the assessment of spurious activity in 124I-Bexxar PET and of contamination in 131I-Bexxar SPECT. In the case of PET, we describe how our framework can detect spurious coincidences generated by non-pure isotopes, even with realistic phantoms.
We show that optimized energy thresholds, which can readily be applied in the clinic, can now be derived in order to minimize this contamination. We also show that the spurious activity itself is not spatially uniform, so standard reconstruction and correction techniques are not adequate. In the case of SPECT, we describe how it is now possible to classify detections into geometric detections, phantom scatter, penetration through the collimator, collimator scatter, and backscatter in the end parts. We show that standard correction algorithms such as triple energy window correction cannot correct for septal penetration. We demonstrate that 124I PET with optimized energy thresholds offers better image quality than 131I SPECT when standard reconstruction techniques are used. In the second part of this dissertation we focus on improving the efficiency of GATE with a variance reduction technique called Geometrical Importance Sampling (GIS). We describe how only 0.02% of all emitted photons reach the crystal surface of a SPECT detector head equipped with a low-energy high-resolution collimator; a great deal of computing power is therefore wasted tracking photons that will never contribute to the result. A twofold strategy is used to solve this problem: GIS employs Russian Roulette to discard photons that are unlikely to contribute to the result, while photons in more important regions are split into several photons with reduced weight to increase their survival chance. We show that this technique introduces branches into the particle history and describe how this can be taken into account by a particle history tree that is used for the analysis of the results. The evaluation of GIS consists of energy spectrum validation and of spatial resolution and sensitivity measurements for low- and medium-energy isotopes. We show that GIS reaches acceleration factors between 5 and 13 over analog GATE simulations for the isotopes in this study. It is a general acceleration technique that can be used for any isotope, phantom, and detector combination. Although GIS is useful as a safe and accurate acceleration technique, it cannot deliver clinically acceptable simulation times; the main reason lies in its inability to force photons in a specific direction. In the third part of this dissertation we solve this problem for 99mTc SPECT simulations. Our approach is twofold. Firstly, we introduce two variance reduction techniques: forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling (MPS). FD and CFD force copies of photons, at decay and at every interaction point, to be transported through the phantom in a direction sampled within a solid angle toward the SPECT detector head, at all SPECT angles simultaneously. We describe how a weight must be assigned to each photon to compensate for the forced direction and for non-absorption at emission and scatter, and we show how these weights are calculated from the total and differential Compton and Rayleigh cross sections per electron, incorporating Hubbell’s atomic form factor. In the case of FD all detector interactions are modeled by Monte Carlo, while in the case of CFD the detector is modeled analytically. Secondly, we describe the design of a navigator specialized for FD and CFD that accelerates the slow tracking algorithms in GEANT4. The validation study shows that both FD and CFD closely match analog GATE simulations and that we obtain acceleration factors of between 3 (FD) and 6 (CFD) orders of magnitude over analog simulations.
This allows a realistic acquisition with a torso phantom to be simulated within 130 seconds. In the fourth part of this dissertation we exploit the intrinsically parallel nature of Monte Carlo simulations. We show that Monte Carlo simulations should scale linearly as a function of the number of processing nodes, but that this is usually not achieved due to job setup time, output handling, and cluster overhead. Our approach is based on two steps: job distribution and output data handling. The job distribution is based on a time-domain partitioning scheme that retains all experimental parameters and guarantees the statistical independence of each subsimulation. We also reduce the job setup time by introducing a parameterized collimator model for SPECT simulations, and we reduce the output data handling time with a chain-based output merger. The scalability study is based on a set of simulations on a 70-CPU cluster and shows an acceleration factor of approximately 66 on 70 CPUs for both PET and SPECT. We also show that our method of parallelization does not introduce any approximations and that it can readily be combined with any of the acceleration techniques described above.
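To make the geometrical importance sampling idea from the second part concrete, the sketch below shows the generic Russian Roulette and splitting rules applied when a photon crosses between regions of different importance. It is a minimal, hedged illustration of the weight bookkeeping only; the importance map, the region geometry, and the particle history tree used in GATE are not reproduced here, and all names are illustrative.

```python
import random
from dataclasses import dataclass, replace
from typing import List


@dataclass
class Photon:
    weight: float
    # position, direction and energy would also be tracked in a real simulation


def cross_importance_boundary(photon: Photon,
                              importance_from: float,
                              importance_to: float,
                              rng: random.Random) -> List[Photon]:
    """Apply Russian Roulette or splitting when a photon changes importance region.

    ratio < 1 -> less important region: Russian Roulette (kill, or survive with boosted weight)
    ratio > 1 -> more important region: split into several lower-weight copies
    The expected total weight is conserved in both cases.
    """
    ratio = importance_to / importance_from
    if ratio >= 1.0:
        # Splitting: on average `ratio` copies, each carrying weight / ratio
        n = int(ratio)
        if rng.random() < ratio - n:   # handle non-integer ratios stochastically
            n += 1
        return [replace(photon, weight=photon.weight / ratio) for _ in range(n)]
    # Russian Roulette: survive with probability `ratio`, weight boosted by 1 / ratio
    if rng.random() < ratio:
        return [replace(photon, weight=photon.weight / ratio)]
    return []                          # photon terminated
```

In the actual GIS implementation each split introduces a branch into the particle history tree so that detections can still be attributed to their originating decay; the sketch above captures only the weight rules.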

    QUANTITATIVE NUCLEAR MEDICINE IMAGING USING ADVANCED IMAGE RECONSTRUCTION AND RADIOMICS

    Our aim is to help put nuclear medicine at the forefront of quantitation on the path to the realization of personalized medicine. We propose and evaluate (Part I) advanced image reconstruction and (Part II) robust radiomics (the large-scale, data-oriented study of radiological images). The goal is to attain significantly improved diagnostic, prognostic, and treatment-response assessment capabilities. Part I presents a new paradigm in point-spread function (PSF) modeling, a partial volume correction method in PET imaging in which resolution-degrading phenomena are modeled within the reconstruction framework. PSF modeling improves resolution and enhances contrast, but it significantly alters noise properties and induces edge overshoots. Past efforts involve a dichotomy of PSF vs. no-PSF modeling; by contrast, we focus on a wide spectrum of PSF models, including under- and over-estimation of the true PSF, for the potential of enhanced quantitation in standardized uptake values (SUVs). We show that, for the standard (not excessive) range of iterations employed in the clinic, the edge enhancement due to overestimation actually lowers SUV bias in small regions, while inter-voxel correlations suppress image roughness and enhance uniformity. An overestimated PSF yields improved contrast and limited edge-overshoot effects at lower iterations, enabling enhanced SUV quantitation. Overall, our framework provides an effective venue for quantitative task-based optimization. Part II proposes robust and reproducible radiomics methods. Radiomics workflows are complex and generate hundreds of features, which can lead to high variability and overfitting, ultimately hampering performance. We developed and released the Standardized Environment for Radiomics Analysis (SERA) solution to enable robust radiomics analyses. We conduct studies on two unique imaging datasets, renal cell carcinoma SPECT and prostate cancer PET, identifying robust and reproducible radiomic features. In addition, we evaluate the novel hypothesis that radiomic features extracted from clinically normal (non-ischemic) myocardial perfusion SPECT (MPS) can predict coronary artery calcification (CAC, as extracted from CT). This has important implications, since CAC assessment is not commonly performed or reimbursed in wide community settings. SERA-derived radiomic features were utilized in a multi-step feature selection framework, followed by the application of machine learning to the selected features. Our results show the potential to predict CAC from normal MPS, suggesting added usage and value for routine standard MPS.
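For context on how resolution modeling enters the reconstruction, the block below writes the familiar ML-EM update with the PSF folded into the system model. This is the textbook form of the idea, in generic notation that is not taken from the dissertation; an over- or under-estimated PSF simply replaces H with a wider or narrower blur.

```latex
% ML-EM update with PSF modeling (textbook form; notation is generic)
% y: measured data, x: image estimate, P: geometric system matrix,
% H: image-space PSF blurring operator; the modeled system matrix is A = P H.
\[
  x_j^{(n+1)} \;=\; \frac{x_j^{(n)}}{\sum_i a_{ij}}
      \sum_i a_{ij}\,\frac{y_i}{\sum_k a_{ik}\, x_k^{(n)}},
  \qquad A = [a_{ij}] = P\,H .
\]
```

The ordered-subsets (OSEM) variant used clinically simply restricts the sums over i to one subset of projections per sub-iteration.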

    2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease

    The recommendations listed in this document are, whenever possible, evidence-based. An extensive evidence review was conducted as the document was compiled through December 2008. Repeated literature searches were performed by the guideline development staff and writing committee members as new issues were considered. New clinical trials published in peer-reviewed journals and articles through December 2011 were also reviewed and incorporated when relevant. Furthermore, because of the extended development time period for this guideline, peer review comments indicated that the sections focused on imaging technologies required additional updating, which occurred during 2011. Therefore, the evidence review for the imaging sections includes published literature through December 2011.