313 research outputs found

    Harmonization of brain PET images in multi-center PET studies using Hoffman phantom scan

    Background: Image harmonization has been proposed to minimize heterogeneity in brain PET scans acquired in multi-center studies. However, standard validated methods and software tools are lacking. Here, we assessed the performance of a framework for the harmonization of brain PET scans in a multi-center European clinical trial. Methods: Hoffman 3D brain phantoms were acquired on 28 PET systems and reconstructed using site-specific settings. The Full Width at Half Maximum (FWHM) of the Effective Image Resolution (EIR) and the harmonization kernels were estimated for each scan. The target EIR was selected as the coarsest EIR in the imaging network. Using the "Hoffman 3D brain Analysis tool," indicators of image quality were calculated before and after harmonization: the Coefficient of Variation (COV%), Gray Matter Recovery Coefficient (GMRC), Contrast, Cold-Spot RC, and left-to-right GMRC ratio. A COV% ≤ 15% and a Contrast ≥ 2.2 were set as acceptance criteria. The procedure was repeated to achieve a 6-mm target EIR in a subset of scans. The method's robustness against typical dose-calibrator-based errors was assessed. Results: The EIR across systems ranged from 3.3 to 8.1 mm, and an EIR of 8 mm was selected as the target resolution. After harmonization, all scans met the image quality acceptance criteria, whereas only 13 (39.4%) did before. The harmonization procedure lowered the inter-system variability indicators: mean ± SD COV% (from 16.97 ± 6.03% to 7.86 ± 1.47%), GMRC inter-quartile range (from 0.040 to 0.012), and Contrast SD (from 0.14 to 0.05). Similar results were obtained with a 6-mm FWHM target EIR. Errors of ±10% in the DRO activity resulted in differences below 1 mm in the estimated EIR. Conclusion: Harmonizing the EIR of brain PET scans significantly reduced image quality variability while minimally affecting quantitative accuracy. This method can be applied prospectively to harmonize scans to sharper target resolutions and is robust against dose-calibrator errors.
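The harmonization step described above relies on the fact that Gaussian point-spread functions add in quadrature: the smoothing kernel needed to bring a scanner's current EIR to the coarser target satisfies target² = current² + kernel². A minimal sketch of that calculation and of the COV% indicator (function names are illustrative, not the paper's analysis tool):

```python
import math

def harmonization_kernel_fwhm(current_eir_mm, target_eir_mm):
    """FWHM of the Gaussian kernel that degrades the current EIR to the target EIR."""
    # Gaussian FWHMs add in quadrature: target^2 = current^2 + kernel^2
    if target_eir_mm < current_eir_mm:
        raise ValueError("target EIR must be coarser than the current EIR")
    return math.sqrt(target_eir_mm ** 2 - current_eir_mm ** 2)

def cov_percent(gray_matter_values):
    """Coefficient of Variation: 100 * SD / mean over gray-matter voxel values."""
    n = len(gray_matter_values)
    mean = sum(gray_matter_values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in gray_matter_values) / n)
    return 100.0 * sd / mean

# The sharpest system in the network (3.3 mm) needs a ~7.3 mm kernel
# to reach the 8 mm target resolution.
print(round(harmonization_kernel_fwhm(3.3, 8.0), 2))  # → 7.29
```

This also shows why ±10% activity errors barely move the estimated EIR: the quadrature relation is flat near the target for systems already close to it.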
Comparable image quality is attainable in brain PET multi-center studies while maintaining quantitative accuracy.

    Differentiation of Metabolically Distinct Areas within Head and Neck Region using Dynamic 18F-FDG Positron Emission Tomography Imaging

    Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, 18F-FDG, the most widely used radiotracer, is not specific to tumours: it also accumulates in inflammatory lesions and in normal, physiologically active tissues, complicating diagnosis and treatment planning for physicians. Malignant, inflammatory, and normal tissues are known to have different pathways for glucose metabolism, which could be evident as different characteristics of the time activity curves from a dynamic PET acquisition protocol. We therefore aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammation, tumour, and normal tissues using this functional information within the radiotracer uptake areas. We derived dynamic features from the time activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian mixture model and K-means clustering algorithms to assess their effectiveness in discriminating metabolically distinct areas. We also correlated the dynamic features with clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and some dynamic features also correlate positively with clinical metrics. If explored further, the proposed methods could help reduce false-positive tumour detections and support real-world applications for tumour diagnosis and contouring.
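As an illustration of the kind of per-voxel dynamic features such a method might extract before clustering, a minimal sketch (the feature set below is an assumption for illustration; the study's actual features may differ):

```python
def tac_features(times, activity):
    """Illustrative dynamic features from one voxel's time-activity curve (TAC)."""
    peak = max(activity)
    time_to_peak = times[activity.index(peak)]
    # trapezoidal area under the curve
    auc = sum(0.5 * (activity[i] + activity[i + 1]) * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    # late slope over the last three frames: retention vs. washout behaviour
    late_slope = (activity[-1] - activity[-3]) / (times[-1] - times[-3])
    return {"peak": peak, "time_to_peak": time_to_peak,
            "auc": auc, "late_slope": late_slope}
```

Feature vectors like these would then be fed to K-means or a Gaussian mixture model to separate tumour, inflammatory, and normal clusters.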

    Quantitative assessment of myelin density using [C-11]MeDAS PET in patients with multiple sclerosis: a first-in-human study

    Purpose: Multiple sclerosis (MS) is a disease characterized by inflammatory demyelinated lesions. New treatment strategies are being developed to stimulate myelin repair. Quantitative myelin imaging could facilitate these developments. This first-in-human study aimed to evaluate [11C]MeDAS as a PET tracer for myelin imaging in humans. Methods: Six healthy controls and 11 MS patients underwent MRI and dynamic [11C]MeDAS PET scanning with arterial sampling. Lesion detection and classification were performed on MRI. [11C]MeDAS time-activity curves of brain regions and MS lesions were fitted with various compartment models to identify the best model for describing [11C]MeDAS kinetics. Several simplified methods were compared with the optimal compartment model. Results: Visual analysis of the fits of [11C]MeDAS time-activity curves showed no preference for either the irreversible (2T3k) or the reversible (2T4k) two-tissue compartment model. Both volume of distribution and binding potential estimates showed a high degree of variability. As this was not the case for the 2T3k-derived net influx rate (Ki), the 2T3k model was selected as the model of choice. Simplified methods, such as SUV and MLAIR2, correlated well with 2T3k-derived Ki, but SUV showed subject-dependent bias when compared to 2T3k. Both the 2T3k model and the simplified methods were able to differentiate not only between gray and white matter, but also between lesions with different myelin densities. Conclusion: [11C]MeDAS PET can be used for quantification of myelin density in MS patients and is able to distinguish differences in myelin density within MS lesions. The 2T3k model is the optimal compartment model and MLAIR2 is the best simplified method for quantification. Trial registration: NL7262, registered 18 September 2018.
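For the irreversible two-tissue (2T3k) model, the net influx rate selected here is the standard macro-parameter Ki = K1·k3 / (k2 + k3); a minimal sketch (the rate-constant values in the example are arbitrary, not from the study):

```python
def net_influx_rate(K1, k2, k3):
    """Ki for the irreversible two-tissue (2T3k) compartment model."""
    # The fraction k3/(k2+k3) of tracer delivered at rate K1 is trapped
    # in the irreversible compartment.
    return K1 * k3 / (k2 + k3)

print(net_influx_rate(0.10, 0.20, 0.20))  # arbitrary example values
```

This combination is more robust than the individual micro-parameters, which is consistent with the high variability the authors report for VT and BP but not for Ki.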

    An Image Registration Method for Head CTA and MRA Images Using Mutual Information on Volumes of Interest

    Image registration is a fundamental task in computer vision and image processing. For example, to make a surgical plan for a head operation, surgeons need detailed information from CT angiography (CTA) and MR angiography (MRA) images, and abnormalities can be detected more easily from a fusion image obtained from the two modalities. One multi-modal image registration task is matching CTA and MRA, by which imaging of the head vasculature can be enhanced. In general, the fusion procedure is completed manually, which is time-consuming and subjective, and anatomical knowledge is required as well. Therefore, the development of automatic registration methods is expected in medical fields. In this paper, we propose a method for highly accurate registration that concentrates on the structure of the head vasculature. We use 2-D projection images and restrict the volumes of interest to improve processing efficiency. In experiments, we applied the proposed method to five sets of CTA and MRA images and obtained better results than with our previous method. SCIS&ISIS 2014: Joint 7th International Conference on Soft Computing and Intelligent Systems and 15th International Symposium on Advanced Intelligent Systems, December 3-6, 2014, Kitakyushu, Japan.
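The similarity criterion named in the title, mutual information, can be computed from a joint intensity histogram of the two images; a minimal self-contained sketch (the uniform binning scheme is an assumption, and real registration pipelines interpolate and optimize over transforms):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (in nats) between two equal-length intensity lists."""
    assert len(img_a) == len(img_b)

    def quantize(img):
        # map intensities uniformly into [0, bins-1]
        lo, hi = min(img), max(img)
        scale = (hi - lo) or 1.0
        return [min(int((v - lo) / scale * bins), bins - 1) for v in img]

    qa, qb = quantize(img_a), quantize(img_b)
    n = len(qa)
    joint = Counter(zip(qa, qb))       # joint histogram
    pa, pb = Counter(qa), Counter(qb)  # marginal histograms
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        # p_ab * log( p_ab / (p_a * p_b) )
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi
```

Restricting the computation to volumes of interest, as the paper does, simply means feeding only the VOI voxels into a function like this, which both speeds it up and focuses the criterion on the vasculature.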

    Direct inference of Patlak parametric images in whole-body PET/CT imaging using convolutional neural networks

    Purpose: This study proposed and investigated the feasibility of estimating Patlak-derived influx rate constant (Ki) from standardized uptake value (SUV) and/or dynamic PET image series. Methods: Whole-body 18F-FDG dynamic PET images of 19 subjects consisting of 13 frames or passes were employed for training a residual deep learning model with SUV and/or dynamic series as input and Ki-Patlak (slope) images as output. The training and evaluation were performed using a nine-fold cross-validation scheme. Owing to the availability of SUV images acquired 60 min post-injection (20 min total acquisition time), the data sets used for the training of the models were split into two groups: “With SUV” and “Without SUV.” For “With SUV” group, the model was first trained using only SUV images and then the passes (starting from pass 13, the last pass, to pass 9) were added to the training of the model (one pass each time). For this group, 6 models were developed with input data consisting of SUV, SUV plus pass 13, SUV plus passes 13 and 12, SUV plus passes 13 to 11, SUV plus passes 13 to 10, and SUV plus passes 13 to 9. For the “Without SUV” group, the same trend was followed, but without using the SUV images (5 models were developed with input data of passes 13 to 9). For model performance evaluation, the mean absolute error (MAE), mean error (ME), mean relative absolute error (MRAE%), relative error (RE%), mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the predicted Ki-Patlak images by the two groups and the reference Ki-Patlak images generated through Patlak analysis using the whole acquired data sets. For specific evaluation of the method, regions of interest (ROIs) were drawn on representative organs, including the lung, liver, brain, and heart and around the identified malignant lesions. 
Results: The MRAE%, RE%, PSNR, and SSIM indices across all patients were estimated as 7.45 ± 0.94%, 4.54 ± 2.93%, 46.89 ± 2.93, and 1.00 ± 6.7 × 10⁻⁷, respectively, for the models using SUV plus passes 13 to 9 as input. The parameters predicted using passes 13 to 11 as input were almost identical to those predicted using SUV plus passes 13 to 9 as input. The bias was reduced continuously as passes were added until pass 11, after which the magnitude of error reduction was negligible; hence, the model with SUV plus passes 13 to 9 had the lowest quantification bias. Lesions invisible in one or both of the SUV and Ki-Patlak images appeared similarly, on visual inspection, in the predicted images with tolerable bias. Conclusion: This study demonstrated the feasibility of a direct deep learning-based approach to estimate Ki-Patlak parametric maps without requiring the input function and with fewer passes. This would lead to shorter acquisition times for whole-body dynamic imaging with acceptable bias and comparable lesion detectability.
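For reference, the Ki these networks learn to predict is conventionally the slope of the Patlak plot, C_T(t)/C_p(t) versus ∫C_p dτ / C_p(t), fitted over late frames; a minimal sketch of that reference analysis (synthetic inputs; real analyses estimate t* and use the measured input function):

```python
def patlak_ki(times, tissue, plasma, t_star_index):
    """Ki as the slope of the Patlak plot over frames at or after t* (an index)."""
    # cumulative trapezoidal integral of the plasma input function
    cum = [0.0]
    for i in range(1, len(times)):
        cum.append(cum[-1] + 0.5 * (plasma[i] + plasma[i - 1]) * (times[i] - times[i - 1]))
    xs = [cum[i] / plasma[i] for i in range(t_star_index, len(times))]   # "Patlak time"
    ys = [tissue[i] / plasma[i] for i in range(t_star_index, len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # ordinary least-squares slope
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

The appeal of the deep-learning approach in the abstract is precisely that it bypasses the plasma input `plasma` and most of the dynamic frames that this classical fit requires.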

    Automatic generation of absolute myocardial blood flow images using [15O]H2O and a clinical PET/CT scanner

    PURPOSE: Parametric imaging of absolute myocardial blood flow (MBF) using [15O]H2O enables determination of MBF with high spatial resolution. The aim of this study was to develop a method for generating reproducible, high-quality and quantitative parametric MBF images with minimal user intervention. METHODS: Nineteen patients referred for evaluation of MBF underwent rest and adenosine stress [15O]H2O positron emission tomography (PET) scans. Ascending aorta and right ventricular (RV) cavity volumes of interest (VOIs) were used as input functions. An implementation of a basis function method (BFM) of the single-tissue model with an additional correction for RV spillover was used to generate parametric images. The average segmental MBF derived from parametric images was compared with MBF obtained using nonlinear least-squares regression (NLR) of VOI data. Four segmentation algorithms were evaluated for automatic extraction of input functions. Segmental MBF obtained using these input functions was compared with MBF obtained using manually defined input functions. RESULTS: The average parametric MBF showed a high agreement with NLR-derived MBF [intraclass correlation coefficient (ICC) = 0.984]. For each segmentation algorithm there was at least one implementation that yielded high agreement (ICC > 0.9) with manually obtained input functions, although MBF calculated using each algorithm was at least 10% higher. Cluster analysis with six clusters yielded the highest agreement (ICC = 0.977), together with good segmentation reproducibility (coefficient of variation of MBF <5%). CONCLUSION: Parametric MBF images of diagnostic quality can be generated automatically using cluster analysis and an implementation of a BFM of the single-tissue model with additional RV spillover correction. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00259-011-1730-3) contains supplementary material, which is available to authorized users.
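A basis function method for the single-tissue model precomputes, for a grid of candidate washout rates k2, the convolution of the arterial input with a decaying exponential, then solves a linear least-squares problem for K1 at each grid point and keeps the best fit. A minimal sketch without the RV spillover term (the grid, sampling, and synthetic input are illustrative assumptions):

```python
import math

def bfm_fit(times, tissue, arterial, thetas):
    """Basis-function fit of the single-tissue model C_T = K1 * (C_a conv exp(-k2*t))."""
    dt = times[1] - times[0]  # assumes uniform frame sampling
    best = None
    for theta in thetas:
        # discrete convolution of the arterial input with exp(-theta * t)
        basis = [dt * sum(arterial[j] * math.exp(-theta * (times[i] - times[j]))
                          for j in range(i + 1))
                 for i in range(len(times))]
        denom = sum(b * b for b in basis)
        if denom == 0.0:
            continue
        k1 = sum(b * c for b, c in zip(basis, tissue)) / denom  # linear LS for K1
        rss = sum((c - k1 * b) ** 2 for c, b in zip(tissue, basis))
        if best is None or rss < best[0]:
            best = (rss, k1, theta)
    return {"K1": best[1], "k2": best[2]}
```

Because only K1 enters linearly, the nonlinear search reduces to a one-dimensional sweep over θ = k2, which is what makes voxel-wise parametric imaging fast enough for clinical use.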

    Image-Derived Input Function Derived from a Supervised Clustering Algorithm: Methodology and Validation in a Clinical Protocol Using [11C](R)-Rolipram

    Image-derived input function (IDIF) obtained by manually drawing the carotid arteries (manual-IDIF) can be used reliably in [11C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [11C](R)-rolipram PET scan. For each subject, a metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for the carotid and surrounding tissue; 2) a parametric image of blood weights was created using SVCA; 3) the mask images were inversely normalized to the individual PET image; 4) carotid and surrounding-tissue time activity curves (TACs) were obtained from the weighted and unweighted averages of voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan distribution volume (VT/fP) values obtained by cluster-IDIF were similar to the reference results obtained using arterial data, as well as to those obtained using manual-IDIF; 39 of 51 subjects had a VT/fP error within 10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed a widespread decrease of about 20% in [11C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to a full arterial input function for estimating Logan VT/fP in [11C](R)-rolipram PET clinical scans.
This technique enables fully automated extraction of IDIF and can be applied to other radiotracers with similar kinetics.
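Step 4 of the pipeline, forming the carotid TAC as a weighted average of voxel activities under the SVCA blood-weight image, can be sketched as follows (the data layout is an assumption for illustration: one list of voxel values per time frame):

```python
def weighted_tac(frames, blood_weights):
    """Carotid time-activity curve from per-voxel blood weights (SVCA output)."""
    wsum = sum(blood_weights)
    return [sum(w * v for w, v in zip(blood_weights, frame)) / wsum
            for frame in frames]

# two frames of two voxels; the second voxel is weighted 3x as "blood-like"
print(weighted_tac([[1.0, 2.0], [3.0, 4.0]], [1.0, 3.0]))  # [1.75, 3.75]
```

Weighting voxels by their blood likelihood, rather than averaging a hand-drawn mask, is what removes the operator dependence the abstract highlights.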

    MRI-Based Attenuation Correction in Emission Computed Tomography

    The hybridization of magnetic resonance imaging (MRI) with positron emission tomography (PET) or single photon emission computed tomography (SPECT) enables the collection of an assortment of biological data in spatial and temporal register. However, both PET and SPECT are subject to photon attenuation, a process that degrades image quality and precludes quantification. To correct for the effects of attenuation, the spatial distribution of linear attenuation coefficients (μ-coefficients) within and about the patient must be available. Unfortunately, extracting μ-coefficients from MRI is non-trivial. In this thesis, I explore the problem of MRI-based attenuation correction (AC) in emission tomography. In particular, I began by asking whether MRI-based AC would be more reliable in PET or in SPECT. To this end, I implemented an MRI-based AC algorithm relying on image segmentation and applied it to phantom and canine emission data. The subsequent analysis revealed that MRI-based AC performed better in SPECT than PET, which is interesting since AC is more challenging in SPECT than PET. Given this result, I endeavoured to improve MRI-based AC in PET. One problem that required addressing was that the lungs yield very little signal in MRI, making it difficult to infer their μ-coefficients. By using a pulse sequence capable of visualizing lung parenchyma, I established a linear relationship between MRI signal and the lungs’ μ-coefficients. I showed that applying this mapping on a voxel-by-voxel basis improved quantification in PET reconstructions compared to conventional MRI-based AC techniques. Finally, I envisaged that a framework for MRI-based AC methods would potentiate further improvements.
Accordingly, I identified three ways an MRI can be converted to μ-coefficients: 1) segmentation, wherein the MRI is divided into tissue types and each is assigned a μ-coefficient, 2) registration, wherein a template of μ-coefficients is aligned with the MRI, and 3) mapping, wherein a function maps MRI voxels to μ-coefficients. I constructed an algorithm for each method and catalogued their strengths and weaknesses. I concluded that a combination of approaches is desirable for MRI-based AC. Specifically, segmentation is appropriate for air, fat, and water, mapping is appropriate for lung, and registration is appropriate for bone.
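The concluding hybrid strategy can be sketched as a per-voxel dispatch: a table lookup for segmented tissue classes and a linear signal-to-μ mapping for lung. The table entries below are approximate 511 keV literature values, and the lung slope and intercept are placeholders, not the thesis's fitted mapping:

```python
# approximate linear attenuation coefficients at 511 keV, in cm^-1 (literature values)
MU_TABLE = {"air": 0.0, "fat": 0.090, "water": 0.096, "bone": 0.17}

def mu_from_mri(tissue_class, mri_signal=0.0, lung_slope=1e-4, lung_intercept=0.02):
    """Hybrid μ-assignment: segmentation lookup, except a linear mapping for lung."""
    if tissue_class == "lung":
        # placeholder coefficients; the thesis fits this relation from a
        # pulse sequence that visualizes lung parenchyma
        return lung_slope * mri_signal + lung_intercept
    return MU_TABLE[tissue_class]
```

In a full implementation the "bone" entry would instead come from the registration-based template, per the conclusion above.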

    Segmentation of dual modality brain PET/CT images using the MAP-MRF model

    Author names used in this publication: Michael Fulham and Dagan Feng. Refereed conference paper (2008-2009); Version of Record.
