
    Using domain knowledge for robust and generalizable deep learning-based CT-free PET attenuation and scatter correction.

    Despite the potential of deep learning (DL)-based methods to substitute for CT-based PET attenuation and scatter correction in CT-free PET imaging, a critical bottleneck is their limited capability to handle the large heterogeneity of tracers and scanners in PET imaging. This study introduces a simple way to integrate domain knowledge into DL for CT-free PET imaging. In contrast to conventional direct DL methods, we simplify the complex problem via a domain decomposition, so that the anatomy-dependent attenuation correction can be learned robustly in a low-frequency domain while the original anatomy-independent high-frequency texture is preserved during processing. Even when trained on a single tracer and scanner, the effectiveness and robustness of the proposed approach are confirmed in tests with various external imaging tracers on different scanners. Such robust, generalizable, and transparent DL development may enhance the potential for clinical translation.
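    The domain decomposition described above can be sketched in a few lines: split each image into a smooth low-frequency part (where the anatomy-dependent correction is learned) and a high-frequency texture residual that is added back unchanged. The Gaussian low-pass split and the identity "correction" below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=4.0):
    """Split an image into a smooth low-frequency component and a
    high-frequency texture residual via Gaussian low-pass filtering."""
    low = gaussian_filter(image, sigma=sigma)
    high = image - low
    return low, high

def correct_with_texture_preserved(image, correct_low, sigma=4.0):
    """Apply a correction model only to the low-frequency component,
    then add the untouched high-frequency texture back on top."""
    low, high = decompose(image, sigma)
    return correct_low(low) + high

# Toy usage: an identity "correction" must leave the image unchanged,
# confirming that the texture path is lossless.
img = np.random.rand(64, 64)
out = correct_with_texture_preserved(img, lambda x: x)
assert np.allclose(out, img)
```

    Because the high-frequency residual bypasses the network entirely, tracer- and scanner-specific texture cannot be distorted by the learned model, which is the source of the robustness claimed in the abstract.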

    A cross-scanner and cross-tracer deep learning method for the recovery of standard-dose imaging quality from low-dose PET.

    PURPOSE: A critical bottleneck for the credibility of artificial intelligence (AI) is replicating its results across the diversity of clinical practice. We aimed to develop an AI method that can be independently applied to recover high-quality imaging from low-dose scans on different scanners and tracers. METHODS: Brain [(18)F]FDG PET imaging of 237 patients scanned with one scanner was used for the development of the AI technology. The developed algorithm was then tested on [(18)F]FDG PET images of 45 patients scanned with three different scanners, [(18)F]FET PET images of 18 patients scanned with two different scanners, and [(18)F]Florbetapir images of 10 patients. A conditional generative adversarial network (GAN) was customized for cross-scanner and cross-tracer optimization. Three nuclear medicine physicians independently assessed the utility of the results in a clinical setting. RESULTS: The improvement achieved by AI recovery correlated significantly with the baseline image quality, indicated by the structural similarity index measure (SSIM) (r = −0.71, p < 0.05), and with the normalized acquisition dose (r = −0.60, p < 0.05). Our cross-scanner and cross-tracer AI methodology showed utility in both physical and clinical image assessment (p < 0.05). CONCLUSION: Deep learning developed for extensible application to unknown scanners and tracers may improve the trustworthiness and clinical acceptability of AI-based dose reduction. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00259-021-05644-1.
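    The reported negative correlation between baseline image quality and the improvement achieved by AI recovery can be checked with an ordinary Pearson correlation. The per-patient numbers below are hypothetical, chosen only to illustrate the trend the abstract describes (poorer baselines gain the most):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-patient data: baseline SSIM of the low-dose scan
# and the quality improvement achieved by the recovery network.
baseline_ssim = np.array([0.62, 0.70, 0.75, 0.81, 0.88, 0.93])
improvement   = np.array([0.21, 0.17, 0.14, 0.10, 0.06, 0.03])

r, p = pearsonr(baseline_ssim, improvement)
# A strongly negative r mirrors the paper's finding (r = -0.71):
# the lower the baseline quality, the larger the AI gain.
assert r < -0.9 and p < 0.05
```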

    Clinical performance of long axial field of view PET/CT: a head-to-head intra-individual comparison of the Biograph Vision Quadra with the Biograph Vision PET/CT.

    PURPOSE: To investigate the performance of the new long axial field-of-view (LAFOV) Biograph Vision Quadra PET/CT against a standard axial field-of-view (SAFOV) Biograph Vision 600 PET/CT system (both: Siemens Healthineers) using an intra-patient comparison. METHODS: Forty-four patients undergoing routine oncological PET/CT were prospectively included and underwent a same-day dual-scanning protocol following a single administration of either 18F-FDG (n = 20), 18F-PSMA-1007 (n = 16) or 68Ga-DOTA-TOC (n = 8). Half of the patients first received a clinically routine examination on the SAFOV (FOVaxial 26.3 cm) in continuous bed motion and immediately afterwards on the LAFOV system (10-min acquisition in list mode, FOVaxial 106 cm); the second half underwent scanning in the reverse order. Comparisons between the LAFOV at different emulated scan times (obtained by rebinning the list-mode data) and the SAFOV were made for target lesion integral activity, signal-to-noise ratio (SNR), target lesion to background ratio (TBR) and visual image quality. RESULTS: Target lesion integral activity equivalent to the SAFOV acquisitions (16-min duration for a 106 cm FOV) was obtained on the LAFOV in 1.63 ± 0.19 min (mean ± standard error). Equivalent SNR was obtained with 1.82 ± 1.00 min LAFOV acquisitions. No statistically significant differences (p > 0.05) in TBR were observed even for 0.5-min LAFOV examinations. Subjective image quality rated by two physicians confirmed the 10-min LAFOV acquisitions to be of the highest quality, with equivalence between the LAFOV and the SAFOV at 1.8 ± 0.85 min. By analogy, if the LAFOV scan duration were kept at 10 min, a proportional reduction in administered radiopharmaceutical activity to under 40 MBq could yield equivalent lesion integral activity, with an effective dose for the PET component of <1 mSv.
CONCLUSION: Improved image quality, lesion quantification and SNR resulting from the higher sensitivity were demonstrated for the LAFOV system in a head-to-head comparison under clinical conditions. The LAFOV system could deliver images of comparable quality and lesion quantification in under 2 min, compared with a routine SAFOV acquisition (16 min for equivalent FOV coverage). Alternatively, the LAFOV system could allow low-dose examination protocols. Shorter LAFOV acquisitions (0.5 min), while of lower visual quality and SNR, were of adequate quality with respect to target lesion identification, suggesting that ultra-fast or low-dose acquisitions can be acceptable in selected settings.
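    The "emulated scan times" above come from rebinning list-mode data: because every coincidence event carries a timestamp, a shorter acquisition can be emulated after the fact by keeping only the events from a shorter time window. A minimal sketch of that idea, with synthetic timestamps standing in for real list-mode data:

```python
import numpy as np

def rebin_listmode(event_times_s, scan_seconds):
    """Emulate a shorter acquisition by keeping only coincidence
    events recorded within the first `scan_seconds` of the scan."""
    event_times_s = np.asarray(event_times_s)
    return event_times_s[event_times_s < scan_seconds]

# Toy example: a 10-min (600 s) list-mode acquisition thinned to 2 min.
rng = np.random.default_rng(0)
full = np.sort(rng.uniform(0, 600, size=100_000))  # synthetic event times
short = rebin_listmode(full, 120)
assert short.max() < 120          # only events from the first 2 min remain
```

    Reconstructing images from each truncated event stream then gives matched images at 10, 1.8, 0.5 min etc. from a single injection, which is what makes the intra-patient time-equivalence comparison possible.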

    Development of a deep learning method for CT-free correction for an ultra-long axial field of view PET scanner.

    INTRODUCTION: The possibility of low-dose positron emission tomography (PET) imaging with high-sensitivity long axial field-of-view (FOV) PET/computed tomography (CT) scanners makes the CT a critical radiation burden in clinical applications. Artificial intelligence has shown the potential to generate corrected PET images from non-corrected PET images. Our aim in this work is to develop a CT-free correction for a long axial FOV PET scanner. METHODS: Whole-body PET images of 165 patients scanned with a digital regular-FOV PET scanner (Biograph Vision 600, Siemens Healthineers; in Shanghai and Bern) were included for the development and testing of the deep learning method. Furthermore, the developed algorithm was tested on data from 7 patients scanned with a long axial FOV scanner (Biograph Vision Quadra, Siemens Healthineers). A 2D generative adversarial network (GAN) was developed featuring a residual dense block, which enables the model to fully exploit hierarchical features from all network layers. The normalized root mean squared error (NRMSE) and peak signal-to-noise ratio (PSNR) were calculated to evaluate the results generated by deep learning. RESULTS: The preliminary results showed that the developed deep learning method achieved an average NRMSE of 0.4±0.3% and PSNR of 51.4±6.4 for the test on the Biograph Vision, and an average NRMSE of 0.5±0.4% and PSNR of 47.9±9.4 for the validation on the Biograph Vision Quadra after applying transfer learning. CONCLUSION: The developed deep learning method shows potential for CT-free AI correction for a long axial FOV PET scanner. Work in progress includes clinical assessment of the PET images by independent nuclear medicine physicians. Training and fine-tuning with more datasets will be performed to further consolidate the development.
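    The two evaluation metrics used above, NRMSE and PSNR, are standard and easy to compute. The definitions below are common conventions (range normalization for NRMSE, reference maximum as the PSNR peak); exact normalization choices vary between papers, so treat this as an illustrative sketch rather than the authors' precise formulas:

```python
import numpy as np

def nrmse(reference, test):
    """Root mean squared error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse)

# Toy check: identical images give zero NRMSE; a small perturbation
# still yields a high PSNR.
ref = np.random.rand(32, 32)
assert nrmse(ref, ref.copy()) == 0.0
assert psnr(ref, ref + 0.01) > 30
```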