26 research outputs found

    Validation of low-dose lung cancer PET-CT protocol and PET image improvement using machine learning

    PURPOSE: To conduct a simplified lesion-detection task of a low-dose (LD) PET-CT protocol for frequent lung screening using 30% of the effective PET-CT dose, and to investigate the feasibility of increasing the clinical value of low-statistics scans using machine learning. METHODS: We acquired 33 standard-dose (SD) PET images, of which 13 had actual LD (ALD) PET, and simulated LD (SLD) PET images at seven different count levels from the SD PET scans. We employed image quality transfer (IQT), a machine learning algorithm that performs patch regression to map parameters from low-quality to high-quality images. At each count level, patches extracted from 23 pairs of SD/SLD PET images were used to train three IQT models: global linear, single-tree, and random forest regressions with cubic patch sizes of 3 and 5 voxels. The models were then used to estimate SD images from LD images at each count level for 10 unseen subjects. The lesion-detection task was carried out on matched lesion-present and lesion-absent images. RESULTS: The LD PET-CT protocol yielded lesion detectability with a sensitivity of 0.98 and a specificity of 1. The random forest algorithm with a cubic patch size of 5 allowed a further 11.7% reduction in the effective PET-CT dose without compromising lesion detectability, but underestimated SUV by 30%. CONCLUSION: The LD PET-CT protocol was validated for lesion detection using ALD PET scans. Substantial image quality improvement or additional dose reduction while preserving clinical value can be achieved using machine learning methods, though SUV quantification may be biased and adjustment of our research protocol is required for clinical use.
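
    As a rough illustration of the patch-regression idea behind IQT, the sketch below trains a random forest to map cubic low-dose patches to the matching standard-dose voxels, then applies it voxel by voxel to an unseen low-dose volume. This is a minimal, hypothetical Python/scikit-learn version: the patch size, forest settings, and the simplification of predicting only the centre voxel (rather than a full output patch) are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def extract_patches(volume, size=5):
    """Slide a cubic window over a 3D volume; return flattened patches and centres."""
    r = size // 2
    patches, centers = [], []
    for z in range(r, volume.shape[0] - r):
        for y in range(r, volume.shape[1] - r):
            for x in range(r, volume.shape[2] - r):
                patch = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                patches.append(patch.ravel())
                centers.append((z, y, x))
    return np.asarray(patches), centers


def train_iqt_model(ld_vols, sd_vols, size=5):
    """Fit a random forest mapping low-dose patches to standard-dose centre voxels.

    ld_vols / sd_vols: lists of co-registered 3D numpy arrays (hypothetical data).
    """
    X, y = [], []
    for ld, sd in zip(ld_vols, sd_vols):
        patches, centers = extract_patches(ld, size)
        X.append(patches)
        y.append(np.array([sd[c] for c in centers]))
    model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
    model.fit(np.vstack(X), np.concatenate(y))
    return model


def estimate_sd(model, ld_vol, size=5):
    """Estimate a standard-dose-like volume from an unseen low-dose scan."""
    out = ld_vol.astype(float).copy()
    patches, centers = extract_patches(ld_vol, size)
    preds = model.predict(patches)
    for (z, y, x), v in zip(centers, preds):
        out[z, y, x] = v
    return out
```

    The nested loops keep the example readable but are slow; a practical version would vectorise patch extraction and, as in the abstract, compare patch sizes of 3 and 5 voxels and the three regression models on held-out subjects.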

    Deep learning for improving PET/CT attenuation correction by elastic registration of anatomical data.

    For PET/CT, the CT transmission data are used to correct the PET emission data for attenuation. However, subject motion between the consecutive scans can cause problems for the PET reconstruction. A method to match the CT to the PET would reduce the resulting artifacts in the reconstructed images. This work presents a deep learning technique for inter-modality, elastic registration of PET/CT images for improving PET attenuation correction (AC). The feasibility of the technique is demonstrated for two applications: general whole-body (WB) imaging and cardiac myocardial perfusion imaging (MPI), with a specific focus on respiratory and gross voluntary motion. A convolutional neural network (CNN) was developed and trained for the registration task, comprising two distinct modules: a feature extractor and a displacement vector field (DVF) regressor. It took as input a non-attenuation-corrected PET/CT image pair and returned the relative DVF between them; the network was trained in a supervised fashion using simulated inter-image motion. The 3D motion fields produced by the network were used to resample the CT image volumes, elastically warping them to spatially match the corresponding PET distributions. Performance of the algorithm was evaluated in different independent sets of WB clinical subject data: for recovering deliberate misregistrations imposed in motion-free PET/CT pairs and for improving reconstruction artifacts in cases with actual subject motion. The efficacy of this technique is also demonstrated for improving PET AC in cardiac MPI applications. A single registration network was found to be capable of handling a variety of PET tracers. It demonstrated state-of-the-art performance in the PET/CT registration task and was able to significantly reduce the effects of simulated motion imposed in motion-free, clinical data. Registering the CT to the PET distribution was also found to reduce various types of AC artifacts in the reconstructed PET images of subjects with actual motion. In particular, liver uniformity was improved in the subjects with significant observable respiratory motion. For MPI, the proposed approach yielded advantages for correcting artifacts in myocardial activity quantification and potentially for reducing the rate of the associated diagnostic errors. This study demonstrated the feasibility of using deep learning for registering the anatomical image to improve AC in clinical PET/CT reconstruction. Most notably, this improved common respiratory artifacts occurring near the lung/liver border, misalignment artifacts due to gross voluntary motion, and quantification errors in cardiac PET imaging.
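
    A minimal sketch of the two-module idea described above is given below, assuming a PyTorch implementation: a small 3D feature extractor followed by a convolutional DVF head takes the non-attenuation-corrected PET and CT as input, and the predicted displacement field is used with grid_sample to elastically warp the CT toward the PET. The layer sizes and the names RegistrationNet and warp_with_dvf are illustrative assumptions; the actual network architecture and training pipeline are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegistrationNet(nn.Module):
    """Toy two-module network: shared 3D feature extractor + DVF regressor.

    Input: non-attenuation-corrected PET and CT volumes stacked as channels.
    Output: a 3-channel displacement vector field (dz, dy, dx per voxel).
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dvf_head = nn.Conv3d(32, 3, 3, padding=1)

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)         # (N, 2, D, H, W)
        return self.dvf_head(self.features(x))  # (N, 3, D, H, W)


def warp_with_dvf(ct, dvf):
    """Resample the CT volume along the predicted displacement field."""
    n, _, d, h, w = ct.shape
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
    # The DVF is in voxels; convert to the normalised [-1, 1] grid units of grid_sample.
    disp = torch.stack((dvf[:, 2] * 2 / (w - 1),
                        dvf[:, 1] * 2 / (h - 1),
                        dvf[:, 0] * 2 / (d - 1)), dim=-1)
    return F.grid_sample(ct, base + disp, align_corners=True)


# Example forward pass on placeholder volumes; in supervised training on simulated
# motion, the target DVF is known by construction and drives the loss.
net = RegistrationNet()
pet = torch.rand(1, 1, 32, 32, 32)   # placeholder non-AC PET volume
ct = torch.rand(1, 1, 32, 32, 32)    # placeholder CT volume
dvf = net(pet, ct)
ct_aligned = warp_with_dvf(ct, dvf)  # elastically warped CT for attenuation correction
```

    The warped CT would then replace the original CT in the attenuation-correction step of the PET reconstruction, which is where the artifact reductions reported above come from.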

    Quantitative accuracy and lesion detectability of low-dose 18F-FDG PET for lung cancer screening

    Journal of Nuclear Medicine, 58(3), 399-405. doi:10.2967/jnumed.116.177592