    Quantitative PET image reconstruction employing nested expectation-maximization deconvolution for motion compensation

    Bulk body motion may randomly occur during PET acquisitions, introducing blurring, attenuation-emission mismatches and, in dynamic PET, discontinuities in the measured time activity curves between consecutive frames. Meanwhile, dynamic PET scans are longer, thus increasing the probability of bulk motion. In this study, we propose a streamlined 3D PET motion-compensated image reconstruction (3D-MCIR) framework, capable of robustly deconvolving intra-frame motion from a static or dynamic 3D sinogram. The presented 3D-MCIR methods do not need to partition the data into multiple gates, as 4D MCIR algorithms do, or to access list-mode (LM) data, as LM MCIR methods do, both of which are associated with increased computation or memory resources. The proposed algorithms can support compensation for any periodic and non-periodic motion, such as cardio-respiratory or bulk motion, the latter including rolling, twisting or drifting. Inspired by the widely adopted point-spread function (PSF) deconvolution 3D PET reconstruction techniques, here we introduce an image-based 3D generalized motion deconvolution method within the standard 3D maximum-likelihood expectation-maximization (ML-EM) reconstruction framework. In particular, we initially integrate a motion blurring kernel, accounting for every tracked motion within a frame, as an additional ML-EM modeling component in the image space (integrated 3D-MCIR). Subsequently, we replace the integrated model component with a nested iterative Richardson-Lucy (RL) image-based deconvolution method to accelerate the ML-EM algorithm convergence rate (RL-3D-MCIR). The final method was evaluated with realistic simulations of whole-body dynamic PET data employing the XCAT phantom and real human bulk motion profiles, the latter estimated from volunteer dynamic MRI scans. In addition, metabolic uptake rate Ki parametric images were generated with the standard Patlak method.
    Our results demonstrate significant improvement in contrast-to-noise ratio (CNR) and noise-bias performance in both dynamic and parametric images. The proposed nested RL-3D-MCIR method is implemented on the Software for Tomographic Image Reconstruction (STIR) open-source platform and is scheduled for public release.
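The core idea of the integrated 3D-MCIR approach, modeling tracked intra-frame motion as an image-space blurring kernel inside the ML-EM update, can be illustrated with a minimal 1D toy sketch. This is not the STIR implementation; the projector (here simply the identity), the two-pose motion kernel, and all sizes are illustrative assumptions.

```python
import numpy as np

def blur(x, kernel):
    # Image-space motion-blurring operator B (circular convolution via FFT)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)))

def blur_adjoint(x, kernel):
    # Adjoint B^T: for a real kernel, convolution with the flipped kernel
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(kernel))))

def mlem_motion(y, A, kernel, n_iter=200):
    """Integrated motion-compensated ML-EM: the forward model is A @ B(x),
    so the multiplicative update becomes
        x <- x * B^T A^T (y / (A B x)) / (B^T A^T 1)."""
    x = np.ones(A.shape[1])
    sens = blur_adjoint(A.T @ np.ones(A.shape[0]), kernel)  # B^T A^T 1
    for _ in range(n_iter):
        proj = A @ blur(x, kernel)                 # forward project A B x
        ratio = y / np.maximum(proj, 1e-12)        # data / model ratio
        x *= blur_adjoint(A.T @ ratio, kernel) / np.maximum(sens, 1e-12)
    return x

# Toy experiment: a boxcar tracer distribution, motion averaging two poses
n = 32
A = np.eye(n)                                      # trivial stand-in projector
kernel = np.zeros(n); kernel[0] = 0.5; kernel[3] = 0.5  # two motion positions
x_true = np.zeros(n); x_true[10:14] = 1.0
y = A @ blur(x_true, kernel)                       # noiseless blurred data
x_rec = mlem_motion(y, A, kernel)
```

With the identity projector this update reduces to a Richardson-Lucy deconvolution of the motion blur, which is precisely the component that the nested RL-3D-MCIR variant iterates separately inside each ML-EM step to speed up convergence.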

    Overall Survival Prediction in Gliomas Using Region-Specific Radiomic Features

    In this paper, we explored the predictive performance of region-specific radiomic models for the overall survival classification task on the BraTS 2019 dataset. We independently trained three radiomic models: a single-region model, which included radiomic features from the whole tumor (WT) region only; a 3-subregions model, which included radiomic features from the non-enhancing tumor (NET), enhancing tumor (ET), and edema (ED) subregions; and a 6-subregions model, which included features from the left and right cerebral cortex, the left and right cerebral white matter, and the left and right lateral ventricle subregions. The 3-subregions model relied on a physiology-based subdivision of the WT for each subject. The 6-subregions model relied on an anatomy-based segmentation of tumor-affected regions for each subject, obtained by diffeomorphic registration with the Harvard-Oxford subcortical atlas. For each radiomics model, a subset of the most predictive features was selected by ElasticNetCV and used to train a Random Forest classifier. Our results showed that the 6-subregions radiomics model outperformed the 3-subregions and WT radiomic models on the BraTS 2019 training and validation datasets, achieving a classification accuracy of 47.1% on the training dataset and 55.2% on the validation dataset. Among the single-subregion models, the Edema and Left Lateral Ventricle radiomics models yielded the highest classification accuracy on the training and validation datasets.
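The feature-selection-plus-classifier pipeline described above (ElasticNetCV selecting the most predictive radiomic features, followed by a Random Forest classifier) can be sketched with scikit-learn. The feature matrix below is random stand-in data, not actual BraTS radiomic features, and the pipeline hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Hypothetical stand-in for a radiomic feature matrix: 60 subjects x 20 features
X = rng.normal(size=(60, 20))
# Synthetic binary survival-class label driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

pipe = Pipeline([
    # ElasticNetCV fits a cross-validated elastic-net model; SelectFromModel
    # keeps only the features with non-negligible coefficients
    ("select", SelectFromModel(ElasticNetCV(cv=3, random_state=0))),
    # Random Forest classifier trained on the selected feature subset
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipe.fit(X, y)
acc = pipe.score(X, y)  # training accuracy on the synthetic data
```

In practice each region-specific model would be fit on its own feature subset (WT, 3-subregions, or 6-subregions) and evaluated on a held-out validation split rather than on the training data as done here for brevity.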