44 research outputs found

    An update on computational anthropomorphic anatomical models

    The widespread availability of high-performance computing, coupled with validated simulation platforms released as open-source packages, has motivated progress in the development of realistic anthropomorphic computational models of the human anatomy. These advanced tools are applied mainly in imaging physics and in computational internal/external radiation dosimetry research. This paper provides an updated review of state-of-the-art developments and recent advances in the design of sophisticated computational models of the human anatomy, with a particular focus on their use in radiation dosimetry calculations. Combining flexible, realistic computational models with biological data and accurate radiation transport modeling tools makes it possible to produce dosimetric data that reflect actual clinical setups. These simulation methodologies and results are helpful resources for the medical physics and medical imaging communities and are expected to have a profound impact on medical imaging and dosimetry calculations.

    Predictive value of 99mTc-MAA-based dosimetry in personalized 90Y-SIRT planning for liver malignancies

    Background: Selective internal radiation therapy (SIRT) with 90Y radioembolization aims to selectively irradiate liver tumours by administering radioactive microspheres, under the theragnostic assumption that a pre-therapy injection of 99mTc-labelled macroaggregated albumin (99mTc-MAA) provides an estimate of the 90Y microsphere biodistribution, which is not always the case. Given the growing interest in theragnostic dosimetry for personalized radionuclide therapy, a robust relationship between the delivered and pre-treatment radiation absorbed doses is required. In this work, we investigate the predictive value of absorbed dose metrics calculated from 99mTc-MAA (simulation) compared to those obtained from 90Y post-therapy SPECT/CT. Results: A total of 79 patients were analysed. Pre- and post-therapy 3D voxel dosimetry was calculated on 99mTc-MAA and 90Y SPECT/CT, respectively, based on the local deposition method. Mean absorbed dose, tumour-to-normal ratio, and absorbed dose distribution in terms of dose-volume histogram (DVH) metrics were obtained and compared for each volume of interest (VOI). The Mann-Whitney U-test and Pearson's correlation coefficient were used to assess the correlation between both methods. The effect of the tumoral liver volume on the absorbed dose metrics was also investigated. A strong correlation was found between simulation and therapy mean absorbed doses for all VOIs, although the simulation tended to overestimate tumour absorbed doses by 26%. DVH metrics also showed good correlation, but significant differences were found for several metrics, mostly in non-tumoral liver. The tumoral liver volume did not significantly affect the differences between simulation and therapy absorbed dose metrics. Conclusion: This study supports the strong correlation between absorbed dose metrics from simulation and therapy dosimetry based on 90Y SPECT/CT, highlighting the predictive ability of 99mTc-MAA, not only in terms of mean absorbed dose but also of the dose distribution.
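    To make the dosimetric workflow above concrete, the following minimal Python sketch applies the local deposition method (all 90Y energy assumed absorbed in the voxel where activity is reconstructed, using the commonly cited ~49.67 Gy·kg/GBq factor for complete in-situ decay), computes a DVH D_x metric, and correlates simulated and therapy mean doses with Pearson's coefficient. All array names and numbers are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.stats import pearsonr

        # Assumed conversion factor: absorbed dose (Gy) per GBq of 90Y per kg
        # of tissue, for complete local decay (standard MIRD approximation).
        Y90_GY_PER_GBQ_KG = 49.67

        def voxel_dose_ldm(activity_gbq, voxel_mass_kg):
            """Local deposition method: all beta energy is deposited in the
            voxel where the activity is reconstructed."""
            return Y90_GY_PER_GBQ_KG * activity_gbq / voxel_mass_kg

        def dvh_dx(dose_voxels, x_percent):
            """D_x: minimum absorbed dose received by the hottest x% of a VOI."""
            d = np.sort(np.asarray(dose_voxels, dtype=float))[::-1]
            n = max(1, int(round(d.size * x_percent / 100.0)))
            return d[:n].min()

        # Toy comparison of MAA-predicted vs 90Y SPECT/CT mean tumour doses (Gy).
        sim_mean = np.array([120.0, 95.0, 210.0, 60.0])
        ther_mean = np.array([98.0, 80.0, 160.0, 52.0])
        r, p = pearsonr(sim_mean, ther_mean)
        print(f"Pearson r = {r:.2f} (p = {p:.3f}); mean overestimation = "
              f"{100 * (sim_mean / ther_mean - 1).mean():.0f}%")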

    Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement.

    PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation dose to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing, tackle privacy issues and build centre-specific models that detect and correct artefacts in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure that only high-quality, artefact-free PET images were used (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential-privacy FTL frameworks. Quantitative analysis was performed on the held-out 20% of clean (artefact-free) data in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (95% CI 0.38 to 0.47), 0.32 ± 0.23 (95% CI 0.27 to 0.37) and 0.28 ± 0.15 (95% CI 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) on the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated into the clinic for 68Ga-PET artefact detection and disentanglement using multicentric heterogeneous datasets.
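    The privacy mechanism at the heart of such a framework can be sketched independently of the U2Net model: each centre clips its model update to bound its sensitivity and adds calibrated Gaussian noise before the server aggregates. The sketch below is a generic differentially private FedAvg round in plain NumPy with hypothetical parameter names; it is a stand-in for the idea, not the authors' implementation.

        import numpy as np

        def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
            """Clip a centre's update to bound its L2 sensitivity, then add
            Gaussian noise scaled to the clip norm (Gaussian mechanism)."""
            if rng is None:
                rng = np.random.default_rng()
            norm = np.linalg.norm(update)
            update = update * min(1.0, clip_norm / (norm + 1e-12))
            return update + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

        def federated_round(global_w, centre_updates, clip_norm=1.0, noise_mult=1.1):
            """One FedAvg round: average the privacy-sanitized centre updates."""
            sanitized = [dp_sanitize(u, clip_norm, noise_mult) for u in centre_updates]
            return global_w + np.mean(sanitized, axis=0)

        # Toy demo: 8 centres each propose an update to a 10-parameter model.
        rng = np.random.default_rng(0)
        w = np.zeros(10)
        updates = [rng.normal(0.1, 0.05, 10) for _ in range(8)]
        w = federated_round(w, updates)
        print(w.round(3))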

    Fully automated accurate patient positioning in computed tomography using anterior–posterior localizer images and a deep neural network: a dual-center study

    Objectives: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. Methods: We included 5754 chest CT axial and anterior–posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of our model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). Results: The error in terms of BCAP was −0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which achieved an error of 9.35 ± 14.94 and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 and −0.27 ± 16.29 mm for C1 and C2, respectively. The error in terms of BCAP and LCAP was higher for larger patients (p value < 0.01). Conclusion: The accuracy of the proposed method was comparable to available alternative methods, carrying the advantage of being free from errors related to objects blocking the camera visibility. Key Points: • Patient mis-centering in the anterior–posterior (AP) direction is a common problem in clinical practice which can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT image localizer, achieving a performance comparable to alternative techniques, such as an external 3D visual camera. • The advantage of the proposed method is that it is free from errors related to objects blocking the camera visibility and that it could be implemented on imaging consoles as a patient positioning support tool.
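    To illustrate the positioning metric reported above, the hedged sketch below derives the AP body centreline from a binary body mask via its bounding box and computes a signed BCAP-style offset in millimetres. The elliptical masks and pixel spacing are invented for the demo; the abstract does not specify these implementation details.

        import numpy as np

        def body_centerline_y(mask_2d):
            """AP (row) coordinate of the body centreline: the centre of the
            tightest bounding box enclosing the segmented body."""
            rows = np.where(mask_2d.any(axis=1))[0]
            return 0.5 * (rows.min() + rows.max())

        def centering_error_mm(pred_mask, gt_mask, pixel_mm=1.0):
            """Signed AP offset (mm) between predicted and ground-truth
            centrelines, i.e. a BCAP-style metric."""
            return (body_centerline_y(pred_mask) - body_centerline_y(gt_mask)) * pixel_mm

        # Toy demo: two elliptical 'bodies' on a 256 x 256 grid, 5 rows apart.
        yy, xx = np.mgrid[0:256, 0:256]
        gt = ((yy - 128) ** 2 / 60 ** 2 + (xx - 128) ** 2 / 100 ** 2) <= 1.0
        pred = ((yy - 133) ** 2 / 60 ** 2 + (xx - 128) ** 2 / 100 ** 2) <= 1.0
        print(f"BCAP-style error: {centering_error_mm(pred, gt, pixel_mm=0.8):.1f} mm")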

    Development and validation of survival prognostic models for head and neck cancer patients using machine learning and dosiomics and CT radiomics features: a multicentric study

    Background: This study aimed to investigate the value of clinical features, radiomic features extracted from gross tumor volumes (GTVs) delineated on CT images, dose distributions (Dosiomics), and fusions of CT and dose distributions for predicting outcomes in head and neck cancer (HNC) patients. Methods: A cohort of 240 HNC patients from five different centers was obtained from The Cancer Imaging Archive. Seven strategies were applied: four non-fusion (Clinical, CT, Dose, DualCT-Dose) and three fusion algorithms (latent low-rank representation (LLRR), wavelet, and weighted least squares (WLS)). The fusion algorithms were used to fuse the pre-treatment CT images and 3-dimensional dose maps. Overall, 215 radiomics and Dosiomics features were extracted from the GTVs, along with seven clinical features. Five feature selection (FS) methods in combination with six machine learning (ML) models were implemented. The performance of the models was quantified using the concordance index (CI) in one-center-leave-out 5-fold cross-validation for overall survival (OS) prediction, considering the time-to-event. Results: The mean CI and Kaplan-Meier curves were used for comparisons. The CoxBoost ML model using the Minimal Depth (MD) FS method and the glmnet model using the Variable Hunting (VH) FS method showed the best performance, with CI = 0.73 ± 0.15 for features extracted from LLRR-fused images. In addition, both the glmnet-Cindex and Coxph-Cindex classifiers achieved a CI of 0.72 ± 0.14 using the dose images (plus the incorporated clinical features) alone. Conclusion: Our results demonstrated that clinical features, Dosiomics, and the fusion of dose and CT images combined with specific ML-FS models could predict the overall survival of HNC patients with acceptable accuracy. Moreover, the performance of the ML methods was almost comparable among the three different strategies.
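    The concordance index used to score these models can be computed directly from survival times, event indicators, and predicted risk scores. Below is a minimal, self-contained implementation of Harrell's C-index for right-censored data; the toy inputs are invented, and the ML-FS pipeline itself is not reproduced.

        import numpy as np

        def harrell_cindex(time, event, risk):
            """Harrell's concordance index for right-censored survival data.
            A pair (i, j) is comparable if subject i has an observed event
            before subject j's follow-up time; it is concordant if subject i
            also has the higher predicted risk. Risk ties count as 0.5."""
            time, event, risk = map(np.asarray, (time, event, risk))
            concordant, comparable = 0.0, 0
            for i in range(len(time)):
                if not event[i]:
                    continue  # censored subjects cannot anchor a comparable pair
                for j in range(len(time)):
                    if time[j] > time[i]:
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable if comparable else float("nan")

        # Toy example: survival months, event indicator, predicted risk score.
        t = [12, 30, 24, 6, 40]
        e = [1, 0, 1, 1, 0]
        r = [0.9, 0.2, 0.5, 0.95, 0.1]
        print(f"C-index = {harrell_cindex(t, e, r):.2f}")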

    Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks

    Objective: We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. Methods: The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned with two tube voltages, through transfer learning, with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations were performed in terms of mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %) and relative absolute error (RAE, %). Results: The model performance for the 120 kVp and TCM test set in terms of voxel-wise ME, MAE, RE and RAE was -0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, -1.13 ± 1.41% and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, were -0.144 ± 0.342 mGy (ME), 0.23 ± 0.28 mGy (MAE), -1.11 ± 2.90% (RE) and 2.34 ± 2.03% (RAE). Conclusion: Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. Clinical relevance statement: We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant since accurate dose calculation can be carried out for patients within acceptable computational time, in contrast to lengthy Monte Carlo calculations. Key points: • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation. • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation. • By generating a dose distribution from a single source position, our model can generate accurate and personalized dose maps for a wide range of acquisition parameters.
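    The four error metrics named in the abstract are straightforward to compute voxel-wise once a predicted and a reference dose map are available. The sketch below assumes simple global definitions (means over all voxels); whether the authors average per organ or mask low-dose voxels is not specified, so treat the exact definitions as assumptions.

        import numpy as np

        def dose_error_metrics(pred, ref, eps=1e-9):
            """Voxel-wise agreement between a DNN-predicted dose map and a
            Monte Carlo reference: ME/MAE in mGy, RE/RAE in percent."""
            pred = np.asarray(pred, dtype=float)
            ref = np.asarray(ref, dtype=float)
            diff = pred - ref
            return {
                "ME_mGy": diff.mean(),
                "MAE_mGy": np.abs(diff).mean(),
                "RE_%": 100.0 * (diff / (ref + eps)).mean(),
                "RAE_%": 100.0 * (np.abs(diff) / (ref + eps)).mean(),
            }

        # Toy demo on a synthetic 3D dose volume (values in mGy).
        rng = np.random.default_rng(1)
        mc = rng.uniform(1.0, 10.0, size=(32, 32, 32))
        dnn = mc + rng.normal(0.0, 0.1, size=mc.shape)
        for name, value in dose_error_metrics(dnn, mc).items():
            print(f"{name}: {value:.3f}")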

    Global prevalence of Neospora caninum in rodents: A systematic review and meta‐analysis

    Background: Neosporosis is considered a cause of abortion in dairy and beef cattle worldwide, and rodents are reservoir hosts for several infectious diseases. Determining the prevalence of Neospora caninum in rodents is necessary to improve the current understanding of the transmission dynamics of Neospora, its life cycle, and the risk of transmission to livestock. The objective of the present study was therefore to estimate the pooled global prevalence of N. caninum in different rodent species. Methods: Published studies on the prevalence of N. caninum in different rodent species were searched in MEDLINE/PubMed, ScienceDirect, Web of Science, Scopus and Google Scholar, as well as in the reference lists of the retrieved articles, up to July 30, 2022. Eligible studies were selected using inclusion and exclusion criteria, and the extracted data were verified and analysed using random-effects meta-analysis. Results: A total of 4372 rodents from 26 eligible studies were included in this meta-analysis. The global prevalence of N. caninum in rodents was estimated at 5% (95% CI 2%-9%), with the highest prevalence in Asia (12%; 95% CI 6%-24%) and the lowest in America (3%; 95% CI 1%-14%) and Europe (3%; 95% CI 1%-6%). N. caninum was more prevalent in females (4%; 95% CI 2%-9%) than in males (3%; 95% CI 1%-11%). The most common diagnostic test was polymerase chain reaction (PCR) (21 studies). The pooled prevalence of N. caninum in rodents by diagnostic method was as follows: immunohistochemistry, 11% (95% CI 6%-20%); NAT, 5% (95% CI 4%-7%); IFAT, 5% (95% CI 2%-13%); and PCR, 3% (95% CI 1%-9%). Conclusion: The results of this study showed a relatively low but widespread prevalence of N. caninum infection in rodents.
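    A random-effects pooling of prevalences like the one described can be sketched with the DerSimonian-Laird estimator on the logit scale. The code below is a generic textbook implementation with invented survey counts, not the authors' analysis.

        import math

        def pooled_prevalence_dl(events, totals):
            """DerSimonian-Laird random-effects pooling of study prevalences
            on the logit scale (0.5 continuity correction at the boundaries).
            Returns the pooled proportion and its 95% confidence interval."""
            y, v = [], []
            for e, n in zip(events, totals):
                if e == 0 or e == n:
                    e, n = e + 0.5, n + 1.0  # continuity correction
                y.append(math.log(e / (n - e)))      # logit of prevalence
                v.append(1.0 / e + 1.0 / (n - e))    # approximate variance
            w = [1.0 / vi for vi in v]
            ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
            q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
            c = sum(w) - sum(wi * wi for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c) if c > 0 else 0.0
            w_re = [1.0 / (vi + tau2) for vi in v]   # random-effects weights
            y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
            se = math.sqrt(1.0 / sum(w_re))
            expit = lambda x: 1.0 / (1.0 + math.exp(-x))
            return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

        # Toy demo with three hypothetical rodent surveys (positives, sampled).
        p, ci = pooled_prevalence_dl([5, 20, 2], [100, 250, 80])
        print(f"pooled prevalence = {p:.1%}, 95% CI {ci[0]:.1%} to {ci[1]:.1%}")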
