16 research outputs found

    Evaluation of importance estimators in deep learning classifiers for Computed Tomography

    Deep learning has shown superb performance in detecting objects and classifying images, holding great promise for analyzing medical imaging. Translating the success of deep learning to medical imaging, in which doctors need to understand the underlying process, requires the capability to interpret and explain the predictions of neural networks. Interpretability of deep neural networks often relies on estimating the importance of input features (e.g., pixels) with respect to the outcome (e.g., class probability). However, a number of importance estimators (also known as saliency maps) have been developed, and it is unclear which ones are more relevant for medical imaging applications. In the present work, we investigated the performance of several importance estimators in explaining the classification of computed tomography (CT) images by a convolutional deep network, using three distinct evaluation metrics. First, the model-centric fidelity measures the decrease in model accuracy when certain inputs are perturbed. Second, concordance between importance scores and expert-defined segmentation masks is measured at the pixel level by receiver operating characteristic (ROC) curves. Third, we measure region-wise overlap between an XRAI-based map and the segmentation mask by the Dice Similarity Coefficient (DSC). Overall, two versions of SmoothGrad topped the fidelity and ROC rankings, whereas both Integrated Gradients and SmoothGrad excelled in the DSC evaluation. Interestingly, there was a critical discrepancy between model-centric (fidelity) and human-centric (ROC and DSC) evaluation. Expert expectation and intuition embedded in segmentation masks do not necessarily align with how the model arrived at its prediction. 
Understanding this difference in interpretability would help harness the power of deep learning in medicine. Comment: 4th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems (EXTRAAMAS 2022), held at the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2022)
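To make the two human-centric metrics concrete, here is a minimal numpy sketch (an illustration, not the authors' code; array shapes, binarization, and tie handling are simplifying assumptions) of the pixel-level ROC AUC and the Dice Similarity Coefficient between an importance map and a segmentation mask:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Region-wise overlap (DSC) between a binarized importance map and a segmentation mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    denom = pred.sum() + true.sum()
    return 2.0 * np.logical_and(pred, true).sum() / denom if denom else 1.0

def pixel_roc_auc(importance, mask):
    """Pixel-level ROC AUC: probability that a mask pixel outranks a background pixel."""
    scores = np.asarray(importance, dtype=float).ravel()
    labels = np.asarray(mask, dtype=bool).ravel()
    ranks = scores.argsort().argsort() + 1  # 1-based ranks (ties not handled)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    # Mann-Whitney U statistic normalized to [0, 1]
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An importance map that ranks every lesion pixel above every background pixel scores an AUC of 1.0; the DSC instead rewards region-wise agreement after thresholding, which is why the two rankings in the abstract can diverge.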

    Optimization of diagnostic, therapeutic and dosimetric protocols in nuclear medicine through the development of computational models and the use of Monte Carlo simulations

    In the present thesis, entitled “Evaluation of Diagnostic, Therapeutic and Dosimetric Applications in Nuclear Medicine, with the Development of Computational Models and the Use of Monte Carlo Simulations”, the state-of-the-art techniques applied in the Nuclear Medicine field are investigated. By exploiting modern tools, such as Monte Carlo simulations, anthropomorphic computational models and high-performance computing clusters, as well as clinical data, we assess several parameters for the optimization and evaluation of the clinical protocols applied in Nuclear Medicine. Initially, we performed an extensive literature review on all the topics investigated in this thesis. More specifically, the bibliographic investigation includes the history and the description of emission-based imaging techniques (single-photon and positron emission tomography, SPECT and PET), as well as the state of the art in the most recent approaches for the creation of realistic Monte Carlo imaging databases. The methods used in modeling clinical and preclinical detection systems are also reviewed. In parallel, the evolution of the anthropomorphic computational models used in the Nuclear Medicine field is presented. The literature investigation is completed with a study of the dosimetric protocols used in clinical practice. A detailed analysis and historical review is given of the calculation of dose point kernels (DPKs), which are commonly used in diagnostic and therapeutic dosimetry. The theoretical background on dosimetry is completed with the study of personalized dosimetric factors, such as the calculation of S-values. In the present thesis, the evaluation of the clinical protocols is performed along two main directions. Based on the Monte Carlo simulation “ground truth”, we try to optimize: diagnostic/imaging techniques and dosimetric/therapeutic protocols. These two main axes are described in detail in Chapters 2 and 3, respectively. 
In Chapter 2, the description and the results of the modeled imaging systems are presented. Moreover, a comparison study of realistic intra-tumor heterogeneity PET modeling is performed, based on real clinical data. Finally, a full description of the simulated imaging database that was created is given, including clinical and preclinical PET and SPECT data. In Chapter 3, we provide for the first time the entire calculation procedure of the DPKs for three different materials (soft tissue/water, lung and bone) for a set of 13 commonly used radioisotopes. The DPKs were extracted using the GATE Monte Carlo toolkit, which was first validated for dosimetry calculations using monoenergetic i) photons (γ), ii) electrons (e−) and iii) beta (β−) particles. We continued our internal dose assessment study by calculating S-values in preclinical applications. The S-values were calculated using whole-body biodistributions as a source, and the procedure was validated against previously published data. Accordingly, whole-body (heterogeneous source) S-values were extracted for the optimization of pediatric nuclear medicine applications, so as to accurately calculate the absorbed dose per critical organ of interest. The biodistributions used in the pediatric studies were based on clinical data. The thesis concludes with a discussion and analysis of the results obtained in each section, and future steps are suggested so as to better exploit the presented results towards their application in clinical practice.

    Development and evaluation of QSPECT open-source software for the iterative reconstruction of SPECT images

    Objective: In this study, open-source software (QSPECT) suitable for the iterative reconstruction of single-photon emission computed tomography (SPECT) data is presented. QSPECT implements the maximum-likelihood expectation maximization (MLEM) and ordered-subsets expectation maximization (OSEM) algorithms in a user-friendly graphical interface. The software functionality is described and validation results are presented. Methods: The MLEM and OSEM algorithms are implemented in C++. The Qt toolkit, a standard C++ framework for developing high-performance cross-platform applications, was used for the graphical user interface. QSPECT was tested using original projection data from two clinical SPECT systems: (i) APEX SPX-6/6HR and (ii) Millennium MG. Phantom experiments were carried out to evaluate the quality of the reconstructed images in terms of (i) spatial resolution, (ii) sensitivity to activity variations, and (iii) the presence of scatter media. A cardiac phantom was used to simulate normal and abnormal scenarios. Finally, clinical cardiac SPECT images were reconstructed. In all cases, QSPECT results were compared with those of the clinical systems' reconstruction software, which uses the standard filtered backprojection algorithm. Results: The reconstructed images show that QSPECT, compared with standard clinical reconstruction, provides images with higher contrast, reduced background, and better separation of small sources located at small distances from each other. In addition, reconstruction with QSPECT provides more quantitative images and reduces the background created by scatter media. Finally, the phantom and clinical cardiac images are reconstructed with similar quality. Conclusion: QSPECT is a freely distributed, open-source standalone application that provides real-time, high-quality SPECT images. 
The software can be further modified to improve the reconstruction algorithms and to include more correction techniques, such as scatter and attenuation correction. Nucl Med Commun 31: 558-566. © 2010 Wolters Kluwer Health | Lippincott Williams & Wilkins
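The multiplicative MLEM update at the core of this kind of reconstruction can be sketched as follows. This is a generic, dense-matrix illustration for small problems, not the QSPECT C++ implementation; the system matrix and iteration count are assumptions:

```python
import numpy as np

def mlem(system_matrix, projections, n_iters=20):
    """Maximum-likelihood EM reconstruction: image *= A^T(y / A x) / A^T 1 per iteration."""
    A = np.asarray(system_matrix, dtype=float)   # (n_bins, n_voxels) forward model
    y = np.asarray(projections, dtype=float)     # measured projection counts
    sensitivity = A.sum(axis=0)                  # A^T 1, per-voxel normalization
    image = np.ones(A.shape[1])                  # flat, strictly positive initial estimate
    for _ in range(n_iters):
        forward = A @ image                      # expected counts for current estimate
        # Guard against division by zero in empty projection bins.
        ratio = np.divide(y, forward, out=np.zeros_like(forward), where=forward > 0)
        image *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return image
```

OSEM accelerates convergence by applying the same update over ordered subsets of the projection rows instead of the full matrix in each pass.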

    Ionizing Radiation and Complex DNA Damage: Quantifying the Radiobiological Damage Using Monte Carlo Simulations

    Ionizing radiation is a common tool in medical procedures. Monte Carlo (MC) techniques are widely used when dosimetry is under investigation. Over the last 20 years, the scientific community has invested considerable effort in improving our knowledge of radiation biology. The present article aims to summarize the understanding of the field of the DNA damage response (DDR) to ionizing radiation by providing an overview of MC simulation studies that try to explain several aspects of radiation biology. The need for accurate techniques for the quantification of DNA damage is crucial, as there is a clinical need to evaluate the outcome of various applications, including both low- and high-energy radiation medical procedures. Understanding DNA repair processes would improve radiation therapy procedures. Monte Carlo simulations are a promising tool in radiobiology studies, as there are clear prospects for more advanced tools that could be used in multidisciplinary studies in the fields of physics, medicine, biology and chemistry. Still, considerable effort is needed to evolve MC simulation tools and apply them in multiscale studies, starting from small DNA segments and reaching a population of cells.

    Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization

    Over the last decade there has been an extensive evolution in the Artificial Intelligence (AI) field. Modern radiation oncology is based on the exploitation of advanced computational methods aiming at personalization and high diagnostic and therapeutic precision. The quantity of available imaging data and the rapid development of Machine Learning (ML), particularly Deep Learning (DL), have triggered research into uncovering "hidden" biomarkers and quantitative features from anatomical and functional medical images. Deep Neural Networks (DNNs) have achieved outstanding performance and broad adoption in image-processing tasks. Lately, DNNs have been considered for radiomics, and their potential for explainable AI (XAI) may help classification and prediction in clinical practice. However, most of them use limited datasets and lack generalized applicability. In this study we review the basics of radiomics feature extraction, DNNs in image analysis, and the major interpretability methods that help enable explainable AI. Furthermore, we discuss the crucial requirement of multicenter recruitment of large datasets, which increases biomarker variability, so as to establish the potential clinical value of radiomics and to develop robust explainable AI models.

    Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: creation of an oncology database

    The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating heterogeneous tumor activity distributions. The reconstructed patient images include noise from the acquisition process and the imaging system's performance restrictions, and have limited spatial resolution. For those reasons, the measured intensity cannot simply be introduced into GATE simulations to reproduce clinical data. The heterogeneity distribution within tumors was investigated by applying partial volume correction (PVC) algorithms. The purpose of the present study was to create a simulated oncology database, based on clinical data, with realistic intratumor uptake heterogeneity properties.

    A radiomic‐ and dosiomic‐based machine learning regression model for pretreatment planning in 177Lu‐DOTATATE therapy

    Background: Standardized patient‐specific pretreatment dosimetry planning is mandatory in the modern era of nuclear molecular radiotherapy and may eventually lead to improvements in the final therapeutic outcome. Only a comprehensive definition of a dosage therapeutic window, encompassing the range of absorbed doses that is helpful without being detrimental, can lead to therapy individualization and improved outcomes. As a result, setting absorbed-dose safety limits for organs at risk (OARs) requires knowledge of the absorbed dose–effect relationship. Data sets of consistent and reliable inter‐center dosimetry findings are required to characterize this relationship. Purpose: We developed and standardized a new pretreatment planning model consisting of a predictive dosimetry procedure for OARs in patients with neuroendocrine tumors (NETs) treated with 177Lu‐DOTATATE (Lutathera). In the retrospective study described herein, we used machine learning (ML) regression algorithms to predict absorbed doses in OARs by exploiting a combination of radiomic and dosiomic features extracted from patients' imaging data. Methods: Pretreatment and posttreatment data for 20 patients with NETs treated with 177Lu‐DOTATATE were collected from two clinical centers. A total of 3412 radiomic and dosiomic features were extracted from the patients' computed tomography (CT) scans and dose maps, respectively. All dose maps were generated using Monte Carlo simulations. An ML regression model was designed for predicting the absorbed dose in every OAR (liver, left kidney, right kidney, and spleen) before and after the therapy and between therapy sessions, thus predicting any possible radiotoxic effects. Results: We evaluated nine ML regression algorithms. 
Our predictive model achieved a mean absolute dose error (MAE, in Gy) of 0.61 for the liver, 1.58 for the spleen, 1.30 for the left kidney, and 1.35 for the right kidney between pretherapy 68Ga‐DOTATOC positron emission tomography (PET)/CT and posttherapy 177Lu‐DOTATATE single-photon emission computed tomography (SPECT)/CT scans. The best predictive performance was observed with gradient boosting for the liver, the left kidney and the right kidney, and with the extra-trees regressor for the spleen. Evaluating the model's ability to predict the absorbed dose in each OAR for every possible combination of the pretherapy 68Ga‐DOTATOC PET/CT scan and any posttherapy 177Lu‐DOTATATE treatment-cycle SPECT/CT scan, as well as between any 177Lu‐DOTATATE SPECT/CT treatment cycle and the subsequent one, revealed mean absorbed-dose differences ranging from −0.55 to 0.68 Gy. Incorporating radiodosiomic features from the 68Ga‐DOTATOC PET/CT and first 177Lu‐DOTATATE SPECT/CT treatment-cycle scans further improved the precision and minimized the standard deviation of the predictions in nine out of 12 instances. An average improvement of 57.34% was observed (range: 17.53%–96.12%). However, it is important to note that in three instances (i.e., Ga,C.1 → C3 in the spleen and left kidney, and Ga,C.1 → C2 in the right kidney) we did not observe an improvement (absolute differences of 0.17, 0.08, and 0.05 Gy, respectively). Wavelet‐based features proved to have highly correlated predictive value, whereas non‐linear ML regression algorithms proved more capable than linear ones of producing precise predictions in our case. Conclusions: The combination of radiomics and dosiomics has potential utility for personalized molecular radiotherapy (PMR) response evaluation and OAR dose prediction. 
    These radiodosiomic features can potentially provide information on possible disease recurrence and may be highly useful in clinical decision‐making, especially regarding dose escalation issues.
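As a rough illustration of this regression setup, the sketch below fits a plain least-squares model as a stand-in for the paper's gradient-boost and extra-trees regressors and reports the MAE metric; the feature matrix, weights, and dose values are synthetic, not the study's data:

```python
import numpy as np

def mean_absolute_dose_error(predicted_gy, reference_gy):
    """MAE (Gy) between predicted and dosimetry-derived absorbed doses for one OAR."""
    p = np.asarray(predicted_gy, dtype=float)
    r = np.asarray(reference_gy, dtype=float)
    return float(np.mean(np.abs(p - r)))

# Synthetic stand-in data: rows = patients, columns = radiomic + dosiomic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([0.6, -0.2, 0.0, 0.3, 0.1])
doses = X @ w_true + 2.0                       # synthetic absorbed doses (Gy)

# Ordinary least squares as a simple, linear stand-in for the ML regressors.
X1 = np.hstack([X, np.ones((20, 1))])          # add an intercept column
w, *_ = np.linalg.lstsq(X1, doses, rcond=None)
mae = mean_absolute_dose_error(X1 @ w, doses)  # near zero on noiseless synthetic data
```

On real, noisy dosimetry data a linear fit would underperform; the abstract's finding that non-linear regressors beat linear ones is exactly why gradient boosting and extra trees were preferred there.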

    The OpenGATE ecosystem for Monte Carlo simulation in medical physics

    This paper reviews the ecosystem of GATE, an open-source Monte Carlo toolkit for medical physics. Built on Geant4, the principal modules (geometry, physics, scorers) are described, with brief descriptions of some key concepts (Volume, Actors, Digitizer). The main source-code repositories are detailed, together with the automated compilation and test processes (Continuous Integration). We then describe how the OpenGATE collaboration has managed the collaborative development of about one hundred developers over almost 20 years. The impact of GATE on medical physics and cancer research is then summarized, and examples of a few key applications are given. Finally, future development perspectives are indicated.