
    Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights

    We propose and evaluate a procedure for the explainability of a deep learning based breast density classifier. A total of 1662 mammography exams labeled according to the BI-RADS breast density categories were used. We built a residual Convolutional Neural Network (CNN), trained it, and studied the responses of the model to input changes, such as different distributions of class labels in the training and test sets and suitable image pre-processing. The aim was to identify the steps of the analysis with a relevant impact on classifier performance and on model explainability. We used the Grad-CAM algorithm to produce saliency maps and computed Spearman's rank correlation between input images and saliency maps as a measure of explanation accuracy. We found that pre-processing is critical not only for the accuracy, precision, and recall of a model but also for obtaining a reasonable explanation of the model itself. Our CNN achieves good performance compared to the state of the art, and it bases its classification on the dense pattern: saliency maps correlate strongly with it. This work is a starting point towards a standard framework for evaluating both the performance of CNNs and the explainability of their predictions in medical image classification problems.
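
    As an illustration of the explanation-accuracy measure described above, the following sketch rank-correlates an input image with its saliency map. It assumes the saliency map has already been produced (e.g., by Grad-CAM) and upsampled to the input resolution; the function name and the random stand-in data are illustrative, not the paper's.

```python
# Sketch: Spearman's rank correlation between an input mammogram and its
# saliency map, used as a proxy for explanation accuracy. Assumed setup,
# not the authors' exact code.
import numpy as np
from scipy.stats import spearmanr

def explanation_accuracy(image: np.ndarray, saliency: np.ndarray) -> float:
    """Rank-correlate pixel intensities with saliency values.

    Both arrays must share the same shape; the saliency map is assumed
    to have been upsampled to the input resolution beforehand.
    """
    assert image.shape == saliency.shape
    rho, _p_value = spearmanr(image.ravel(), saliency.ravel())
    return rho

# Toy example with random data standing in for a mammogram and its Grad-CAM map.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
sal = img + 0.5 * rng.random((256, 256))  # saliency loosely tracking intensity
print(f"Spearman rho: {explanation_accuracy(img, sal):.2f}")
```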

    Localization of anatomical changes in patients during proton therapy with in-beam PET monitoring: a voxel-based morphometry approach exploiting Monte Carlo simulations

    Purpose: In-beam positron emission tomography (PET) is one of the modalities that can be used for in vivo noninvasive treatment monitoring in proton therapy. Although PET monitoring has been frequently applied for this purpose, there is still no straightforward method to translate the information obtained from the PET images into easy-to-interpret information for clinical personnel. The purpose of this work is to propose a statistical method for analyzing in-beam PET monitoring images that can be used to locate, quantify, and visualize regions with possible morphological changes occurring over the course of treatment. Methods: We selected a patient treated with proton therapy for squamous cell carcinoma (SCC), performed multiple Monte Carlo (MC) simulations of the expected PET signal at the start of treatment, and studied how the PET signal may change along the treatment course due to morphological changes. We performed voxel-wise two-tailed statistical tests of the simulated PET images, resembling the voxel-based morphometry (VBM) method commonly used in neuroimaging data analysis, to locate regions with significant morphological changes and to quantify the change. Results: The VBM-like method was successfully applied to the simulated in-beam PET images, despite the fact that such images suffer from artifacts and limited statistics. Three-dimensional probability maps were obtained that allowed us to identify interfractional morphological changes and to visualize them superimposed on the computed tomography (CT) scan. In particular, the characteristic color patterns resulting from the two-tailed statistical tests lend themselves to triggering alarms in case of morphological changes along the course of treatment. Conclusions: The statistical method presented in this work is a promising approach for revealing, in PET monitoring data, interfractional morphological changes occurring in patients over the course of treatment. Based on simulated in-beam PET treatment monitoring images, we showed that our method correctly identifies the regions that changed. Moreover, we could quantify the changes and visualize them superimposed on the CT scan. The proposed method can help clinical personnel in the replanning procedure of adaptive proton therapy treatments.
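
    A minimal sketch of a voxel-wise two-tailed test of the kind described above, comparing stacks of MC-simulated PET replicates from the start of treatment against a later fraction. The two-sample t-test and the toy volumes are assumptions; the paper's exact statistic may differ.

```python
# Sketch of a VBM-like voxel-wise two-tailed test on simulated PET volumes.
# ref, frac: (n_replicates, nx, ny, nz) stacks of MC-simulated images for the
# planning situation and for a later fraction. Assumed statistic: two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

def voxel_wise_change_map(ref: np.ndarray, frac: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Return a signed significance map: +1 where activity increased,
    -1 where it decreased, 0 where the two-tailed test is not significant."""
    t, p = ttest_ind(frac, ref, axis=0)  # voxel-wise two-tailed test
    return np.sign(t) * (p < alpha)      # sign gives the direction of change

# Toy volumes: 10 replicates of a 32^3 image with one localized change.
rng = np.random.default_rng(1)
ref = rng.normal(100.0, 5.0, size=(10, 32, 32, 32))
frac = rng.normal(100.0, 5.0, size=(10, 32, 32, 32))
frac[:, 10:14, 10:14, 10:14] += 20.0  # simulated morphological change
change = voxel_wise_change_map(ref, frac)
print(f"Significant voxels: {np.count_nonzero(change)}")
```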

    In-vivo range verification analysis with in-beam PET data for patients treated with proton therapy at CNAO

    Morphological changes that may arise over a treatment course are probably one of the most significant sources of range uncertainty in proton therapy. Non-invasive in-vivo treatment monitoring can help increase treatment quality. The INSIDE in-beam Positron Emission Tomography (PET) scanner performs in-vivo range monitoring in proton and carbon therapy treatments at the National Center of Oncological Hadrontherapy (CNAO). It is currently in a clinical trial (ID: NCT03662373) and has acquired in-beam PET data during the treatment of various patients. In this work we analyze the in-beam PET (IB-PET) data of eight patients treated with proton therapy at CNAO. The goal of the analysis is twofold. First, we assess the level of experimental fluctuations in inter-fractional range differences (the sensitivity) of the INSIDE PET system by studying patients without morphological changes. Second, we use the obtained results to check whether anomalously large range variations can be observed in patients in whom morphological changes have occurred. The sensitivity of the INSIDE IB-PET scanner was quantified as the standard deviation of the range difference distributions observed for six patients who did not show morphological changes. Inter-fractional range variations with respect to a reference distribution were estimated using the Most-Likely-Shift (MLS) method. To establish the efficacy of this method, we compared it with the Beam's Eye View (BEV) method. For patients showing no morphological changes in the control CT, the average range variation standard deviation was 2.5 mm with the MLS method and 2.3 mm with the BEV method. On the other hand, for patients in whom some small anatomical changes occurred, we found larger standard deviation values. In these patients we located the anomalous range differences and compared them with the control CT. The identified regions were mostly in agreement with the morphological changes seen in the CT scan.
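
    A hedged sketch of a Most-Likely-Shift style range comparison: scan candidate longitudinal shifts of the reference activity depth profile and keep the one that best matches the fraction's profile. The least-squares criterion, the sampling grid, and the toy Gaussian profiles are assumptions, not the INSIDE collaboration's actual implementation.

```python
# Hedged sketch of a Most-Likely-Shift style range comparison (assumed
# least-squares criterion; the published MLS definition may differ).
import numpy as np

def most_likely_shift(reference: np.ndarray, profile: np.ndarray,
                      depth_mm: np.ndarray, max_shift_mm: float = 10.0,
                      step_mm: float = 0.1) -> float:
    """Positive result: the fraction's activity range is deeper than the reference."""
    shifts = np.arange(-max_shift_mm, max_shift_mm + step_mm, step_mm)
    costs = [np.sum((np.interp(depth_mm, depth_mm + s, reference) - profile) ** 2)
             for s in shifts]
    return float(shifts[int(np.argmin(costs))])

# Toy profiles: Gaussian-like activity fall-off; the fraction is 2.5 mm deeper.
z = np.linspace(0.0, 100.0, 1001)            # depth along the beam axis, in mm
ref = np.exp(-((z - 60.0) / 10.0) ** 2)
frac = np.exp(-((z - 62.5) / 10.0) ** 2)
print(f"Estimated range shift: {most_likely_shift(ref, frac, z):.1f} mm")
```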

    A generative adversarial network approach for the attenuation correction in PET-MR hybrid imaging

    Positron emission tomography (PET) provides functional images useful to track metabolic processes in the body and enables the diagnosis of several diseases. The technique is based on radiotracers that emit positrons, whose annihilation with electrons in the human body produces pairs of photons traveling in almost anti-parallel directions. A ring of detectors is used to detect them, and an event is counted when two detectors are activated within a time window (the coincidence window). Each pair of detectors defines a line of response (LOR) to which events are associated. After the scan, a reconstruction algorithm transforms the acquired data into a map of activity in the patient's body. Since photons travel not in a vacuum but through the human body, a correction for their attenuation is required. PET images are characterized by limited spatial resolution; in order to combine morphological details with functional ones, PET-CT (PET and computed tomography) and PET-MR (PET and magnetic resonance) systems have been developed. In PET-CT, linear attenuation coefficient maps are obtainable directly from the CT scan by means of an accurate energy rescaling to 511 keV. Unfortunately, there is no straightforward technique in PET-MR to derive the attenuation properties of tissues from MR signals. Many techniques have been developed to address this problem, and in this work we explore an original approach based on deep neural networks. These could provide a boost in the direction of a data-driven algorithm for attenuation correction by transforming structural, T1-weighted MR images into pseudo-CTs, i.e. images whose intensity values are similar to the ones expected in a CT image. Deep learning techniques already implemented for this purpose require paired data; unfortunately, it is quite hard to obtain a large dataset of paired medical images, i.e. MR and CT images belonging to the same patient. To overcome this limitation, we chose to develop an approach based on a Generative Adversarial Network (GAN) trained on unpaired data. A GAN is a deep learning architecture composed of two neural networks, a generator and a discriminator, competing against each other: the generator tries to map the input to the desired output, while the discriminator judges whether the generated output is realistic. In the training phase, the generator has to maximize both the similarity to the desired output and the score assigned by the discriminator; the discriminator, in turn, has to distinguish the fakes produced by the generator from the original data. After training, the generator is capable of mapping any point in the input space (MR images) to a point in the output space (pseudo-CT images). The generation of pseudo-CTs from MRs with an unpaired training set is here approached with a CycleGAN (with some ad-hoc modifications), characterized by the presence of four networks: two generators performing the transformations from the MR to the CT domain (MR2CT) and vice versa (CT2MR), and two discriminators (fake CT vs. real CT, fake MR vs. real MR). A cyclic consistency constraint imposes that the whole cycle be the identity operator: MR ≈ CT2MR(MR2CT(MR)). This requirement, introduced in the loss function, guides the network during training to generate not just any plausible image but an image of the specific input patient.
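
    The cycle-consistency constraint described above can be written compactly in code. The following PyTorch sketch uses tiny 3D generators as placeholders (the paper's architecture is not reproduced); only the loss structure, with the MR and CT round trips, follows the text.

```python
# PyTorch sketch of the cycle-consistency term. The tiny 3D generators are
# placeholders, not the paper's architecture; only the loss structure follows
# the text: MR ≈ CT2MR(MR2CT(MR)) and the symmetric CT cycle.
import torch
import torch.nn as nn

def tiny_generator() -> nn.Module:
    # Placeholder 3D translator; real CycleGAN generators are far deeper.
    return nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, kernel_size=3, padding=1),
    )

mr2ct, ct2mr = tiny_generator(), tiny_generator()
l1 = nn.L1Loss()

mr = torch.rand(2, 1, 16, 16, 16)  # batch of unpaired MR volumes
ct = torch.rand(2, 1, 16, 16, 16)  # batch of unpaired CT volumes

# Each volume should survive a round trip through the other domain.
cycle_mr = ct2mr(mr2ct(mr))  # MR -> pseudo-CT -> reconstructed MR
cycle_ct = mr2ct(ct2mr(ct))  # CT -> pseudo-MR -> reconstructed CT
cycle_loss = l1(cycle_mr, mr) + l1(cycle_ct, ct)

# In a full CycleGAN this term is added, with a weight, to the adversarial
# losses produced by the two discriminators.
print(float(cycle_loss))
```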
    We collected a dataset of structural MR brain images coming from the Autism Brain Imaging Data Exchange (ABIDE: http://fcon_1000.projects.nitrc.org/indi/abide/) project and CT scans provided by the NeuroAnatomy and image Processing LABoratory (NAPLAB) of the IRCCS SDN (Naples, IT). We used these unpaired examples to train a CycleGAN-like network. Prior implementations of deep learning models for the generation of medical images work on single slices of the acquired volumes, due to the availability of algorithms developed for 2D natural images and to limitations in computing power; the proposed approach has been developed to work directly on 3D data. A registration step that aligns all images to approximately the same orientation proved necessary because of the low number of training examples: although no paired data are required, retrieving a brain CT from an MR remains a major challenge that needs to be simplified at this first stage. The structural similarity index computed between the generated output and the expected one shows satisfactory results. Although validation on a larger dataset is needed to relax the current requirements on the initial image alignment, the proposed approach opens the perspective of applying data-driven methods to several processing pipelines on medical images, including data augmentation, segmentation, and classification. Further investigation of the behaviour of the network in the presence of abnormalities in the images is required. An advantage of this technique with respect to other currently available procedures for attenuation correction in PET-MR is that it does not require any extra MR acquisition: only the standard diagnostic T1-weighted image is used and, thanks to the low computational cost, images are translated from the MR to the CT domain in a couple of seconds. Building a large collection of publicly available images could undoubtedly help avoid some preprocessing steps and achieve better overall results.
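
    The evaluation step mentioned above, the structural similarity index between a generated pseudo-CT and the expected CT, can be computed with scikit-image; the volumes below are random stand-ins for illustration only.

```python
# Sketch of the evaluation metric: structural similarity between a generated
# pseudo-CT and the expected CT, via scikit-image. Random stand-in volumes.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(2)
expected_ct = rng.random((64, 64, 64)).astype(np.float32)
pseudo_ct = np.clip(expected_ct + 0.05 * rng.standard_normal((64, 64, 64)),
                    0.0, 1.0).astype(np.float32)

# data_range must be given explicitly for floating-point inputs.
score = structural_similarity(expected_ct, pseudo_ct, data_range=1.0)
print(f"SSIM: {score:.3f}")
```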
