
    An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification

    While deep learning methods are increasingly being applied to tasks such as computer-aided diagnosis, these models are difficult to interpret, do not incorporate prior domain knowledge, and are often considered a "black box." The lack of model interpretability hinders them from being fully understood by target users such as radiologists. In this paper, we present a novel interpretable deep hierarchical semantic convolutional neural network (HSCNN) to predict whether a given pulmonary nodule observed on a computed tomography (CT) scan is malignant. Our network provides two levels of output: 1) low-level radiologist semantic features, and 2) a high-level malignancy prediction score. The low-level semantic outputs quantify the diagnostic features used by radiologists and serve to explain how the model interprets the images in an expert-driven manner. The information from these low-level tasks, along with the representations learned by the convolutional layers, is then combined and used to infer the high-level task of predicting nodule malignancy. This unified architecture is trained by optimizing a global loss function that includes both low- and high-level tasks, thereby learning all the parameters within a joint framework. Our experimental results using the Lung Image Database Consortium (LIDC) show that the proposed method not only produces interpretable lung cancer predictions but also achieves significantly better results compared to common 3D CNN approaches.
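    The abstract above describes a two-level architecture: low-level semantic attribute heads plus a high-level malignancy head trained with one global loss. Below is a minimal, illustrative sketch of that idea in PyTorch; it is not the authors' exact HSCNN, and the head count, layer sizes, and loss weight are assumptions.

```python
# Minimal sketch of the two-level output idea (not the authors' exact HSCNN):
# a shared 3D CNN backbone feeds several low-level "semantic" heads plus a
# high-level malignancy head, and all heads are trained with one global loss.
import torch
import torch.nn as nn

class TwoLevelNoduleNet(nn.Module):
    def __init__(self, num_semantic_tasks=4, feat_dim=128):
        super().__init__()
        # Shared 3D convolutional backbone over a nodule patch.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # One binary head per low-level semantic attribute (e.g. calcification).
        self.semantic_heads = nn.ModuleList(
            [nn.Linear(feat_dim, 2) for _ in range(num_semantic_tasks)]
        )
        # High-level malignancy head sees shared features plus semantic logits.
        self.malignancy_head = nn.Linear(feat_dim + 2 * num_semantic_tasks, 2)

    def forward(self, x):
        feat = self.backbone(x)
        sem_logits = [head(feat) for head in self.semantic_heads]
        combined = torch.cat([feat] + sem_logits, dim=1)
        return sem_logits, self.malignancy_head(combined)

def global_loss(sem_logits, mal_logits, sem_labels, mal_labels, sem_weight=0.5):
    # Joint objective over low-level (semantic) and high-level (malignancy) tasks.
    ce = nn.CrossEntropyLoss()
    loss = ce(mal_logits, mal_labels)
    for i, logits in enumerate(sem_logits):
        loss = loss + sem_weight * ce(logits, sem_labels[:, i])
    return loss
```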

    LungVISX: explaining lung nodule malignancy classification


    Attention-Enhanced Cross-Task Network for Analysing Multiple Attributes of Lung Nodules in CT

    Accurate characterisation of visual attributes such as spiculation, lobulation, and calcification of lung nodules is critical in cancer management. The characterisation of these attributes is often subjective, which may lead to high inter- and intra-observer variability. Furthermore, lung nodules are often heterogeneous across the cross-sectional image slices of a 3D volume. Current state-of-the-art methods that score multiple attributes rely on deep learning-based multi-task learning (MTL) schemes. These methods, however, extract shared visual features across attributes and then examine each attribute without explicitly leveraging their inherent intercorrelations. Furthermore, current methods treat each slice with equal importance, without considering its relevance or heterogeneity, which limits performance. In this study, we address these challenges with a new convolutional neural network (CNN)-based MTL model that incorporates multiple attention-based learning modules to simultaneously score 9 visual attributes of lung nodules in computed tomography (CT) image volumes. Our model processes entire nodule volumes of arbitrary depth and uses a slice attention module to filter out irrelevant slices. We also introduce cross-attribute and attribute specialisation attention modules that learn an optimal amalgamation of meaningful representations to leverage relationships between attributes. We demonstrate that our model outperforms previous state-of-the-art methods at scoring attributes using the well-known public LIDC-IDRI dataset of pulmonary nodules from over 1,000 patients. Our model also performs competitively when repurposed for benign-malignant classification. Our attention modules also provide easy-to-interpret weights that offer insights into the predictions of the model.
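    The abstract above relies on a slice attention module to down-weight irrelevant slices of a variable-depth volume. The sketch below shows one plausible form of such a module (attention-weighted pooling over per-slice features); it is an assumption for illustration, not the paper's actual implementation, and the module and parameter names are invented.

```python
# Illustrative slice-attention pooling: per-slice features from a volume of
# arbitrary depth are combined with learned attention weights so that less
# relevant slices contribute less to the volume-level representation.
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=64):
        super().__init__()
        # Small scoring network producing one relevance score per slice.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, slice_feats):
        # slice_feats: (batch, num_slices, feat_dim); num_slices may vary.
        scores = self.scorer(slice_feats)              # (B, S, 1)
        weights = torch.softmax(scores, dim=1)         # normalise over slices
        pooled = (weights * slice_feats).sum(dim=1)    # (B, feat_dim)
        return pooled, weights.squeeze(-1)             # weights are inspectable

# Example: 3 nodules, 12 slices each, 256-d features per slice.
pool = SliceAttentionPool()
feats = torch.randn(3, 12, 256)
vol_feat, slice_weights = pool(feats)
```

    The returned weights can be inspected per slice, which is the sense in which such attention modules offer easy-to-interpret insight into the prediction.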

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision-making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook on future opportunities for XAI in medical image analysis.

    Towards generalizable machine learning models for computer-aided diagnosis in medicine

    Hidden stratification represents a phenomenon in which a training dataset contains unlabeled (hidden) subsets of cases that may affect machine learning model performance. Machine learning models that ignore the hidden stratification phenomenon, despite promising overall performance measured as accuracy and sensitivity, often fail at predicting the low-prevalence cases, even though those cases remain important. In the medical domain, patients with diseases are often less common than healthy patients, and a misdiagnosis of a patient with a disease can have significant clinical impacts. Therefore, to build a robust and trustworthy computer-aided diagnosis (CAD) system and a reliable treatment effect prediction model, we cannot pursue only machine learning models with high overall accuracy; we also need to discover any hidden stratification in the data and evaluate the proposed machine learning models with respect to both overall performance and the performance on certain subsets (groups) of the data, such as the 'worst group'. In this study, I investigated three approaches for data stratification: a novel algorithmic deep learning (DL) approach that learns similarities among cases and two schema completion approaches that utilize domain expert knowledge. I further proposed an innovative way to integrate the discovered latent groups into the loss functions of DL models to allow for better model generalizability under the domain shift scenario caused by data heterogeneity. My results on lung nodule Computed Tomography (CT) images and breast cancer histopathology images demonstrate that learning homogeneous groups within heterogeneous data significantly improves the performance of the CAD system, particularly for low-prevalence or worst-performing cases. This study emphasizes the importance of discovering and learning the latent stratification within the data, as it is a critical step towards building ML models that are generalizable and reliable. Ultimately, this discovery can have a profound impact on clinical decision-making, particularly for low-prevalence cases.
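    The abstract above integrates discovered latent groups into the training loss. One common way to do this is to upweight the worst-performing group (a group-DRO-style objective); the sketch below is an illustrative assumption of that mechanism, not the dissertation's exact formulation.

```python
# Illustrative group-aware loss: compute the mean loss per discovered group and
# add extra weight on the worst-performing group, so rare or hard groups are
# not drowned out by overall accuracy.
import torch
import torch.nn.functional as F

def group_aware_loss(logits, labels, group_ids, num_groups, worst_weight=2.0):
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses.append(per_sample[mask].mean())
    group_losses = torch.stack(group_losses)
    # Average over groups, plus an extra penalty on the worst group.
    return group_losses.mean() + worst_weight * group_losses.max()
```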

    Interpretable Medical Image Classification using Prototype Learning and Privileged Information

    Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are required to address this need for explainability together with high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information. Evaluating the proposed solution on the LIDC-IDRI dataset shows that it combines increased interpretability with above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and mean characteristic features of lung nodules. Simultaneously, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes. (MICCAI 2023: Medical Image Computing and Computer Assisted Intervention.)
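    Proto-Caps combines capsule networks, prototype learning, and privileged information. The sketch below illustrates only the prototype-learning ingredient, classifying by distance to learned prototype vectors that can later be mapped back to training cases for case-based explanations; it is an assumption for illustration, not the Proto-Caps architecture, and all names are invented.

```python
# Illustrative prototype classifier: embeddings are scored by (negative)
# distance to learned prototypes, and each class takes the score of its
# closest prototype. Prototypes can be projected onto real training cases
# to provide visual, case-based explanations.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, feat_dim=64, prototypes_per_class=3, num_classes=2):
        super().__init__()
        self.num_classes = num_classes
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * prototypes_per_class, feat_dim)
        )
        # Fixed mapping: each prototype belongs to exactly one class.
        self.register_buffer(
            "proto_class",
            torch.arange(num_classes).repeat_interleave(prototypes_per_class),
        )

    def forward(self, feats):
        # feats: (batch, feat_dim) embeddings from any backbone.
        dists = torch.cdist(feats, self.prototypes)          # (B, P)
        sims = -dists                                         # closer = higher score
        logits = torch.stack(
            [sims[:, self.proto_class == c].max(dim=1).values
             for c in range(self.num_classes)], dim=1)        # (B, num_classes)
        return logits, dists
```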

    3D Lung Nodule Classification in Computed Tomography Images

    Lung cancer is the leading cause of cancer death worldwide. One of the reasons is the absence of symptoms at an early stage, which means the disease is often only discovered at a later stage, when treatment is more difficult [1]. Furthermore, diagnosis, which is frequently done by reading computed tomography (CT) scans, is regularly associated with errors. One of the reasons is the variation in doctors' opinions regarding the diagnosis of the same nodule [2,3].

    The use of computer-aided diagnosis (CADx) systems can be a great help for this problem by assisting doctors with a second opinion. Although their efficiency has already been proven [4], they often end up not being used because doctors cannot understand the "how and why" of CADx diagnostic results, and ultimately do not trust the system [5]. To increase radiologists' confidence in the CADx system, it is proposed that, along with the malignancy prediction results, evidence explaining those results is also presented.

    There are visible features in lung nodules that are correlated with malignancy. Since humans are able to visually identify these characteristics and correlate them with nodule malignancy, one way to present that evidence is to also predict those characteristics. To obtain these predictions, deep learning approaches are proposed, as convolutional neural networks have been shown to outperform state-of-the-art results in medical image analysis [6]. To predict the characteristics and malignancy in the CADx system, the HSCNN architecture, a deep hierarchical semantic convolutional neural network proposed by Shen et al. [7], will be used.

    The Lung Image Database Consortium image collection (LIDC-IDRI) public dataset is frequently used as input for lung cancer CADx systems. The LIDC-IDRI consists of thoracic CT scans and offers a large quantity and variability of data. For most nodules, this dataset includes doctors' evaluations of 9 different characteristics. A recurrent problem in those evaluations is the subjectivity of the doctors' interpretation of what each characteristic is. For some characteristics, this can result in a great divergence between evaluations of the same nodule, which makes the inclusion of those evaluations as input to CADx systems less useful than it could be. To reduce this subjectivity, the creation of a metric that makes the characteristic classification more objective is proposed. For this, bibliographic and LIDC-IDRI dataset reviews are planned. Based on this new metric, later validated by doctors from Hospital de São João, the LIDC-IDRI dataset will be reclassified. This way, all the relevant characteristics could be used as input.

    The principal objective of this dissertation is to develop a lung nodule CADx system methodology that promotes the confidence of specialists in its use. This will be done by classifying lung nodules according to characteristics relevant to diagnosis, as well as malignancy. The reclassified LIDC-IDRI dataset will be used as input for the CADx system, and the HSCNN architecture will be used to predict the characteristics and malignancy. Classification will be evaluated using sensitivity, specificity, and the area under the Receiver Operating Characteristic (ROC) curve. The proposed solution may be used to improve a CADx system, LNDetector, currently in development by the Center for Biomedical Engineering Research (C-BER) group from INESC-TEC, in which this work will be developed.

    [1] - S. Sone, M. Hasegawa, and S. Takashima. Growth rate of small lung cancers detected on mass CT screening. The British Journal of Radiology, pages 1252-1259.
    [2] - B. Zhao, Y. Tan, D. J. Bell, S. E. Marley, P. Guo, H. Mann, M. L. Scott, L. H. Schwartz, and D. C. Ghiorghiu. Exploring intra- and inter-reader variability in uni-dimensional, bi-dimensional, and volumetric measurements of solid tumors on CT scans reconstructed at different slice intervals. European Journal of Radiology, 82, pages 959-968, 2013.
    [3] - H. T. Winer-Muram. The solitary pulmonary nodule. Radiology, 239, pages 39-49, 2006.
    [4] - P. Huang, S. Park, R. Yan, J. Lee, L. C. Chu, C. T. Lin, A. Hussien, J. Rathmell, B. Thomas, C. Chen, et al. Added value of computer-aided CT image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study. Radiology, 286, pages 286-295, 2017.
    [5] - W. Jorritsma, F. Cnossen, and P. van Ooijen. Improving the radiologist-CAD interaction: designing for appropriate trust. Clinical Radiology, 70, 2014.
    [6] - T. Brosch, Y. Yoo, D. Li, A. Traboulsee, and R. Tam. Modeling the variability in brain morphology and lesion distribution in multiple sclerosis by deep learning. Volume 17, 2014.
    [7] - S. Shen, S. X. Han, D. R. Aberle, A. A. T. Bui, and W. Hsu. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. June 2018.
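    The dissertation plan above evaluates classification with sensitivity, specificity, and the area under the ROC curve. The following is a minimal sketch of that evaluation using scikit-learn on placeholder predictions; the label and score arrays are purely illustrative.

```python
# Minimal sketch of the stated evaluation: sensitivity, specificity, and
# ROC AUC computed with scikit-learn on placeholder predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])        # 1 = malignant (placeholder)
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```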