An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification
While deep learning methods are increasingly being applied to tasks such as
computer-aided diagnosis, these models are difficult to interpret, do not
incorporate prior domain knowledge, and are often considered a "black-box."
The lack of model interpretability hinders them from being fully understood by
target users such as radiologists. In this paper, we present a novel
interpretable deep hierarchical semantic convolutional neural network (HSCNN)
to predict whether a given pulmonary nodule observed on a computed tomography
(CT) scan is malignant. Our network provides two levels of output: 1) low-level
radiologist semantic features, and 2) a high-level malignancy prediction score.
The low-level semantic outputs quantify the diagnostic features used by
radiologists and serve to explain how the model interprets the images in an
expert-driven manner. The information from these low-level tasks, along with
the representations learned by the convolutional layers, are then combined and
used to infer the high-level task of predicting nodule malignancy. This unified
architecture is trained by optimizing a global loss function including both
low- and high-level tasks, thereby learning all the parameters within a joint
framework. Our experimental results using the Lung Image Database Consortium
(LIDC) show that the proposed method not only produces interpretable lung
cancer predictions but also achieves significantly better results compared to
common 3D CNN approaches.
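The global loss described above, which jointly optimizes the low-level semantic tasks and the high-level malignancy task, can be sketched as a weighted sum of per-task cross-entropies. This is a minimal illustration only: the function and argument names and the weighting factor `lam` are assumptions, not the paper's notation.

```python
import math

def global_loss(semantic_probs, semantic_labels,
                malignancy_prob, malignancy_label, lam=1.0):
    """Sketch of an HSCNN-style global objective: the sum of binary
    cross-entropies over the low-level semantic tasks, plus a weighted
    binary cross-entropy for the high-level malignancy task."""
    def bce(p, y):
        p = min(max(p, 1e-7), 1 - 1e-7)  # clamp to avoid log(0)
        return -(y * math.log(p) + (1 - y) * math.log(1 - p))

    # One cross-entropy term per radiologist semantic feature
    low_level = sum(bce(p, y) for p, y in zip(semantic_probs, semantic_labels))
    # One cross-entropy term for the malignancy prediction
    high_level = bce(malignancy_prob, malignancy_label)
    return low_level + lam * high_level
```

Because every term is differentiable in the predicted probabilities, minimizing this single scalar trains all parameters within one joint framework, as the abstract describes.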
Artificial Intelligence for Thyroid Nodule Characterization: Where Are We Standing?
Machine learning (ML) is an interdisciplinary field within artificial intelligence (AI) that builds systems able to establish logical connections through algorithms, and thus offers predictions for complex data analysis. The present review provides an up-to-date summary of the state of the art regarding ML and AI implementation for thyroid nodule ultrasound characterization and cancer, highlighting controversies over AI application as well as possible benefits of ML, such as training purposes. There is evidence that AI increases diagnostic accuracy and significantly limits inter-observer variability by using standardized mathematical algorithms. It could also aid practice settings with limited sub-specialty expertise by offering a second opinion by means of radiomics and computer-assisted diagnosis. The introduction of AI represents a revolutionary event in thyroid nodule evaluation, but key issues for further implementation include integration with radiologist expertise, impact on workflow and efficiency, and performance monitoring.
Differentiation of thyroid nodules on US using features learned and extracted from various convolutional neural networks
Thyroid nodules are a common clinical problem. Ultrasonography (US) is the main tool for the sensitive diagnosis of thyroid cancer. Although US is non-invasive and can accurately differentiate benign and malignant thyroid nodules, it is subjective and its results inevitably lack reproducibility. Therefore, to provide objective and reliable information for US assessment, we developed a CADx system that utilizes convolutional neural networks and machine learning techniques. The diagnostic performances of 6 radiologists and 3 representative results obtained from the proposed CADx system were compared and analyzed.
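The general recipe described here — features extracted from various CNNs, fused and fed to a machine-learning classifier — can be sketched as follows. This is a hypothetical illustration in NumPy: the upstream CNN feature extraction is assumed to have happened already, and a nearest-centroid rule stands in for whatever classifier the CADx system actually uses.

```python
import numpy as np

def fuse_features(feature_sets):
    """Concatenate per-nodule feature vectors extracted from several
    (pretrained) CNNs into one fused descriptor per nodule.
    `feature_sets` is a list of (n_nodules, d_i) arrays."""
    return np.concatenate(feature_sets, axis=1)

def nearest_centroid_predict(train_X, train_y, test_X):
    """Minimal stand-in classifier: assign each test nodule the class
    of the nearest class centroid in fused-feature space."""
    centroids = {c: train_X[train_y == c].mean(axis=0)
                 for c in np.unique(train_y)}
    return np.array([
        min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
        for x in test_X
    ])
```

In practice, any off-the-shelf classifier (SVM, random forest, logistic regression) could be trained on the fused descriptors; the point of the sketch is only the fusion of features learned by different networks.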
Are Deep Learning Classification Results Obtained on CT Scans Fair and Interpretable?
Following the great success of various deep learning methods in image and
object classification, the biomedical image processing community has also been
flooded with their applications to various automatic diagnosis tasks.
Unfortunately, most of the deep learning-based classification attempts in the
literature solely focus on the aim of extreme accuracy scores, without
considering interpretability, or patient-wise separation of training and test
data. For example, most lung nodule classification papers using deep learning
randomly shuffle the data and split them into training, validation, and test
sets, so that certain images from one person's CT scan end up in the training
set while other images of the exact same person end up in the validation or
test sets. This can result in reporting misleading accuracy rates and the
learning of irrelevant features, ultimately reducing the real-life usability of
these models. When the deep neural networks trained on the traditional, unfair
data shuffling method are challenged with new patient images, it is observed
that the trained models perform poorly. In contrast, deep neural networks
trained with strict patient-level separation maintain their accuracy rates even
when new patient images are tested. Heat-map visualizations of the activations
of the deep neural networks trained with strict patient-level separation
indicate a higher degree of focus on the relevant nodules. We argue that the
research question posed in the title has a positive answer only if the deep
neural networks are trained with images of patients that are strictly isolated
from the validation and testing patient sets.
Comment: This version has been submitted to CAAI Transactions on Intelligence Technology.
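The strict patient-level separation the authors advocate can be illustrated with a small helper that splits by patient identifier rather than by image, so that no patient contributes images to both sides. Names here are illustrative; scikit-learn's `GroupShuffleSplit` implements the same idea.

```python
import random

def patient_level_split(image_ids, patient_of, test_frac=0.2, seed=0):
    """Split images so that all images of a given patient land in
    exactly one of the train/test sets (no patient-level leakage).
    `patient_of` maps each image id to its patient identifier."""
    patients = sorted(set(patient_of[i] for i in image_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)  # shuffle patients, never individual images
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [i for i in image_ids if patient_of[i] not in test_patients]
    test = [i for i in image_ids if patient_of[i] in test_patients]
    return train, test
```

Contrast this with random image-level shuffling, which lets near-duplicate slices from the same CT scan appear on both sides of the split and inflates reported accuracy.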