Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction
Advances in deep learning (DL) have resulted in impressive accuracy in some
medical image classification tasks, but often deep models lack
interpretability. The ability of these models to explain their decisions is
important for fostering clinical trust and facilitating clinical translation.
Furthermore, for many problems in medicine there is a wealth of existing
clinical knowledge to draw upon, which may be useful in generating
explanations, but it is not obvious how this knowledge can be encoded into DL
models - most models are learnt either from scratch or using transfer learning
from a different domain. In this paper we address both of these issues. We
propose a novel DL framework for image-based classification based on a
variational autoencoder (VAE). The framework allows prediction of the output of
interest from the latent space of the autoencoder, as well as visualisation (in
the image domain) of the effects of crossing the decision boundary, thus
enhancing the interpretability of the classifier. Our key contribution is that
the VAE disentangles the latent space based on 'explanations' drawn from
existing clinical knowledge. The framework can predict outputs as well as
explanations for these outputs, and also raises the possibility of discovering
new biomarkers that are separate (or disentangled) from the existing knowledge.
We demonstrate our framework on the problem of predicting response of patients
with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine
cardiac magnetic resonance images. The sensitivity and specificity of the
proposed model on the task of CRT response prediction are 88.43% and 84.39%
respectively, and we showcase the potential of our model in enhancing
understanding of the factors contributing to CRT response.
Comment: MICCAI 2020 conference
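The core mechanism described above, classifying directly from a VAE's latent space and decoding a latent code pushed across the decision boundary to visualise the change in the image domain, can be sketched with a toy linear stand-in. All names and the linear encoder/decoder here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained VAE encoder/decoder (hypothetical linear maps).
W_enc = rng.standard_normal((8, 64)) * 0.1   # image (64-d) -> latent (8-d)
W_dec = rng.standard_normal((64, 8)) * 0.1   # latent -> image
w_clf = rng.standard_normal(8)               # linear classifier in latent space

def encode(x):
    # Mean of q(z|x); the variance branch is omitted for brevity.
    return W_enc @ x

def decode(z):
    return W_dec @ z

def predict(z):
    # Predict the output of interest directly from the latent space.
    return 1.0 / (1.0 + np.exp(-(w_clf @ z)))

# Visualising a decision-boundary crossing: reflect the latent code across
# the classifier hyperplane, then decode both codes and compare images.
x = rng.standard_normal(64)
z = encode(x)
n = w_clf / np.linalg.norm(w_clf)
z_flipped = z - 2.0 * (w_clf @ z) / np.linalg.norm(w_clf) * n

before, after = predict(z), predict(z_flipped)
delta_image = decode(z_flipped) - decode(z)  # image-domain effect of crossing
```

Reflecting across the boundary negates the classifier logit, so the two predictions land on opposite sides of 0.5; the decoded difference image is what such a framework would show a clinician.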
Distilling BlackBox to Interpretable models for Efficient Transfer Learning
Building generalizable AI models is one of the primary challenges in the
healthcare domain. While radiologists rely on generalizable descriptive rules
of abnormality, Neural Network (NN) models suffer even with a slight shift in
input distribution (e.g., scanner type). Fine-tuning a model to transfer
knowledge from one domain to another requires a significant amount of labeled
data in the target domain. In this paper, we develop an interpretable model
that can be efficiently fine-tuned to an unseen target domain with minimal
computational cost. We assume the interpretable component of NN to be
approximately domain-invariant. However, interpretable models typically
underperform compared to their Blackbox (BB) variants. We start with a BB in
the source domain and distill it into a mixture of shallow interpretable
models using human-understandable concepts. As each interpretable model covers
a subset of data, a mixture of interpretable models achieves comparable
performance as BB. Further, we use the pseudo-labeling technique from
semi-supervised learning (SSL) to learn the concept classifier in the target
domain, followed by fine-tuning the interpretable models in the target domain.
We evaluate our model using a real-life large-scale chest-X-ray (CXR)
classification dataset. The code is available at:
https://github.com/batmanlab/MICCAI-2023-Route-interpret-repeat-CXRs
Comment: MICCAI 2023, early accept
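The pseudo-labelling step described above, where a source-domain Blackbox teacher labels confident unlabelled target-domain samples so a shallow interpretable student can be fine-tuned cheaply, can be illustrated with a toy linear example. The linear "Blackbox", threshold of 0.9, and logistic-regression student are my assumptions for illustration, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the source-domain Blackbox: a fixed linear scorer.
w_bb = np.array([1.5, -2.0, 0.5])

def blackbox_prob(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_bb)))

# Unlabelled target-domain samples (e.g. images from a new scanner type).
X_target = rng.standard_normal((200, 3))

# Pseudo-labelling: keep only samples the teacher is confident about.
p = blackbox_prob(X_target)
confident = (p > 0.9) | (p < 0.1)
X_pl = X_target[confident]
y_pl = (p[confident] > 0.5).astype(float)

# Fine-tune a shallow interpretable student (logistic regression via
# gradient descent) on the pseudo-labelled target data.
w_student = np.zeros(3)
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(X_pl @ w_student)))
    grad = X_pl.T @ (pred - y_pl) / len(X_pl)
    w_student -= 0.5 * grad

# Fraction of confident target samples where student and teacher agree.
agreement = np.mean((blackbox_prob(X_pl) > 0.5) == ((X_pl @ w_student) > 0))
```

Because only high-confidence teacher predictions are kept, the student sees cleanly separated pseudo-labels and recovers the teacher's behaviour on that subset without any target-domain ground truth.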
Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images
Quantifying uncertainty of predictions has been identified as one way to
develop more trustworthy artificial intelligence (AI) models beyond
conventional reporting of performance metrics. When considering their role in a
clinical decision support setting, AI classification models should ideally
avoid confident wrong predictions and maximise the confidence of correct
predictions. Models that do this are said to be well-calibrated with regard to
confidence. However, relatively little attention has been paid to how to
improve calibration when training these models, i.e., to make the training
strategy uncertainty-aware. In this work we evaluate three novel
uncertainty-aware training strategies comparing against two state-of-the-art
approaches. We analyse performance on two different clinical applications:
cardiac resynchronisation therapy (CRT) response prediction and coronary artery
disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The
best-performing model in terms of both classification accuracy and the most
common calibration measure, expected calibration error (ECE), was the Confidence
Weight method, a novel approach that weights the loss of samples to explicitly
penalise confident incorrect predictions. The method reduced the ECE by 17% for
CRT response prediction and by 22% for CAD diagnosis when compared to a
baseline classifier in which no uncertainty-aware strategy was included. In
both applications, as well as reducing the ECE, there was a slight increase in
accuracy: from 69% to 70% for CRT response prediction and from 70% to 72% for
CAD diagnosis. However, our analysis showed a lack of consistency in terms of
optimal models when using different calibration measures. This indicates the
need for careful consideration of performance metrics when training and
selecting models for complex, high-risk applications in healthcare.
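The calibration measure used above, expected calibration error, is a standard binned metric: predictions are grouped into confidence bins and the gap between average confidence and average accuracy is averaged, weighted by bin size. A minimal sketch (the toy confidence values are invented for illustration; the paper's exact ECE settings, such as bin count, are not given in the abstract):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: size-weighted mean |bin accuracy - bin confidence|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Hypothetical overconfident classifier: high confidence, imperfect accuracy.
conf = np.array([0.95, 0.9, 0.85, 0.9, 0.8, 0.95])
correct = np.array([1, 0, 1, 1, 0, 1], dtype=float)
ece = expected_calibration_error(conf, correct)
```

A well-calibrated model drives each bin's confidence toward its accuracy, pushing this value toward zero; an uncertainty-aware training strategy such as the Confidence Weight method targets exactly this gap by penalising confident wrong predictions during training.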
MulViMotion: shape-aware 3D myocardial motion tracking from multi-view cardiac MRI
Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices, which limits the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, where shape information from multi-view images is exploited to provide weak supervision to 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.
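The shape-regularization idea above, warping a segmentation with the predicted dense motion field and checking it against the target shape, can be illustrated with a toy nearest-neighbour warp and an overlap score. Everything here (the nearest-neighbour sampling, the Dice check, the constant displacement field) is a simplified assumption for illustration, not the paper's module:

```python
import numpy as np

def warp_nearest(mask, flow):
    """Backward-warp a 3D binary mask with a dense displacement field
    (nearest-neighbour sampling): out[p] = mask[p - flow[p]]."""
    D, H, W = mask.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    src = np.stack([zz, yy, xx], axis=-1) - np.round(flow).astype(int)
    src = np.clip(src, 0, [D - 1, H - 1, W - 1])   # clamp to the volume
    return mask[src[..., 0], src[..., 1], src[..., 2]]

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

# Toy example: a cube moved by a constant one-voxel displacement along z.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:5, 2:5, 2:5] = True
flow = np.zeros((8, 8, 8, 3))
flow[..., 0] = 1.0                     # predicted motion: +1 voxel along z
warped = warp_nearest(mask, flow)
target = np.roll(mask, 1, axis=0)      # where the shape actually ended up
score = dice(warped, target)
```

During training, a term like 1 - score between the warped source shape and the target shape provides exactly the kind of weak supervision the abstract describes, constraining the 3D motion field without dense ground-truth displacements.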
Exploring the applicability of machine learning based artificial intelligence in the analysis of cardiovascular imaging
Worldwide, the prevalence of cardiovascular diseases has doubled, demanding new diagnostic tools. Artificial intelligence, especially machine learning and deep learning, offers innovative possibilities for medical research. Despite historical challenges, such as a lack of data, these techniques have potential for cardiovascular research. This thesis explores the application of machine learning and deep learning in cardiology, focusing on automation and decision support in cardiovascular imaging.

Part I of this thesis focuses on automating cardiovascular MRI analysis. A deep learning model was developed to analyze the ascending aorta in cardiovascular MRI images. The model's results were used to investigate connections between genetic material and aortic properties, and between aortic properties and cardiovascular diseases and mortality. A second model was developed to select MRI images suitable for analyzing the pulmonary artery.

Part II focuses on decision support in nuclear cardiovascular imaging. A first machine learning model was developed to predict myocardial ischemia based on CTA variables. In addition, a deep neural network was used to identify reduced oxygen supply through the arteries supplying oxygen-rich blood to the heart and cardiovascular risk features using PET images.

This thesis successfully explores the possibilities of machine learning and deep learning in cardiovascular research, with a focus on automated analysis and decision support.
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
Artificial intelligence (AI) models are increasingly finding applications in
the field of medicine. Concerns have been raised about the explainability of
the decisions that are made by these AI models. In this article, we give a
systematic analysis of explainable artificial intelligence (XAI), with a
primary focus on models that are currently being used in the field of
healthcare. The literature search is conducted following the preferred
reporting items for systematic reviews and meta-analyses (PRISMA) standards for
relevant work published from 1 January 2012 to 02 February 2022. The review
analyzes the prevailing trends in XAI and lays out the major directions in
which research is headed. We investigate the why, how, and when of the uses of
these XAI models and their implications. We present a comprehensive examination
of XAI methodologies as well as an explanation of how a trustworthy AI can be
derived from describing AI models for healthcare fields. The discussion of this
work will contribute to the formalization of the XAI field.
Comment: 15 pages, 3 figures, accepted for publication in the IEEE
Transactions on Artificial Intelligence