Explaining the Black-box Smoothly - A Counterfactual Approach
We propose a BlackBox \emph{Counterfactual Explainer} that is explicitly
developed for medical imaging applications. Classical approaches (e.g. saliency
maps) assessing feature importance do not explain \emph{how} and \emph{why}
variations in a particular anatomical region are relevant to the outcome, which
is crucial for transparent decision making in healthcare applications. Our
framework explains the outcome by gradually \emph{exaggerating} the semantic
effect of the given outcome label. Given a query input to a classifier,
Generative Adversarial Networks produce a progressive set of perturbations to
the query image that gradually changes the posterior probability from its
original class to its negation. We design the loss function to ensure that
essential and potentially relevant details, such as support devices, are
preserved in the counterfactually generated images. We provide an extensive
evaluation of different classification tasks on chest X-ray images. Our
experiments show that the counterfactually generated visual explanations are
consistent with the disease's clinically relevant measurements, both
quantitatively and qualitatively.
Comment: Under review for IEEE-TMI journal
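The progressive-perturbation idea can be illustrated with a deliberately simple sketch (this is not the authors' GAN framework): for a differentiable classifier, following the gradient of the posterior yields a sequence of inputs whose predicted probability slides from the original class toward its negation. The logistic weights, the toy four-feature "image", and the step size below are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual_path(x, w, b, steps=25, lr=5.0):
    """Gradually perturb input x so the classifier's posterior moves
    from its original class toward its negation, returning the whole
    progressive set of perturbed inputs."""
    path = [x.copy()]
    target = 0.0 if sigmoid(w @ x + b) >= 0.5 else 1.0
    for _ in range(steps):
        p = sigmoid(w @ x + b)
        # gradient of (p - target)^2 w.r.t. x for a logistic classifier
        grad = 2.0 * (p - target) * p * (1.0 - p) * w
        x = x - lr * grad
        path.append(x.copy())
    return path

# toy "image": 4 features; classifier initially predicts the positive class
w = np.array([1.0, -0.5, 0.8, 0.3])
b = 0.1
x0 = np.array([2.0, -1.0, 1.5, 0.5])
path = counterfactual_path(x0, w, b)
p_start = sigmoid(w @ path[0] + b)
p_end = sigmoid(w @ path[-1] + b)
```

Each intermediate element of `path` plays the role of one frame in the paper's progressive exaggeration; the real method additionally constrains the generator so anatomically relevant details are preserved.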
The Effectiveness of Transfer Learning Systems on Medical Images
Deep neural networks have revolutionized the performance of many machine learning tasks such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for most medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which are often expensive and laborious to obtain. Meanwhile, recent studies have shown the transfer learning (TL) paradigm to be an attractive choice, offering promising solutions to the shortage of labeled medical images. Accordingly, TL enables us to leverage the knowledge learned from related data to solve a new problem.
The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images. Specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and utilized to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies.
We make several contributions to TL systems on medical image analysis. Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture to improve learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identify the optimal cut-off layer (OCL) that provides the best model performance, and we find that the higher layers in the proposed deep model give a better feature representation for our medical image task. Finally, we analyze the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provide theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
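The cut-off-layer idea can be sketched as a freezing plan over an ordered stack of layers: everything below the chosen cut-off keeps its pretrained weights, and everything at or above it is fine-tuned on the target medical images. The layer names and cut-off index here are purely illustrative, not the dissertation's actual architecture.

```python
# Toy sketch of transfer learning with a cut-off layer: layers below the
# cut-off index stay frozen (pretrained), layers at or above it are
# fine-tuned on the new medical imaging task.
LAYERS = ["conv1", "conv2", "conv3", "conv4", "fc"]

def freeze_below_cutoff(layers, ocl):
    """Return a dict mapping layer name -> trainable flag."""
    return {name: (i >= ocl) for i, name in enumerate(layers)}

plan = freeze_below_cutoff(LAYERS, ocl=3)
trainable = [name for name, flag in plan.items() if flag]
```

In a real framework the same plan would be applied by disabling gradient updates for the frozen layers; searching over `ocl` values and comparing validation performance is one way to locate the optimal cut-off empirically.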
Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models
Advances in deep neural networks (DNNs) have shown tremendous promise in the
medical domain. However, the deep learning tools that are helping the domain,
can also be used against it. Given the prevalence of fraud in the healthcare
domain, it is important to consider the adversarial use of DNNs in manipulating
sensitive data that is crucial to patient healthcare. In this work, we present
the design and implementation of a DNN-based image translation attack on
biomedical imagery. More specifically, we propose Jekyll, a neural style
transfer framework that takes as input a biomedical image of a patient and
translates it to a new image that indicates an attacker-chosen disease
condition. The potential for fraudulent claims based on such generated 'fake'
medical images is significant, and we demonstrate successful attacks on both
X-rays and retinal fundus image modalities. We show that these attacks manage
to mislead both medical professionals and algorithmic detection schemes.
Lastly, we also investigate defensive measures based on machine learning to
detect images generated by Jekyll.
Comment: Published in proceedings of the 5th European Symposium on Security
and Privacy (EuroS&P '20)
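The defensive side can be illustrated with a deliberately simple detector: if real images and generator outputs separate in some feature space, even a nearest-centroid rule flags most generated samples. The Gaussian feature vectors below are a stand-in assumption, not Jekyll's actual detection features.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical feature vectors (e.g. noise-residual statistics):
# real images cluster around one mean, generator outputs around another
real = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
fake = rng.normal(loc=2.0, scale=1.0, size=(100, 8))

def fit_centroids(real, fake):
    """Estimate one centroid per class from labeled feature vectors."""
    return real.mean(axis=0), fake.mean(axis=0)

def is_generated(x, c_real, c_fake):
    """Nearest-centroid rule: flag a sample as generated if it lies
    closer to the centroid of generator outputs."""
    return np.linalg.norm(x - c_fake) < np.linalg.norm(x - c_real)

c_real, c_fake = fit_centroids(real, fake)
acc = np.mean([is_generated(x, c_real, c_fake) for x in fake])
fp = np.mean([is_generated(x, c_real, c_fake) for x in real])
```

Real detectors train far richer classifiers on learned features, but the workflow is the same: fit on labeled real/generated examples, then score incoming images.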
Generative models improve fairness of medical classifiers under distribution shifts
A ubiquitous challenge in machine learning is the problem of domain
generalisation. This can exacerbate bias against groups or labels that are
underrepresented in the datasets used for model development. Model bias can
lead to unintended harms, especially in safety-critical applications like
healthcare. Furthermore, the challenge is compounded by the difficulty of
obtaining labelled data due to high cost or lack of readily available domain
expertise. In our work, we show that learning realistic augmentations
automatically from data is possible in a label-efficient manner using
generative models. In particular, we leverage the higher abundance of
unlabelled data to capture the underlying data distribution of different
conditions and subgroups for an imaging modality. By conditioning generative
models on appropriate labels, we can steer the distribution of synthetic
examples according to specific requirements. We demonstrate that these learned
augmentations can surpass heuristic ones by making models more robust and
statistically fair in- and out-of-distribution. To evaluate the generality of
our approach, we study 3 distinct medical imaging contexts of varying
difficulty: (i) histopathology images from a publicly available generalisation
benchmark, (ii) chest X-rays from publicly available clinical datasets, and
(iii) dermatology images characterised by complex shifts and imaging
conditions. Complementing real training samples with synthetic ones improves
the robustness of models in all three medical tasks and increases fairness by
improving the accuracy of diagnosis within underrepresented groups. This
approach leads to stark out-of-distribution (OOD) improvements across
modalities: a 7.7% prediction accuracy improvement in histopathology, a 5.2%
improvement in chest radiology with a 44.6% lower fairness gap, and a striking
63.5% improvement in high-risk sensitivity for dermatology with a 7.5x
reduction in the fairness gap.
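A minimal sketch of the rebalancing step (the cell counts below are made up for illustration): given per-(condition, subgroup) counts of real images, one can compute how many label-conditioned synthetic samples to request from the generative model so every cell reaches parity with the largest one.

```python
from collections import Counter

# observed real-image counts per (condition, subgroup); the minority
# subgroup is underrepresented in both conditions
real_counts = Counter({
    ("disease", "group_A"): 500,
    ("disease", "group_B"): 50,
    ("healthy", "group_A"): 600,
    ("healthy", "group_B"): 80,
})

def synthetic_budget(counts):
    """Number of label-conditioned synthetic samples to draw per cell so
    every (condition, subgroup) cell reaches the size of the largest."""
    target = max(counts.values())
    return {cell: target - n for cell, n in counts.items()}

budget = synthetic_budget(real_counts)
```

Conditioning the generative model on each cell's labels then steers sampling toward exactly these deficits, which is the mechanism the abstract describes for improving fairness in underrepresented groups.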