VANT-GAN: adversarial learning for discrepancy-based visual attribution in medical imaging

Abstract

Visual attribution (VA) for medical images is an essential aspect of modern automation-assisted diagnosis. Since pixel-level ground-truth labelling of medical images is generally not straightforward to obtain, classification-based interpretation approaches have become the de facto standard for automated diagnosis, harnessing within the learning algorithm the ability of classifiers to make categorical predictions based on class-salient regions. Such regions, however, typically constitute only a small subset of the full range of features of potential medical interest, and may hence not be useful for VA of medical images, where capturing all of the disease evidence is a critical requirement. This motivates a novel strategy for visual attribution that does not rely on image classification. We instead obtain normal counterparts of abnormal images and compute discrepancy maps between the two. To perform the abnormal-to-normal mapping in an unsupervised way, we employ a cycle-consistency generative adversarial network (CycleGAN), thereby formulating visual attribution in terms of a discrepancy map that, when subtracted from the abnormal image, makes it indistinguishable from the counterpart normal image. Experiments are performed on three datasets: a synthetic dataset, the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the BraTS dataset. We outperform baseline and related methods in all experiments.
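
The core formulation can be illustrated with a minimal sketch: given an abnormal-to-normal generator G (trained adversarially, e.g. within a CycleGAN), the attribution map is the residual m(x) = x - G(x), so that x - m(x) = G(x) should be indistinguishable from a normal image. The snippet below is a hypothetical illustration, not the authors' implementation; the toy generator architecture, tensor sizes, and function names are all assumptions.

```python
# Sketch of discrepancy-based visual attribution (illustrative only).
# Assumption: G is a trained abnormal-to-normal generator; the toy
# architecture below stands in for a real CycleGAN generator.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Toy abnormal-to-normal generator (placeholder for a CycleGAN generator)."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def discrepancy_map(G: nn.Module, x_abnormal: torch.Tensor) -> torch.Tensor:
    """Visual attribution as the difference between an abnormal image
    and its generated normal counterpart: m(x) = x - G(x)."""
    with torch.no_grad():
        x_normal = G(x_abnormal)   # generated "healthy" counterpart
    return x_abnormal - x_normal   # residual highlights disease evidence


if __name__ == "__main__":
    G = Generator()
    x = torch.randn(1, 1, 64, 64)  # dummy single-channel abnormal image
    m = discrepancy_map(G, x)
    print(m.shape)                 # torch.Size([1, 1, 64, 64])
```

By construction, subtracting the map from the input recovers the generated normal image (x - m(x) = G(x)), which is the indistinguishability requirement the adversarial training enforces.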