Defending against adversarial attacks on medical imaging AI system, classification or detection?
Medical imaging AI systems, such as those for disease classification and segmentation, are increasingly inspired by and adapted from computer-vision-based AI systems. Although an array of adversarial-training and loss-function-based defense techniques have been developed and proven effective in computer vision, defending against adversarial attacks on medical images remains largely uncharted territory due to the following unique challenges: 1) label scarcity in medical images significantly limits the adversarial generalizability of the AI system; 2) the vastly similar and dominant fore- and background of medical images make them hard samples for learning the discriminating features between different disease classes; and 3) crafted adversarial noise added to the entire medical image, as opposed to the focused organ target, can make clean and adversarial examples more separable from each other than different disease classes are. In this paper, we propose a novel robust medical imaging AI framework based on Semi-Supervised Adversarial Training (SSAT) and Unsupervised Adversarial Detection (UAD), together with a new measure for assessing the system's adversarial risk. We systematically demonstrate the advantages of our robust medical imaging AI system over existing adversarial defense techniques under diverse real-world adversarial attack settings using a benchmark OCT imaging data set.
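As a rough illustration of the semi-supervised adversarial training idea, the sketch below combines a supervised adversarial loss on labeled images with a pseudo-label adversarial loss on unlabeled images, assuming a PyTorch classifier; the FGSM perturbation, the pseudo-labelling rule, and the weighting `lam` are illustrative assumptions, not the paper's SSAT recipe, and the UAD detector is not shown.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=2 / 255):
    """One-step (FGSM) adversarial example; a stronger multi-step attack could be swapped in."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def ssat_step(model, x_lab, y_lab, x_unlab, lam=1.0):
    """Supervised adversarial loss on labeled images plus a pseudo-label
    adversarial loss on unlabeled images (the semi-supervised part)."""
    loss_sup = F.cross_entropy(model(fgsm_perturb(model, x_lab, y_lab)), y_lab)
    with torch.no_grad():                       # pseudo-labels from the current model
        y_pseudo = model(x_unlab).argmax(dim=1)
    loss_unsup = F.cross_entropy(model(fgsm_perturb(model, x_unlab, y_pseudo)), y_pseudo)
    return loss_sup + lam * loss_unsup
```

One plausible realization of the UAD component, not shown above, would flag inputs whose feature-space reconstruction error under an autoencoder fit on clean data is unusually large; the paper's actual detector may differ.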
Bias Field Poses a Threat to DNN-based X-Ray Recognition
The chest X-ray plays a key role in the screening and diagnosis of many lung diseases, including COVID-19. More recently, many works have constructed deep neural networks (DNNs) for chest X-ray images to realize automated and efficient diagnosis of lung diseases. However, the bias field caused by improper medical image acquisition is widespread in chest X-ray images, while the robustness of DNNs to the bias field is rarely explored, which poses a clear threat to X-ray-based automated diagnosis systems. In this paper, we study this problem from the perspective of recent adversarial attacks and propose a brand-new attack, i.e., the adversarial bias field attack, where the bias field, instead of additive noise, serves as the adversarial perturbation for fooling the DNNs. This novel attack poses a key problem: how to locally tune the bias field to achieve a high attack success rate while maintaining its spatial smoothness to guarantee high realism. These two goals contradict each other, which makes the attack significantly challenging. To overcome this challenge, we propose the adversarial-smooth bias field attack, which locally tunes the bias field under joint smoothness and adversarial constraints. As a result, the adversarial X-ray images can not only fool the DNNs effectively but also retain a very high level of realism. We validate our method on real chest X-ray datasets with powerful DNNs, e.g., ResNet50, DenseNet121, and MobileNet, and show properties different from state-of-the-art attacks in both image realism and attack transferability. Our method reveals a potential threat to DNN-based automated X-ray diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.
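For intuition about what a multiplicative, spatially smooth adversarial bias field could look like in code, the sketch below parameterizes the field as the exponential of a low-degree polynomial of normalized pixel coordinates (smooth by construction) and tunes its coefficients to raise the classification loss while keeping the field close to flat; the polynomial form, the coefficient penalty used as a realism surrogate, and all hyperparameters are assumptions for illustration, not the paper's adversarial-smooth formulation.

```python
import torch
import torch.nn.functional as F

def polynomial_bias_field(coeffs, h, w, degree=3):
    """Field exp(sum_{i+j<=degree} c_ij * x^i * y^j) over normalized coordinates:
    smooth by construction and equal to 1 everywhere when all coefficients are 0."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    field = torch.zeros(h, w)
    k = 0
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            field = field + coeffs[k] * xs.pow(i) * ys.pow(j)
            k += 1
    return field.exp()

def bias_field_attack(model, x, y, degree=3, steps=50, lr=0.01, flat_weight=1.0):
    """Tune the bias-field coefficients to push the classifier toward error while
    penalizing deviation from a flat field (a crude realism surrogate)."""
    n_coeffs = sum(degree + 1 - i for i in range(degree + 1))
    coeffs = torch.zeros(n_coeffs, requires_grad=True)
    opt = torch.optim.Adam([coeffs], lr=lr)
    h, w = x.shape[-2], x.shape[-1]
    for _ in range(steps):
        x_adv = (x * polynomial_bias_field(coeffs, h, w, degree)).clamp(0, 1)
        # Maximize classification loss, keep the field close to identity.
        loss = -F.cross_entropy(model(x_adv), y) + flat_weight * coeffs.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (x * polynomial_bias_field(coeffs, h, w, degree)).clamp(0, 1)
```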
Certifiably Robust Interpretation via Renyi Differential Privacy
Motivated by the recent discovery that the interpretation maps of CNNs can easily be manipulated by adversarial attacks against network interpretability, we study the problem of interpretation robustness from the new perspective of Rényi differential privacy (RDP). The advantages of our Renyi-Robust-Smooth (RDP-based interpretation method) are threefold. First, it can offer provable and certifiable top-k robustness: the top-k important attributions of the interpretation map are provably robust under any input perturbation with bounded ℓp-norm (for any p ≥ 1, including p = ∞). Second, our proposed method offers better experimental robustness than existing approaches in terms of the top-k attributions. Remarkably, the accuracy of Renyi-Robust-Smooth also outperforms existing approaches. Third, our method provides a smooth tradeoff between robustness and computational efficiency. Experimentally, its top-k attributions are twice as robust as existing approaches when computational resources are highly constrained.
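For a concrete flavor of smoothing an interpretation map, the sketch below averages plain gradient saliency over Gaussian-perturbed copies of the input and returns the top-k attributions; the saliency method, noise scale, sample count, and k are illustrative assumptions, and this simple averaging carries none of the RDP-based certificates described above.

```python
import torch

def smoothed_saliency(model, x, target, sigma=0.1, n_samples=32, k=100):
    """Average |d logit_target / d input| over Gaussian-perturbed copies of a
    single image x (shape [1, C, H, W]) and return the k largest attributions."""
    acc = torch.zeros_like(x)
    for _ in range(n_samples):
        x_noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        logit = model(x_noisy)[0, target]        # score of the explained class
        grad, = torch.autograd.grad(logit, x_noisy)
        acc += grad.abs()
    saliency = (acc / n_samples).flatten()
    topk_idx = saliency.topk(k).indices          # smoothed top-k attributions
    return saliency, topk_idx
```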