5,299 research outputs found
Adversarially Robust Distillation
Knowledge distillation is effective for producing small, high-performance
neural networks for classification, but these small networks are vulnerable to
adversarial attacks. This paper studies how adversarial robustness transfers
from teacher to student during knowledge distillation. First, we find that a
large amount of robustness may be inherited by the student even when distilled
on only clean images. Second, we introduce Adversarially Robust Distillation (ARD)
for distilling robustness onto student networks. In addition to producing small
models with high test accuracy like conventional distillation, ARD also passes
the superior robustness of large networks onto the student. In our experiments,
we find that ARD student models decisively outperform adversarially trained
networks of identical architecture in terms of robust accuracy, surpassing
state-of-the-art methods on standard robustness benchmarks. Finally, we adapt
recent fast adversarial training methods to ARD for accelerated robust
distillation.
Comment: Accepted to AAAI Conference on Artificial Intelligence, 202
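The objective described above — matching the teacher's clean-image soft labels with the student's predictions on adversarial inputs — can be sketched in a few lines. This is an illustrative numpy sketch, not the paper's implementation: the function names are hypothetical, and a one-step FGSM attack on a toy linear classifier stands in for the stronger attacks an actual training loop would use.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q) per example, with a small eps for numerical safety.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def fgsm_linear(x, y, W, b, eps):
    # One-step FGSM on a linear softmax classifier (illustrative attack).
    # Gradient of cross-entropy w.r.t. x is (p - onehot) @ W.T.
    p = softmax(x @ W + b)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (p - onehot) @ W.T
    return x + eps * np.sign(grad_x)

def ard_style_loss(student_logits_adv, teacher_logits_clean, y, T=4.0, alpha=0.9):
    # ARD-style objective (sketch): distill the teacher's *clean* soft labels
    # onto the student's predictions on *adversarial* inputs, plus a clean
    # cross-entropy term. Weighting and temperature are assumed values.
    p_teacher = softmax(teacher_logits_clean, T)
    q_student = softmax(student_logits_adv, T)
    distill = (T ** 2) * kl_div(p_teacher, q_student).mean()
    probs = softmax(student_logits_adv)
    ce = -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()
    return alpha * distill + (1.0 - alpha) * ce
```

In the paper's actual training loop the adversarial examples are crafted against the distillation loss itself and the models are deep networks; the linear FGSM here only keeps the sketch self-contained.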
DAD++: Improved Data-free Test Time Adversarial Defense
With the increasing deployment of deep neural networks in safety-critical
applications such as self-driving cars, medical imaging, anomaly detection,
etc., adversarial robustness has become a crucial concern in the reliability of
these networks in real-world scenarios. A plethora of works based on
adversarial training and regularization-based techniques have been proposed to
make these deep networks robust against adversarial attacks. However, these
methods require either retraining models or training them from scratch, making
them infeasible for defending pre-trained models when access to the training
data is restricted. To address this problem, we propose a test-time Data-free
Adversarial Defense (DAD) containing detection and correction frameworks.
Moreover, to further improve the efficacy of the correction framework in cases
when the detector is under-confident, we propose a soft-detection scheme
(dubbed "DAD++"). We conduct a wide range of experiments and ablations on
several datasets and network architectures to show the efficacy of our proposed
approach. Furthermore, we demonstrate the applicability of our approach in
imparting adversarial defense at test time under data-free (or data-efficient)
applications/setups, such as Data-free Knowledge Distillation and Source-free
Unsupervised Domain Adaptation, as well as Semi-supervised classification
frameworks. We observe that in all the experiments and applications, our DAD++
gives an impressive performance against various adversarial attacks with a
minimal drop in clean accuracy. The source code is available at:
https://github.com/vcl-iisc/Improved-Data-free-Test-Time-Adversarial-Defense
Comment: IJCV Journal (Under Review)
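The abstract does not spell out the soft-detection scheme, but its stated motivation — handling an under-confident detector — suggests blending rather than hard thresholding. The sketch below is an assumption-laden illustration, not the paper's rule: it weights a corrected prediction by the detector's confidence that the input is adversarial, so a hard accept/reject decision is avoided.

```python
import numpy as np

def soft_defended_predict(p_raw, p_corrected, detect_score):
    """Soft-detection blend (illustrative; the paper's exact rule may differ).

    Instead of a hard detector threshold, weight the corrected prediction by
    the detector's confidence that the input is adversarial, so an
    under-confident detector degrades gracefully instead of flipping between
    the two branches.
    """
    w = np.clip(np.asarray(detect_score), 0.0, 1.0)[:, None]
    blended = (1.0 - w) * p_raw + w * p_corrected
    # Renormalize so each row is a valid probability distribution.
    return blended / blended.sum(axis=-1, keepdims=True)
```

With `detect_score = 0` the raw prediction is returned unchanged, and with `detect_score = 1` the corrected one; intermediate scores interpolate between them.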
Learning Segmentation Masks with the Independence Prior
An instance cut out with a poor mask makes any composite image that uses it
look fake. This observation encourages us to learn segmentation by generating realistic
composite images. To achieve this, we propose a novel framework, based on
Generative Adversarial Networks (GANs), that exploits a new prior we call the
independence prior. The generator produces an image using multiple
category-specific instance providers, a layout module, and a composition module.
Firstly, each provider independently outputs a category-specific instance image
with a soft mask. Then the provided instances' poses are corrected by the
layout module. Lastly, the composition module combines these instances into a
final image. Training with adversarial loss and penalty for mask area, each
provider learns a mask that is as small as possible but enough to cover a
complete category-specific instance. Weakly supervised semantic segmentation
methods widely use grouping cues that model associations between image parts;
these cues are either hand-crafted, learned with costly segmentation labels, or
modeled only on local pairs. Unlike them, our method automatically models the
dependence between any parts and learns instance segmentation. We
apply our framework in two cases: (1) Foreground segmentation on
category-specific images with box-level annotation. (2) Unsupervised learning
of instance appearances and masks with only one image of a homogeneous object
cluster (HOC). We obtain appealing results in both tasks, showing that the
independence prior is useful for instance segmentation and that it is possible
to learn instance masks without supervision from only a single image.
Comment: 7+5 pages, 13 figures, Accepted to AAAI 201
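The generator's last two steps — compositing provider outputs through soft masks, and penalizing mask area so each mask shrinks to just cover its instance — can be sketched directly. This is a minimal numpy illustration under assumed names; the actual framework stacks multiple providers and feeds the composite to a GAN discriminator.

```python
import numpy as np

def composite(background, instance, mask):
    # Alpha-composite a provider's instance onto the background through its
    # soft mask (values in [0, 1]); the result is what the discriminator
    # would judge for realism.
    return mask * instance + (1.0 - mask) * background

def mask_area_penalty(mask, lam=0.1):
    # Penalize mask area so each provider learns the smallest mask that still
    # covers a complete category-specific instance. `lam` is an assumed
    # trade-off weight against the adversarial loss.
    return lam * mask.mean()
```

The total generator objective would then be an adversarial realism term plus this area penalty summed over providers, which is what drives masks to be "as small as possible but enough to cover a complete instance."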