MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to
so-called adversarial attacks where input examples are intentionally perturbed
to fool DNNs. In this work, we revisit adversarial training, the practice of
including adversarial examples in the training dataset to improve a DNN's
resilience to such attacks. Our
experiments show that different adversarial strengths, i.e., perturbation
levels of adversarial examples, have different working zones to resist the
attack. Based on this observation, we propose a multi-strength adversarial
training method (MAT) that combines adversarial training examples with
different adversarial strengths to defend against adversarial attacks. Two
training structures - mixed MAT and parallel MAT - are developed to trade off
training time against memory occupation. Our results show that MAT can
substantially reduce the accuracy degradation of deep learning systems under
adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
Comment: 6 pages, 4 figures, 2 tables
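The mixed-MAT idea - training on adversarial examples generated at several strengths at once - can be sketched minimally. The logistic-regression "network", the FGSM-style attack, and the epsilon values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(np.clip(-z, -30.0, 30.0)))

def fgsm(x, y, w, eps):
    """One-step perturbation of size eps along the sign of the input gradient."""
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]  # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

# Toy linearly separable binary task.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

w = np.zeros(5)
epsilons = [0.05, 0.1, 0.2]  # multiple adversarial strengths, mixed together
for _ in range(300):
    # Mixed MAT: each step trains on the clean batch plus one adversarial
    # batch per strength, so all strengths' "working zones" are covered.
    for xb in [X] + [fgsm(X, y, w, e) for e in epsilons]:
        w -= 0.1 * xb.T @ (sigmoid(xb @ w) - y) / len(y)

clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
adv_acc = ((sigmoid(fgsm(X, y, w, 0.2) @ w) > 0.5) == y).mean()
```

A parallel-MAT variant would instead train one model per strength and combine their outputs, trading memory for training time.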
Cycle Self-Training for Semi-Supervised Object Detection with Distribution Consistency Reweighting
Recently, many semi-supervised object detection (SSOD) methods have adopted the
teacher-student framework and achieved state-of-the-art results. However,
the teacher network is tightly coupled with the student network since the
teacher is an exponential moving average (EMA) of the student, which causes a
performance bottleneck. To address the coupling problem, we propose a Cycle
Self-Training (CST) framework for SSOD, which consists of two teachers, T1 and
T2, and two students, S1 and S2. Based on these networks, a cycle self-training
mechanism is built, i.e., S1 → T1 → S2 → T2 → S1. For S → T, we still utilize
the EMA weights of the students to update the teachers. For T → S, instead of
directly supervising its own student S1 (S2), the teacher T1 (T2) generates
pseudo-labels for the other student S2 (S1), which loosens the coupling effect.
Moreover, owing to the
property of EMA, the teacher is most likely to accumulate the biases from the
student and make the mistakes irreversible. To mitigate the problem, we also
propose a distribution consistency reweighting strategy, where pseudo-labels
are reweighted based on distribution consistency across the teachers T1 and T2.
With the strategy, the two students S2 and S1 can be trained robustly with
noisy pseudo-labels and avoid confirmation bias. Extensive experiments
demonstrate the superiority of CST: it consistently improves AP over the
baseline and outperforms state-of-the-art methods by 2.1% absolute AP under
scarce labeled data.
Comment: ACM Multimedia 202
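As a rough illustration of the cycle, the sketch below uses toy linear "detectors" in place of detection networks: teachers track their students via EMA, each teacher pseudo-labels the *other* student, and pseudo-labels are down-weighted where the two teachers' predictions disagree. The models, the decay value, and the exact consistency weight are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(np.clip(-z, -30.0, 30.0)))

def ema(teacher, student, decay=0.9):
    """S -> T edge: teacher weights track an EMA of the student weights."""
    return decay * teacher + (1.0 - decay) * student

# Unlabeled toy data; no ground-truth labels are used below.
X = rng.normal(size=(100, 4))
s1 = 0.1 * rng.normal(size=4)
s2 = 0.1 * rng.normal(size=4)
t1, t2 = s1.copy(), s2.copy()

for _ in range(50):
    p1, p2 = sigmoid(X @ t1), sigmoid(X @ t2)
    # Distribution-consistency reweighting: down-weight samples on which
    # the two teachers' predicted distributions disagree.
    weight = 1.0 - np.abs(p1 - p2)
    # T -> S edges (cross supervision): T1 pseudo-labels S2, T2 pseudo-labels S1.
    for s, p_teacher in ((s2, p1), (s1, p2)):
        pseudo = (p_teacher > 0.5).astype(float)
        s -= 0.1 * X.T @ (weight * (sigmoid(X @ s) - pseudo)) / len(X)
    # S -> T edges: teachers follow their own students via EMA, closing the cycle.
    t1, t2 = ema(t1, s1), ema(t2, s2)

agreement = ((sigmoid(X @ t1) > 0.5) == (sigmoid(X @ t2) > 0.5)).mean()
```

The key structural point is visible in the inner loop: neither student is ever trained on pseudo-labels produced from its own EMA teacher, which is what breaks the coupling the abstract describes.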
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness
Recently, smoothing deep neural network based classifiers via isotropic
Gaussian perturbation has been shown to be an effective and scalable way to
provide state-of-the-art probabilistic robustness guarantees against
norm-bounded adversarial perturbations. However, how to train a good base
classifier
that is accurate and robust when smoothed has not been fully investigated. In
this work, we derive a new regularized risk, in which the regularizer can
adaptively encourage the accuracy and robustness of the smoothed counterpart
when training the base classifier. It is computationally efficient and can be
implemented in parallel with other empirical defense methods. We discuss how to
implement it under both standard (non-adversarial) and adversarial training
schemes. We also design a new certification algorithm that can leverage the
regularization effect to provide a tighter robustness lower bound that holds
with high probability. Our extensive experiments demonstrate the effectiveness
of the proposed training and certification approaches on the CIFAR-10 and
ImageNet datasets.
Comment: AAAI202
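The general certification idea for a Gaussian-smoothed classifier can be illustrated with a Monte-Carlo sketch. Everything below is an illustrative assumption, not the paper's algorithm: a linear base classifier, a simple Hoeffding lower confidence bound on the top-class probability (in place of the tighter bound the paper derives), and the standard smoothing radius sigma * Phi^-1(p_lower).

```python
import numpy as np
from math import log, sqrt
from statistics import NormalDist

rng = np.random.default_rng(0)

def base_classifier(x, w):
    """Toy binary base classifier: predicts class 1 iff w . x > 0."""
    return int(x @ w > 0)

def certify(x, w, sigma, n=2000, alpha=0.001):
    """Monte-Carlo certification of the Gaussian-smoothed classifier at x.

    Returns (predicted class, certified L2 radius); radius 0.0 means abstain.
    """
    votes = np.zeros(2)
    for _ in range(n):
        noisy = x + sigma * rng.normal(size=x.shape)  # isotropic Gaussian noise
        votes[base_classifier(noisy, w)] += 1
    c = int(votes.argmax())
    # Hoeffding lower confidence bound on the top-class probability,
    # valid with probability >= 1 - alpha.
    p_lower = votes[c] / n - sqrt(log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return c, 0.0
    # Certified radius: R = sigma * Phi^{-1}(p_lower).
    return c, sigma * NormalDist().inv_cdf(p_lower)

w = np.array([1.0, 1.0])
x = np.array([2.0, 2.0])  # far from the decision boundary -> nonzero radius
cls, radius = certify(x, w, sigma=0.5)
```

A tighter confidence bound (or the regularized base classifier the paper trains) would raise p_lower and thus the certified radius; the Monte-Carlo structure stays the same.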