Harnessing adversarial examples with a surprisingly simple defense
I introduce a very simple method to defend against adversarial examples. The
basic idea is to raise the slope of the ReLU function at test time.
Experiments over MNIST and CIFAR-10 datasets demonstrate the effectiveness of
the proposed defense against a number of strong attacks in both untargeted and
targeted settings. While perhaps not as effective as state-of-the-art
adversarial defenses, this approach can provide insight into understanding and
mitigating adversarial attacks. It can also be used in conjunction with other
defenses.
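The core idea of the abstract can be sketched in a few lines: keep the network's trained weights, but multiply the positive part of the ReLU by a slope k > 1 only at inference. The NumPy snippet below is a minimal illustration under stated assumptions; the toy two-layer network, the random weights, and the slope value k=5.0 are illustrative placeholders, not the paper's trained models or tuned parameters.

```python
import numpy as np

def sloped_relu(x, k=1.0):
    """ReLU with positive-part slope k; k=1.0 recovers the standard ReLU.
    Per the abstract, the slope is raised only at test time as a defense."""
    return k * np.maximum(0.0, x)

# Toy two-layer network with placeholder (untrained) weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x, k=1.0):
    # Same weights in both regimes; only the activation slope changes.
    return sloped_relu(x @ W1, k=k) @ W2

x = rng.standard_normal(4)
clean_logits = forward(x, k=1.0)     # training-time behaviour (standard ReLU)
defended_logits = forward(x, k=5.0)  # hypothetical test-time slope increase
```

Note that in this single-hidden-layer sketch the second layer is linear, so raising the slope simply rescales the logits; in deeper networks with interleaved nonlinearities the effect on the decision boundary is less trivial, which is where the defensive behaviour reported in the abstract would arise.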