Domain Invariant Adversarial Learning
The phenomenon of adversarial examples illustrates one of the most basic
vulnerabilities of deep neural networks. Among the variety of techniques
introduced to surmount this inherent weakness, adversarial training has emerged
as the most common and efficient strategy to achieve robustness. Typically,
this is achieved by balancing robust and natural objectives. In this work, we
aim to achieve a better trade-off between robust and natural performance by
enforcing a domain-invariant feature representation. We present a new
adversarial training method, Domain Invariant Adversarial Learning (DIAL),
which learns a feature representation that is both robust and domain
invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on
the natural domain and its corresponding adversarial domain. In the setting where
the source domain consists of natural examples and the target domain is the
adversarially perturbed examples, our method learns a feature representation
constrained not to discriminate between the natural and adversarial examples,
and can therefore achieve a more robust representation. Our experiments
indicate that our method improves both robustness and natural accuracy
compared to current state-of-the-art adversarial training methods.
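The core idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the feature vectors, discriminator weights, and task losses below are toy placeholders standing in for the outputs of a shared feature extractor, a DANN-style domain discriminator, and the usual natural/robust classification objectives.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(p, y):
    # loss for predicted probability p and domain label y in {0, 1}
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy features for one natural example and its adversarially perturbed
# counterpart (in practice, outputs of the shared feature extractor).
feat_nat = np.array([0.5, -1.0])
feat_adv = np.array([0.6, -0.9])

# Domain discriminator: logistic regression over the features,
# predicting P(domain = adversarial | features).
w, b = np.array([0.1, 0.2]), 0.0
p_nat = sigmoid(feat_nat @ w + b)
p_adv = sigmoid(feat_adv @ w + b)

# Discriminator loss: natural examples labeled 0, adversarial labeled 1.
domain_loss = binary_cross_entropy(p_nat, 0) + binary_cross_entropy(p_adv, 1)

# DIAL-style composite objective (sketch): task losses plus a weighted
# domain term. In DANN, a gradient-reversal layer flips the sign of the
# domain gradient flowing into the feature extractor, so the extractor is
# pushed toward features the discriminator cannot separate (domain
# invariance) while the discriminator itself maximizes separability.
natural_loss, robust_loss, lam = 0.8, 1.2, 1.0  # placeholder values
total_loss = natural_loss + robust_loss + lam * domain_loss
```

When the features of natural and adversarial examples become indistinguishable to the discriminator, the domain term approaches its minimum and the representation is, in this sense, domain invariant.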