AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking
Deep neural networks (DNNs) are vulnerable to adversarial examples, which can
lead to catastrophic failures in security-critical domains. Numerous detection
methods have been proposed to characterize the feature uniqueness of adversarial
examples or to distinguish the DNN behaviors they activate. Feature-based
detections cannot handle adversarial examples with large perturbations and
require a large number of specific adversarial examples. The other mainstream,
model-based detections, which characterize input properties through model
behaviors, suffer from heavy computation costs. To address these issues, we
introduce the concept of the local gradient and reveal that adversarial examples
have a much larger bound on the local gradient than benign ones. Inspired by
this observation, we leverage the local gradient to detect adversarial examples
and propose a general framework, AdvCheck. Specifically, by calculating local
gradients from a few benign examples and noise-added misclassified examples to
train a detector, adversarial examples and even misclassified natural inputs
can be precisely distinguished from benign ones. Through
extensive experiments, we have validated AdvCheck's superior performance over
the state-of-the-art (SOTA) baselines, with detection rate () on general
adversarial attacks and () on misclassified natural inputs on average, and with
1/500 of the time cost on average. We also provide interpretable results for
successful detection.
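
The abstract does not spell out how the local gradient is computed. As one hedged illustration, a layer-level local gradient could be approximated by the finite-difference change in an intermediate layer's activations under small random input noise. The PyTorch sketch below follows that assumption; the function name local_gradient, the choice of layer, and the parameters eps and n_samples are illustrative and not the paper's definitions.

```python
# A minimal sketch, not the authors' implementation: it assumes the "local
# gradient" can be approximated as the finite-difference change of a chosen
# layer's activations under small random input perturbations.
import torch
import torch.nn as nn


def local_gradient(model: nn.Module, x: torch.Tensor, layer: nn.Module,
                   eps: float = 1e-3, n_samples: int = 8) -> torch.Tensor:
    """Per-example finite-difference proxy for a layer-level local gradient."""
    acts = {}
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.update(out=out.detach()))
    try:
        model.eval()
        with torch.no_grad():
            model(x)
            base = acts["out"]  # activations on the clean inputs
            diffs = []
            for _ in range(n_samples):
                model(x + eps * torch.randn_like(x))
                # mean absolute activation change per example
                diffs.append((acts["out"] - base).flatten(1).abs().mean(dim=1))
        # average over noise draws, normalized by the perturbation scale
        return torch.stack(diffs).mean(dim=0) / eps
    finally:
        handle.remove()


# Hypothetical usage: larger scores would indicate a larger local gradient,
# which the abstract associates with adversarial or misclassified inputs.
# scores = local_gradient(resnet, batch, resnet.layer3)
```

Under this reading, a simple detector could be trained on such scores computed from a few benign examples and noise-added misclassified examples, thresholding or classifying them to flag adversarial inputs.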