Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

Abstract

Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches. An ℓ0-norm adversarial perturbation characterizes an attack that arbitrarily modifies some features of an input such that a classifier makes an incorrect prediction for the perturbed input. ℓ0-norm adversarial perturbations are easy to interpret and can be implemented in the physical world. Therefore, certifying robustness of top-k predictions against ℓ0-norm adversarial perturbations is important. However, existing studies either focused on certifying ℓ0-norm robustness of top-1 predictions or ℓ2-norm robustness of top-k predictions. In this work, we aim to bridge the gap. Our approach is based on randomized smoothing, which builds a provably robust classifier from an arbitrary classifier via randomizing an input. Our major theoretical contribution is an almost tight ℓ0-norm certified robustness guarantee for top-k predictions. We empirically evaluate our method on CIFAR10 and ImageNet. For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
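To illustrate the randomized-smoothing idea the abstract describes, the following is a minimal sketch, not the paper's actual method or API: it randomizes the input by independently keeping each feature with some probability (an ablation-style randomization suited to ℓ0 perturbations, since an attacker-modified feature is likely dropped), runs a base classifier on many randomized copies, and returns the k most frequently predicted labels. All function names and parameters (`smoothed_top_k`, `keep_prob`, `n_samples`) are illustrative assumptions.

```python
import numpy as np

def smoothed_top_k(x, base_classify, num_classes, k=3,
                   n_samples=1000, keep_prob=0.8, rng=None):
    """Monte-Carlo estimate of a smoothed classifier's top-k labels.

    Each feature of x is independently kept with probability `keep_prob`
    and zeroed otherwise; the base classifier votes on each randomized
    copy, and the k most-voted labels are returned. This is only a
    sketch of the general smoothing idea, not the paper's construction.
    """
    rng = np.random.default_rng(rng)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        mask = rng.random(x.shape) < keep_prob  # ablation-style noise
        counts[base_classify(x * mask)] += 1
    # Labels sorted by descending vote count; take the top k.
    return list(np.argsort(counts)[::-1][:k])

# Toy base classifier: class 0 if enough features survive, else class 1.
base = lambda z: 0 if z.sum() > 2.0 else 1
x = np.ones(8)
top2 = smoothed_top_k(x, base, num_classes=3, k=2, n_samples=200, rng=0)
```

A certified guarantee would then bound how many feature modifications can change whether a label stays in the top-k, using the gap between the vote counts; deriving that bound is the paper's main theoretical contribution and is not shown here.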
