Adversarial Examples from Dimensional Invariance

Abstract

Adversarial examples have been found for a variety of deep as well as shallow learning models, and have at various times been suggested to be either fixable model-specific bugs, inherent dataset features, or both. We present theoretical and empirical results showing that adversarial examples are approximate discontinuities arising when a model specifies an approximately bijective map $f: \Bbb R^n \to \Bbb R^m$, $n \neq m$, over its inputs; this discontinuity follows from the topological invariance of dimension.

Comment: 6 pages
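The following is a minimal numerical sketch, not taken from the paper, of the phenomenon the abstract describes: for a model realizing a map from R^n to R^m with n != m, a tiny input perturbation can produce a disproportionately large output change, behaving like an approximate discontinuity. The toy one-hidden-layer network, its dimensions, and the FGSM-style perturbation step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network f: R^n -> R^m with n != m; all sizes are illustrative.
n, h, m = 64, 128, 2
W1 = rng.normal(scale=1.0 / np.sqrt(n), size=(h, n))
W2 = rng.normal(scale=1.0 / np.sqrt(h), size=(m, h))

def f(x):
    """Forward pass of the toy one-hidden-layer network."""
    z = np.tanh(W1 @ x)
    return W2 @ z

def grad_output_norm(x):
    """Gradient of ||f(x)||^2 with respect to x, computed by hand."""
    z = np.tanh(W1 @ x)
    y = W2 @ z
    # d||y||^2 / dx = 2 * W1^T diag(1 - z^2) W2^T y
    return 2.0 * W1.T @ ((1.0 - z ** 2) * (W2.T @ y))

x = rng.normal(size=n)
eps = 1e-2                        # small perturbation budget (assumed)
delta = eps * np.sign(grad_output_norm(x))   # FGSM-style signed-gradient step

in_change = np.linalg.norm(delta)
out_change = np.linalg.norm(f(x + delta) - f(x))
print(f"input change      ||delta||          = {in_change:.4f}")
print(f"output change     ||f(x+d) - f(x)||  = {out_change:.4f}")
print(f"amplification     ratio              = {out_change / in_change:.2f}")
```

Running the sketch prints the ratio of output change to input change; a large ratio for a small perturbation budget is the "approximate discontinuity" behavior the abstract attributes to dimension-mismatched maps.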
