Adversarial examples have been found for various deep as well as shallow
learning models, and have at various times been suggested to be either fixable
model-specific bugs, or else inherent dataset features, or both. We present
theoretical and empirical results to show that adversarial examples are
approximate discontinuities resulting from models that specify approximately
bijective maps $f: \mathbb{R}^n \to \mathbb{R}^m$, $n \neq m$, over their inputs, and this
discontinuity follows from the topological invariance of dimension.
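The topological fact appealed to here can be sketched as follows (a standard statement of invariance of dimension; the precise formulation used in the paper may differ):
$$n \neq m \;\Longrightarrow\; \mathbb{R}^n \not\cong \mathbb{R}^m \quad \text{(no homeomorphism between them exists)},$$
so no bijection $f: \mathbb{R}^n \to \mathbb{R}^m$ can be a homeomorphism: either $f$ or $f^{-1}$ must be discontinuous somewhere, and a model realizing an approximately bijective $f$ can therefore be at best approximately continuous.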