Adversarial Perturbations Against Real-Time Video Classification Systems
Recent research has demonstrated the brittleness of machine learning systems
to adversarial perturbations. However, these studies have mostly been limited
to perturbations of images and, more generally, to classifiers that do not
deal with temporally varying inputs. In this paper we ask "Are adversarial
perturbations possible in real-time video classification systems and if so,
what properties must they satisfy?" Such systems are used in surveillance,
smart vehicles, and smart elderly care, so misclassification could be
particularly harmful (e.g., a mishap at an elderly
care facility may be missed). We show that accounting for temporal structure is
key to generating adversarial examples in such systems. We exploit recent
advances in generative adversarial network (GAN) architectures to account for
temporal correlations and generate adversarial samples that can cause
misclassification rates of over 80% for targeted activities. More importantly,
the samples also leave other activities largely unaffected making them
extremely stealthy. Finally, and surprisingly, we find that in many scenarios
the same perturbation can be applied to every frame in a video clip, which
makes it relatively easy for the adversary to achieve misclassification.
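The last observation, that a single perturbation can be reused across all frames of a clip, can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's actual method: the function name, the L-infinity budget `eps`, and the [0, 1] pixel range are all assumptions made for the example.

```python
import numpy as np

def apply_universal_perturbation(clip, delta, eps=8 / 255):
    """Add the SAME perturbation `delta` to every frame of `clip`.

    clip:  (T, H, W, C) float array, pixel values assumed in [0, 1]
    delta: (H, W, C) perturbation, projected onto an L-inf ball of radius eps
    """
    delta = np.clip(delta, -eps, eps)        # enforce the L-inf budget
    perturbed = clip + delta[None, ...]      # broadcast over the time axis
    return np.clip(perturbed, 0.0, 1.0)      # keep pixels in the valid range

# Toy usage: a 4-frame 8x8 RGB clip with a random (untrained) perturbation.
clip = np.random.rand(4, 8, 8, 3)
delta = np.random.uniform(-1, 1, size=(8, 8, 3))
adv = apply_universal_perturbation(clip, delta)
```

In practice the perturbation would be produced by a trained generator (the paper uses a GAN-based architecture to capture temporal correlations); the sketch only shows why the frame-agnostic case is cheap to apply, since a single (H, W, C) tensor broadcasts over the whole clip.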