Adversarial attacks on Copyright Detection Systems
It is well-known that many machine learning models are susceptible to
adversarial attacks, in which an attacker evades a classifier by making small
perturbations to inputs. This paper discusses how industrial copyright
detection tools, which serve a central role on the web, are susceptible to
adversarial attacks. We discuss a range of copyright detection systems, and why
they are particularly vulnerable to attacks. These vulnerabilities are
especially apparent for neural-network-based systems. As a proof of concept, we
describe a well-known music identification method, and implement this system in
the form of a neural net. We then attack this system using simple gradient
methods. Adversarial music created this way successfully fools industrial
systems, including the AudioTag copyright detector and YouTube's Content ID
system. Our goal is to raise awareness of the threats posed by adversarial
examples in this space, and to highlight the importance of hardening copyright
detection systems against attacks.
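The attack the abstract describes — differentiating through a fingerprinting model and perturbing the audio with simple gradient methods — can be illustrated with a minimal sketch. This is not the paper's actual system: the linear "fingerprint" scorer and the epsilon value below are stand-ins chosen so the gradient is trivial to compute; the point is only to show the fast-gradient-sign step that lowers a match score under a small perturbation budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def fingerprint_score(x, W):
    # Toy differentiable fingerprinter: a fixed linear projection that
    # scores how strongly signal x matches a reference template in W.
    return float(W @ x)

def fgsm_perturb(x, W, eps):
    # For a linear scorer, the gradient of the score w.r.t. x is just W.
    # Step against the gradient sign to *lower* the match score while
    # keeping every sample within +/- eps of the original.
    grad = W
    return x - eps * np.sign(grad)

n = 1024                      # samples in a short audio snippet
W = rng.standard_normal(n)    # stand-in for a trained fingerprint model
x = rng.standard_normal(n)    # "copyrighted" audio snippet

x_adv = fgsm_perturb(x, W, eps=0.05)

print(fingerprint_score(x, W))      # score before the attack
print(fingerprint_score(x_adv, W))  # noticeably lower score after
```

A real attack would replace the linear scorer with the neural-net reimplementation of the fingerprinting pipeline and backpropagate through it, but the perturbation step is the same in spirit: a small, bounded change in the waveform that moves the model's output away from a match.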