FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks
We propose FLARE, the first fingerprinting mechanism to verify whether a
suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of
another (victim) policy. We first show that it is possible to find
non-transferable, universal adversarial masks, i.e., perturbations, to generate
adversarial examples that can successfully transfer from a victim policy to its
modified versions but not to independently trained policies. FLARE employs
these masks as fingerprints to verify the true ownership of stolen DRL policies
by measuring an action agreement value over states perturbed via such masks.
Our empirical evaluations show that FLARE is effective (100% action agreement
on stolen copies) and does not falsely accuse independent policies (no false
positives). FLARE is also robust to model modification attacks and cannot be
easily evaded by more informed adversaries without negatively impacting agent
performance. We also show that not all universal adversarial masks are suitable
candidates for fingerprints due to the inherent characteristics of DRL
policies. The spatio-temporal dynamics of DRL problems and their sequential
decision-making process make it harder both to characterize the decision
boundary of DRL policies and to search for universal masks that capture its
geometry.
Comment: Will appear in the proceedings of ACSAC 2023; 13 pages, 5 figures, 7 tables
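The action-agreement check described above can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: the policies are stand-in linear argmax functions, and `mask` is a random placeholder for a learned universal adversarial mask.

```python
import numpy as np

def action_agreement(policy_a, policy_b, states, mask, eps=0.05):
    """Fraction of mask-perturbed states on which both policies pick the same action."""
    hits = 0
    for s in states:
        s_adv = np.clip(s + eps * mask, 0.0, 1.0)  # perturb the state with the universal mask
        hits += policy_a(s_adv) == policy_b(s_adv)
    return hits / len(states)

rng = np.random.default_rng(0)
W_victim = rng.normal(size=(4, 8))                    # toy linear policy: 8-d state, 4 actions
W_indep = rng.normal(size=(4, 8))                     # independently trained policy
W_stolen = W_victim + 0.01 * rng.normal(size=(4, 8))  # lightly fine-tuned (stolen) copy

victim = lambda s: int(np.argmax(W_victim @ s))
stolen = lambda s: int(np.argmax(W_stolen @ s))
indep = lambda s: int(np.argmax(W_indep @ s))

states = [rng.uniform(size=8) for _ in range(200)]
mask = rng.uniform(-1.0, 1.0, size=8)  # stand-in for a learned universal mask

print(action_agreement(victim, stolen, states, mask))  # high for a stolen copy
print(action_agreement(victim, indep, states, mask))   # lower for an independent policy
```

In the paper the verdict is reached by thresholding this agreement value: a modified copy of the victim tracks the victim's actions on masked states, while an independently trained policy agrees only by chance.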
Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses
Recent work has shown that deep reinforcement learning (DRL) policies are
vulnerable to adversarial perturbations. Adversaries can mislead policies of
DRL agents by perturbing the state of the environment observed by the agents.
Existing attacks are feasible in principle but face challenges in practice, for
example by being too slow to fool DRL policies in real time. We show that using
the Universal Adversarial Perturbation (UAP) method to compute perturbations,
independent of the individual inputs to which they are applied, can fool DRL
policies effectively and in real time. We describe three such attack variants.
Via an extensive evaluation using three Atari 2600 games, we show that our
attacks are effective, as they fully degrade the performance of three different
DRL agents (up to 100%, even when the bound on the perturbation is
as small as 0.01). Our attack runs faster than the average response time
(0.6ms) of the DRL policies, and considerably faster than prior attacks
using adversarial perturbations (1.8ms on average). We also show that our
attack technique is efficient, incurring an online computational cost of
0.027ms on average. Using two further tasks involving robotic movement, we
confirm that our results generalize to more complex DRL tasks. Furthermore, we
demonstrate that the effectiveness of known defenses diminishes against
universal perturbations. We propose an effective technique that detects all
known adversarial perturbations against DRL policies, including all the
universal perturbations presented in this paper.
Comment: 13 pages, 6 figures
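The reason a universal perturbation is viable in real time can be sketched simply: the perturbation is computed offline once, so the online step is just a bounded addition to each observation. This is a hypothetical minimal sketch, not the paper's attack; `uap` is a random placeholder for a precomputed universal perturbation, and the observation is an Atari-style grayscale frame.

```python
import numpy as np

def apply_uap(obs, uap, eps=0.01):
    # Online cost is one clipped addition per frame, independent of the
    # (offline) cost of finding the universal perturbation itself.
    delta = np.clip(uap, -eps, eps)          # enforce the perturbation bound
    return np.clip(obs + delta, 0.0, 1.0)    # keep pixels in valid range

rng = np.random.default_rng(1)
obs = rng.uniform(size=(84, 84))                  # Atari-style grayscale frame
uap = rng.uniform(-0.01, 0.01, size=(84, 84))     # stand-in for a precomputed UAP

adv = apply_uap(obs, uap)
print(float(np.max(np.abs(adv - obs))))  # stays within the 0.01 bound
```

Because no per-input optimization happens at inference time, the attack's online latency is essentially the cost of this addition, which is why it can undercut the policy's own response time.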