Robustifying Reinforcement Learning Agents via Action Space Adversarial Training
Machine learning (ML)-enabled cyber-physical systems (CPS) are becoming
prevalent in many sectors of modern society, such as transportation, industry,
and power grids. Recent studies in deep reinforcement learning (DRL) have
demonstrated its benefits in a wide variety of data-driven decision and
control applications. As reliance on ML-enabled systems grows, it
is imperative to study the performance of these systems under malicious state
and actuator attacks. Traditional control systems employ
resilient/fault-tolerant controllers that counter these attacks by correcting
the system via error observations. However, in some applications, a resilient
controller may not be sufficient to avoid catastrophic failure. In such
scenarios, a robust approach, in which the system is made inherently robust to
adversarial attacks by design, is more suitable. While robust control has a long
history of development, robust ML is an emerging research area that has already
demonstrated its relevance and urgency. However, the majority of robust ML
research has focused on perception tasks and not on decision and control tasks,
although the ML (specifically RL) models used for control applications are
equally vulnerable to adversarial attacks. In this paper, we show that a
well-performing DRL agent that is initially susceptible to action space
perturbations (e.g., actuator attacks) can be robustified against similar
perturbations through adversarial training.

Comment: Accepted for publication in the American Control Conference 2020; 6 pages
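
To make the abstract's central claim concrete, below is a minimal sketch of action-space adversarial training, assuming a gymnasium-style continuous-control environment and stable-baselines3 (>= 2.0) for the DRL agent. The environment name (Pendulum-v1), the PPO algorithm, the epsilon budget, and the random-sign perturbation are illustrative stand-ins, not the paper's exact setup.

    import numpy as np
    import gymnasium as gym
    from stable_baselines3 import PPO

    class ActionPerturbationWrapper(gym.ActionWrapper):
        # Emulates an actuator attack by perturbing each action before it
        # reaches the environment. epsilon is the perturbation budget as a
        # fraction of the action range (illustrative hyperparameter).
        def __init__(self, env, epsilon=0.1, rng=None):
            super().__init__(env)
            assert isinstance(env.action_space, gym.spaces.Box)
            self.epsilon = epsilon
            self.rng = rng if rng is not None else np.random.default_rng()

        def action(self, action):
            low, high = self.action_space.low, self.action_space.high
            # Random-sign perturbation scaled to the budget: a simple
            # stand-in for an optimized action-space adversary.
            delta = self.epsilon * (high - low) * self.rng.choice(
                [-1.0, 1.0], size=np.shape(action))
            return np.clip(action + delta, low, high)

    # Adversarial training: the agent learns while its actions are
    # perturbed, so the resulting policy tolerates similar attacks.
    env = ActionPerturbationWrapper(gym.make("Pendulum-v1"), epsilon=0.1)
    model = PPO("MlpPolicy", env)
    model.learn(total_timesteps=100_000)

Evaluating the trained policy both with and without the wrapper gives a direct before/after measure of robustness to actuator perturbations.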