A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Reinforcement learning allows machines to learn from their own experience.
Nowadays, it is used in safety-critical applications, such as autonomous
driving, despite being vulnerable to attacks carefully crafted either to
prevent the reinforcement learning algorithm from learning an effective and
reliable policy, or to induce the trained agent to make a wrong decision. The
literature on the security of reinforcement learning is growing rapidly, and
several surveys have been proposed to shed light on this field. However, their
categorizations are insufficient for choosing an appropriate defense given the
kind of system at hand. In our survey, we not only overcome this limitation
by adopting a different perspective, but also discuss the applicability
of state-of-the-art attacks and defenses when reinforcement learning algorithms
are used in the context of autonomous driving.
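
The attacks described here fall at training time (corrupting the learning process) or at test time (perturbing the trained agent's inputs). As a concrete illustration of the latter, the sketch below applies an FGSM-style perturbation to an agent's observation; the PolicyNet class, observation/action dimensions, and epsilon are hypothetical stand-ins, not taken from the survey.

```python
import torch
import torch.nn as nn

# Hypothetical toy policy network standing in for a trained RL agent;
# the real agent, observation space, and action space are assumptions.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits

def fgsm_observation_attack(policy: nn.Module, obs: torch.Tensor,
                            epsilon: float = 0.05) -> torch.Tensor:
    """Perturb an observation so the agent's chosen action becomes less
    likely (an evasion-style, test-time attack on the policy's input)."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1)  # the action the clean agent would take
    loss = nn.functional.cross_entropy(logits, action)
    loss.backward()
    # Ascend the loss gradient: push the observation away from the clean action.
    return (obs + epsilon * obs.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = PolicyNet()
    clean_obs = torch.randn(1, 4)
    adv_obs = fgsm_observation_attack(policy, clean_obs)
    print("clean action:", policy(clean_obs).argmax(dim=-1).item())
    print("adv action:  ", policy(adv_obs).argmax(dim=-1).item())
```

With a small epsilon the perturbation can be nearly invisible in the observation space yet still flip the greedy action, which is precisely the "wrong decision" failure mode the abstract describes.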
Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
A trustworthy reinforcement learning algorithm should be competent in solving
challenging real-world problems, including robustly handling uncertainties,
satisfying safety constraints to avoid catastrophic failures, and
generalizing to unseen scenarios during deployment. This survey provides an
overview of these main perspectives of trustworthy reinforcement learning,
considering its intrinsic vulnerabilities with respect to robustness, safety,
and generalizability. In particular, we give rigorous formulations, categorize
corresponding methodologies, and discuss benchmarks for each perspective.
Moreover, we provide an outlook section to spur promising future directions,
with a brief discussion of extrinsic vulnerabilities arising from human
feedback. We hope this survey can bring separate threads of study
together in a unified framework and promote the trustworthiness of
reinforcement learning.
Comment: 36 pages, 5 figures