Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning
As the applications of deep reinforcement learning (DRL) in wireless
communications grow, the sensitivity of DRL-based wireless communication
strategies to adversarial attacks has started to draw increasing attention. To
address this sensitivity and alleviate the resulting security concerns, in this
paper we consider a victim user that performs DRL-based dynamic channel access
and an attacker that executes DRL-based jamming attacks to disrupt the victim.
Hence, both the victim and the attacker are DRL agents that can interact with
each other, retrain their models, and adapt to their opponent's policy. In this
setting, we first develop an adversarial jamming attack policy that aims to
minimize the accuracy of the victim's decision making on dynamic channel
access. Subsequently, we devise three defense strategies against such an
attacker: diversified defense with proportional-integral-derivative (PID)
control, diversified defense with an imitation attacker, and defense via
orthogonal policies. We design these strategies to maximize the attacked
victim's accuracy and evaluate their performance.
Comment: 13 pages, 24 figures
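The attacker-victim interaction described in this abstract can be illustrated with a toy two-agent loop. The sketch below is a minimal stand-in, not the paper's method: it replaces both DRL agents with tabular epsilon-greedy bandit learners, and the channel count, reward choices, and all names are illustrative assumptions.

```python
import random

random.seed(0)

N_CHANNELS = 4     # assumed number of channels, for illustration only
EPS, ALPHA = 0.1, 0.5

class QAgent:
    """Tabular epsilon-greedy learner standing in for a DRL agent."""
    def __init__(self, n_actions):
        self.q = [0.0] * n_actions
        self.n_actions = n_actions

    def act(self):
        # Explore with probability EPS, otherwise pick the best-valued action.
        if random.random() < EPS:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Bandit-style incremental value update (the paper trains full DRL models).
        self.q[action] += ALPHA * (reward - self.q[action])

victim = QAgent(N_CHANNELS)    # chooses a channel to transmit on
attacker = QAgent(N_CHANNELS)  # chooses a channel to jam

STEPS = 2000
successes = 0
for _ in range(STEPS):
    ch_v = victim.act()
    ch_a = attacker.act()
    jammed = (ch_v == ch_a)
    # Zero-sum-style rewards: the victim succeeds iff its channel is not jammed.
    victim.update(ch_v, 0.0 if jammed else 1.0)
    attacker.update(ch_a, 1.0 if jammed else 0.0)
    successes += (not jammed)

success_rate = successes / STEPS
print(f"victim success rate: {success_rate:.2f}")
```

Because both agents keep adapting to each other, the victim's success rate settles well below 1.0; the paper's defense strategies (PID-controlled diversification, imitation attackers, orthogonal policies) aim to push that rate back up against an adaptive jammer.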
Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
Robotics and Artificial Intelligence (AI) have been inextricably intertwined
since their inception. Today, AI-Robotics systems have become an integral part
of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These
systems are built upon three fundamental architectural elements: perception,
navigation and planning, and control. However, while the integration of
AI-Robotics systems has enhanced the quality of our lives, it has also
presented a serious problem: these systems are vulnerable to security attacks.
The
physical components, algorithms, and data that make up AI-Robotics systems can
be exploited by malicious actors, potentially leading to dire consequences.
Motivated by the need to address the security concerns in AI-Robotics systems,
this paper presents a comprehensive survey and taxonomy across three
dimensions: attack surfaces, ethical and legal concerns, and Human-Robot
Interaction (HRI) security. Our goal is to provide users, developers and other
stakeholders with a holistic understanding of these areas to enhance the
overall AI-Robotics system security. We begin by surveying potential attack
surfaces and provide mitigating defensive strategies. We then delve into
ethical issues, such as dependency and psychological impact, as well as the
legal concerns regarding accountability for these systems. In addition,
emerging trends such as HRI are discussed, considering privacy, integrity,
safety, trustworthiness, and explainability concerns. Finally, we present our
vision for future research directions in this dynamic and promising field.