4 research outputs found

    Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding

    As adversarial attacks pose a serious threat to the security of AI systems in practice, such attacks have been extensively studied in the context of computer vision applications. However, little attention has been paid to adversarial research on automatic path finding. In this paper, we show that dominant adversarial examples are effective when targeting A3C path finding, and design a Common Dominant Adversarial Examples Generation Method (CDG) to generate dominant adversarial examples against any given map. In addition, we propose Gradient Band-based Adversarial Training, which trains with a single randomly chosen dominant adversarial example without any modification, to realize "1:N" attack immunity against generalized dominant adversarial examples. Extensive experimental results show that the lowest generation precision of the CDG algorithm is 91.91%, and the lowest immune precision of Gradient Band-based Adversarial Training is 93.89%, which demonstrates that our method can realize generalized attack immunity of A3C path finding with high confidence. (Comment: 25 pages, 14 figures)
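    The attack surface here is the map itself: adversarial examples for path finding are maps with carefully placed extra obstacles. As a minimal sketch of that idea (the function name and fixed band placement are illustrative assumptions; the paper's CDG method derives the obstacle region from policy gradients rather than taking it from the caller):

```python
import numpy as np

def add_obstacle_band(grid, row, col_start, col_end):
    """Return a copy of `grid` with a horizontal band of obstacles inserted.

    Toy illustration only: CDG in the paper chooses the "dominant" region
    from gradient information, whereas here the band location is given
    explicitly by the caller.
    """
    perturbed = grid.copy()
    perturbed[row, col_start:col_end] = 1  # 1 marks an obstacle cell
    return perturbed

# 8x8 free map: 0 = free cell, 1 = obstacle
clean_map = np.zeros((8, 8), dtype=int)
adv_map = add_obstacle_band(clean_map, row=3, col_start=2, col_end=6)
```

    The original map is left untouched, so clean and perturbed inputs can be fed to the same agent for comparison.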

    A Training-based Identification Approach to VIN Adversarial Examples

    With the rapid development of Artificial Intelligence (AI), the problem of AI security has gradually emerged. Most existing machine learning algorithms can be attacked by adversarial examples. An adversarial example is a slightly modified input sample that can lead a machine learning algorithm to a false result. Adversarial examples pose a potential security threat to many AI application areas, especially the domain of robot path planning. In this field, adversarial examples obstruct the algorithm by adding obstacles to normal maps, with varied effects on the predicted path. However, there is no suitable approach to identify them automatically. To our knowledge, all previous work uses a manual observation method to estimate the attack results of adversarial maps, which is time-consuming. To address this problem, this paper explores a method to automatically identify adversarial examples in Value Iteration Networks (VIN), which has strong generalization ability. We analyze the possible scenarios caused by adversarial maps. We propose a training-based identification approach to VIN adversarial examples that combines path feature comparison and path image classification. We evaluate our method on an adversarial maps dataset and show that it achieves higher accuracy and faster identification than the manual observation method.
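    The path-feature-comparison half of such an identification pipeline can be sketched as follows (the feature set and the deviation threshold are hypothetical choices, not the paper's; the paper additionally trains an image classifier on rendered path images):

```python
def path_features(path):
    """Extract simple features from a predicted path, given as a list of
    (row, col) grid cells: length, start cell, and end cell."""
    return {"length": len(path), "start": path[0], "end": path[-1]}

def looks_adversarial(clean_path, test_path, max_extra_steps=3):
    """Flag a map as a suspected adversarial example when the predicted
    path deviates strongly from the reference path (toy threshold rule)."""
    f_clean, f_test = path_features(clean_path), path_features(test_path)
    if f_test["end"] != f_clean["end"]:   # path no longer reaches the goal
        return True
    # Otherwise, a large detour relative to the reference path is suspicious.
    return f_test["length"] - f_clean["length"] > max_extra_steps

clean = [(0, 0), (1, 0), (2, 0), (3, 0)]
detoured = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (3, 2), (3, 1), (3, 0)]
```

    Here `looks_adversarial(clean, detoured)` fires because the detoured path is four steps longer than the reference, while an identical path is not flagged.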

    Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

    A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noise. Since the observations deviate from the true states, they can mislead the agent into taking suboptimal actions. Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles. We show that naively applying existing techniques for improving robustness in classification tasks, like adversarial training, is ineffective for many RL tasks. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization which can be applied to a large family of DRL algorithms, including proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), and deep Q networks (DQN), for both discrete and continuous action control problems. We significantly improve the robustness of PPO, DDPG, and DQN agents under a suite of strong white-box adversarial attacks, including new attacks of our own. Additionally, we find that a robust policy noticeably improves DRL performance even without an adversary in a number of environments. Our code is available at https://github.com/chenhongge/StateAdvDRL. (Comment: Huan Zhang and Hongge Chen contributed equally)
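    The core idea of the proposed regularization is to keep the policy's output stable across all states within a small perturbation ball around the true state. A minimal numerical sketch of that smoothness term, using a toy linear-softmax policy and random sampling in place of the paper's inner maximization (the function names and the sampling approximation are assumptions, not the authors' exact procedure):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy(state, W):
    """Toy linear-softmax policy over discrete actions (W has shape
    (num_actions, state_dim))."""
    return softmax(W @ state)

def smoothness_penalty(state, W, eps=0.1, n_samples=16, seed=0):
    """Approximate worst-case total-variation distance between the policy
    at the true state and at states within an L-infinity eps-ball.

    The paper solves the inner maximization more carefully; random
    sampling here is a crude stand-in for illustration.
    """
    rng = np.random.default_rng(seed)
    base = policy(state, W)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=state.shape)
        worst = max(worst, 0.5 * np.abs(policy(state + delta, W) - base).sum())
    return worst

W = np.array([[1.0, -1.0], [0.5, 2.0]])
s = np.array([0.3, -0.2])
penalty = smoothness_penalty(s, W, eps=0.1)
```

    This penalty would be added to the usual RL loss so that training trades a little clean-state performance for stability under bounded observation noise.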

    Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning

    Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability to adapt quickly to its surrounding environments. Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications (e.g., smart grids, traffic control, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. Thus, this paper provides a comprehensive survey that discusses emerging attacks on DRL-based systems and the potential countermeasures to defend against these attacks. We first cover fundamental background on DRL and present emerging adversarial attacks on machine learning techniques. We then investigate in more detail the vulnerabilities that an adversary can exploit to attack DRL, along with state-of-the-art countermeasures to prevent such attacks. Finally, we highlight open issues and research challenges for developing solutions to deal with attacks on DRL-based intelligent systems.
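    One of the canonical attacks on machine learning models that such surveys cover is the Fast Gradient Sign Method (FGSM), which in the DRL setting is typically applied to the agent's observation. A minimal sketch (the gradient is passed in directly for illustration rather than computed through a network):

```python
import numpy as np

def fgsm_observation(obs, grad, eps=0.01):
    """FGSM perturbation of an agent's observation: step by eps in the
    sign direction of the attacker's loss gradient w.r.t. the observation."""
    return obs + eps * np.sign(grad)

obs = np.array([0.2, -0.5, 0.9])
grad = np.array([1.3, -0.7, 0.0])   # illustrative gradient values
adv_obs = fgsm_observation(obs, grad, eps=0.01)
```

    Each observation component is shifted by at most eps, which keeps the perturbation imperceptibly small while still being able to flip the agent's chosen action.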