
    Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies

    Deep learning has become an increasingly common technique for various control problems, such as robotic arm manipulation, robot navigation, and autonomous vehicles. However, the downside of using deep neural networks to learn control policies is their opaque nature and the difficulty of validating their safety. As the networks used to obtain state-of-the-art results become increasingly deep and complex, the rules they have learned and how they operate become more challenging to understand. This presents an issue, since in safety-critical applications the safety of the control policy must be ensured to a high confidence level. In this paper, we propose an automated black-box testing framework based on adversarial reinforcement learning. The technique uses an adversarial agent whose goal is to degrade the performance of the target model under test. We test the approach on an autonomous vehicle problem by training an adversarial reinforcement learning agent that aims to cause a deep neural network-driven autonomous vehicle to collide. Two neural networks trained for autonomous driving are compared, and the results from the testing are used to compare the robustness of their learned control policies. We show that the proposed framework is able to find weaknesses in both control policies that were not evident during online testing, and it therefore demonstrates a significant benefit over manual testing methods. Comment: 2020 IEEE International Conference on Robotics and Automation (ICRA).
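    The core of the framework can be summarized in a few lines: the adversary is rewarded exactly when the policy under test performs badly, so any optimizer can search for failure cases. The sketch below is an illustration only, assuming a hypothetical gym-style interface (env, target_policy) and substituting a simple random search for the paper's reinforcement-learning adversary.

        import numpy as np

        def adversarial_return(env, target_policy, disturbance):
            # Roll out the black-box target policy under a fixed observation
            # disturbance; the adversary's reward is the negative of the
            # target's return, so high values indicate a discovered weakness.
            obs, total, done = env.reset(), 0.0, False
            while not done:
                action = target_policy(obs + disturbance)  # black-box query only
                obs, reward, done = env.step(action)
                total += reward
            return -total

        def find_weakness(env, target_policy, dim, iters=200, scale=0.1):
            # Simple random search standing in for the adversarial RL agent:
            # keep the disturbance that degrades the target the most.
            best, best_score = np.zeros(dim), -np.inf
            for _ in range(iters):
                candidate = best + scale * np.random.randn(dim)
                score = adversarial_return(env, target_policy, candidate)
                if score > best_score:
                    best, best_score = candidate, score
            return best, best_score

    Comparing best_score across two trained driving policies, as the paper does, gives a quantitative robustness ranking: the policy that yields the lower adversarial score is the harder one to break.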

    QC-SANE: Robust Control in DRL using Quantile Critic with Spiking Actor and Normalized Ensemble

    Recently introduced deep reinforcement learning (DRL) techniques in discrete time have resulted in significant advances in online games, robotics, and so on. Inspired by these developments, we propose an approach referred to as Quantile Critic with Spiking Actor and Normalized Ensemble (QC-SANE) for continuous control problems, which uses a quantile loss to train the critic and a spiking neural network (NN) to train an ensemble of actors. The NN performs internal normalization using a scaled exponential linear unit (SELU) activation function, which ensures robustness. An empirical study on Multi-Joint dynamics with Contact (MuJoCo)-based environments shows improved training and test results compared with the state-of-the-art approach Population Coded Spiking Actor Network (PopSAN).
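    The two ingredients named in the abstract are standard building blocks, so a compact sketch is possible. The snippet below illustrates (a) a quantile Huber loss of the kind used to train distributional critics and (b) a SELU-activated actor layer for internal normalization; the shapes, hyperparameters, and use of PyTorch are assumptions, and the spiking machinery of the actual actor is omitted.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def quantile_huber_loss(pred, target, taus, kappa=1.0):
            # pred:   (batch, N) predicted return quantiles from the critic
            # target: (batch, 1) TD targets, compared against every quantile
            # taus:   (N,) quantile fractions, e.g. (2i + 1) / (2N)
            td_error = target - pred
            huber = F.huber_loss(pred, target.expand_as(pred),
                                 reduction="none", delta=kappa)
            # Asymmetric weighting: underestimates are penalized by tau and
            # overestimates by (1 - tau), which is what makes this quantile
            # regression rather than mean regression.
            weight = torch.abs(taus.unsqueeze(0) - (td_error.detach() < 0).float())
            return (weight * huber).mean()

        # SELU gives self-normalizing activations, so deep actor stacks keep
        # roughly zero-mean, unit-variance signals without batch normalization.
        actor_block = nn.Sequential(nn.Linear(64, 64), nn.SELU())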

    A Policy Search Method For Temporal Logic Specified Reinforcement Learning Tasks

    Reward engineering is an important aspect of reinforcement learning. Whether or not the user's intentions can be correctly encapsulated in the reward function can significantly impact the learning outcome. Current methods rely on manually crafted reward functions that often require parameter tuning to obtain the desired behavior. This operation can be expensive when exploration requires systems to interact with the physical world. In this paper, we explore the use of temporal logic (TL) to specify tasks in reinforcement learning. A TL formula can be translated into a real-valued function that measures its level of satisfaction against a trajectory. We take advantage of this function and propose temporal logic policy search (TLPS), a model-free learning technique that finds a policy satisfying the TL specification. A set of simulated experiments is conducted to evaluate the proposed approach.
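    The real-valued satisfaction measure is the key enabler: quantitative semantics assign each trajectory a robustness score that is positive exactly when the formula holds, so policy search can treat it as a reward. A minimal sketch of such semantics follows; the formula, trajectory, and helper names are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def rho_predicate(traj, f):
            # Robustness of the atomic predicate "f(state) > 0" at each step.
            return np.array([f(s) for s in traj])

        def rho_always(rho):       # "globally": worst case over the horizon
            return rho.min()

        def rho_eventually(rho):   # "finally": best case over the horizon
            return rho.max()

        # Example specification: "always keep y above 0.5, and eventually
        # reach x greater than 2" over a three-step (x, y) trajectory.
        traj = np.array([[0.0, 1.0], [1.0, 0.8], [2.5, 0.9]])
        robustness = min(
            rho_always(rho_predicate(traj, lambda s: s[1] - 0.5)),
            rho_eventually(rho_predicate(traj, lambda s: s[0] - 2.0)),
        )
        print(robustness)  # 0.3 > 0, so the trajectory satisfies the formula

    TLPS then treats this robustness value as the learning signal, searching for policy parameters whose trajectories maximize it.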