194 research outputs found

    Feudal Networks for Hierarchical Reinforcement Learning Revisited

    Hierarchical Reinforcement Learning (RL) has gained popularity in recent years as a way of designing RL algorithms that converge in complex environments. Convergence of RL algorithms remains an active area of research, and no single approach has been found to work for all RL applications. Feudal networks (FuNs) are a hierarchical RL technique that attempts to address portability and other problems by defining an internal structure for an RL agent using a Manager-Worker hierarchy. The Manager is the portion of the system that operates at low temporal resolution, setting goals to maximize rewards from the environment, while the Worker operates at high temporal resolution, selecting among action primitives to maximize rewards from the Manager. This thesis provides an overview of reinforcement learning and the FuN architecture, then compares the relative convergence rates of untrained FuNs to FuNs built from Workers with different physical embodiments operating under a trained Manager.
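
    A minimal sketch of the Manager-Worker decomposition described above. The goal-embedding projection and cosine-similarity intrinsic reward follow the original FuN paper (Vezhnevets et al., 2017), but all names and sizes are illustrative assumptions, not this thesis's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Manager(nn.Module):
    """Low temporal resolution: emits a unit-norm goal direction."""
    def __init__(self, d):
        super().__init__()
        self.rnn = nn.LSTMCell(d, d)
        self.h = None

    def forward(self, z):
        if self.h is None:
            self.h = (torch.zeros_like(z), torch.zeros_like(z))
        self.h = self.rnn(z, self.h)
        return F.normalize(self.h[0], dim=-1)  # goal direction in latent space

class Worker(nn.Module):
    """High temporal resolution: picks primitive actions to follow the goal."""
    def __init__(self, d, n_actions, k=16):
        super().__init__()
        self.rnn = nn.LSTMCell(d, n_actions * k)
        self.phi = nn.Linear(d, k, bias=False)  # projects goal into embedding space
        self.k, self.n_actions = k, n_actions
        self.h = None

    def forward(self, z, goal):
        if self.h is None:
            zeros = torch.zeros(z.size(0), self.n_actions * self.k)
            self.h = (zeros, zeros.clone())
        self.h = self.rnn(z, self.h)
        U = self.h[0].view(-1, self.n_actions, self.k)  # per-action embeddings
        w = self.phi(goal).unsqueeze(-1)                # goal embedding
        return F.softmax((U @ w).squeeze(-1), dim=-1)   # action probabilities

def intrinsic_reward(s_prev, s_now, goal):
    """Worker's reward from the Manager: cosine similarity between the
    realized latent-state change and the Manager's goal direction."""
    return F.cosine_similarity(s_now - s_prev, goal, dim=-1)
```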

    Crawling in Rogue's dungeons with (partitioned) A3C

    Rogue is a famous dungeon-crawling video game of the 1980s, the ancestor of its genre. Rogue-like games are known for requiring the player to explore partially observable, randomly generated labyrinths that differ on every run, preventing any form of level replay. As such, they serve as a very natural and challenging task for reinforcement learning, requiring the acquisition of complex, non-reactive behaviors involving memory and planning. In this article we show how, by exploiting a version of A3C partitioned over different situations, the agent is able to reach the stairs and descend to the next level in 98% of cases. Comment: Accepted at the Fourth International Conference on Machine Learning, Optimization, and Data Science (LOD 2018).
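
    A rough sketch of the partitioning idea: one standard A3C actor-critic per situation, with each observation routed to the sub-agent that owns the current situation, so each partition trains only on its own transitions. The situation labels and network shapes below are assumptions for illustration, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """One standard A3C head: shared trunk, policy logits, and state value."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_actions)
        self.v = nn.Linear(hidden, 1)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.pi(h), self.v(h)

class PartitionedA3C:
    """Route each observation to the sub-agent owning the current situation."""
    def __init__(self, situations, obs_dim, n_actions):
        self.agents = {s: ActorCritic(obs_dim, n_actions) for s in situations}

    def act(self, situation, obs):
        logits, value = self.agents[situation](obs)
        action = torch.distributions.Categorical(logits=logits).sample()
        return action, value

# Illustrative situation labels; the paper's actual partitioning differs.
agent = PartitionedA3C(["explore", "fight", "descend"], obs_dim=64, n_actions=8)
action, value = agent.act("explore", torch.randn(1, 64))
```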

    Dot-to-Dot: Explainable Hierarchical Reinforcement Learning for Robotic Manipulation

    Robotic systems are increasingly capable of automation and of fulfilling complex tasks, particularly given recent advances in intelligent systems, deep learning and artificial intelligence. However, as robots and humans come closer in their interactions, the interpretability, or explainability, of robot decision-making processes for humans grows in importance. A successful interaction and collaboration can only take place through mutual understanding of the underlying representations of the environment and the task at hand. This is currently a challenge for deep learning systems. We present a hierarchical deep reinforcement learning system consisting of a low-level agent that handles the large action/state space of a robotic system efficiently by following the directives of a high-level agent, which learns the high-level dynamics of the environment and task. This high-level agent forms a representation of the world and of the task at hand that is interpretable to a human operator. The method, which we call Dot-to-Dot, is evaluated on a MuJoCo-based model of the Fetch Robotics Manipulator, as well as on a Shadow Hand. Results show efficient learning of complex action/state spaces by the low-level agent, and an interpretable representation of the task and decision-making process learned by the high-level agent.
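
    The two-level structure can be sketched as follows. Both functions are hypothetical stand-ins (in Dot-to-Dot both levels are learned with RL, whereas here the high-level planner is plain linear interpolation and the low-level controller a proportional step), intended only to show how interpretable sub-goals mediate between the levels:

```python
import numpy as np

def high_level_plan(start, goal, n_waypoints=4):
    """Hypothetical high-level agent: proposes a short sequence of
    intermediate sub-goals ('dots') between start and final goal.
    In the paper this is learned; linear interpolation stands in here."""
    return [start + (goal - start) * t
            for t in np.linspace(0, 1, n_waypoints + 1)[1:]]

def low_level_step(state, subgoal, gain=0.5):
    """Hypothetical low-level controller: moves toward the current sub-goal.
    In the paper this is learned with RL over the full action/state space."""
    return state + gain * (subgoal - state)

state = np.zeros(3)               # e.g. gripper position (illustrative)
goal = np.array([0.4, 0.2, 0.1])  # e.g. target object position (illustrative)
for subgoal in high_level_plan(state, goal):
    # The interpretable part: each sub-goal is a human-readable waypoint.
    while np.linalg.norm(subgoal - state) > 1e-2:
        state = low_level_step(state, subgoal)
print(np.round(state, 3))  # ends at the final goal
```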
    • …