71 research outputs found
Recommended from our members
State representation learning with recurrent capsule networks
Unsupervised learning of compact and relevant state representations has proved very useful for solving complex reinforcement learning tasks (Ha and Schmidhuber, 2018). In this paper, we propose a recurrent capsule network (Hinton et al., 2011) that learns such representations by trying to predict the future observations in an agent's trajectory.
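The objective sketched in this abstract, learning a compact recurrent state by predicting the next observation, can be illustrated with a plain recurrent predictor. This is a deliberate simplification with assumed dimensions and parameters: it shows only the predictive objective, not the capsule routing mechanism of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim, state_dim = 4, 8  # assumed toy dimensions

# Hypothetical parameters of a plain recurrent predictor
# (no capsule routing; just the next-observation objective).
W_in = rng.normal(0, 0.1, (state_dim, obs_dim))
W_rec = rng.normal(0, 0.1, (state_dim, state_dim))
W_out = rng.normal(0, 0.1, (obs_dim, state_dim))

def rollout(observations):
    """Update a compact state from each observation and predict
    the next one; the prediction error would serve as the training
    signal for the learned state representation."""
    s = np.zeros(state_dim)
    loss = 0.0
    for t in range(len(observations) - 1):
        s = np.tanh(W_in @ observations[t] + W_rec @ s)
        pred = W_out @ s
        loss += np.mean((pred - observations[t + 1]) ** 2)
    return s, loss / (len(observations) - 1)

traj = [rng.random(obs_dim) for _ in range(6)]  # stand-in trajectory
state, mse = rollout(traj)
```

In a full implementation, the prediction loss would be backpropagated through the recurrence to shape the state; here the forward pass alone shows what is being optimised.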
Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal
Model-free reinforcement learning has recently been shown to be effective at
learning navigation policies from complex image input. However, these
algorithms tend to require large amounts of interaction with the environment,
which can be prohibitively costly to obtain on robots in the real world. We
present an approach for efficiently learning goal-directed navigation policies
on a mobile robot, from only a single coverage traversal of recorded data. The
navigation agent learns an effective policy over a diverse action space in a
large heterogeneous environment consisting of more than 2km of travel, through
buildings and outdoor regions that collectively exhibit large variations in
visual appearance, self-similarity, and connectivity. We compare pretrained
visual encoders that enable precomputation of visual embeddings to achieve a
throughput of tens of thousands of transitions per second at training time on a
commodity desktop computer, allowing agents to learn from millions of
trajectories of experience in a matter of hours. We propose multiple forms of
computationally efficient stochastic augmentation to enable the learned policy
to generalise beyond these precomputed embeddings, and demonstrate successful
deployment of the learned policy on the real robot without fine-tuning, despite
environmental appearance differences at test time. The dataset and code
required to reproduce these results and apply the technique to other datasets
and robots are made publicly available at rl-navigation.github.io/deployable.
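The abstract above describes precomputing visual embeddings once so that training never touches raw images, then applying cheap stochastic augmentation so the policy generalises beyond the fixed embeddings. A minimal sketch of that idea follows; the encoder, the additive-noise augmentation, and the noise scale are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def precompute_embeddings(frames, encoder):
    """Encode every frame once, up front, so training iterates
    over small vectors rather than images (the source of the
    high transition throughput reported in the abstract)."""
    return np.stack([encoder(f) for f in frames])

def augment(embeddings, rng, noise_scale=0.1):
    """Computationally cheap stochastic augmentation applied in
    embedding space: additive Gaussian noise (one assumed form;
    the paper proposes multiple efficient variants)."""
    return embeddings + rng.normal(0.0, noise_scale, embeddings.shape)

# Toy usage with a hypothetical stand-in "encoder" that averages
# image pixels per channel.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64, 3)) for _ in range(8)]
encoder = lambda f: f.mean(axis=(0, 1))
bank = precompute_embeddings(frames, encoder)  # fixed embedding bank
batch = augment(bank[:4], rng)                 # fresh noise each minibatch
```

The design point is that the expensive encoder runs once per frame, while the augmentation runs per minibatch and costs only an elementwise addition.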
Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning
Flatland is a simple, lightweight environment for fast prototyping and
testing of reinforcement learning agents. It is of lower complexity compared to
similar 3D platforms (e.g. DeepMind Lab or VizDoom), but emulates physical
properties of the real world, such as continuity, multi-modal
partially-observable states with first-person view and coherent physics. We
propose to use it as an intermediary benchmark for problems related to Lifelong
Learning. Flatland is highly customizable and offers a wide range of task
difficulty to extensively evaluate the properties of artificial agents. We
experiment with three reinforcement learning baseline agents and show that they
can rapidly solve a navigation task in Flatland. A video of an agent acting in
Flatland is available here: https://youtu.be/I5y6Y2ZypdA.
Comment: Accepted to the Workshop on Continual Unsupervised Sensorimotor Learning (ICDL-EpiRob 2018).