Symmetry-Based Disentangled Representation Learning requires Interaction with Environments
Finding a generally accepted formal definition of a disentangled representation in the context of an agent behaving in an environment is an important challenge towards the construction of data-efficient autonomous agents. Higgins et al. (2018) recently proposed Symmetry-Based Disentangled Representation Learning, a definition based on a characterization of symmetries in the environment using group theory. We build on their work and make theoretical and empirical observations that lead us to argue that Symmetry-Based Disentangled Representation Learning cannot be based on static observations alone: agents should interact with the environment to discover its symmetries. Our experiments can be reproduced in Colab and the code is available on GitHub.
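For reference, the symmetry-based definition from Higgins et al. (2018) can be sketched as follows (notation paraphrased from their paper; the product decomposition shown is the simplest case). The abstract's thesis is then that the equivariance condition cannot be checked from static observations, since the group action only manifests through transitions.

```latex
% Symmetry-based disentanglement (Higgins et al., 2018), paraphrased.
% G: symmetry group of the environment, W: world states, Z: representation space.
A representation $f : W \to Z$ is symmetry-based disentangled with respect to a
decomposition $G = G_1 \times \dots \times G_n$ if:
\begin{enumerate}
  \item $G$ acts on $Z$ via some map $\cdot : G \times Z \to Z$;
  \item $f$ is equivariant: $f(g \cdot w) = g \cdot f(w)$ for all $g \in G$, $w \in W$;
  \item $Z$ decomposes as $Z = Z_1 \times \dots \times Z_n$, where each $Z_i$ is
        affected only by the corresponding subgroup $G_i$ and is invariant to
        every $G_j$, $j \neq i$.
\end{enumerate}
```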
Continual State Representation Learning for Reinforcement Learning using Generative Replay
We consider the problem of building a state representation model in a continual fashion. As the environment changes, the aim is to efficiently compress the sensory state's information without losing past knowledge. The learned features are then fed to a Reinforcement Learning algorithm to learn a policy. We propose to use Variational Auto-Encoders for state representation and Generative Replay, i.e. the use of generated samples, to maintain past knowledge. We also provide a general and statistically sound method for automatic environment change detection. Our method provides efficient state representation as well as forward transfer, and avoids catastrophic forgetting. The resulting model is capable of incrementally learning information without using past data and with a bounded system size.
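As a rough illustration of the generative-replay idea described above (not the authors' code; the model sizes, loss weighting, and batching are placeholder assumptions), each update mixes current observations with samples decoded from a frozen copy of the previous model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Small fully-connected VAE used as the state-representation model."""
    def __init__(self, obs_dim=784, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, obs_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def replay_step(model, old_model, real_batch, optimizer, latent_dim=32):
    """One update on current observations plus pseudo-samples replayed from
    the frozen previous model, so past knowledge is rehearsed without
    storing past data. `real_batch` is assumed flattened and in [0, 1]."""
    with torch.no_grad():
        z = torch.randn(real_batch.size(0), latent_dim)
        replay_batch = old_model.dec(z)      # generated pseudo-samples
    batch = torch.cat([real_batch, replay_batch], dim=0)
    recon, mu, logvar = model(batch)
    loss = vae_loss(recon, batch, mu, logvar)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup, `old_model` would be frozen and replaced by a copy of the current model whenever an environment change is detected, which keeps the system size bounded.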
Incremental vision-based topological SLAM
Real-time visual loop-closure detection
Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning
Flatland is a simple, lightweight environment for fast prototyping and testing of reinforcement learning agents. It is of lower complexity compared to similar 3D platforms (e.g. DeepMind Lab or VizDoom), but emulates physical properties of the real world, such as continuity, multi-modal partially-observable states with first-person view and coherent physics. We propose to use it as an intermediary benchmark for problems related to Lifelong Learning. Flatland is highly customizable and offers a wide range of task difficulty to extensively evaluate the properties of artificial agents. We experiment with three reinforcement learning baseline agents and show that they can rapidly solve a navigation task in Flatland. A video of an agent acting in Flatland is available here: https://youtu.be/I5y6Y2ZypdA
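The abstract does not document Flatland's actual API; a minimal sketch of how such a first-person 2-D environment would typically be exercised, assuming a Gym-style interface (`reset`/`step` and the two-dimensional continuous action are assumptions, not the environment's documented interface):

```python
import numpy as np

def run_episode(env, max_steps=1000):
    """Roll out a random policy in a Gym-style environment and return the
    episode's cumulative reward. `env` is assumed to expose reset() and
    step(action) -> (obs, reward, done, info)."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = np.random.uniform(-1.0, 1.0, size=2)  # e.g. (speed, rotation)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```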
Generative Models from the perspective of Continual Learning
Which generative model is the most suitable for Continual Learning? This paper aims at evaluating and comparing generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We use two quantitative metrics to estimate the generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We find that among all models, the original GAN performs best, and among Continual Learning strategies, generative replay outperforms all other methods. Even though we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable, and remains a challenge. Our code is available online.
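To make "disjoint sequential image generation tasks" concrete, a minimal sketch of a class-disjoint MNIST split (the five-task split is a common convention; the paper's exact protocol and training loop may differ):

```python
import torch
from torchvision import datasets, transforms

# Split MNIST into 5 class-disjoint tasks: {0,1}, {2,3}, ..., {8,9}.
mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
tasks = []
for i in range(5):
    mask = (mnist.targets == 2 * i) | (mnist.targets == 2 * i + 1)
    indices = mask.nonzero().flatten().tolist()
    tasks.append(torch.utils.data.Subset(mnist, indices))

# The generative model is trained on tasks[0], tasks[1], ... in order, with
# no direct access to earlier tasks' images -- hence the need for rehearsal,
# regularization, or generative replay to avoid catastrophic forgetting.
for task in tasks:
    loader = torch.utils.data.DataLoader(task, batch_size=64, shuffle=True)
    # train_generative_model(loader)   # placeholder for a model-specific loop
```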
On the Sensory Commutativity of Action Sequences for Embodied Agents
Perception of artificial agents is one of the grand challenges of AI research. Deep Learning and data-driven approaches are successful on constrained problems where perception can be learned using supervision, but do not scale to open worlds. In such cases, for autonomous embodied agents with first-person sensors, perception can be learned end-to-end to solve particular tasks. However, the literature shows that perception is not a purely passive compression mechanism, and that actions play an important role in the formation of abstract representations. We propose to study perception for these embodied agents under the mathematical formalism of group theory, in order to link perception and action. In particular, we consider the commutative properties of continuous action sequences with respect to the sensory information perceived by such an embodied agent. We introduce the Sensory Commutativity Probability (SCP) criterion, which measures how much an agent's degree of freedom affects the environment in embodied scenarios. We show how to compute this criterion in different environments, including realistic robotic setups. We empirically illustrate how SCP and the commutative properties of action sequences can be used to learn about objects in the environment and improve sample-efficiency in Reinforcement Learning.
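A rough Monte-Carlo reading of the idea behind SCP as described above: execute a short action sequence on one degree of freedom in two orders and measure how often the agent senses the same result. The environment interface (`snapshot`/`restore`/`observe`, `action_dim`) is a hypothetical placeholder, and the paper's exact definition may differ:

```python
import numpy as np

def scp_estimate(env, dof, n_trials=100, seq_len=5, tol=1e-3):
    """Estimate a sensory-commutativity score for one degree of freedom:
    the fraction of trials in which swapping two adjacent actions leaves
    the final observation unchanged. Low commutativity suggests this DOF
    strongly affects the environment."""
    commutes = 0
    for _ in range(n_trials):
        # Random action sequence restricted to the chosen degree of freedom.
        seq = []
        for _ in range(seq_len):
            a = np.zeros(env.action_dim)
            a[dof] = np.random.uniform(-1.0, 1.0)
            seq.append(a)
        # Same actions, with one adjacent pair swapped.
        swapped = list(seq)
        i = np.random.randint(seq_len - 1)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]

        state = env.snapshot()              # hypothetical: save env state
        for a in seq:
            env.step(a)
        obs_forward = env.observe()

        env.restore(state)                  # replay from the same state
        for a in swapped:
            env.step(a)
        obs_swapped = env.observe()

        if np.linalg.norm(obs_forward - obs_swapped) <= tol:
            commutes += 1
    return commutes / n_trials
```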