Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations
In this work we explore the generalization characteristics of unsupervised
representation learning by leveraging disentangled VAEs to learn a useful
latent space on a set of relational reasoning problems derived from Raven's
Progressive Matrices. We show that the latent representations, learned by
unsupervised training using the right objective function, significantly
outperform the same architectures trained with purely supervised learning,
especially when it comes to generalization.
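The abstract does not spell out the objective function; a common choice for encouraging disentangled latents is the beta-VAE objective, which adds a beta-weighted KL penalty to the reconstruction term. The sketch below is illustrative (the function name, the MSE reconstruction term, and the default beta are assumptions, not details from the paper):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Per-sample beta-VAE objective: reconstruction error plus a
    beta-weighted KL term that pressures the latents to disentangle."""
    # Reconstruction term: mean squared error over input dimensions.
    recon = np.mean((x - x_recon) ** 2, axis=-1)
    # Closed-form KL divergence between the diagonal Gaussian posterior
    # N(mu, exp(log_var)) and the unit Gaussian prior, summed over latents.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    return recon + beta * kl
```

With beta = 1 this reduces to the standard VAE ELBO; beta > 1 trades reconstruction fidelity for more factorized latent dimensions.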
Combined Reinforcement Learning via Abstract Representations
In the quest for efficient and robust reinforcement learning methods, both
model-free and model-based approaches offer advantages. In this paper we
propose a new way of explicitly bridging both approaches via a shared
low-dimensional learned encoding of the environment, meant to capture
summarizing abstractions. We show that the modularity brought by this approach
leads to good generalization while being computationally efficient, with
planning happening in a smaller latent state space. In addition, this approach
recovers a sufficient low-dimensional representation of the environment, which
opens up new strategies for interpretable AI, exploration and transfer
learning.

Comment: Accepted to the Thirty-Third AAAI Conference on Artificial Intelligence, 2019.
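The abstract's core idea, a shared low-dimensional encoding feeding both a model-free value head and a model-based transition model, with planning done in the latent space, can be sketched minimally. Everything concrete below (dimensions, linear maps with random weights, the one-step-lookahead planner) is invented for illustration and is not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, N_ACTIONS = 16, 3, 4

# Shared encoder e(s): maps raw observations to a low-dimensional
# abstract state consumed by both the model-free and model-based parts.
W_enc = rng.normal(size=(OBS_DIM, LATENT_DIM))
encode = lambda s: np.tanh(s @ W_enc)

# Model-free head: Q-values computed directly from the abstract state.
W_q = rng.normal(size=(LATENT_DIM, N_ACTIONS))
q_values = lambda z: z @ W_q

# Model-based head: a transition model in the same latent space,
# here one linear map per action. Planning never touches raw observations.
W_trans = rng.normal(size=(N_ACTIONS, LATENT_DIM, LATENT_DIM))

def rollout_value(z, depth=2, gamma=0.95):
    """Shallow lookahead planning in latent space: combine immediate
    Q-values with discounted values of predicted next abstract states."""
    q = q_values(z)
    if depth == 0:
        return q.max()
    next_values = [rollout_value(np.tanh(W_trans[a] @ z), depth - 1, gamma)
                   for a in range(N_ACTIONS)]
    return np.max(q + gamma * np.array(next_values))

obs = rng.normal(size=OBS_DIM)
z = encode(obs)
print(rollout_value(z))  # planning happens entirely in the 3-dim latent space
```

The modularity the abstract emphasizes shows up here as the fact that the encoder, the Q head, and the transition model are separate components joined only through the small latent vector `z`.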