
    Unsupervised Emergence of Egocentric Spatial Structure from Sensorimotor Prediction

    Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood. Recent theoretical works suggest that the Euclidean structure of space induces invariants in an agent's raw sensorimotor experience. We hypothesize that capturing these invariants is beneficial for sensorimotor prediction and that, under certain exploratory conditions, a motor representation capturing the structure of the external space should emerge as a byproduct of learning to predict future sensory experiences. We propose a simple sensorimotor predictive scheme, apply it to different agents and types of exploration, and evaluate the pertinence of these hypotheses. We show that a naive agent can capture the topology and metric regularity of its sensor's position in an egocentric spatial frame without any a priori knowledge or extraneous supervision.
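
    The abstract describes the predictive scheme only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of the general idea: a network learns to predict the next sensory input from the current sensation and a learned low-dimensional encoding of the motor command, and the motor encoder's latent space is where egocentric spatial structure would be expected to emerge. All module names, dimensions, and the MSE objective are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class SensorimotorPredictor(nn.Module):
        def __init__(self, sensor_dim=20, motor_dim=3, latent_dim=2, hidden=128):
            super().__init__()
            # Hypothetical motor encoder: per the paper's hypothesis, this
            # latent space should come to reflect the topology and metric
            # regularity of the sensor's egocentric position.
            self.motor_encoder = nn.Sequential(
                nn.Linear(motor_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, latent_dim),
            )
            # Predicts the next sensation from the current sensation plus
            # the encoded motor command.
            self.predictor = nn.Sequential(
                nn.Linear(sensor_dim + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, sensor_dim),
            )

        def forward(self, sensation, motor):
            h = self.motor_encoder(motor)
            return self.predictor(torch.cat([sensation, h], dim=-1))

    # Training reduces to plain sensory prediction error on exploration data
    # (random tensors stand in for real sensorimotor transitions here).
    model = SensorimotorPredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    sensation_t, motor_t = torch.randn(64, 20), torch.randn(64, 3)
    sensation_next = torch.randn(64, 20)
    loss = nn.functional.mse_loss(model(sensation_t, motor_t), sensation_next)
    loss.backward()
    optimizer.step()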

    Towards Disentangled Representations via Variational Sparse Coding

    We present a framework for learning disentangled representations with variational autoencoders in an unsupervised manner, which explicitly imposes sparsity and interpretability on the latent encodings. Leveraging ideas from Sparse Coding models, we consider the Spike and Slab prior distribution for the latent variables and a modification of the ELBO, inspired by the β-VAE model, to enforce decomposability over the latent representation. We evaluate the proposed model in a variety of quantitative and qualitative experiments on the MNIST, Fashion-MNIST, CelebA, and dSprites datasets, showing that the framework disentangles the latent space into continuous, sparse, interpretable factors and is competitive with current disentangling models.
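
    As a rough illustration of the spike-and-slab machinery the abstract refers to, the sketch below implements the closed-form KL between a spike-and-slab posterior (spike probability gamma, slab N(mu, sigma^2)) and a spike-and-slab prior (spike probability alpha, standard-normal slab), together with a reparameterized sample that gates the Gaussian slab with a relaxed Bernoulli spike. The alpha value, relaxation temperature, and beta weighting are assumptions made for the sketch; the paper's exact objective and architecture may differ.

    import math
    import torch

    def spike_slab_kl(mu, logvar, logit_gamma, alpha=0.01):
        # Per-latent KL, summed over dimensions: the slab term is the usual
        # Gaussian KL against N(0, 1), weighted by the inclusion probability
        # gamma; the spike term is the Bernoulli KL against prior prob alpha.
        gamma = torch.sigmoid(logit_gamma)
        slab_kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
        spike_kl = (gamma * (torch.log(gamma + 1e-8) - math.log(alpha))
                    + (1 - gamma) * (torch.log(1 - gamma + 1e-8) - math.log(1 - alpha)))
        return (gamma * slab_kl + spike_kl).sum(dim=-1)

    def sample_latent(mu, logvar, logit_gamma, temperature=0.5):
        # Reparameterized draw: Gaussian slab gated by a relaxed Bernoulli
        # (binary Concrete) spike, so gradients flow through both factors.
        eps = torch.randn_like(mu)
        slab = mu + (0.5 * logvar).exp() * eps
        u = torch.rand_like(mu).clamp(1e-6, 1 - 1e-6)
        gate = torch.sigmoid((logit_gamma + torch.log(u) - torch.log(1 - u)) / temperature)
        return gate * slab

    # beta-VAE-style weighting of the KL in the objective (decoder omitted):
    #   z = sample_latent(mu, logvar, logit_gamma)
    #   loss = recon_loss + beta * spike_slab_kl(mu, logvar, logit_gamma).mean()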

    A Commentary on the Unsupervised Learning of Disentangled Representations

    The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision. In this paper, we summarize the results of Locatello et al. (2019) and focus on their implications for practitioners. We discuss the theoretical result showing that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases, and the practical challenges this entails. Finally, we comment on our experimental findings, highlighting the limitations of state-of-the-art approaches and directions for future research.