Active Affordance Learning in Continuous State and Action Spaces

Abstract

Learning object affordances and manipulation skills is essential for developing cognitive service robots. We propose an active affordance learning approach in continuous state and action spaces that requires no manual discretization of states or exploratory motor primitives. During exploration in the action space, the robot learns a forward model to predict action effects. It simultaneously updates the active exploration policy through reinforcement learning, with the prediction error serving as the intrinsic reward. Using the learned forward model, motor skills are then obtained in a bottom-up manner to achieve goal states of an object. We demonstrate that a NAO humanoid robot is able to learn how to manipulate garbage cans with different lids using different motor skills.
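
The core loop described above (learn a forward model of action effects while rewarding the exploration policy with its own prediction error) can be illustrated with a minimal sketch. The toy dynamics, the linear forward model, and all names below are illustrative assumptions, not the paper's implementation; the policy update is a generic REINFORCE-style step standing in for the reinforcement learning component.

    import numpy as np

    # Minimal sketch: active exploration driven by forward-model prediction error.
    # The forward model f(s, a) -> s' is a linear regressor fit online; the
    # exploration policy is a Gaussian over continuous actions whose mean is
    # nudged toward actions that produced high intrinsic reward (prediction error).

    rng = np.random.default_rng(0)
    STATE_DIM, ACTION_DIM = 4, 2

    # Hypothetical continuous dynamics standing in for the robot/object system.
    W_true = rng.normal(size=(ACTION_DIM, STATE_DIM))
    def toy_env_step(state, action):
        return 0.9 * state + 0.1 * np.tanh(action @ W_true)

    # Linear forward model: predicts s' from the concatenated [s, a].
    W_model = np.zeros((STATE_DIM + ACTION_DIM, STATE_DIM))
    LR_MODEL = 0.05

    # Gaussian exploration policy in the continuous action space.
    policy_mean = np.zeros(ACTION_DIM)
    policy_std = 0.5
    LR_POLICY = 0.01

    state = rng.normal(size=STATE_DIM)
    for step in range(2000):
        # Sample a continuous action from the current exploration policy.
        action = policy_mean + policy_std * rng.normal(size=ACTION_DIM)
        next_state = toy_env_step(state, action)

        # Forward-model prediction; its error is the intrinsic reward.
        x = np.concatenate([state, action])
        error = next_state - x @ W_model
        intrinsic_reward = float(np.sum(error ** 2))

        # Update the forward model toward the observed effect (MSE gradient step).
        W_model += LR_MODEL * np.outer(x, error)

        # REINFORCE-style update: shift the policy toward "surprising" actions,
        # i.e. actively seek data where the forward model is still inaccurate.
        grad_log_pi = (action - policy_mean) / (policy_std ** 2)
        policy_mean += LR_POLICY * intrinsic_reward * grad_log_pi

        state = next_state

As the forward model improves, prediction error (and hence intrinsic reward) shrinks in already-explored regions, which pushes exploration toward parts of the continuous action space whose effects are not yet well predicted.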
