A Vision-Based Learning Method for Pushing Manipulation

Abstract

We describe an unsupervised, on-line method for learning manipulative actions that allows a robot to push an object, connected to it by a rotational point contact, to a desired point in image-space. By observing the effect of its actions on the object's orientation in image-space, the system builds a predictive, empirical forward model. This acquired model is used on-line for manipulation planning and control even as it improves. Rather than explicitly inverting the forward model to achieve trajectory control, a stochastic action selection technique [Moore, 1990] is used to select the most informative and promising actions, thereby integrating active perception and learning by combining on-line improvement, task-directed exploration, and model exploitation. Simulation and experimental results of the approach are presented.
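To make the learning loop described above concrete, the sketch below shows one possible reading of it: an empirical forward model mapping push actions to observed orientation changes is updated on-line, and candidate actions are scored by predicted progress toward a goal orientation plus an exploration bonus for poorly-sampled actions, then chosen stochastically. This is a minimal illustration, not the paper's implementation; the nearest-neighbour model, the scoring terms, the temperature and weight parameters, and the toy contact dynamics are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Empirical forward model: push action -> predicted change in orientation.
    (Hypothetical nearest-neighbour form; the paper's model may differ.)"""
    def __init__(self):
        self.actions = []   # executed push actions (scalar push angles here)
        self.deltas = []    # observed changes in object orientation

    def update(self, action, delta):
        self.actions.append(action)
        self.deltas.append(delta)

    def predict(self, action):
        """Return (predicted delta, uncertainty proxy = distance to nearest sample)."""
        if not self.actions:
            return 0.0, np.inf
        d = np.abs(np.asarray(self.actions) - action)
        i = int(np.argmin(d))
        return self.deltas[i], float(d[i])

def select_action(model, theta, theta_goal, candidates,
                  explore_weight=0.5, temperature=0.1):
    """Stochastically pick an action, trading off predicted progress toward the
    goal orientation (exploitation) against model uncertainty (exploration)."""
    scores = []
    for a in candidates:
        pred_delta, uncertainty = model.predict(a)
        progress = -abs((theta + pred_delta) - theta_goal)   # exploitation term
        bonus = explore_weight * min(uncertainty, 1.0)       # exploration bonus
        scores.append(progress + bonus)
    scores = np.asarray(scores)
    probs = np.exp((scores - scores.max()) / temperature)    # softmax selection
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)

# Toy stand-in for the true (unknown) pushing dynamics with a rotational contact.
def true_dynamics(theta, action):
    return 0.3 * np.sin(action - theta)

model = ForwardModel()
theta, theta_goal = 0.0, 1.0
candidates = np.linspace(-np.pi, np.pi, 25)

for step in range(40):
    a = select_action(model, theta, theta_goal, candidates)
    delta = true_dynamics(theta, a)   # "observe" the result of the push
    model.update(a, delta)            # on-line model improvement
    theta += delta
print(f"final orientation error: {abs(theta - theta_goal):.3f}")
```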
