Visual Prediction of Priors for Articulated Object Interaction
Exploration in novel settings can be challenging without prior experience in
similar domains. However, humans are able to build on prior experience quickly
and efficiently. Children exhibit this behavior when playing with toys. For
example, given a toy with a yellow and blue door, a child will explore with no
clear objective, but once they have discovered how to open the yellow door,
they will most likely be able to open the blue door much faster. Adults also
exhibit this behavior when entering new spaces such as kitchens. We develop a
method, Contextual Prior Prediction, which provides a means of transferring
knowledge between interactions in similar domains through vision. We develop
agents that exhibit exploratory behavior with increasing efficiency, by
learning visual features that are shared across environments, and how they
correlate to actions. Our problem is formulated as a Contextual Multi-Armed
Bandit where the contexts are images, and the robot has access to a
parameterized action space. Given a novel object, the objective is to maximize
reward with few interactions. One domain that strongly exhibits correlations
between visual features and motion is kinematically constrained mechanisms. We
evaluate our method on simulated prismatic and revolute joints.

Comment: IEEE International Conference on Robotics and Automation (ICRA) 202
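The contextual-bandit formulation above can be illustrated with a minimal sketch. This is not the paper's Contextual Prior Prediction method: here the image context is replaced by a hand-made feature vector, the parameterized action space is discretized into arms, and per-arm reward models are fit with ridge regression under an epsilon-greedy policy. All class and parameter names are hypothetical.

```python
import numpy as np

class LinearContextualBandit:
    """Epsilon-greedy contextual bandit with a linear reward model per arm.

    Contexts are feature vectors (stand-ins for learned visual features);
    each arm is one discretized parameterized action.
    """

    def __init__(self, n_arms, dim, epsilon=0.1, lam=1.0, seed=0):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        # Per-arm ridge-regression statistics: A = lam*I + X^T X, b = X^T r
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, context):
        # Explore uniformly with probability epsilon ...
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.n_arms))
        # ... otherwise exploit the arm with the highest predicted reward.
        scores = [context @ np.linalg.solve(self.A[a], self.b[a])
                  for a in range(self.n_arms)]
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        # Accumulate sufficient statistics for the chosen arm's model.
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

With one-hot contexts and a reward of 1 only when the pulled arm matches the context's active dimension, the per-arm estimates separate quickly, mirroring the abstract's goal of maximizing reward within few interactions on a novel object.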