Robot affordances, which provide information about the actions that can be taken in a given situation, can aid robotic manipulation. However, learning affordances typically requires large, expensive annotated datasets of interactions or demonstrations. In this work, we show that active learning can mitigate this problem, and we propose using uncertainty to drive an interactive affordance discovery
process. We show that our method enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, or opening drawers, substantially improving data efficiency and allowing us to learn
grasping affordances on a real-world setup with an xArm 6 robot arm in a small
number of trials.

Comment: Presented at the GMPL workshop @ RSS 202
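Below is a minimal, self-contained Python sketch of the general idea of uncertainty-driven interactive affordance discovery: an ensemble of predictors estimates grasp-success probability for candidate actions, and the robot executes the candidate on which the ensemble members disagree the most, then retrains on the observed outcome. The logistic-regression ensemble, the simulated try_grasp environment, and all names and parameters here are illustrative assumptions, not the paper's implementation.

# Sketch of uncertainty-driven active affordance discovery (assumptions only,
# not the authors' implementation): a bootstrap ensemble of tiny logistic
# regressors scores candidate grasps, and the most-disputed candidate is tried.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LogisticMember:
    """One bootstrap-trained ensemble member (tiny logistic regressor)."""
    def __init__(self, dim):
        self.w = rng.normal(scale=0.1, size=dim)
    def fit(self, X, y, lr=0.5, epochs=200):
        idx = rng.integers(0, len(X), len(X))  # bootstrap resample for diversity
        Xb, yb = X[idx], y[idx]
        for _ in range(epochs):
            p = sigmoid(Xb @ self.w)
            self.w -= lr * Xb.T @ (p - yb) / len(Xb)
    def predict(self, X):
        return sigmoid(X @ self.w)

def acquire(ensemble, candidates):
    """Pick the candidate with the highest ensemble disagreement (std. dev.)."""
    preds = np.stack([m.predict(candidates) for m in ensemble])
    return int(np.argmax(preds.std(axis=0)))

# Toy environment standing in for the robot: ground-truth graspability of
# 2-D candidate features, with stochastic trial outcomes.
true_w = np.array([2.0, -3.0])
def try_grasp(x):
    return float(rng.random() < sigmoid(x @ true_w))

candidates = rng.uniform(-1, 1, size=(200, 2))   # candidate grasp features
X = candidates[:2].copy()                        # two seed interactions
y = np.array([try_grasp(c) for c in candidates[:2]])

for trial in range(30):                          # small number of trials
    ensemble = [LogisticMember(2) for _ in range(5)]
    for m in ensemble:
        m.fit(X, y)
    i = acquire(ensemble, candidates)            # most uncertain candidate
    X = np.vstack([X, candidates[i]])
    y = np.append(y, try_grasp(candidates[i]))   # execute and record outcome

print(f"labelled {len(y)} interactions; success rate {y.mean():.2f}")

Because each trial is spent where the ensemble is least certain, the labelled set concentrates near the affordance decision boundary rather than being drawn uniformly, which is what yields the data-efficiency gain over passive collection.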