Enabling autonomous robots to interact with dynamic objects in unstructured
environments requires manipulation capabilities that can handle clutter, scene
changes, and object variability. This paper presents a comparison of
reinforcement learning-based approaches for object picking with a
robotic manipulator. We learn closed-loop policies mapping depth camera inputs
to motion commands and compare approaches for keeping the problem tractable,
including reward shaping, curriculum learning, and warm-starting the full
problem from a policy pre-trained on a task with a reduced action set.
For efficient and more flexible data collection, we train in simulation and
transfer the policies to a real robot. We show that using curriculum learning,
policies learned with a sparse reward formulation can be trained at rates
similar to those achieved with a shaped reward. These policies reach success
rates comparable to the policy initialized on the simplified task. We
successfully transferred these policies to the real robot with only minor
modifications to the depth image filtering. We found that using a heuristic to
warm-start the training helped enforce the desired behavior, while the
policies trained from scratch using a curriculum learned to cope better with
unseen scenarios in which objects are removed.

Comment: 8 pages, video available at https://youtu.be/ii16Zejmf-