Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations

By Jonathan Tremblay, Thang To, Artem Molchanov, Stephen Tyree, Jan Kautz and Stan Birchfield

Abstract

We present a system to infer and execute a human-readable program from a real-world demonstration. The system consists of a series of neural networks to perform perception, program generation, and program execution. Leveraging convolutional pose machines, the perception network reliably detects the bounding cuboids of objects in real images even when severely occluded, after training only on synthetic images using domain randomization. To increase the applicability of the perception network to new scenarios, the network is formulated to predict in image space rather than in world space. Additional networks detect relationships between objects, generate plans, and determine actions to reproduce a real-world demonstration. The networks are trained entirely in simulation, and the system is tested in the real world on the pick-and-place problem of stacking colored cubes using a Baxter robot.

Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018. For associated video, see https://youtu.be/B7ZT5oSnRy
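
For readers who want a concrete picture of the pipeline the abstract describes, the following Python sketch wires together three placeholder stages (perception in image space, plan generation, and plan execution). Every class, method, and object name below is an assumption for illustration only, not code from the paper.

# Hypothetical sketch of the three-stage pipeline described in the abstract:
# perception -> program generation -> program execution. All names are
# illustrative assumptions, not the authors' released code.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Cuboid:
    """Image-space detection of one object (assumed representation)."""
    object_id: str
    corners: List[Tuple[float, float]]  # projected cuboid corners in pixels


class PerceptionNet:
    """Stand-in for the detector trained on domain-randomized synthetic images."""
    def detect(self, image) -> List[Cuboid]:
        # A real system would run the trained network here; dummy output for the sketch.
        return [Cuboid("red_cube", []), Cuboid("blue_cube", [])]


class ProgramNet:
    """Stand-in for the networks that infer object relationships and a plan."""
    def infer_program(self, cuboids: List[Cuboid]) -> List[str]:
        # Returns a human-readable plan; fixed here purely for illustration.
        return ["place red_cube on blue_cube"]


class ExecutionNet:
    """Stand-in for the network that maps each plan step to a robot action."""
    def step(self, instruction: str, cuboids: List[Cuboid]) -> str:
        return f"pick-and-place action for: {instruction}"


def demo_to_execution(demo_image, live_image) -> List[str]:
    """Infer a plan from one demonstration image, then act on the live scene."""
    perception, planner, executor = PerceptionNet(), ProgramNet(), ExecutionNet()
    program = planner.infer_program(perception.detect(demo_image))
    return [executor.step(step, perception.detect(live_image)) for step in program]


if __name__ == "__main__":
    print(demo_to_execution(demo_image=None, live_image=None))

The split into separately trained stages mirrors the abstract's description: the perception stage outputs image-space detections so it transfers to new scenes, the plan stage produces a human-readable program, and the execution stage turns each program step into robot actions.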

Topics: Computer Science - Robotics
Year: 2018
OAI identifier: oai:arXiv.org:1805.07054
