Humans are highly skilled at communicating when and where a handover will
occur. In contrast, even state-of-the-art robotic implementations of
handovers display a general lack of communication skills.
We propose visualizing the internal state and intent of robots for
Human-to-Robot Handovers using Augmented Reality. Specifically, we visualize 3D
models of the object and the robotic gripper to communicate the robot's
estimate of the object's location and the pose with which the robot intends to
grasp the object. We conduct a user study with 16 participants, in which each
participant handed over a cube-shaped object to the robot 12 times. Results
show that visualizing robot intent using augmented reality substantially
improves the subjective experience of the users for handovers and decreases the
time to transfer the object. Results also indicate that the benefits of
augmented reality persist even when the robot makes errors in localizing the
object.

Comment: 6 pages, 4 figures, 2 tables