In this paper, we explore a class of teleoperation problems where a user controls a sophisticated device (e.g., a robot) via an interface to perform a complex task. Teleoperation interfaces are fundamentally limited by the indirectness of the process: the user is not physically executing the task. In this work, we study intelligent and customizable interfaces that mediate the consequences of this indirectness and make teleoperation more seamless. They are intelligent in that they take advantage of the robot’s autonomous capabilities and assist in accomplishing the task. They are customizable in that they enable users to adapt the retargeting function, which maps their input onto the robot. Our studies support the advantages of such interfaces, but also point out the challenges they bring. We make three key observations. First, although assistance can greatly improve teleoperation, the decision on how to provide assistance must be contextual; it must depend, for example, on the robot’s confidence in its prediction of the user’s intent. Second, although users do have the ability to provide intent-expressive input that simplifies the robot’s prediction task, this ability can be hindered by kinematic differences between themselves and the robot. And third, although interface customization is important, it must be robust to poor examples from the user.
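To make the notion of a customizable retargeting function concrete, here is a minimal sketch (not from the paper; the linear form, the matrix `A`, and the function name `retarget` are illustrative assumptions). It maps a low-dimensional user input, such as a 2-axis joystick, onto a higher-dimensional robot command, and customization amounts to changing the mapping:

```python
import numpy as np

# Hypothetical linear retargeting: maps a user's low-dimensional
# interface input (e.g., a 2-axis joystick) onto the robot's
# higher-dimensional action space via a user-customizable matrix A.
def retarget(user_input, A):
    """Return a robot command from raw interface input."""
    return A @ np.asarray(user_input, dtype=float)

# An example customized mapping: a 3-joint arm driven by a 2-axis
# joystick, where the third joint blends both joystick axes.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
command = retarget([0.2, -0.4], A)  # 3-dimensional joint-velocity command
```

In practice, the paper's point is that users adapt this mapping themselves, so the interface must tolerate noisy or poorly chosen customization examples.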