Mixture of experts on Riemannian manifolds for visual-servoing fixtures

Abstract

Adaptive Virtual Fixtures (VFs) for teleoperation often rely on visual input for online adaptation. State estimation from visual detections is never perfect, which affects the quality and robustness of the adaptation. It is therefore important to quantify how uncertain a vision-based estimate is; this can, for example, inform how to modulate a fixture's stiffness to reduce the physical force a human operator has to apply. Furthermore, the target of a manipulation task might not be known from the beginning, which creates the need for a principled way to add and remove fixtures as possible targets appear in the robot workspace. In this paper we propose an on-manifold Mixture of Experts (MoE) model that synthesizes visual-servoing fixtures while elegantly handling full pose detection uncertainties and 6D teleoperation goals in a unified framework. An arbitration function allocating authority among multiple vision-based fixtures arises naturally from the MoE formulation. We show that this approach allows a teleoperator to insert multiple printed circuit boards (PCBs) with high precision without requiring the manual design of VFs to guide the robot motion. An example video visualizing the probability distribution resulting from our model is available at: https://youtu.be/GKMQvbJ5Oz
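To illustrate the idea of an arbitration function arising from an MoE formulation, the following minimal sketch computes per-expert responsibilities (arbitration weights) from Gaussian components and uses detection uncertainty to scale a blended stiffness. It is an assumption-laden simplification, not the paper's implementation: positions are treated as Euclidean (the paper operates on the Riemannian pose manifold for full 6D goals), and all function names, parameters, and the stiffness rule are hypothetical.

```python
import numpy as np

def arbitration_weights(x, means, covs, priors):
    """MoE responsibilities for K fixture experts (hypothetical illustration).

    x: current end-effector position (3,), simplified to R^3 here.
    means, covs: per-expert Gaussian parameters from visual detections.
    priors: mixing coefficients (K,).
    Returns normalized responsibilities used to arbitrate between fixtures.
    """
    log_w = []
    for mu, S, pi in zip(means, covs, priors):
        d = x - mu
        _, logdet = np.linalg.slogdet(S)
        maha = d @ np.linalg.solve(S, d)
        log_w.append(np.log(pi) - 0.5 * (logdet + maha + len(x) * np.log(2 * np.pi)))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return w / w.sum()

def blended_stiffness(weights, covs, k_max=500.0):
    """Blend a per-expert stiffness, scaled down as detection uncertainty grows.

    A more uncertain expert (larger covariance trace) contributes a lower
    stiffness, so the operator retains more authority near uncertain targets.
    """
    k = 0.0
    for w, S in zip(weights, covs):
        k += w * k_max / (1.0 + np.trace(S))
    return k
```

In such a scheme, experts appearing or disappearing with detected targets simply adds or removes mixture components, and the normalized responsibilities keep the overall guidance consistent.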
