Vision-based virtual fixture generation for teleoperated robotic manipulation

Abstract

In this paper we present a vision-based system for online virtual fixture generation, suitable for manipulation tasks performed with remotely controlled robots. The system uses a stereo camera rig that provides accurate pose estimation of parts in the robot's surrounding environment by means of feature detection algorithms. The proposed approach allows fast adaptation of the teleoperation system to different manipulation tasks without tedious reimplementation of virtual constraints. Our main goal is to improve the efficiency of bilateral teleoperation systems by reducing the effort the human operator spends programming the system. With this method, virtual guides do not need to be programmed a priori; instead, they can be generated on the fly and updated at any time, which makes the system suitable for unstructured environments. In addition, the methodology is easily adaptable to any kind of teleoperation system, since it is independent of the master/slave robots used. To validate our approach, we performed a series of experiments in an emulated industrial scenario, showing that a generic telemanipulation task can be easily accomplished without affecting the transparency of the system.
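
As a minimal, hypothetical sketch of the underlying idea (not the authors' implementation, which the abstract does not detail), a guidance-type virtual fixture can be rendered as a virtual spring that pulls the slave tool toward a line anchored at the vision-estimated part pose, with the resulting force fed back to the master device. All function and variable names below are illustrative assumptions.

```python
# Hypothetical sketch: deriving a guidance virtual fixture from a
# vision-estimated part pose. Not the paper's actual implementation.
import numpy as np

def fixture_force(tool_pos, part_pos, approach_dir, stiffness=200.0):
    """Spring-like guidance force (N) pulling the tool onto a line through
    the estimated part position along the approach direction (metres)."""
    d = approach_dir / np.linalg.norm(approach_dir)
    r = tool_pos - part_pos
    # Project the tool position onto the guidance line, then penalise
    # the perpendicular deviation with a virtual spring.
    closest = part_pos + np.dot(r, d) * d
    return stiffness * (closest - tool_pos)

# Example values standing in for the stereo/feature-detection pipeline output:
part = np.array([0.50, 0.10, 0.30])    # estimated part position
axis = np.array([0.0, 0.0, 1.0])       # estimated approach/insertion axis
tool = np.array([0.52, 0.08, 0.45])    # current slave tool position
print(fixture_force(tool, part, axis)) # force rendered on the master device
```

Because the fixture is parameterised only by the estimated pose, updating the pose estimate online regenerates the constraint on the fly, which is the adaptability property the abstract emphasises.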
