Visually guided grasping to study teleprogramming within the BAROCO testbed

Abstract

This paper describes vision functionalities required in future orbital laboratories, where robots will be needed to execute on-board scientific experiments as well as servicing and maintenance tasks under the remote control of ground operators. To this end, ESA has proposed a robotic configuration called EMATS, and a testbed has been developed at ESTEC to evaluate the ability of an EMATS-like robot to execute scientific tasks in automatic mode. In the same context, CNES is developing the BAROCO testbed to investigate remote control and teleprogramming, in which high-level commands such as 'Pick Object A' are provided as basic primitives. In nominal situations, the system has a priori knowledge of the positions of all objects. These positions are only coarsely known, but this knowledge is sufficient to predict the position of the object to be grasped with respect to the manipulator frame. Vision is required to ensure correct grasping and to guarantee sufficient accuracy for the subsequent operations. We describe our results on the visually guided grasping of static objects. This appears to be a very classical problem for which many results are available, but in many cases a realistic evaluation of the achievable accuracy is lacking, because such an evaluation requires tedious experiments. We present several results on the calibration of the experimental testbed, the recognition algorithms required to locate a 3D polyhedral object, and the grasping itself.
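As a minimal sketch (not from the paper) of the prediction-then-refinement scheme the abstract describes, the snippet below composes a coarse a priori object pose in the laboratory frame with the manipulator base pose to predict the object pose in the manipulator frame, then replaces it with a vision-derived pose mapped through an assumed hand/eye calibration. All frame names, transforms, and numerical values are hypothetical.

```python
# Sketch only: coarse prediction of the grasp target pose, refined by vision.
# Frames, calibration transforms, and values are illustrative assumptions.
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Coarse a priori pose of object A in the laboratory frame.
T_lab_obj_prior = make_pose(rot_z(0.10), np.array([1.20, 0.35, 0.80]))

# Pose of the manipulator base in the laboratory frame (from calibration).
T_lab_base = make_pose(rot_z(np.pi / 2), np.array([0.90, 0.00, 0.75]))

# Predicted object pose in the manipulator frame:
#   T_base_obj = inv(T_lab_base) @ T_lab_obj
T_base_obj_pred = np.linalg.inv(T_lab_base) @ T_lab_obj_prior

# Vision measures the object pose in the camera frame; the hand/eye
# calibration T_base_cam maps it into the manipulator frame.
T_base_cam = make_pose(rot_z(-0.05), np.array([0.10, 0.05, 0.60]))
T_cam_obj_measured = make_pose(rot_z(0.12), np.array([0.95, 0.28, 0.22]))
T_base_obj_refined = T_base_cam @ T_cam_obj_measured

# The coarse prediction bounds the search region for recognition; the
# vision-refined pose is the one used to command the grasp.
correction = T_base_obj_refined[:3, 3] - T_base_obj_pred[:3, 3]
print("predicted position (m):     ", np.round(T_base_obj_pred[:3, 3], 3))
print("vision-refined position (m):", np.round(T_base_obj_refined[:3, 3], 3))
print("correction applied (m):     ", np.round(correction, 3))
```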