32 research outputs found

    Active, uncalibrated visual servoing

    Proposes a method for visual control of a robotic system that does not require an explicit calibration between image space and the world coordinate system. Calibration is known to be a difficult and error-prone process. By extracting control information directly from the image, the authors free their technique from the errors normally associated with a fixed calibration. They demonstrate this by performing a peg-in-hole alignment using an uncalibrated camera to control the positioning of the peg. The algorithm uses feedback from a simple geometric effect, rotational invariance, to drive the positioning servo loop, and an approximation to the image Jacobian to provide smooth, near-continuous control.
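    The image-Jacobian control mentioned here can be sketched in its textbook form: map the image-space error of a point feature through a pseudo-inverse of the interaction matrix to get a camera velocity. This is a minimal illustration, not the authors' exact formulation; the depth value and gain are assumed.

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Approximate image Jacobian (interaction matrix) of a point
        feature (x, y) at assumed depth Z, mapping the 6-DoF camera
        velocity screw to image-plane feature motion."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def servo_step(feature, target, Z=1.0, gain=0.5):
        """One control-loop step: drive the image error toward zero."""
        error = np.asarray(feature) - np.asarray(target)
        L = interaction_matrix(*feature, Z)
        # Least-squares pseudo-inverse turns the 2-D image error into
        # a 6-D camera velocity (vx, vy, vz, wx, wy, wz).
        return -gain * np.linalg.pinv(L) @ error

    v = servo_step((0.2, -0.1), (0.0, 0.0))
    ```

    Because only an approximate Jacobian is needed, a rough depth estimate suffices: the closed loop corrects residual errors, which is what makes the uncalibrated approach workable.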

    Visual Control System for Robotic Welding


    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
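    The binocular correction loop described above can be sketched as follows: estimate the end-effector's 3-D position by triangulating a rectified stereo pair, then apply a proportional velocity correction toward the target. The focal length, baseline, and gain are assumed values, and the paper's actual controller (built on optical-flow measures) is more elaborate than this illustration.

    ```python
    import numpy as np

    def triangulate(left_px, right_px, f=500.0, baseline=0.1):
        """Depth from horizontal disparity in a rectified stereo rig,
        then back-projection to a 3-D point in the left-camera frame."""
        xl, y = left_px
        xr, _ = right_px
        disparity = xl - xr          # pixels; positive for finite depth
        Z = f * baseline / disparity
        return np.array([xl * Z / f, y * Z / f, Z])

    def correct_velocity(left_px, right_px, target_xyz, gain=1.0):
        """Proportional correction of the end-effector trajectory:
        the stereo position estimate is compared with the target and
        the residual error is fed back continuously."""
        p = triangulate(left_px, right_px)
        return -gain * (p - np.asarray(target_xyz))
    ```

    Because both the position estimate and the target are measured in the same (uncalibrated) visual frame, systematic calibration errors largely cancel in the feedback loop.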

    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. Supplementary videos: https://fsadeghi.github.io/Sim2RealViewInvariantServo
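    The core idea of a recurrent servoing policy can be sketched with a minimal NumPy recurrent cell: the hidden state accumulates evidence about how past actions moved the arm, which a single feedforward pass cannot resolve under viewpoint ambiguity. The weights below are random stand-ins, not a trained model, and the sizes are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    OBS, HID, ACT = 8, 16, 3  # assumed sizes: visual features, memory, arm action

    # Random stand-in weights; in the paper these would be learned with RL.
    W_x = rng.normal(scale=0.1, size=(HID, OBS))
    W_h = rng.normal(scale=0.1, size=(HID, HID))
    W_a = rng.normal(scale=0.1, size=(ACT, HID))

    def policy_step(obs, hidden):
        """One recurrent step: fold the new observation into memory,
        then emit an action from the updated hidden state."""
        hidden = np.tanh(W_x @ obs + W_h @ hidden)
        return W_a @ hidden, hidden

    hidden = np.zeros(HID)
    for _ in range(5):                    # closed loop over several steps
        obs = rng.normal(size=OBS)        # stand-in for image features
        action, hidden = policy_step(obs, hidden)
    ```

    The key design point is that the action at each step depends on the hidden state, so the controller can infer the unknown viewpoint-to-motion mapping from how its own past actions changed the observations.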

    Visual servoing of a robot arm in the absence of geometric information

    This article extends classical visual servoing results to the uncalibrated case. These extensions are the use of only approximately known physical camera parameters, and the assumptions that the structure of the observed scene is unknown and that an estimate of the norm of the hand/camera translation can be provided. The robot is controlled by task-function regulation, the task being the displacement of a camera relative to a target object. This method uses the image Jacobian, which depends on the distance between the camera and the target as well as on the target's Euclidean structure. Until now, only approximations of this Jacobian have been used. The main contribution of this work is the inclusion, within the control structure, of the computation of the Jacobian by Euclidean reconstruction together with the hand/camera transformation. The reconstruction method adopted is iterative, based on affine approximations of the projective camera model.