
    Autonomous vision-guided bi-manual grasping and manipulation

    This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities, using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, in which a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. This concept was then extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme was developed to control the motion of the arms and position them at desired grasp locations. We then combined this with a dynamic position controller to move the grasped object along a prescribed trajectory using both arms. The presented approach was validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate on symmetric dual-arm manipulation tasks and a 73% success rate on asymmetric dual-arm manipulation tasks.
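    The image-based visual servoing step mentioned in the abstract typically follows the classic interaction-matrix control law. Below is a minimal sketch of that law for point features, assuming normalized image coordinates and known feature depths; the function name, gain and overall structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ibvs_velocity(features, desired_features, depths, focal=1.0, gain=0.5):
    """Classic IBVS control law: v = -gain * pinv(L) @ (s - s*), where L is
    the interaction matrix stacked over all point features. A generic sketch,
    not the paper's exact controller."""
    rows = []
    for (x, y), z in zip(features, depths):
        # Interaction matrix of a normalized image point (x, y) at depth z.
        rows.append([-focal / z, 0, x / z, x * y, -(focal + x**2), y])
        rows.append([0, -focal / z, y / z, focal + y**2, -x * y, -x])
    L = np.array(rows)
    error = (np.asarray(features) - np.asarray(desired_features)).ravel()
    # Camera twist (vx, vy, vz, wx, wy, wz) driving the features to the goal.
    return -gain * np.linalg.pinv(L) @ error
```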

    Robust Cooperative Manipulation without Force/Torque Measurements: Control Design and Experiments

    This paper presents two novel control methodologies for the cooperative manipulation of an object by N robotic agents. First, we design an adaptive control protocol that employs quaternion feedback for the object orientation to avoid potential representation singularities. Second, we propose a control protocol that guarantees predefined transient and steady-state performance for the object trajectory. Both methodologies are decentralized, since the agents compute their own control signals without communicating with each other, and are robust to external disturbances and model uncertainties. Moreover, we consider rigid grasping points and avoid the need for force/torque measurements. Load distribution is also included via a grasp-matrix pseudo-inverse to account for potential differences in the agents' power capabilities. Finally, simulation and experimental results with two robotic arms verify the theoretical findings.
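    The load distribution via a grasp-matrix pseudo-inverse can be sketched as follows. Weighting the agents by "power capability" through a diagonal matrix is an assumption about one plausible realization, not the paper's exact scheme.

```python
import numpy as np

def distribute_load(grasp_matrices, object_wrench, weights=None):
    """Distribute a desired 6D object wrench among N agents via a (weighted)
    grasp-matrix pseudo-inverse, h = G_W^+ w. Sketch only; the per-agent
    weighting is a hypothetical way to encode power capabilities."""
    G = np.hstack(grasp_matrices)            # 6 x 6N grasp matrix
    if weights is None:
        weights = np.ones(G.shape[1])
    W_inv = np.diag(1.0 / np.asarray(weights))
    # Weighted pseudo-inverse: G_W^+ = W^-1 G^T (G W^-1 G^T)^-1
    G_pinv = W_inv @ G.T @ np.linalg.inv(G @ W_inv @ G.T)
    h = G_pinv @ object_wrench               # stacked agent wrenches (6N,)
    return np.split(h, len(grasp_matrices))  # one 6-vector per agent
```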

    Adaptive Constrained Kinematic Control using Partial or Complete Task-Space Measurements

    Recent advancements in constrained kinematic control make it an attractive strategy for controlling robots with arbitrary geometry in challenging tasks. Most current works assume that the robot kinematic model is precise enough for the task at hand. However, with increasing demands and safety requirements in robotic applications, there is a need for controllers that compensate online for kinematic inaccuracies. We propose an adaptive constrained kinematic control strategy based on quadratic programming, which uses partial or complete task-space measurements to compensate online for calibration errors. Our method is validated in experiments that show increased accuracy and safety compared to a state-of-the-art kinematic control strategy. (Accepted for publication in IEEE Transactions on Robotics, 2022; 16 pages.)
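    A minimal sketch of the quadratic-programming structure such a controller typically has, assuming joint-velocity box constraints and omitting the adaptive calibration-error compensation; cvxpy and all names here are illustrative choices, not the authors' code.

```python
import cvxpy as cp

def constrained_kinematic_step(J, x_err, qdot_max, gain=1.0):
    """One step of QP-based constrained kinematic control: track a
    task-space error under joint-velocity bounds. Generic sketch of the
    QP structure the abstract describes, not the paper's controller."""
    n = J.shape[1]
    qdot = cp.Variable(n)
    # Minimize the task-space tracking residual ||J qdot + gain * x_err||^2,
    # which drives the error toward zero at the chosen gain.
    objective = cp.Minimize(cp.sum_squares(J @ qdot + gain * x_err))
    constraints = [cp.abs(qdot) <= qdot_max]   # box constraints on velocity
    cp.Problem(objective, constraints).solve()
    return qdot.value
```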

    A Graph-based Optimization Framework for Hand-Eye Calibration for Multi-Camera Setups

    Hand-eye calibration is the problem of estimating the spatial transformation between a reference frame, usually the base of a robot arm or its gripper, and the reference frame of one or multiple cameras. This calibration is generally solved as a non-linear optimization problem; what is rarely done, however, is to exploit the underlying graph structure of the problem itself. Indeed, hand-eye calibration can be seen as an instance of the Simultaneous Localization and Mapping (SLAM) problem. Inspired by this fact, in this work we present a pose-graph approach to the hand-eye calibration problem that extends a recent state-of-the-art solution in two ways: i) by formulating the solution for eye-on-base setups with one camera; ii) by covering multi-camera robotic setups. The proposed approach has been validated in simulation against standard hand-eye calibration methods. Moreover, a real application is shown. In both scenarios, the proposed approach outperforms all alternative methods. With this paper we release an open-source implementation of our graph-based optimization framework for multi-camera setups. (Accepted for publication at the 2023 IEEE International Conference on Robotics and Automation, ICRA.)
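    Viewing each robot/camera motion pair as an edge carrying an AX = XB constraint, a least-squares formulation can be sketched as below. This only illustrates the constraint that a pose-graph approach optimizes, using scipy rather than the authors' released framework; names and parameterization are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def handeye_residuals(params, A_list, B_list):
    """Residuals for the AX = XB hand-eye constraint, one edge per motion
    pair. params = rotation vector (3) + translation (3) of the unknown X
    (eye-in-hand: A = relative gripper motion, B = relative camera motion)."""
    R_x = Rotation.from_rotvec(params[:3]).as_matrix()
    t_x = params[3:]
    res = []
    for A, B in zip(A_list, B_list):
        Ra, ta = A[:3, :3], A[:3, 3]
        Rb, tb = B[:3, :3], B[:3, 3]
        # Rotation residual: log(Ra Rx (Rx Rb)^T) vanishes at the optimum.
        res.extend(Rotation.from_matrix(Ra @ R_x @ (R_x @ Rb).T).as_rotvec())
        # Translation residual: Ra t_x + ta - (Rx tb + t_x) = 0 at the optimum.
        res.extend(Ra @ t_x + ta - (R_x @ tb + t_x))
    return np.asarray(res)

# Usage sketch: sol = least_squares(handeye_residuals, np.zeros(6),
#                                   args=(A_list, B_list))
```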

    A Comparative Review of Hand-Eye Calibration Techniques for Vision Guided Robots

    Hand-eye calibration enables proper perception of the environment in which a vision-guided robot operates and allows the scene to be mapped into the robot's frame. Proper hand-eye calibration is crucial when sub-millimetre perceptual accuracy is needed. For example, in robot-assisted surgery a poorly calibrated robot could damage surrounding vital tissues and organs, endangering the patient's life. A great deal of research has gone into ways of accurately calibrating the hand-eye system of a robot, with varying levels of success, challenges, resource requirements and complexity. As such, academics and industrial practitioners face the challenge of choosing which algorithm meets their implementation requirements given the identified constraints. This review gives a general overview of the strengths and weaknesses of the hand-eye calibration algorithms available to academics and industrial practitioners, supporting informed design decisions, and points out possible areas of research based on the identified challenges. We also discuss different calibration targets, an important part of the calibration process that is often overlooked in the design stage.
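    For practitioners comparing algorithms, OpenCV ships several of the classic solvers reviewed in this space behind a single API. The helper below (a hypothetical name) simply runs them on the same pose data; the input lists are assumed to come from the user's robot poses and calibration-target detections.

```python
import cv2

def compare_handeye_methods(R_g2b, t_g2b, R_t2c, t_t2c):
    """Run the hand-eye solvers shipped with OpenCV on identical pose data
    so their estimates can be compared directly. Inputs: lists of
    gripper-to-base and target-to-camera rotations/translations."""
    methods = {
        "Tsai": cv2.CALIB_HAND_EYE_TSAI,
        "Park": cv2.CALIB_HAND_EYE_PARK,
        "Horaud": cv2.CALIB_HAND_EYE_HORAUD,
        "Andreff": cv2.CALIB_HAND_EYE_ANDREFF,
        "Daniilidis": cv2.CALIB_HAND_EYE_DANIILIDIS,
    }
    results = {}
    for name, m in methods.items():
        R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c, method=m)
        results[name] = (R, t)   # camera-to-gripper rotation and translation
    return results
```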

    Hand-eye calibration with a remote centre of motion

    In the eye-in-hand robot configuration, hand-eye calibration plays a vital role in completing the link between the robot and camera coordinate systems. Calibration algorithms are mature and provide accurate transformation estimates for an effective camera-robot link, but they rely on a sufficiently wide range of calibration data to avoid errors and degenerate configurations. This can be difficult to obtain with keyhole surgical robots, because they are mechanically constrained to move around a remote centre of motion (RCM) located at the trocar port. The trocar limits the range of feasible calibration poses, resulting in ill-conditioned hand-eye constraints. In this letter, we propose a new approach that incorporates the RCM constraints into the hand-eye formulation. We show that this not only avoids ill-conditioned constraints but is also more accurate than classic hand-eye calibration with free 6-DoF motion, since it solves simpler equations that take advantage of the reduced DoF. We validate our method in simulation, to test numerical stability, and with a physical implementation on an RCM-constrained KUKA LBR iiwa 14 R820 equipped with a NanEye stereo camera.
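    The RCM constraint reduces the feasible motions to rotations about the trocar point plus insertion along, and roll about, the tool axis. The sketch below builds a pose under that reduced-DoF constraint, as a geometric illustration of why the calibration equations simplify; it is not the paper's formulation, and all names are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rcm_pose(rcm_point, direction, insertion_depth, roll):
    """Build an end-effector pose whose tool axis passes through a fixed
    remote centre of motion (the trocar point). Free parameters: axis
    direction (2 DoF), insertion depth (1) and tool roll (1), i.e. 4 DoF
    instead of the free 6 DoF. Geometric sketch only."""
    z = np.asarray(direction, float)
    z /= np.linalg.norm(z)                     # tool axis through the RCM
    # Build any frame whose z-axis is the tool axis, then roll about it.
    x = np.cross([0.0, 0.0, 1.0], z)
    if np.linalg.norm(x) < 1e-9:               # axis parallel to world z
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z]) @ Rotation.from_euler("z", roll).as_matrix()
    T = np.eye(4)
    T[:3, :3] = R
    # Tool tip sits insertion_depth past the trocar along the axis.
    T[:3, 3] = np.asarray(rcm_point) + insertion_depth * z
    return T
```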