104,408 research outputs found

    Hand-eye calibration for robotic assisted minimally invasive surgery without a calibration object

    In a robot-mounted camera arrangement, hand-eye calibration estimates the rigid relationship between the robot and camera coordinate frames. Most hand-eye calibration techniques use a calibration object, estimating the relative transformation of the camera in several views of the calibration object and linking these to the forward kinematics of the robot to compute the hand-eye transformation. Such approaches achieve good accuracy for general use, but for applications such as robotic assisted minimally invasive surgery, acquiring a calibration sequence multiple times during a procedure is not practical. In this paper, we present a new approach that tackles the problem by using the robotic surgical instruments as the calibration object, exploiting their well-known geometry from the CAD models used for manufacturing. Our approach removes the requirement for a custom sterile calibration object in the operating room and simplifies the process of acquiring calibration data when the laparoscope is constrained to move around a remote centre of motion. This is the first demonstration of the feasibility of performing hand-eye calibration using components of the robotic system itself, and we show promising validation results on synthetic data as well as data acquired with the da Vinci Research Kit.
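
For contrast with the object-free approach described above, the conventional checkerboard pipeline can be sketched as follows. This is a minimal illustration using OpenCV's calibrateHandEye; the board size, square size, and sample format are assumptions for the sketch, not taken from the paper.

```python
# Hedged sketch of the conventional calibration-object pipeline, using
# OpenCV's calibrateHandEye (OpenCV >= 4.1). Board size, square size, and the
# way robot poses are logged are illustrative assumptions.
import cv2
import numpy as np

def classic_hand_eye(samples, K, dist, board_size=(9, 6), square=0.01):
    """samples: list of (R_gripper2base, t_gripper2base, image) acquired while
    the robot shows a checkerboard to the camera from several viewpoints."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
    for R_g, t_g, img in samples:
        found, corners = cv2.findChessboardCorners(img, board_size)
        if not found:
            continue                                  # skip views where detection fails
        _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)  # board pose in camera frame
        R_g2b.append(R_g); t_g2b.append(t_g)
        R_t2c.append(cv2.Rodrigues(rvec)[0]); t_t2c.append(tvec)

    # Solves the classic AX = XB hand-eye problem (Tsai-Lenz by default).
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
    return R_cam2gripper, t_cam2gripper
```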

    Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image using CNN

    Collaborative robots are becoming more common on factory floors as well as in everyday environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected, and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups; however, if they are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, using a simple 2D colour image as input, so that no Eye-to-Hand calibration is needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) so that it can be used with heterogeneous robot arms. Our method is capable of detecting the robot in real-time, and new robot types can be added using significantly smaller training datasets than a fully trained network requires. We present the data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training, and test results using real robots from Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible application areas of our method together with possible improvements. Comment: 6+n pages, ICRA 2019 submission
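
The two-stage transfer-learning idea lends itself to a short sketch: freeze a backbone trained on the original robot, re-train the task heads on a small dataset of the new robot, then fine-tune everything at a lower learning rate. The network, data loader, and hyper-parameters below are placeholders, not the authors' exact architecture.

```python
# Illustrative two-stage transfer-learning loop (PyTorch); all names and
# hyper-parameters are assumptions for the sketch.
import torch
import torch.nn as nn

class RobotPoseCNN(nn.Module):
    """Shared convolutional backbone with two task heads (detection + 3D joints)."""
    def __init__(self, n_joints=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.detect_head = nn.Linear(64, 4)            # robot bounding box
        self.joint_head = nn.Linear(64, n_joints * 3)  # 3D joint positions

    def forward(self, x):
        f = self.backbone(x)
        return self.detect_head(f), self.joint_head(f)

def train_stage(model, loader, lr, freeze_backbone):
    for p in model.backbone.parameters():
        p.requires_grad = not freeze_backbone
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    for images, boxes, joints in loader:
        opt.zero_grad()
        pred_box, pred_joints = model(images)
        loss = nn.functional.mse_loss(pred_box, boxes) + \
               nn.functional.mse_loss(pred_joints, joints)
        loss.backward()
        opt.step()

# Stage 1: adapt only the heads on a small dataset of the new robot type;
# Stage 2: fine-tune the whole network at a lower learning rate.
# train_stage(model, new_robot_loader, lr=1e-3, freeze_backbone=True)
# train_stage(model, new_robot_loader, lr=1e-4, freeze_backbone=False)
```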

    Investigating deep-learning-based solutions for flexible and robust hand-eye calibration in robotics

    Cameras are the main sensors for robots to perceive their environments because they provide high-quality information at low cost. However, transforming the information obtained from cameras into robotic actions can be challenging. To manipulate objects in camera scenes, robots need to establish a transformation between the camera and the robot base, which is known as hand-eye calibration. Achieving accurate hand-eye calibration is critical for precise robotic manipulation, yet traditional approaches can be time-consuming, error-prone, and unable to account for changes in the camera or robot base over time. This thesis proposes a novel approach that leverages the power of deep learning to automatically learn the mapping between the robot’s joint angles and the camera’s images, enabling real-time calibration updates. The approach samples the robot and camera spaces discretely and represents them continuously, enabling efficient and accurate computation of calibration parameters. By automating the calibration process and using deep learning algorithms, a more robust and efficient solution for hand-eye calibration in robotics is offered. To develop a robust and flexible hand-eye calibration approach, three main studies were conducted. In the first study, a deep learning-based regression architecture was developed that processes RGB and depth images, together with the pose of a single reference point on the robot end-effector with respect to the robot base, acquired through the robot kinematic chain. The success of this architecture was tested in a simulated environment and two real robotic environments, evaluating the metric error and precision. In the second study, the developed approach was evaluated by moving from metric error to task error through a real robotic manipulation task, specifically pick-and-place. Additionally, the performance of the developed approach was compared with a classic hand-eye calibration approach using three evaluation criteria: the real robotic manipulation task, computational complexity, and repeatability. Finally, the learned calibration space of the developed deep learning-based hand-eye calibration approach was extended with new observations over time using Continual Learning, making the approach more robust and flexible in handling environmental changes. Two buffer-based approaches were developed to mitigate catastrophic forgetting, i.e. losing previously learned information as new observations are incorporated. Their performance was tested in a simulated and a real-world environment and compared with retraining the first study's network from scratch on all datasets.
Experimental results of this thesis reveal that: 1) a deep-learning-based hand-eye calibration approach achieves results competitive with classical approaches in terms of metric error (positional and rotational deviation from the ground truth) while eliminating data re-collection and re-training when the camera pose changes over time, offers 96 times better repeatability (precision) than the classic approach, and sets the state of the art among deep-learning-based hand-eye calibration approaches; 2) it also achieves results competitive with the classic approaches on a real robotic manipulation task and reduces computational complexity; 3) by combining the deep-learning-based hand-eye calibration approach with Continual Learning, the learned calibration space can be extended with new observations without training the network from scratch, at a small accuracy gap (less than 1.5 mm and 2.5 degrees for the translation and orientation components in the simulated and real-world environments). Overall, the proposed approach offers a more efficient and robust solution for hand-eye calibration in robotics, providing greater accuracy and the flexibility to adapt to environments where the relative pose of the robot and camera changes over time, whether through robot or camera movement. The results of the studies demonstrate the effectiveness of the approach in achieving precise and reliable robotic manipulation, making it a promising solution for robotics applications.
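
As a rough illustration of the first study's setup, a regression network might take RGB and depth images together with the end-effector reference-point pose and regress the camera-to-base transform. The layer sizes, pose encoding, and loss weighting below are assumptions, not the thesis architecture.

```python
# Minimal sketch of a hand-eye regression network: RGB and depth images plus an
# end-effector reference-point pose go in, the camera-to-base translation and
# rotation (as a quaternion) come out. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class HandEyeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth_enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_enc = nn.Sequential(nn.Linear(7, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 7))

    def forward(self, rgb, depth, ref_pose):
        f = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth),
                       self.pose_enc(ref_pose)], dim=1)
        out = self.head(f)
        t, q = out[:, :3], nn.functional.normalize(out[:, 3:], dim=1)
        return t, q  # translation and unit quaternion of the camera w.r.t. the robot base

def pose_loss(t, q, t_gt, q_gt, beta=0.1):
    # Combined positional + rotational error, in the spirit of the reported metric error.
    return nn.functional.mse_loss(t, t_gt) + beta * nn.functional.mse_loss(q, q_gt)
```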

    Adjoint Transformation Algorithm for Hand-Eye Calibration with Applications in Robotic Assisted Surgery

    Hand-eye calibration aims at determining the unknown rigid transformation between the coordinate systems of a robot arm and a camera. Existing hand-eye algorithms using closed-form solutions followed by iterative non-linear refinement provide accurate calibration results within a broad range of robotic applications. However, in the context of surgical robotics, hand-eye calibration is still a challenging problem due to the required accuracy within the millimetre range, coupled with the large displacement between endoscopic cameras and the robot end-effector. This paper presents a new method for hand-eye calibration based on the adjoint transformation of twist motions that solves the problem iteratively through alternating estimations of rotation and translation. We show that this approach converges to a solution with higher accuracy than closed-form initializations within a broad range of synthetic and real experiments. We also propose a stereo hand-eye formulation that can be used in the context of both our proposed method and previous state-of-the-art closed-form solutions. Experiments with real data are conducted with a stereo laparoscope, the KUKA robot arm manipulator, and the da Vinci surgical robot, showing that both our new alternating solution and the explicit representation of stereo camera hand-eye relations contribute to a higher calibration accuracy.
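
For context, the closed-form baseline that iterative schemes of this kind typically start from can be sketched as follows: the rotation is estimated from paired motion axes and the translation by linear least squares. This is a generic AX = XB sketch, not the adjoint-transformation method itself.

```python
# Generic closed-form AX = XB baseline: rotation from paired motion axes
# (Kabsch/SVD), then translation by least squares. A and B are relative gripper
# and camera motions; conventions here are assumptions for the sketch.
import numpy as np
from scipy.spatial.transform import Rotation

def closed_form_hand_eye(A_list, B_list):
    """A_list, B_list: lists of 4x4 relative gripper/camera motions with A X = X B."""
    # 1) Rotation: the motion axes satisfy a_i = R_X b_i, solved by Kabsch/SVD.
    a = np.stack([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
    b = np.stack([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
    U, _, Vt = np.linalg.svd(b.T @ a)
    if np.linalg.det((U @ Vt).T) < 0:      # keep a proper rotation (det = +1)
        Vt[-1] *= -1
    R_X = (U @ Vt).T
    # 2) Translation: (R_A - I) t_X = R_X t_B - t_A, stacked and solved in least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```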

    Hand-eye calibration with a remote centre of motion

    In the eye-in-hand robot configuration, hand-eye calibration plays a vital role in completing the link between the robot and camera coordinate systems. Calibration algorithms are mature and provide accurate transformation estimates for an effective camera-robot link, but they rely on a sufficiently wide range of calibration data to avoid errors and degenerate configurations. This can be difficult in the context of keyhole surgical robots because they are mechanically constrained to move around a remote centre of motion (RCM) located at the trocar port. The trocar limits the range of feasible calibration poses and results in ill-conditioned hand-eye constraints. In this letter, we propose a new approach that deals with this problem by incorporating the RCM constraints into the hand-eye formulation. We show that this not only avoids ill-conditioned constraints but is also more accurate than classic hand-eye calibration with free 6DoF motion, because it solves simpler equations that take advantage of the reduced DoF. We validate our method using simulation to test numerical stability and a physical implementation on an RCM-constrained KUKA LBR iiwa 14 R820 equipped with a NanEye stereo camera.
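
The reduced degrees of freedom can be made concrete with a small sketch: a laparoscope pivoting about an RCM at the trocar has only two pivot angles, a roll about its own axis, and an insertion depth. The frame conventions below are assumptions for illustration, not the paper's formulation.

```python
# Illustrative 4-DoF parameterization of motion about a remote centre of motion.
import numpy as np
from scipy.spatial.transform import Rotation

def rcm_pose(rcm_point, pivot_yaw, pivot_pitch, roll, insertion):
    """4x4 pose of the scope tip, constrained so its axis passes through rcm_point."""
    # Orientation: pivot about the trocar, then roll about the scope axis (z).
    R = (Rotation.from_euler('zyx', [pivot_yaw, pivot_pitch, 0.0]) *
         Rotation.from_euler('z', roll)).as_matrix()
    T = np.eye(4)
    T[:3, :3] = R
    # The tip sits 'insertion' metres along the scope axis, measured from the RCM.
    T[:3, 3] = np.asarray(rcm_point) + R[:, 2] * insertion
    return T

# Sampling calibration poses this way keeps every pose feasible through the
# trocar port, which is exactly what limits classic free-motion calibration.
poses = [rcm_pose([0.0, 0.0, 0.1], yaw, pitch, 0.0, 0.05)
         for yaw in np.linspace(-0.3, 0.3, 3) for pitch in np.linspace(-0.3, 0.3, 3)]
```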

    EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration

    Hand-eye calibration is a critical task in robotics, as it directly affects the efficacy of critical operations such as manipulation and grasping. Traditional methods for achieving this objective necessitate the careful design of joint poses and the use of specialized calibration markers, while most recent learning-based approaches, which rely solely on pose regression, are limited in their ability to diagnose inaccuracies. In this work, we introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and offers comprehensive coverage of positioning accuracy across the entire robot configuration space. We introduce two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration, which enable accurate end-to-end optimization of the calibration process and eliminate the need for the laborious manual design of robot joint poses. Our evaluation demonstrates superior performance on synthetic and real-world datasets, enhancing downstream manipulation tasks by providing precise camera poses for locating and interacting with objects. The code is available at the project page: https://ootts.github.io/easyhec
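
The differentiable-rendering component can be sketched schematically: render a robot silhouette from the current camera pose estimate, compare it with an observed mask, and update the pose by gradient descent. The render_robot_mask function below is a placeholder for a differentiable renderer (e.g. a PyTorch3D-style rasterizer), not the EasyHeC code or a real API.

```python
# Schematic differentiable-rendering pose refinement; render_robot_mask is a
# hypothetical differentiable renderer passed in by the caller.
import torch

def refine_camera_pose(pose_params, observed_mask, joint_angles, render_robot_mask,
                       steps=200, lr=1e-2):
    """pose_params: 6-vector (axis-angle + translation) of the camera pose estimate."""
    pose_params = pose_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose_params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render_robot_mask(pose_params, joint_angles)   # differentiable silhouette
        loss = torch.nn.functional.binary_cross_entropy(rendered, observed_mask)
        loss.backward()
        opt.step()
    return pose_params.detach()
```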

    Visual servoing in robotic manufacturing systems for accurate positioning

    Automated robotic manufacturing systems require accurate robot positioning. Visual servoing is an increasingly popular method to enhance such positioning accuracy. Based on the definition of the error signal, visual servoing is classified into three approaches: Position-Based Visual Servoing (PBVS), Image-Based Visual Servoing (IBVS), and Hybrid Visual Servoing (HVS). In this research, firstly, a novel Neural Network (NN) based hand-eye calibration is introduced in PBVS. A Multilayer Perceptron NN is used to approximate the nonlinear coordinate transform from image coordinates to real-world coordinates in visual servoing. The main advantages of NN-based hand-eye calibration are that it solves the hand-eye calibration problem without estimating the hand-eye transformation and improves object tracking accuracy as well. Experimental results on an industrial manufacturing robot show that the proposed calibration method outperforms the current transformation-matrix-solving method and the free hand-eye calibration method for 2D object tracking. Secondly, a new approach to switching control of IBVS with a laser pointer is proposed. A simple off-the-shelf laser pointer is used to perform depth estimation. The proposed system is robust to camera calibration and hand-eye calibration errors and does not require an object model. Compared with traditional IBVS, it avoids image singularities and image local minima, and succeeds even when only some of the image features are in the field of view. Moreover, the trajectory of the robot end effector is shortened. Experimental results are given to verify the effectiveness of the proposed method in a robotic manufacturing system for assembly.
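
The NN-based calibration idea, a network that maps image coordinates directly to workspace coordinates so that no explicit hand-eye transformation needs to be estimated, can be sketched as follows. The network size and the placeholder training data are assumptions for illustration only.

```python
# Sketch of an MLP learning the image-to-workspace mapping; the training pairs
# here are synthetic placeholders, not measured calibration data.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training pairs: image coordinates (u, v) of a tracked feature and the
# corresponding workspace coordinates (x, y) reported by the robot.
uv = np.random.rand(500, 2) * [640, 480]          # placeholder image points
xy = uv @ np.array([[0.001, 0.0], [0.0, 0.001]])  # placeholder workspace points

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
mlp.fit(uv, xy)

# At run time, the tracked image position maps straight to a robot target.
target_xy = mlp.predict(np.array([[320.0, 240.0]]))
```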

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
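
A minimal sketch of an externally coupled vision/force control law in the spirit of this approach: a position-based visual-servoing term drives the hand towards the goal pose while a force term complies with measured contact forces. The gains and frame conventions are illustrative assumptions, not the authors' controller.

```python
# Hedged sketch of a coupled vision/force velocity command.
import numpy as np

def coupled_velocity(pose_error, wrench, K_v=1.0, K_f=0.002, f_desired=None):
    """pose_error: 6-vector (translation + orientation error) of the hand w.r.t. the goal,
    expressed in the robot base frame; wrench: measured 6-vector force/torque."""
    if f_desired is None:
        f_desired = np.zeros(6)
    v_vision = -K_v * pose_error              # PBVS term: drive the pose error to zero
    v_force = -K_f * (wrench - f_desired)     # external force term: comply with contact
    return v_vision + v_force                 # commanded end-effector twist
```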

    Calibration of an active vision system and feature tracking based on 8-point projective invariants.

    by Chen Zhi-Yi. Thesis (M.Phil.), Chinese University of Hong Kong, 1997. Includes bibliographical references. Contents:
    Chapter 1: Introduction, covering the active vision paradigm, a review of existing active vision systems, our active vision system, the stages of calibrating an active vision system, and projective invariants and their application to feature tracking.
    Chapter 2: Calibration for an Active Vision System: Camera Calibration, covering an overview of camera calibration, Tsai's RAC-based calibration method with the pinhole camera model and radial distortion, Reg Willson's implementation of the algorithm, and the experimental setup, procedures, and results.
    Chapter 3: Calibration for an Active Vision System: Head-Eye Calibration, covering the motivation for head-eye calibration, a review of existing algorithms (classic approaches and self-calibration techniques), R. Tsai's hand-eye (head-eye) calibration approach, a local implementation using Denavit-Hartenberg link coordinate frames, and experimental results.
    Chapter 4: A New Tracking Method for Shape from Motion Using an Active Vision System, covering tracking of a projective basis across an image sequence with the active vision system, tracking of the remaining feature points with projective invariants, and recovery of shape from motion with the factorisation method.
    Chapter 5: Experiments on Feature Tracking with 3D Projective Invariants, covering the 8-point projective invariant, invariant-based transfer between distinct views of a 3D scene, and transfer experiments on real and synthetic image sequences of a camera calibration block and a human face model.
    Chapter 6: Conclusions and Future Research.
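
The thesis builds feature transfer on 8-point 3D projective invariants; as a simpler illustration of the underlying idea, the sketch below computes the classic cross ratio of four collinear points, the one-dimensional analogue, and checks that it is unchanged by a projective map of the line.

```python
# Cross-ratio demonstration: the 1D analogue of the 3D projective invariants
# used in the thesis; the map below is an arbitrary example.
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC) / (AD/BD) of four collinear points given as scalars."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

pts = np.array([0.0, 1.0, 2.0, 4.0])
# Apply a projective map of the line, x -> (2x + 1) / (x + 3).
mapped = (2 * pts + 1) / (pts + 3)
print(cross_ratio(*pts), cross_ratio(*mapped))   # both values agree (1.5)
```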

    Path Planning for Robust Image-Based Visual Servoing

    Vision feedback control loop techniques are efficient for a large class of applications, but they run into difficulties when the initial and desired robot positions are distant. Classical approaches are based on the regulation to zero of an error function computed from the current measurement and a constant desired one. With such an approach, it is not obvious how to introduce constraints on the realized trajectories or to ensure convergence for all initial configurations. In this paper, we propose a new approach that resolves these difficulties by coupling path planning in image space with image-based control. Constraints such as keeping the object in the camera field of view or avoiding the robot's joint limits can be taken into account at the task planning level. Furthermore, with this approach, current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. The proposed method is based on the potential field approach and applies whether or not the object shape and dimensions are known, and whether the camera calibration parameters are well or poorly estimated. Finally, real-time experimental results using an eye-in-hand robotic system are presented and confirm the validity of our approach.
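
A schematic sketch of the potential-field planning step described above: image feature points follow the gradient of an attractive potential towards their desired positions plus a repulsive potential near the image border, and the resulting trajectory is then tracked by the image-based controller. The gains and margins are illustrative assumptions.

```python
# Illustrative potential-field trajectory planning in image space.
import numpy as np

def plan_image_trajectory(s0, s_star, width=640, height=480,
                          k_att=0.05, k_rep=500.0, margin=30.0, steps=400):
    """s0, s_star: (N, 2) arrays of current and desired image feature points."""
    s = s0.astype(float).copy()
    path = [s.copy()]
    for _ in range(steps):
        grad = k_att * (s - s_star)                       # attractive: pull toward the goal
        for axis, limit in ((0, width), (1, height)):     # repulsive: keep features in view
            d_low, d_high = s[:, axis], limit - s[:, axis]
            grad[:, axis] -= k_rep * (d_low < margin) / np.maximum(d_low, 1e-3) ** 2
            grad[:, axis] += k_rep * (d_high < margin) / np.maximum(d_high, 1e-3) ** 2
        s -= grad                                         # gradient descent step
        path.append(s.copy())
    return np.stack(path)   # reference trajectory for the image-based controller
```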