Hand-Eye Calibration
Whenever a sensor is mounted on a robot hand it is important to know the
relationship between the sensor and the hand. The problem of determining this
relationship is referred to as hand-eye calibration, which is important in at
least two types of tasks: (i) mapping sensor-centered measurements into the robot
workspace and (ii) allowing the robot to move the sensor precisely. In the past,
some solutions were proposed in the particular case of a camera. With almost no
exception, all existing solutions attempt to solve the homogeneous matrix
equation AX=XB. First we show that there are two possible formulations of the
hand-eye calibration problem. One formulation is the classical one that we just
mentioned. A second formulation takes the form of the following homogeneous
matrix equation: MY=M'YB. The advantage of the latter is that the extrinsic and
intrinsic camera parameters need not be made explicit. Indeed, this formulation
directly uses the 3 by 4 perspective matrices (M and M') associated with two
positions of the camera. Moreover, this formulation, together with the classical
one, covers a wider range of camera-based sensors to be calibrated with respect
to the robot hand. Second, we develop a common mathematical framework to solve
for the hand-eye calibration problem using either of the two formulations. We
present two methods: (i) a rotation-then-translation solution and (ii) a
non-linear solver for rotation and translation. Third, we perform a stability analysis
both for our two methods and for the classical linear method of Tsai and Lenz
(1989). In light of this comparison, the non-linear optimization method, which
solves for rotation and translation simultaneously, appears to be the most
robust with respect to noise and measurement errors.
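The rotation-then-translation strategy for AX=XB can be sketched as follows. This is a minimal generic illustration (rotation by aligning rotation axes with orthogonal Procrustes, then translation by linear least squares), not the authors' exact algorithm; all function names are hypothetical.

```python
import numpy as np

def rotation_axis(R):
    # Axis of rotation from the skew-symmetric part of R (angle in (0, pi)).
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def solve_ax_xb(As, Bs):
    # Stage 1: rotation. For each motion pair, axis(R_A) = R_X axis(R_B),
    # so R_X is the orthogonal Procrustes alignment of the two axis sets.
    Ka = np.array([rotation_axis(A[:3, :3]) for A in As])
    Kb = np.array([rotation_axis(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(Kb.T @ Ka)
    Rx = Vt.T @ U.T
    if np.linalg.det(Rx) < 0:   # enforce a proper rotation
        Vt[-1] *= -1
        Rx = Vt.T @ U.T
    # Stage 2: translation. AX = XB implies (R_A - I) t_X = R_X t_B - t_A.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motions with non-parallel rotation axes are needed for the problem to be well posed, which is also why stability analyses such as the one above matter in practice.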
Hand-Eye Calibration of Robonaut
NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut Unit A. The intent of this calibration scheme is to improve the hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle.
The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
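The daily-calibration idea (refitting only joint offsets while the rest of the kinematic model stays fixed) can be sketched on a toy planar two-link arm with Gauss-Newton least squares. This is an illustrative assumption-laden example, not Robonaut's actual kinematics; link lengths and all names are hypothetical.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # link lengths of a toy planar two-link arm (metres)

def fk(q):
    # Forward kinematics: tip position of the two-link arm.
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def estimate_offsets(q_cmd, p_meas, iters=15):
    # Gauss-Newton fit of constant joint offsets d minimising
    # sum_i ||fk(q_i + d) - p_i||^2, mirroring a "daily" calibration that
    # refits only joint offsets against externally measured tip positions.
    d = np.zeros(2)
    for _ in range(iters):
        J_rows, r = [], []
        for q, p in zip(q_cmd, p_meas):
            s1, c1 = np.sin(q[0] + d[0]), np.cos(q[0] + d[0])
            s12 = np.sin(q[0] + d[0] + q[1] + d[1])
            c12 = np.cos(q[0] + d[0] + q[1] + d[1])
            J_rows.append([[-L1 * s1 - L2 * s12, -L2 * s12],
                           [ L1 * c1 + L2 * c12,  L2 * c12]])
            r.append(fk(q + d) - p)
        J = np.array(J_rows).reshape(-1, 2)
        r = np.array(r).reshape(-1)
        d -= np.linalg.lstsq(J, r, rcond=None)[0]
    return d
```

The full calibration described above would instead treat the link geometry itself as unknown; the structure of the fit is the same, just with more parameters.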
Uncertainty-Aware Hand–Eye Calibration
We provide a generic framework for the hand–eye calibration of vision-guided industrial robots. In contrast to traditional methods, we explicitly model the uncertainty of the robot in a stochastically founded way. Although the repeatability of modern industrial robots is high, their absolute accuracy typically is much lower. This uncertainty, especially if not considered, deteriorates the result of the hand–eye calibration. Our proposed framework not only results in a high accuracy of the computed hand–eye pose but also provides reliable information about the uncertainty of the robot. It further provides corrected robot poses for a convenient and inexpensive robot calibration. Our framework is computationally efficient and generic in several regards. It supports the use of a calibration target as well as self-calibration without the need for known 3-D points. It optionally enables the simultaneous calibration of the interior camera parameters. The framework is also generic with regard to the robot type and, hence, supports anthropomorphic as well as selective compliance assembly robot arm (SCARA) robots, for example. Simulated and real experiments show the validity of the proposed methods. An extensive evaluation of our framework on a public dataset shows a considerably higher accuracy than 15 state-of-the-art methods.
Learning to Calibrate - Estimating the Hand-eye Transformation without Calibration Objects
Hand-eye calibration is a method to determine the transformation linking the robot and camera coordinate systems. Conventional calibration algorithms use a calibration grid to determine camera poses corresponding to the robot poses, both of which are used in the main calibration procedure. Although such methods yield good calibration accuracy and are suitable for offline applications, they are not applicable in a dynamic environment such as robotic-assisted minimally invasive surgery (RMIS), because changes in the setup can be disruptive and time-consuming to the workflow, as each change requires yet another calibration procedure. In this paper, we propose a neural network-based hand-eye calibration method that does not require camera poses from a calibration grid but only uses the motion of surgical instruments in the camera frame and their corresponding robot poses as input to recover the hand-eye matrix. The advantages of using a neural network are that the method is not limited to a single rigid transformation alignment and can learn dynamic changes correlated with kinematics and tool motion/interactions. Its loss function is derived from the original hand-eye transformation, the re-projection error, and the pose error with respect to the remote centre of motion. The proposed method is validated with data from a da Vinci Si, and the results indicate that the designed network architecture can extract the relevant information and estimate the hand-eye matrix. Unlike conventional hand-eye approaches, it does not require camera pose estimation, which significantly simplifies the hand-eye problem in the RMIS context, as updating the hand-eye relationship can be done with a trained network and a sequence of images. This introduces the potential of creating a hand-eye calibration that can be updated online.
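The re-projection term of such a loss can be sketched as follows: given a candidate hand-eye pose, a known 3-D point on the instrument is projected into the image and compared with its detected pixel location. This is a hedged, generic pinhole-camera illustration, not the paper's exact loss; the frame convention (X_cam_base maps base-frame points into the camera frame) and all names are assumptions.

```python
import numpy as np

def reprojection_error(X_cam_base, p_base, K, uv_detected):
    # Map a 3-D point from the robot base frame into the camera frame via a
    # candidate hand-eye pose X_cam_base, project with pinhole intrinsics K,
    # and compare against the pixel location detected in the image.
    p_cam = X_cam_base[:3, :3] @ p_base + X_cam_base[:3, 3]
    uv = (K @ p_cam)[:2] / p_cam[2]
    return np.linalg.norm(uv - uv_detected)
```

A network-predicted hand-eye matrix that is consistent with the observed tool motion drives this residual toward zero over the training sequence.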
Online estimation of the hand-eye transformation from surgical scenes
Hand-eye calibration algorithms are mature and provide accurate
transformation estimations for an effective camera-robot link but rely on a
sufficiently wide range of calibration data to avoid errors and degenerate
configurations. We aim to solve the hand-eye problem in robotic-assisted
minimally invasive surgery and to simplify the calibration procedure by using a
neural-network method combined with a new objective function. We present a neural
network-based solution that estimates the transformation from a sequence of
images and kinematic data which significantly simplifies the calibration
procedure. The network utilises the long short-term memory architecture to
extract temporal information from the data and solve the hand-eye problem. The
objective function is derived from a linear combination of the
remote-centre-of-motion constraint, the re-projection error, and its derivative to induce a small
change in the hand-eye transformation. The method is validated with the data
from da Vinci Si and the result shows that the estimated hand-eye matrix is
able to re-project the end-effector from the robot coordinate to the camera
coordinate to within 10 to 20 pixels in both testing datasets. The
calibration performance is also superior to the previous neural network-based
hand-eye method. The proposed algorithm shows that the calibration procedure
can be simplified by using deep learning techniques and the performance is
improved by the assumption of non-static hand-eye transformations.
Comment: 6 pages, 4 main figures
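The remote-centre-of-motion constraint in the objective above can be written as a point-to-line distance: the instrument shaft must pass through the (fixed) RCM point at the incision. This is a hedged geometric sketch with hypothetical names, not the paper's exact formulation.

```python
import numpy as np

def rcm_residual(p_shaft, d_shaft, p_rcm):
    # Distance from the remote centre of motion to the instrument-shaft line
    # defined by point p_shaft and direction d_shaft; the residual is zero
    # when the shaft passes exactly through the RCM.
    d = d_shaft / np.linalg.norm(d_shaft)
    v = p_rcm - p_shaft
    return np.linalg.norm(v - (v @ d) * d)
```

Because the RCM stays fixed across the whole image sequence, this term regularises the estimated hand-eye transformation even when the tool motion itself is limited.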
A Graph-based Optimization Framework for Hand-Eye Calibration for Multi-Camera Setups
Hand-eye calibration is the problem of estimating the spatial transformation
between a reference frame, usually the base of a robot arm or its gripper, and
the reference frame of one or multiple cameras. Generally, this calibration is
solved as a non-linear optimization problem; what is rarely done, however, is
to exploit the underlying graph structure of the problem itself. Indeed, the
problem of hand-eye calibration can be seen as an instance of the Simultaneous
Localization and Mapping (SLAM) problem. Inspired by this fact, in this work we
present a pose-graph approach to the hand-eye calibration problem that extends
a recent state-of-the-art solution in two different ways: i) by formulating the
solution to eye-on-base setups with one camera; ii) by covering multi-camera
robotic setups. The proposed approach has been validated in simulation against
standard hand-eye calibration methods. Moreover, a real application is shown.
In both scenarios, the proposed approach outperforms all alternative methods. We
release with this paper an open-source implementation of our graph-based
optimization framework for multi-camera setups.Comment: This paper has been accepted for publication at the 2023 IEEE
International Conference on Robotics and Automation (ICRA
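The pose-graph view can be sketched as a set of edge residuals: each edge connects one camera node and one robot-pose node through a measured target observation, and calibration minimises the total cost over all edges. This is a generic illustration, not the paper's formulation; the frame conventions (X_c = base-to-camera, A_i = base-to-gripper, G = gripper-to-target) and all names are assumptions.

```python
import numpy as np

def edge_residual(X_c, A_i, G, M_ci):
    # One graph edge: camera c observed the gripper-mounted target at robot
    # pose i. The predicted observation inv(X_c) @ A_i @ G should match the
    # measured target pose M_ci in the camera frame.
    return np.linalg.norm(np.linalg.inv(X_c) @ A_i @ G - M_ci)

def graph_cost(cams, robot_poses, G, edges):
    # Sum of squared edge residuals over the whole multi-camera pose graph;
    # a non-linear solver would minimise this over cams and G jointly.
    return sum(edge_residual(cams[c], robot_poses[i], G, M) ** 2
               for c, i, M in edges)
```

Because every camera shares the same robot-pose nodes, observations from one camera constrain the others, which is exactly what the multi-camera extension exploits.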
Automatic Robot Hand-Eye Calibration Enabled by Learning-Based 3D Vision
Hand-eye calibration, as a fundamental task in vision-based robotic systems,
aims to estimate the transformation matrix between the coordinate frame of the
camera and the robot flange. Most approaches to hand-eye calibration rely on
external markers or human assistance. We propose Look at Robot Base Once
(LRBO), a novel methodology that addresses the hand-eye calibration problem
without external calibration objects or human support, using only the robot base.
Using point clouds of the robot base, a transformation matrix from the
coordinate frame of the camera to the robot base is established as I=AXB. To
this end, we exploit learning-based 3D detection and registration algorithms to
estimate the location and orientation of the robot base. The robustness and
accuracy of the method are quantified by ground-truth-based evaluation, and the
accuracy result is compared with other 3D vision-based calibration methods. To
assess the feasibility of our methodology, we carried out experiments utilizing
a low-cost structured light scanner across varying joint configurations and
groups of experiments. The proposed hand-eye calibration method achieved a
translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees
according to the experimental results. Additionally, the 3D reconstruction
experiments demonstrated a rotation error of 0.994 degrees and a position error
of 1.697 mm. Moreover, our method can potentially be completed in 1 second,
making it the fastest of the compared 3D hand-eye calibration methods.
Code is released at github.com/leihui6/LRBO.
Comment: 17 pages, 19 figures, 6 tables, submitted to MSS
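Under the I=AXB formulation above, the hand-eye transform follows in closed form once the loop is identified. The sketch below is our reading of that identity, with A taken as the robot-base pose registered in the camera frame and B from the arm's forward kinematics; treat the frame assignments and names as illustrative assumptions rather than the paper's exact convention.

```python
import numpy as np

def hand_eye_from_base(A, B):
    # With I = A @ X @ B, the hand-eye transform follows in closed form:
    # X = inv(A) @ inv(B). A: robot base registered in the camera frame
    # (from point-cloud detection/registration); B: base-to-flange pose
    # from forward kinematics.
    return np.linalg.inv(A) @ np.linalg.inv(B)
```

In practice the accuracy of X is limited by the registration accuracy of A, which is why the learning-based 3D detection and registration step is central to the method.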
A regularization-patching dual quaternion optimization method for solving the hand-eye calibration problem
The hand-eye calibration problem is an important application problem in robot
research. Based on the 2-norm of dual quaternion vectors, we propose a new dual
quaternion optimization method for the hand-eye calibration problem. The dual
quaternion optimization problem is decomposed to two quaternion optimization
subproblems. The first quaternion optimization subproblem governs the rotation
of the robot hand. It can be solved efficiently by the eigenvalue decomposition
or singular value decomposition. If the optimal value of the first quaternion
optimization subproblem is zero, then the system is rotationwise noiseless,
i.e., there exists a "perfect" robot hand motion which meets all the testing
poses rotationwise exactly. In this case, we apply the regularization technique
for solving the second subproblem to minimize the distance of the translation.
Otherwise we apply the patching technique to solve the second quaternion
optimization subproblem. Then solving the second quaternion optimization
subproblem turns out to be solving a quadratically constrained quadratic
program. In this way, we give a complete description for the solution set of
hand-eye calibration problems. This is new in the hand-eye calibration
literature. Numerical results are also presented to show the efficiency of
the proposed method.
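The rotation subproblem mentioned above admits a standard spectral solution: writing the motions as unit quaternions, q_A (x) q_X = q_X (x) q_B gives a linear system whose null vector (smallest singular vector) is the rotation quaternion. The sketch below shows this classical eigenvalue/SVD step only, not the full regularization-patching dual-quaternion method; names are hypothetical.

```python
import numpy as np

def qmat_left(q):
    # Matrix L(q) such that q (x) p = L(q) @ p, with q = (w, x, y, z).
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def qmat_right(q):
    # Matrix R(q) such that p (x) q = R(q) @ p.
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_rotation_quat(qas, qbs):
    # q_A (x) q_X = q_X (x) q_B  =>  (L(q_A) - R(q_B)) q_X = 0. Stack the
    # 4x4 blocks and take the right singular vector of the smallest
    # singular value; it is the rotation quaternion up to sign.
    M = np.vstack([qmat_left(qa) - qmat_right(qb)
                   for qa, qb in zip(qas, qbs)])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]
```

When the smallest singular value is (numerically) zero, the system is rotationwise noiseless in the sense described above; otherwise the residual quantifies the rotational noise that the patching technique then accounts for in the translation subproblem.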