3,251 research outputs found
Cameras and Inertial/Magnetic Sensor Units Alignment Calibration
Because of external acceleration interference and magnetic disturbance, inertial/magnetic measurements are usually fused with visual data for drift-free orientation estimation, which plays an important role in a wide variety of applications, ranging from virtual reality, robotics, and computer vision to biomotion analysis and navigation. However, in order to perform data fusion, alignment calibration must be performed in advance to determine the difference between the sensor coordinate system and the camera coordinate system. Since the orientation estimation performance of the inertial/magnetic sensor unit is immune to the choice of the inertial/magnetic sensor frame origin, we ignore the translational difference by assuming the sensor and camera coordinate systems share the same origin, and focus only on the rotational alignment difference in this paper. By exploiting the intrinsic restrictions among the coordinate transformations, the rotational alignment calibration problem is formulated as a simplified hand-eye equation AX = XB (A, X, and B are all rotation matrices). A two-step iterative algorithm is then proposed to solve this simplified hand-eye calibration task. Detailed laboratory validation has been performed, and the experimental results illustrate the effectiveness of the proposed alignment calibration method.
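The rotation-only hand-eye equation AX = XB above also admits a simple closed-form least-squares baseline (distinct from the paper's two-step iterative algorithm): each motion pair contributes rotation axes α_i = log(A_i) and β_i = log(B_i) satisfying α_i = Xβ_i, so X follows from an orthogonal Procrustes (SVD) alignment of the axes. A minimal NumPy sketch under that assumption; function names are illustrative:

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (rotation log map) of a 3x3 rotation matrix."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2.0 * np.sin(angle)) * w

def solve_ax_xb_rotation(As, Bs):
    """Least-squares rotation X with A_i X = X B_i for all pairs,
    via orthogonal Procrustes alignment of the rotation axes."""
    M = sum(np.outer(rot_log(A), rot_log(B)) for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(X) = +1
    return U @ D @ Vt
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution, which mirrors the well-known observability condition for AX = XB.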
Uncertainty-Aware Hand-Eye Calibration
We provide a generic framework for the hand-eye calibration of vision-guided industrial robots. In contrast to traditional methods, we explicitly model the uncertainty of the robot in a stochastically founded way. Although the repeatability of modern industrial robots is high, their absolute accuracy is typically much lower. This uncertainty, especially if not considered, deteriorates the result of the hand-eye calibration. Our proposed framework not only results in a high accuracy of the computed hand-eye pose but also provides reliable information about the uncertainty of the robot. It further provides corrected robot poses for a convenient and inexpensive robot calibration. Our framework is computationally efficient and generic in several regards. It supports the use of a calibration target as well as self-calibration without the need for known 3-D points. It optionally enables the simultaneous calibration of the interior camera parameters. The framework is also generic with regard to the robot type and hence supports anthropomorphic as well as selective compliance assembly robot arm (SCARA) robots, for example. Simulated and real experiments show the validity of the proposed methods. An extensive evaluation of our framework on a public dataset shows a considerably higher accuracy than 15 state-of-the-art methods.
A computationally efficient method for hand-eye calibration
Purpose: Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly. In order to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. Methods: We present a computationally efficient iterative method for hand-eye calibration. In this method, a dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the rotation and translation of the transformation. Results: The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results show a significant improvement in convergence speed, from more than 30 iterations to 3 compared with the standard optimization method, which illustrates the effectiveness and efficiency of the proposed method.
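The dual-quaternion representation used above packs a rigid transformation into a real part q_r (the rotation quaternion) and a dual part q_d = ½(0, t) ⊗ q_r encoding the translation t. A minimal sketch of that encoding only (not the paper's iterative solver; names are illustrative):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_dual_quat(q_rot, t):
    """Rigid transform (unit quaternion, translation) -> dual quaternion:
    real part q_r = q_rot, dual part q_d = 0.5 * (0, t) * q_r."""
    q_d = 0.5 * qmul(np.array([0.0, *t]), q_rot)
    return q_rot, q_d

def translation_of(q_r, q_d):
    """Recover the translation via (0, t) = 2 * q_d * conj(q_r)."""
    conj = q_r * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(q_d, conj)[1:]
```

The appeal of this parametrization, as the abstract notes, is that rotation and translation can be estimated jointly rather than sequentially.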
Extrinsic Infrastructure Calibration Using the Hand-Eye Robot-World Formulation
We propose a certifiably globally optimal approach for solving the hand-eye
robot-world problem supporting multiple sensors and targets at once. Further,
we leverage this formulation for estimating a geo-referenced calibration of
infrastructure sensors. Since vehicle motion recorded by infrastructure sensors
is mostly planar, obtaining a unique solution for the respective hand-eye
robot-world problem is infeasible without incorporating additional knowledge.
Hence, we extend our proposed method to include a priori knowledge, i.e., the
translation norm of calibration targets, to yield a unique solution. Our
approach achieves state-of-the-art results on simulated and real-world data.
Especially on real-world intersection data, our approach utilizing the
translation norm is the only method providing accurate results. Comment: Accepted at the 2023 IEEE Intelligent Vehicles Symposium
A regularization-patching dual quaternion optimization method for solving the hand-eye calibration problem
The hand-eye calibration problem is an important application problem in robot
research. Based on the 2-norm of dual quaternion vectors, we propose a new dual
quaternion optimization method for the hand-eye calibration problem. The dual
quaternion optimization problem is decomposed to two quaternion optimization
subproblems. The first quaternion optimization subproblem governs the rotation
of the robot hand. It can be solved efficiently by the eigenvalue decomposition
or singular value decomposition. If the optimal value of the first quaternion
optimization subproblem is zero, then the system is rotationwise noiseless,
i.e., there exists a "perfect" robot hand motion which meets all the testing
poses rotationwise exactly. In this case, we apply the regularization technique
for solving the second subproblem to minimize the distance of the translation.
Otherwise we apply the patching technique to solve the second quaternion
optimization subproblem. Then solving the second quaternion optimization
subproblem turns out to be solving a quadratically constrained quadratic
program. In this way, we give a complete description for the solution set of
hand-eye calibration problems. This is new in the hand-eye calibration
literature. The numerical results are also presented to show the efficiency of
the proposed method.
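The first quaternion subproblem described above has a well-known closed form: with the constraint a_i ⊗ q = q ⊗ b_i written through left/right quaternion multiplication matrices, the optimal q is the eigenvector of a 4×4 symmetric matrix for its smallest eigenvalue, and a zero optimal value signals the rotationwise-noiseless case. A minimal sketch of that step (names are illustrative, not the paper's code):

```python
import numpy as np

def left_mat(q):
    """Matrix L(q) such that L(q) @ p equals q ⊗ p (Hamilton, [w,x,y,z])."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    """Matrix R(q) such that R(q) @ p equals p ⊗ q."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def rotation_subproblem(qa_list, qb_list):
    """Minimize sum_i ||a_i ⊗ q - q ⊗ b_i||^2 over unit quaternions q.
    The minimizer is the eigenvector of M for its smallest eigenvalue;
    an optimal value of zero means the system is rotationwise noiseless."""
    M = np.zeros((4, 4))
    for qa, qb in zip(qa_list, qb_list):
        C = left_mat(qa) - right_mat(qb)
        M += C.T @ C
    vals, vecs = np.linalg.eigh(M)   # ascending eigenvalues
    return vecs[:, 0], vals[0]
```

The returned optimal value then selects the branch for the second subproblem: regularization when it is zero, patching otherwise.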
Calibration by correlation using metric embedding from non-metric similarities
This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-view point camera just
by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time
correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes a random uniform motion, then
the pairwise correlation of any pair of pixels is a function of the distance between the pixel directions on the visual sphere. This leads to
formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on
the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional
scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?)
and a solid generic solution (how to do so?). We show that the observability depends both on the local geometric properties (curvature)
as well as on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case,
on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from non-metric
measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric
information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional),
and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm
performs as theoretically predicted for all corner cases of the observability analysis.
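The metric-embedding formulation above generalizes classical multidimensional scaling, which recovers a Euclidean point configuration from pairwise distances by double centering the squared-distance matrix. For orientation, a minimal sketch of that classical MDS step (the paper's spherical, non-metric setting goes well beyond this):

```python
import numpy as np

def classical_mds(D, dim):
    """Classical (Torgerson) MDS: embed points in `dim` dimensions from
    a matrix D of pairwise Euclidean distances, via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Classical MDS assumes the similarities are actual Euclidean distances; the paper's contribution is precisely to handle similarities that are an unknown function of distances on a curved manifold.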
- …