    Calibration-Free Robot-Sensor Calibration approach based on Second-Order Cone Programming

    To overcome the restriction of traditional robot-sensor calibration methods, which rely on a calibration target to solve for the tool-camera and robot-world transformations, a calibration-free approach is proposed that solves the robot-sensor calibration problem of the form AX = YB using Second-Order Cone Programming. First, a Structure-from-Motion approach is used to recover the camera motion matrices up to scale. Then, the rotation and translation matrices in the calibration equation are parameterized using dual quaternion theory. Finally, Second-Order Cone Programming is used to simultaneously solve for the optimal scale factor of the camera motion matrices together with the robot-world and hand-eye transformations. Experimental results indicate a relative calibration error of 3.998% in rotation and 0.117% in translation in the absence of a calibration target as a 3D benchmark. Compared with similar methods, the proposed approach effectively improves the calibration accuracy of the robot-world and hand-eye transformations and extends the range of application of robot-sensor calibration methods.
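    For context, splitting each homogeneous transform in AX = YB into a rotation R and a translation t gives the standard decomposition below; the unknown Structure-from-Motion scale factor λ mentioned in the abstract attaches to the camera translation. This is the conventional form of the problem, not a reproduction of the paper's derivation:

        R_A R_X = R_Y R_B
        R_A t_X + λ t_A = R_Y t_B + t_Y

    Here (R_A, t_A) is the camera motion recovered by Structure-from-Motion (its translation known only up to λ), (R_B, t_B) is the robot motion, X is the hand-eye transform, and Y is the robot-world transform. The dual quaternion parameterization and the cone constraints of the SOCP step are specific to the paper and are not reproduced here.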

    Simultaneous Robot–World and Hand–Eye Calibration without a Calibration Object

    An extended robot–world and hand–eye calibration method is proposed in this paper to estimate the transformation relationship between the camera and the robot device. The approach suits mobile or medical robotics applications, where precise calibration objects are expensive or unsterile, or where enough movement space is not available at the work site. First, a mathematical model is established that formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Next, a sparse bundle adjustment is introduced to jointly optimize the robot–world and hand–eye calibration and the reconstruction results. Finally, a validation experiment on two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach. The translation relative error of the rigid transformation is less than 8/10,000 for a Denso robot moving within a 1.3 m × 1.3 m × 1.2 m workspace, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
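    For intuition, the linear closed-form core of such a Kronecker-product formulation of A_i X = Y B_i can be sketched in a few lines of numpy: the rotation equation R_A R_X = R_Y R_B becomes, via the identity vec(AXB) = (Bᵀ ⊗ A) vec(X), a homogeneous system whose nullspace holds the stacked rotation entries, after which the translations follow from least squares. This is a minimal sketch under our own naming, not the authors' implementation, and it omits the sparse-bundle-adjustment refinement described in the abstract:

        import numpy as np

        def project_so3(M):
            # Nearest rotation matrix in the Frobenius sense (SVD projection).
            U, _, Vt = np.linalg.svd(M)
            R = U @ Vt
            if np.linalg.det(R) < 0:
                R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
            return R

        def solve_ax_yb(As, Bs):
            # Solve A_i X = Y B_i for 4x4 homogeneous transforms X, Y.
            # Rotations: (I kron R_A) vec(R_X) - (R_B^T kron I) vec(R_Y) = 0
            # with column-major vec; the stacked 18-vector lies in the nullspace of K.
            n = len(As)
            K = np.zeros((9 * n, 18))
            for i, (A, B) in enumerate(zip(As, Bs)):
                K[9*i:9*i+9, :9] = np.kron(np.eye(3), A[:3, :3])
                K[9*i:9*i+9, 9:] = -np.kron(B[:3, :3].T, np.eye(3))
            v = np.linalg.svd(K)[2][-1]      # right singular vector of the smallest singular value
            RX = v[:9].reshape(3, 3, order="F")
            RY = v[9:].reshape(3, 3, order="F")
            s = np.cbrt(np.linalg.det(RX))   # fix the arbitrary scale/sign of the nullspace vector
            RX, RY = project_so3(RX / s), project_so3(RY / s)
            # Translations: R_A t_X - t_Y = R_Y t_B - t_A, linear in (t_X, t_Y).
            C = np.zeros((3 * n, 6))
            d = np.zeros(3 * n)
            for i, (A, B) in enumerate(zip(As, Bs)):
                C[3*i:3*i+3, :3] = A[:3, :3]
                C[3*i:3*i+3, 3:] = -np.eye(3)
                d[3*i:3*i+3] = RY @ B[:3, 3] - A[:3, 3]
            t = np.linalg.lstsq(C, d, rcond=None)[0]
            X, Y = np.eye(4), np.eye(4)
            X[:3, :3], X[:3, 3] = RX, t[:3]
            Y[:3, :3], Y[:3, 3] = RY, t[3:]
            return X, Y

        # Self-check on synthetic noise-free data: B_i = Y^-1 A_i X.
        rng = np.random.default_rng(0)
        def rand_pose():
            T = np.eye(4)
            T[:3, :3] = project_so3(rng.normal(size=(3, 3)))
            T[:3, 3] = rng.normal(size=3)
            return T
        X_true, Y_true = rand_pose(), rand_pose()
        As = [rand_pose() for _ in range(10)]
        Bs = [np.linalg.inv(Y_true) @ A @ X_true for A in As]
        X, Y = solve_ax_yb(As, Bs)
        print(np.allclose(X, X_true, atol=1e-6), np.allclose(Y, Y_true, atol=1e-6))

    In the paper, a linear estimate of this kind is then refined jointly with the three-dimensional reconstruction by sparse bundle adjustment, which is what drives the reported sub-millimetre accuracy.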