1,934 research outputs found

    Robust hand-eye calibration of 2D laser sensors using a single-plane calibration artefact

    Get PDF
    When a vision sensor is used in conjunction with a robot, hand-eye calibration is necessary to determine the accurate position of the sensor relative to the robot, so that data from the vision sensor can be expressed in the robot's global coordinate system. For 2D laser line sensors, hand-eye calibration is a challenging process because they only collect data in two dimensions. This leads to the use of complex calibration artefacts and requires multiple measurements to be collected over a range of robot positions. This paper presents a simple and robust hand-eye calibration strategy that requires minimal user interaction and makes use of a single planar calibration artefact. A significant benefit of the strategy is that it uses a low-cost, simple and easily manufactured artefact; however, the lower complexity can lead to lower variation in the calibration data. To achieve a robust hand-eye calibration using this artefact, the impact of robot positioning strategies is considered so as to maintain variation. A theoretical basis for the necessary sources of input variation is defined by a mathematical analysis of the system of equations for the calibration process. From this, a novel strategy is specified to maximize data variation by using a circular array of target scan lines to define a full set of required robot positions. A simulation approach is used to further investigate and optimise the impact of robot position on the calibration process, and the resulting optimal robot positions are then experimentally validated for a real robot-mounted laser line sensor. Using the proposed optimum method, a semi-automatic calibration process, which requires only four manually scanned lines, is defined and experimentally demonstrated.
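
The estimation problem underlying this and several of the abstracts below is the classical AX = XB formulation: robot motions A_i and sensor motions B_i constrain the unknown hand-eye transform X. As a hedged illustration (not this paper's method, which concerns planar-artefact scanning and robot-position selection), a minimal two-step solver in the style of Park and Martin can be sketched; `solve_ax_xb` and the synthetic motions are illustrative names, not from the paper:

```python
import numpy as np

def rotvec(R):
    """Axis-angle (rotation vector) of a 3x3 rotation matrix, angle < pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

def solve_ax_xb(As, Bs):
    """Two-step AX = XB solver: rotation first, then translation.

    As, Bs: lists of 4x4 homogeneous robot and sensor motions.
    """
    # Rotation: find R_X with R_X @ beta_i ~= alpha_i (orthogonal Procrustes),
    # where alpha/beta are the rotation vectors of the A/B motions.
    alphas = np.array([rotvec(A[:3, :3]) for A in As])
    betas = np.array([rotvec(B[:3, :3]) for B in Bs])
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    R_X = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai over all motions.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

With noise-free synthetic motions the recovery is exact; the paper's contribution is essentially about choosing robot positions so that the stacked systems above stay well conditioned.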

    Hand-eye calibration for robotic assisted minimally invasive surgery without a calibration object

    Get PDF
    In a robot-mounted camera arrangement, hand-eye calibration estimates the rigid relationship between the robot and camera coordinate frames. Most hand-eye calibration techniques use a calibration object to estimate the relative transformation of the camera in several views of the calibration object and link these to the forward kinematics of the robot to compute the hand-eye transformation. Such approaches achieve good accuracy for general use, but for applications such as robotic assisted minimally invasive surgery, acquiring a calibration sequence multiple times during a procedure is not practical. In this paper, we present a new approach to tackle the problem by using the robotic surgical instruments as the calibration object, with well-known geometry from the CAD models used for manufacturing. Our approach removes the requirement of a custom sterile calibration object in the operating room, and it simplifies the process of acquiring calibration data when the laparoscope is constrained to move around a remote centre of motion. This is the first demonstration of the feasibility of performing hand-eye calibration using components of the robotic system itself, and we show promising validation results on synthetic data as well as data acquired with the da Vinci Research Kit.

    Development of a multi-modal tactile force sensing system for deep-sea applications

    Get PDF
    With the increasing demand for autonomy in robotic systems, there is a rising need for sensory data sensed via different modalities. In this way, system states and aspects of unstructured environments can be assessed in the most detailed fashion possible, providing a basis for making decisions regarding the robot's task. Compared to other sensing modalities, the sense of touch is underrepresented in today's robots. That is where this thesis comes in. A tactile sensing system is developed that combines several modalities of contact sensing. The use of the tactile sense in robotic grippers is of great relevance, especially for robotic systems in the deep sea. Up to now, manipulation systems in master-slave control mode have been used in this area of application. An operator performing the manipulation task has to rely on visual feedback coming from cameras. Working on the ocean's seafloor means having to cope with conditions of limited visibility caused by swirled-up sediment.

    Hand-eye calibration method with a three-dimensional-vision sensor considering the rotation parameters of the robot pose

    Get PDF
    Hand-eye calibration is a fundamental step for a robot equipped with vision systems. However, this problem usually interacts with robot calibration because the robot's geometric parameters are not very precise. In this article, a new calibration method considering the rotation parameters of the robot pose is proposed. First, a constrained least-squares model is established, assuming that every measured spherical center of a standard ball coincides in the robot base frame; this provides an initial solution. To further improve the solution accuracy, a nonlinear calibration model in the sensor frame is established. Since it removes one error-accumulation process, a more accurate reference point can be used for optimization. Then, the rotation parameters of the robot pose whose slight errors cause large disturbance to the solution are selected by analyzing the coefficient matrices of the error terms. Finally, the hand-eye transformation parameters are refined together with the rotation parameters in the nonlinear optimization. Comparative simulations are performed between the modified least-squares method, the constrained least-squares method, and the proposed method. Experiments are conducted on a 5-axis hybrid robot named TriMule to demonstrate the superior accuracy of the proposed method.
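
The constrained least-squares initialization described above — every measured sphere centre must map to the same point in the robot base frame — can be sketched as one linear system in vec(R_x), t_x and the common centre p, followed by a projection of the 3x3 block back onto SO(3). This is an assumption-laden reconstruction of that idea, not the authors' exact model; function and variable names are invented:

```python
import numpy as np

def handeye_from_sphere(Ts, centers):
    """Linear hand-eye estimate from measurements of one fixed sphere.

    Ts:      list of 4x4 base->flange robot poses (forward kinematics).
    centers: measured sphere centre in the sensor frame at each pose.
    Solves R_i (R_x c_i + t_x) + t_i = p for all i, with unknowns
    vec(R_x), t_x and the common base-frame centre p.
    """
    rows, rhs = [], []
    for T, c in zip(Ts, centers):
        R_i, t_i = T[:3, :3], T[:3, 3]
        # R_i R_x c = R_i @ kron(c^T, I3) @ vec(R_x)  (column-major vec)
        block = np.hstack([R_i @ np.kron(np.reshape(c, (1, 3)), np.eye(3)),
                           R_i, -np.eye(3)])
        rows.append(block)
        rhs.append(-t_i)
    x = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)[0]
    R_raw = x[:9].reshape(3, 3, order="F")  # undo column-major vec
    U, _, Vt = np.linalg.svd(R_raw)         # project to nearest rotation
    R_x = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R_x, x[9:12], x[12:15]           # R_x, t_x, sphere centre p
```

With exact data the projection step is a no-op; with noisy data it enforces the rotation constraint that the paper's constrained formulation builds in directly.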

    A regularization-patching dual quaternion optimization method for solving the hand-eye calibration problem

    Full text link
    The hand-eye calibration problem is an important application problem in robot research. Based on the 2-norm of dual quaternion vectors, we propose a new dual quaternion optimization method for the hand-eye calibration problem. The dual quaternion optimization problem is decomposed into two quaternion optimization subproblems. The first quaternion optimization subproblem governs the rotation of the robot hand. It can be solved efficiently by eigenvalue decomposition or singular value decomposition. If the optimal value of the first quaternion optimization subproblem is zero, then the system is rotationwise noiseless, i.e., there exists a "perfect" robot hand motion which meets all the testing poses rotationwise exactly. In this case, we apply the regularization technique for solving the second subproblem to minimize the distance of the translation. Otherwise we apply the patching technique to solve the second quaternion optimization subproblem. Then solving the second quaternion optimization subproblem turns out to be solving a quadratically constrained quadratic program. In this way, we give a complete description of the solution set of hand-eye calibration problems. This is new in the hand-eye calibration literature. Numerical results are also presented to show the efficiency of the proposed method.
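
The rotation subproblem solvable "by eigenvalue decomposition or singular value decomposition" has a well-known structure: the residual qA * q - q * qB is linear in q, so stacking the left- and right-multiplication matrices of each motion pair and taking the smallest singular vector yields the optimal unit quaternion, and a zero smallest singular value is exactly the "rotationwise noiseless" case. A minimal sketch of that subproblem only (quaternions as (w, x, y, z); names are illustrative, and the dual/translation part is omitted):

```python
import numpy as np

def quat_mult_matrices(q):
    """L(q) and R(q) with q * p = L(q) @ p and p * q = R(q) @ p."""
    w, x, y, z = q
    L = np.array([[w, -x, -y, -z],
                  [x,  w, -z,  y],
                  [y,  z,  w, -x],
                  [z, -y,  x,  w]])
    R = np.array([[w, -x, -y, -z],
                  [x,  w,  z, -y],
                  [y, -z,  w,  x],
                  [z,  y, -x,  w]])
    return L, R

def rotation_quaternion(qAs, qBs):
    """Unit q minimizing sum ||qA * q - q * qB||^2 over all motion pairs.

    Returns (q, residual); residual == 0 iff rotationwise noiseless.
    """
    M = np.vstack([quat_mult_matrices(qa)[0] - quat_mult_matrices(qb)[1]
                   for qa, qb in zip(qAs, qBs)])
    _, s, Vt = np.linalg.svd(M)
    return Vt[-1], s[-1]  # smallest right singular vector and value
```

The sign ambiguity of quaternions means the minimizer is recovered up to a global sign, which is irrelevant for the induced rotation.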

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Get PDF
    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras, and robotic arm. Given the purpose of the system, the calibration accuracy must be within millimeter level. The follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies prove the performance of our proposed algorithm. To compensate for the lack of synchronization between the local and remote stations due to latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear equation group with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
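
The 1-step-ahead predictor with a smoothing coefficient between 0 and 1, mentioned in the abstract above, can be sketched as a simple linear blend of delayed state feedback and the operator command. Since the thesis's exact cost function is not reproduced here, the closed-form fit below is only an assumed least-squares reading, with invented names:

```python
import numpy as np

def one_step_predict(states, commands, alpha):
    """x_hat[t+1] = (1 - alpha) * x[t] + alpha * u[t], alpha in [0, 1]."""
    states, commands = np.asarray(states), np.asarray(commands)
    return (1.0 - alpha) * states + alpha * commands

def fit_alpha(states, commands, next_states):
    """Choose alpha minimizing the summed squared one-step prediction
    error -- a 1-D quadratic with a closed-form minimizer, clipped to [0, 1]."""
    s, u, y = map(np.asarray, (states, commands, next_states))
    d = u - s                        # direction the command pulls the state
    den = float(np.dot(d.ravel(), d.ravel()))
    if den == 0.0:
        return 0.0
    num = float(np.dot(d.ravel(), (y - s).ravel()))
    return min(1.0, max(0.0, num / den))
```

A larger alpha trusts the command more aggressively, trading responsiveness against amplification of operator jitter under latency.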

    Automatic Robot Hand-Eye Calibration Enabled by Learning-Based 3D Vision

    Full text link
    Hand-eye calibration, as a fundamental task in vision-based robotic systems, aims to estimate the transformation matrix between the coordinate frame of the camera and the robot flange. Most approaches to hand-eye calibration rely on external markers or human assistance. We propose Look at Robot Base Once (LRBO), a novel methodology that addresses the hand-eye calibration problem without external calibration objects or human support, using only the robot base. Using point clouds of the robot base, a transformation matrix from the coordinate frame of the camera to the robot base is established as I=AXB. To this end, we exploit learning-based 3D detection and registration algorithms to estimate the location and orientation of the robot base. The robustness and accuracy of the method are quantified by ground-truth-based evaluation, and the accuracy is compared with other 3D-vision-based calibration methods. To assess the feasibility of our methodology, we carried out experiments with a low-cost structured light scanner across varying joint configurations and multiple groups of experiments. The proposed hand-eye calibration method achieved a translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees. Additionally, the 3D reconstruction experiments demonstrated a rotation error of 0.994 degrees and a position error of 1.697 mm. Moreover, our method offers the potential to be completed in 1 second, the fastest among the compared 3D hand-eye calibration methods. Code is released at github.com/leihui6/LRBO.
    Comment: 17 pages, 19 figures, 6 tables, submitted to MSS
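
The closure I = AXB quoted above implies that once the robot base is registered in the camera frame, the hand-eye transform can be read off by composing the two measured transforms: X = A^-1 B^-1. A trivial sketch, with frame names assumed for illustration rather than taken from the paper:

```python
import numpy as np

def handeye_from_base(T_base_flange, T_cam_base):
    """Recover the hand-eye transform from one registration of the base.

    With the closure A @ X @ B = I, where A = T_base_flange (forward
    kinematics) and B = T_cam_base (point-cloud registration of the robot
    base), the unknown hand-eye transform is X = inv(A) @ inv(B).
    """
    return np.linalg.inv(T_base_flange) @ np.linalg.inv(T_cam_base)
```

The heavy lifting in the paper is the learning-based detection and registration that produces B accurately; the algebra itself is a single composition, which is why the method can run in about a second.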

    Sensor-based real-time control of robots

    Get PDF