
    Hand-eye calibration for rigid laparoscopes using an invariant point

    PURPOSE: Laparoscopic liver resection has significant advantages over open surgery, including less patient trauma and faster recovery times, yet it can be difficult because of the restricted field of view and lack of haptic feedback. Image guidance offers a potential solution, but one current challenge is accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers.
    METHODS: In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic (EM) tracking sensor attached to the camera end and an optical tracking marker attached to the distal end, enabling a comparison of tracking performance.
    RESULTS: We evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods: the RMS projected error due to calibration is 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 mm and 1.00 mm, respectively, for the existing methods. The new method has also been shown to be workable under sterile conditions in the operating room.
    CONCLUSION: We have proposed a new method of hand-eye calibration based on a single invariant point. Initial experience has shown that the method provides visual feedback, gives satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
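
    The sketch below is not the authors' invariant-point method; it is a minimal illustration of the Tsai baseline they compare against, using OpenCV's calibrateHandEye on synthetic, noise-free poses. The pose data, variable names and number of stations are placeholders.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)

        def random_pose():
            # Random rigid transform from a random rotation vector and translation (mm).
            R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, (3, 1)))
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = rng.uniform(-50.0, 50.0, 3)
            return T

        # Hypothetical ground-truth hand-eye transform: camera frame to tracking-marker frame.
        X = random_pose()

        # Synthesise marker-to-tracker and pattern-to-camera poses so that the
        # pattern-to-tracker transform stays constant, as in a real calibration sweep.
        T_pattern2tracker = random_pose()
        R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
        for _ in range(10):
            T_marker2tracker = random_pose()
            T_pattern2cam = np.linalg.inv(X) @ np.linalg.inv(T_marker2tracker) @ T_pattern2tracker
            R_g2b.append(T_marker2tracker[:3, :3])
            t_g2b.append(T_marker2tracker[:3, 3].reshape(3, 1))
            R_t2c.append(T_pattern2cam[:3, :3])
            t_t2c.append(T_pattern2cam[:3, 3].reshape(3, 1))

        # Recover the camera-to-marker transform with Tsai's method and compare to ground truth.
        R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                            method=cv2.CALIB_HAND_EYE_TSAI)
        print("rotation error:", np.linalg.norm(R_est - X[:3, :3]))
        print("translation error (mm):", np.linalg.norm(t_est.ravel() - X[:3, 3]))

    With noiseless synthetic poses the recovered transform should match the ground truth to numerical precision; with real tracking data the residual also reflects tracking and pattern-detection error.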

    On Pattern Selection for Laparoscope Calibration

    Camera calibration is a key requirement for augmented reality in surgery. Calibration of laparoscopes poses two challenges that are not sufficiently addressed in the literature. In the case of stereo laparoscopes, the small distance (less than 5 mm) between the channels means that the calibration pattern is an order of magnitude more distant than the stereo separation. For laparoscopes in general, if an external tracking system is used, hand-eye calibration is difficult due to the long length of the laparoscope. Laparoscope intrinsic, stereo and hand-eye calibration all rely on accurate feature point selection and accurate estimation of the camera pose with respect to a calibration pattern. We compare three calibration patterns: chessboard, rings, and AprilTags. We measure the error in estimating the camera intrinsic parameters and the camera poses. The accuracy of camera pose estimation determines the accuracy with which subsequent stereo or hand-eye calibration can be done. We compare the results of repeated real calibrations with simulations using idealised noise to determine the expected accuracy of the different methods and the sources of error. The results indicate that feature detection based on rings is more accurate than detection based on a chessboard; however, this does not necessarily lead to a better calibration. Using a grid with identifiable tags enables detection of features nearer the image boundary, which may improve calibration.
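
    As an illustration of the measurement the comparison rests on, the sketch below runs a plain OpenCV chessboard calibration and reports per-view RMS reprojection error, a common proxy for the accuracy of the estimated intrinsics and camera poses. The image folder, pattern size and square size are assumptions, and this is the standard chessboard pipeline rather than the rings or AprilTag detectors compared in the paper.

        import glob
        import cv2
        import numpy as np

        pattern_size = (14, 10)   # inner corners (columns, rows); assumed
        square_mm = 3.0           # square edge length in mm; assumed

        # Model points for one chessboard view, expressed in the pattern's own frame.
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_mm

        obj_points, img_points, image_size = [], [], None
        for path in glob.glob("calibration_images/*.png"):   # placeholder location
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern_size)
            if not found:
                continue
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

        # Intrinsic calibration; rvecs/tvecs are the per-view camera poses.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
        print("overall RMS reprojection error (px):", rms)

        # Per-view error: how well the estimated pose and intrinsics explain each view.
        for i, (rvec, tvec) in enumerate(zip(rvecs, tvecs)):
            proj, _ = cv2.projectPoints(obj_points[i], rvec, tvec, K, dist)
            err = np.sqrt(np.mean(np.sum((proj - img_points[i]) ** 2, axis=2)))
            print(f"view {i}: RMS reprojection error {err:.3f} px")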

    Calibration and External Force Sensing for Soft Robots using an RGB-D Camera

    Benefiting from the deformability of soft robots, calibration and force sensing for soft robots are possible using an external vision-based system instead of embedded mechatronic force sensors. In this paper, we first propose a calibration method that calibrates both the sensor-robot coordinate system and the actuator inputs; this task is addressed as a sequential optimization problem over both sets of variables. We also introduce an external force-sensing system based on a real-time Finite Element (FE) model, under the assumption of static configurations, which consists of two steps: force location detection and force intensity computation. The algorithm that estimates the force location relies on segmentation of the point cloud acquired by an RGB-D camera. The force intensities are then computed by solving an inverse quasi-static problem that matches the FE model to the point cloud of the soft robot. For validation, the proposed calibration and force-sensing strategies have been tested on a parallel soft robot driven by four cables.
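
    The paper's calibration couples the sensor-robot frame with the actuator inputs in a sequential optimisation over an FE model, which is beyond a short example. As a point of reference only, the sketch below shows the simpler rigid alignment of a camera frame to a robot frame from corresponding 3D points (Kabsch/SVD), under the assumption that correspondences are available; the point values and transform are synthetic.

        import numpy as np

        def rigid_transform(P_cam, P_robot):
            # R, t minimising ||R @ P_cam + t - P_robot|| over corresponding columns (3 x N).
            mu_c = P_cam.mean(axis=1, keepdims=True)
            mu_r = P_robot.mean(axis=1, keepdims=True)
            H = (P_cam - mu_c) @ (P_robot - mu_r).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
            R = Vt.T @ D @ U.T
            t = mu_r - R @ mu_c
            return R, t

        # Synthetic check: recover a known camera-to-robot transform.
        rng = np.random.default_rng(1)
        P_cam = rng.uniform(-0.1, 0.1, (3, 20))   # metres, arbitrary scale
        angle = 0.4
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0, 0.0, 1.0]])
        t_true = np.array([[0.05], [-0.02], [0.30]])
        P_robot = R_true @ P_cam + t_true

        R_est, t_est = rigid_transform(P_cam, P_robot)
        print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))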

    Calibration of spatial relationships between multiple robots and sensors

    Classic hand-eye calibration methods have been limited to single robots and sensors. Recently, a new calibration formulation for multiple robots has been proposed that solves for the extrinsic calibration parameters of each robot simultaneously rather than sequentially. Existing solutions to this problem required the data to have correspondence, but Ma, Goh and Chirikjian (MGC) proposed a probabilistic method that eliminates the need for correspondence. In this thesis, the literature on the various robot-sensor calibration problems and their solutions is surveyed, and the MGC method is reviewed in detail. Lastly, comparisons with other methods were carried out using numerical simulations to draw conclusions.
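
    For context, the sketch below writes out the classic single-robot AX = XB hand-eye relationship that the multi-robot and probabilistic (MGC) formulations surveyed here generalise. It builds synthetic motions around a known X and simply verifies the constraint; it does not implement any of the solvers discussed in the thesis.

        import numpy as np

        def pose(rotvec, t):
            # Homogeneous transform from a rotation vector (Rodrigues formula) and a translation.
            theta = np.linalg.norm(rotvec)
            k = rotvec / theta if theta > 0 else np.zeros(3)
            K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
            R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, t
            return T

        rng = np.random.default_rng(2)
        X = pose(rng.uniform(-1, 1, 3), rng.uniform(-0.2, 0.2, 3))  # unknown sensor-robot transform

        # Relative motions of the robot (B) and of the sensor (A) satisfy A X = X B
        # exactly when X is the correct extrinsic calibration.
        B = pose(rng.uniform(-1, 1, 3), rng.uniform(-0.2, 0.2, 3))  # relative robot motion
        A = X @ B @ np.linalg.inv(X)                                # induced sensor motion
        print(np.allclose(A @ X, X @ B))                            # True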