5 research outputs found

    Hand-eye camera calibration with an optical tracking system

    This paper presents a method for hand-eye camera calibration via an optical tracking system (OTS), facilitating robotic applications. The camera pose cannot be directly tracked via the OTS; because of this, a transformation matrix between a marker-plate pose, tracked via the OTS, and the camera pose needs to be estimated. To this end, we evaluate two different approaches for hand-eye calibration. In the first approach, the camera is in a fixed position and a 2D calibration plate is displaced. In the second approach, the camera is also fixed, but now a 3D calibration object is moved. The first step of our method consists of collecting N views of the marker-plate pose and the calibration plates, acquired via the OTS. This is achieved by keeping the camera fixed and moving the calibration plate, while taking a picture of the calibration plate using the camera. A dataset is constructed that contains marker-plate poses and the relative camera poses. The transformation matrix is then computed via a least-squares minimization. Accuracy in hand-eye calibration is computed in terms of re-projection error, calculated based on camera homography transformations. For both approaches, we measure the changes in accuracy as a function of the number of poses used for each calibration, and we determine the minimum number of poses required to obtain a good camera calibration. Results of the experiments show similar performances for the two evaluated methods, achieving a median value of the re-projection error at N = 25 poses of 0.76 mm for the 2D calibration plate and 0.70 mm for the 3D calibration object. Also, we have found that at least 15 poses are required to achieve a good camera calibration.
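The abstract evaluates calibration accuracy through a re-projection error based on homography transformations. As a minimal sketch of that idea (not the paper's implementation), the snippet below fits a planar homography to 2D point correspondences with the standard DLT method and then measures the mean re-projection error; the function names and the synthetic demo homography `H_true` are hypothetical.

```python
import numpy as np

def fit_homography(src, dst):
    # DLT: find H (up to scale) with dst ~ H @ [x, y, 1]^T for each src point
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)       # null-space vector = flattened H

def reprojection_error(H, src, dst):
    # project src through H, dehomogenize, and average the 2D residual norms
    p = np.c_[src, np.ones(len(src))] @ H.T
    proj = p[:, :2] / p[:, 2:3]
    return np.sqrt(((proj - dst) ** 2).sum(axis=1)).mean()

# synthetic demo: correspondences generated from a known homography
H_true = np.array([[1.10, 0.05,  3.0],
                   [0.02, 0.95, -2.0],
                   [1e-4, 2e-4,  1.0]])
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 2], [3, 8]], dtype=float)
p = np.c_[src, np.ones(len(src))] @ H_true.T
dst = p[:, :2] / p[:, 2:3]
err = reprojection_error(fit_homography(src, dst), src, dst)
```

With noise-free correspondences the recovered homography reproduces the points almost exactly; with real calibration data the residual (in mm, after scaling to physical units) plays the role of the re-projection error reported in the abstract.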

    A System for Video-based Navigation for Endoscopic Endonasal Skull Base Surgery

    Abstract—Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS), where surgeons work within millimeters of neurovascular structures at the skull base. Today’s navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship of the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs them into three-dimensional (3D) points, and then registers the reconstructed point cloud to a surface segmented from pre-operative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the two-dimensional (2D)-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves sub-millimeter (0.70 mm mean) target registration error (TRE) results.
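The registration step described above, aligning a reconstructed point cloud to a segmented surface, is commonly solved with an iterative closest point (ICP) scheme. The sketch below is a generic, numpy-only ICP with a closed-form least-squares rigid fit (Kabsch), not the paper's actual pipeline; the function names and the synthetic demo data are assumptions for illustration.

```python
import numpy as np

def rigid_fit(P, Q):
    # least-squares R, t with Q ≈ P @ R.T + t (Kabsch / SVD solution)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=10):
    # iteratively match each source point to its nearest target point,
    # then re-solve the rigid transform; returns the aligned cloud
    cur = src.copy()
    for _ in range(iters):
        nn = dst[((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(axis=1)]
        R, t = rigid_fit(cur, nn)
        cur = cur @ R.T + t
    return cur

# synthetic demo: a point cloud displaced by a small known rigid motion
rng = np.random.default_rng(0)
dst = rng.uniform(0, 100, (50, 3))
theta = np.deg2rad(0.5)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
src = dst @ R.T + np.array([0.5, 0.5, 0.2])
tre = np.sqrt(((icp(src, dst) - dst) ** 2).sum(axis=1)).mean()  # TRE-style residual
```

In practice the target would be points sampled from the CT-segmented surface rather than a matched cloud, and the residual would be evaluated at anatomical targets, which is what the abstract's TRE figure measures.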

    Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”).
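To make the idea concrete, here is a toy 1D sketch of a Demons loop in which a linear intensity fit is re-estimated inside every iteration before computing the Demons force, echoing the "iterative intensity matching" concept; this is an illustrative assumption, not the paper's actual algorithm, and `demons_register_1d` and the synthetic signals are hypothetical.

```python
import numpy as np

def demons_register_1d(fixed, moving, iters=80):
    # Demons with per-iteration linear intensity matching (sketch of the idea)
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    grad = np.gradient(fixed)                       # fixed-image force variant
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)        # warp moving by current field
        a, b = np.polyfit(warped, fixed, 1)         # intensity correction, refit
        diff = fixed - (a * warped + b)             # residual after correction
        u += diff * grad / (grad ** 2 + diff ** 2 + 1e-12)   # Demons force
        u = np.convolve(u, np.ones(5) / 5, mode="same")      # regularize field
    return u

# synthetic demo: 3-voxel shift plus a large intensity scale/offset mismatch
x = np.arange(100, dtype=float)
fixed = np.exp(-((x - 50) ** 2) / 60.0)
moving = 0.5 * np.exp(-((x - 47) ** 2) / 60.0) + 0.1
u = demons_register_1d(fixed, moving)
warped = np.interp(x + u, x, moving)
a, b = np.polyfit(warped, fixed, 1)
ssd_after = ((fixed - (a * warped + b)) ** 2).sum()
a0, b0 = np.polyfit(moving, fixed, 1)
ssd_before = ((fixed - (a0 * moving + b0)) ** 2).sum()
```

Refitting the intensity map each iteration is what lets the geometric force stay meaningful despite the CT/CBCT value mismatch; a plain Demons update on the raw intensities would chase the intensity difference instead of the true displacement.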