38 research outputs found

    Semi-Automatic Infrared Calibration for Augmented Reality Systems in Surgery

    Full text link
    Augmented reality (AR) has the potential to improve the immersion and efficiency of computer-assisted orthopaedic surgery (CAOS) by allowing surgeons to maintain focus on the operating site rather than on external displays in the operating theatre. Successful deployment of AR to CAOS requires a calibration that can accurately calculate the spatial relationship between real and holographic objects. Several studies attempt this calibration through manual alignment or with additional fiducial markers in the surgical scene. We propose a calibration system that offers a direct method for calibrating AR head-mounted displays (HMDs) with CAOS systems, using the infrared-reflective marker-arrays already widely used in CAOS. In our fast, user-agnostic setup, a HoloLens 2 detected the pose of marker-arrays using the infrared response and time-of-flight depth obtained through sensors onboard the HMD. Registration with a commercially available CAOS system was achieved when an IR marker-array was visible to both devices. Tests found mean relative-tracking errors of 2.03 mm and 1.12° when calculating the relative pose between two static marker-arrays at short range. When using the calibration result to provide in-situ holographic guidance for a simulated wire-insertion task, a pre-clinical test reported mean errors of 2.07 mm and 1.54° compared to a pre-planned trajectory.
    Comment: Published in conference proceedings for the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). For associated code visit: https://github.com/HL2-DIN
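
    At the heart of such a calibration is recovering the transform between the HMD and CAOS coordinate frames from a marker-array that both devices observe. A minimal sketch of that step, assuming each device reports the array's pose as a 4x4 rigid transform (an illustration only, not the paper's implementation):

    import numpy as np

    def invert_rigid(T):
        # Invert a 4x4 rigid transform using the transpose of its rotation block.
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T
        Ti[:3, 3] = -R.T @ t
        return Ti

    def hmd_to_caos(T_hmd_marker, T_caos_marker):
        # Transform mapping HMD coordinates into CAOS coordinates, assuming both
        # devices observe the same marker-array at the same instant.
        return T_caos_marker @ invert_rigid(T_hmd_marker)

    In practice, readings from several viewpoints would typically be averaged or filtered to reduce noise in the estimated calibration.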

    Caveats on the first-generation da Vinci Research Kit: latent technical constraints and essential calibrations

    Full text link
    Telesurgical robotic systems provide a well-established form of assistance in the operating theater, with evidence of growing uptake in recent years. Until now, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, California) has been the most widely adopted robot of this kind, with more than 6,700 systems in current clinical use worldwide [1]. To accelerate research on robotic-assisted surgery, retired first-generation da Vinci robots have been redeployed for research use as "da Vinci Research Kits" (dVRKs), which have been distributed to research institutions around the world to support both training and research in the sector. Over the past ten years, a great amount of research on the dVRK has been carried out across a vast range of topics. During this extensive and distributed process, common technical issues have been identified that are buried deep within the dVRK research and development architecture and that recur in dVRK user feedback, regardless of the breadth and disparity of the research directions pursued. This paper gathers and analyzes the most significant of these, with a focus on the technical constraints of the first-generation dVRK, which both existing and prospective users should be aware of before embarking on dVRK-related research. The hope is that this review will aid users in identifying and addressing common limitations of the systems promptly, thus helping to accelerate progress in the field.
    Comment: 15 pages, 7 figures

    AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers

    Full text link
    To date, endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature. Prolonged fluoroscopic exposure is harmful to the patient and the clinician, and may lead to severe post-operative sequelae such as the development of cancer. Meanwhile, the use of interventional ultrasound has gained popularity due to its well-known benefits of a small spatial footprint, fast data acquisition, and higher tissue contrast. However, ultrasound images are hard to interpret, and it is difficult to localise vessels, catheters, and guidewires within them. This work proposes a solution that adapts a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional ultrasound image sequences. The network architecture was inspired by the Attention in Attention mechanism and temporal tracking networks, and introduces a novel 3D segmentation head that performs 3D deconvolution across time. To facilitate training of such deep learning networks, we introduce a new data synthesis pipeline that uses physics-based catheter insertion simulations together with a convolutional ray-casting ultrasound simulator to produce synthetic ultrasound images of endovascular interventions. The proposed method is validated on a hold-out validation dataset, demonstrating robustness to ultrasound noise and a wide range of scanning angles. It was also tested on data collected from silicone-based aorta phantoms, demonstrating its potential for sim-to-real translation. This work represents a significant step towards safer and more efficient endovascular surgery using interventional ultrasound.
    Comment: This work has been submitted to the IEEE for possible publication
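
    The segmentation head described above deconvolves feature maps jointly across time and space. A minimal sketch of what such a head might look like, assuming PyTorch and illustrative channel sizes (not the published AiAReSeg configuration):

    import torch.nn as nn

    class Temporal3DSegHead(nn.Module):
        # Upsamples a (batch, channels, time, height, width) feature volume with
        # 3D transposed convolutions whose kernels also span the temporal axis,
        # and outputs per-frame catheter mask logits.
        def __init__(self, in_channels=256, num_classes=1):
            super().__init__()
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(in_channels, 128, kernel_size=(3, 4, 4),
                                   stride=(1, 2, 2), padding=(1, 1, 1)),
                nn.ReLU(inplace=True),
                nn.ConvTranspose3d(128, 64, kernel_size=(3, 4, 4),
                                   stride=(1, 2, 2), padding=(1, 1, 1)),
                nn.ReLU(inplace=True),
                nn.Conv3d(64, num_classes, kernel_size=1),
            )

        def forward(self, features):
            return self.decoder(features)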

    Occlusion-Robust Visual Markerless Bone Tracking for Computer-Assisted Orthopedic Surgery

    Get PDF

    Pose Measurement of Flexible Medical Instruments Using Fiber Bragg Gratings in Multi-Core Fiber

    Get PDF
    Accurate navigation of flexible medical instruments such as catheters requires knowledge of their pose, that is, their position and orientation. In this paper, multi-core fibers inscribed with fiber Bragg gratings (FBGs) are used as sensors to measure the pose of a multi-segment catheter. A reconstruction technique that provides the pose of such a fiber is presented. First, the measurements from the Bragg gratings are converted to strain, and the curvature is then deduced from those strain values. Next, the curvature and the Bishop frame equations are used to reconstruct the fiber. The technique is validated through experiments in which the mean errors in position and orientation are observed to be less than 4.69 mm and 6.48 degrees, respectively. The main contributions of the paper are the use of Bishop frames in the reconstruction and the experimental validation of the acquired pose.
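
    The pipeline above can be summarised in two steps: fit the two bending-curvature components from the outer-core strains at each grating, then integrate the Bishop (zero-twist) frame equations along arc length to recover the centerline. A hedged sketch, with core geometry, step size, and integration scheme as illustrative assumptions rather than the paper's exact implementation:

    import numpy as np

    def curvature_from_strains(strains, core_angles, core_radius):
        # Least-squares fit of the two curvature components from the axial strains
        # of the outer cores at one arc-length location:
        # eps_i = -r * (k1*cos(a_i) + k2*sin(a_i)).
        A = -core_radius * np.column_stack([np.cos(core_angles), np.sin(core_angles)])
        kappa, *_ = np.linalg.lstsq(A, strains, rcond=None)
        return kappa  # (k1, k2) expressed in the Bishop frame

    def integrate_bishop_frame(kappas, ds):
        # Forward-Euler integration of dr/ds = T, dT/ds = k1*M1 + k2*M2,
        # dM1/ds = -k1*T, dM2/ds = -k2*T, with re-orthonormalisation to limit drift.
        r, T = np.zeros(3), np.array([0.0, 0.0, 1.0])
        M1 = np.array([1.0, 0.0, 0.0])
        points = [r.copy()]
        for k1, k2 in kappas:
            M2 = np.cross(T, M1)
            r = r + T * ds
            T, M1 = T + (k1 * M1 + k2 * M2) * ds, M1 - k1 * T * ds
            T /= np.linalg.norm(T)
            M1 -= T * (T @ M1); M1 /= np.linalg.norm(M1)
            points.append(r.copy())
        return np.array(points)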

    SimPS-Net: Simultaneous Pose & Segmentation Network of Surgical Tools

    Get PDF
    The ability to detect and localise surgical tools using RGB cameras during robotic-assisted surgery enables applications such as vision-based active constraints and refinements in robot path planning, which can ultimately lead to improved patient safety during the operation. For this purpose, the proposed network, SimPS-Net, capable of both detection and 3D pose estimation of standard surgical tools using a single RGB camera, is introduced. In addition to the network, a novel dataset generated for training and testing is presented. The proposed network achieved a mean DICE coefficient of 85.0%, while also exhibiting low average errors of 5.5 mm and 3.3° for 3D position and orientation respectively, thus outperforming the competing networks.

    Automatic optimized 3D path planner for steerable catheters with heuristic search and uncertainty tolerance

    Get PDF
    In this paper, an automatic planner for minimally invasive neurosurgery is presented. The solution provides the surgeon with the best path connecting a user-defined entry point with a target according to specific optimality criteria, while guaranteeing clearance from obstacles that can be found along the insertion pathway. The method is integrated into the EDEN2020 programmable bevel-tip needle, a multi-segment steerable probe intended to perform drug delivery for glioblastoma treatment. A sample-based heuristic search inspired by the BIT* algorithm is used to define the optimal solution in terms of path length, followed by a smoothing phase required to meet the kinematic constraints of the catheter. To account for inaccuracies in catheter modelling, which could cause unexpected control errors over the insertion procedure, an uncertainty margin is defined so as to add a further level of safety to the planning algorithm. The feasibility of the proposed solution was demonstrated by testing the method in simulated neurosurgical scenarios with different degrees of obstacle occupancy and against other sample-based algorithms from the literature: RRT, RRT*, and an enhanced version of RRT-Connect.
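
    The uncertainty margin mentioned above can be pictured as inflating every obstacle by a fixed safety distance before clearance is checked. A simple sketch of that idea (the names and the spherical-obstacle assumption are illustrative, not the planner's implementation):

    import numpy as np

    def is_path_clear(path_points, obstacle_centres, obstacle_radii, uncertainty_margin):
        # A candidate path is accepted only if every sampled point keeps at least
        # `uncertainty_margin` of clearance beyond each obstacle's own radius.
        for p in np.asarray(path_points):
            dists = np.linalg.norm(np.asarray(obstacle_centres) - p, axis=1)
            if np.any(dists < np.asarray(obstacle_radii) + uncertainty_margin):
                return False
        return True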

    SimPS-Net: Simultaneous Pose & Segmentation Network of Surgical Tools

    Get PDF
    Localisation of surgical tools during operation is of paramount importance in the context of robotic-assisted surgery. 3D pose estimation can be used to explore the interaction of tools with registered tissue and to improve the motion planning of robotic platforms, thus avoiding potential collisions with external agents. Given the cost of traditional tracking systems and the need to redesign surgical tools to accommodate markers, there has been a shift towards image-based, markerless tracking techniques. This study introduces a network capable of detecting and localising tools in 3D using a monocular setup. For training and validation, a novel dataset, 3dStool, was produced, and the network was trained to obtain a mean Dice coefficient of 85.0% for detection, along with mean position and orientation errors of 5.5 mm and 3.3°, respectively. The presented method is significantly more versatile than various state-of-the-art solutions, as it requires no prior knowledge of the 3D structure of the tracked tools. The results were compared to standard pose estimation networks using the same dataset and demonstrated lower errors along most metrics. In addition, the generalisation capabilities of the proposed network were explored by performing inference on a previously unseen pair of scissors.
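
    Errors like the 5.5 mm and 3.3° figures above are conventionally computed as a Euclidean distance for position and a geodesic angle between rotation matrices for orientation. A generic sketch of those metrics (not necessarily the exact evaluation code used for 3dStool):

    import numpy as np

    def position_error_mm(p_pred, p_true):
        # Straight-line distance between predicted and ground-truth tool positions.
        return np.linalg.norm(np.asarray(p_pred) - np.asarray(p_true))

    def orientation_error_deg(R_pred, R_true):
        # Geodesic distance between two rotation matrices, in degrees.
        R_rel = R_pred.T @ R_true
        cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle))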

    Real-time active constraint generation and enforcement for surgical tools using 3D detection and localisation network

    Get PDF
    Introduction: Collaborative robots, designed to work alongside humans for manipulating end-effectors, greatly benefit from the implementation of active constraints. This process comprises the definition of a boundary, followed by the enforcement of a control algorithm when the robot tooltip interacts with the generated boundary. Contact with the constraint boundary is communicated to the human operator through various potential forms of feedback. In fields like surgical robotics, where patient safety is paramount, implementing active constraints can prevent the robot from interacting with portions of the patient anatomy that should not be operated on. Despite improvements in orthopaedic surgical robots, however, there exists a gap between bulky systems with haptic feedback capabilities and miniaturised systems that only allow for boundary control, where interaction with the active constraint boundary interrupts robot functions. Generally, active constraint generation relies on optical tracking systems and preoperative imaging techniques.
    Methods: This paper presents a refined version of the Signature Robot, a three degrees-of-freedom, hands-on collaborative system for orthopaedic surgery. Additionally, it presents a method for generating and enforcing active constraints “on-the-fly” using our previously introduced monocular, RGB, camera-based network, SimPS-Net. The network was deployed in real-time for the purpose of boundary definition, and this boundary was subsequently used for constraint enforcement testing. The robot was used to test two different active constraints: a safe region and a restricted region.
    Results: The network success rate, defined as the ratio of correct to total object localisation results, was 54.7% ± 5.2%. In the safe-region case, haptic feedback resisted tooltip manipulation beyond the active constraint boundary, with a mean distance from the boundary of 2.70 mm ± 0.37 mm and a mean exit duration of 0.76 s ± 0.11 s. For the restricted-region constraint, the operator was successfully prevented from penetrating the boundary in 100% of attempts.
    Discussion: This paper showcases the viability of the proposed robotic platform and presents promising results for a versatile constraint generation and enforcement pipeline.
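
    The two constraint types tested above can be illustrated with a simple virtual-fixture force on a spherical boundary: a safe region pushes the tooltip back once it leaves the sphere, while a restricted region pushes it away if it tries to enter. The spring gain and geometry below are illustrative assumptions, not the Signature Robot controller:

    import numpy as np

    def constraint_force(tooltip, centre, radius, stiffness, mode="safe"):
        # mode="safe": tooltip should stay inside the sphere.
        # mode="restricted": tooltip should stay outside the sphere.
        offset = np.asarray(tooltip, dtype=float) - np.asarray(centre, dtype=float)
        dist = np.linalg.norm(offset)
        if dist == 0.0:
            return np.zeros(3)
        direction = offset / dist
        if mode == "safe" and dist > radius:
            return -stiffness * (dist - radius) * direction   # pull back towards the safe region
        if mode == "restricted" and dist < radius:
            return stiffness * (radius - dist) * direction    # push out of the restricted region
        return np.zeros(3)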