6 research outputs found

    i3PosNet: Instrument Pose Estimation from X-Ray in temporal bone surgery

    Purpose: Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy, suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. Methods: i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework considers localized patches and outputs pseudo-landmarks; the pose is then reconstructed from the pseudo-landmarks by geometric considerations. Results: We show that i3PosNet reaches errors of less than 0.05 mm. It outperforms conventional image-registration-based approaches, reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation. Conclusion: The translation of deep-learning-based methods to surgical applications is difficult because large representative datasets for training and testing are not available. This work empirically shows sub-millimeter pose estimation trained solely on synthetic training data. Comment: Accepted at the International Journal of Computer Assisted Radiology and Surgery, pending publication
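The pseudo-landmark-to-pose step can be illustrated with a minimal 2D sketch; the landmark layout used here (a tip point plus a second point on the instrument axis) is an assumption, and the paper's actual pseudo-landmark scheme may differ:

```python
import numpy as np

def pose_from_pseudo_landmarks(tip, axis_point):
    """Recover a 2D instrument pose (position + in-plane angle) from two
    predicted pseudo-landmarks. Hypothetical landmark layout: the tip of the
    instrument and a second point further along its axis."""
    tip = np.asarray(tip, dtype=float)
    axis_point = np.asarray(axis_point, dtype=float)
    direction = axis_point - tip
    angle = np.arctan2(direction[1], direction[0])  # orientation in radians
    return tip, angle

# landmarks predicted by the network (in image coordinates, here invented)
position, angle = pose_from_pseudo_landmarks((10.0, 20.0), (13.0, 24.0))
```

A full pipeline would additionally map these image-space quantities back to 3D patient coordinates using the known C-arm projection geometry.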

    Accurate 3D-reconstruction and -navigation for high-precision minimal-invasive interventions

    Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce this invasiveness has recently been investigated. In this approach, three canals are drilled from the skull surface to the surgical region of interest: the first canal for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which results in high requirements for image acquisition as well as for navigation. Computed tomography (CT) is a non-invasive imaging technique that allows visualization of the patient's internal organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high-quality CT volumes. To this end, two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners acquire volumes with typically anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the CT artifacts commonly encountered due to several limitations of the imaging system, such as mechanical inaccuracies.
This thesis contributes new methods to enhance CBCT reconstruction quality by addressing two main reconstruction artifacts: misalignment artifacts caused by mechanical inaccuracies, and metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are appropriate for intra-operative image-guided navigation. For instance, they can be used to control the drilling process based on intra-operatively acquired 2D fluoroscopic images. For successful navigation, an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan is required. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.
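The two-orthogonal-volume setting can be sketched with a deliberately naive baseline, assuming one volume is coarse along the slice axis (axis 0) and the other along axis 2; nearest-neighbour upsampling plus averaging is a hypothetical stand-in here, not the thesis's actual super-resolution method:

```python
import numpy as np

def fuse_orthogonal_volumes(vol_a, vol_b, factor):
    """Naively fuse two orthogonal anisotropic CT volumes into one isotropic
    volume. vol_a is assumed coarse along axis 0 (its slice direction),
    vol_b coarse along axis 2. Each is upsampled by nearest-neighbour
    repetition along its coarse axis, then the two are averaged."""
    up_a = np.repeat(vol_a, factor, axis=0)
    up_b = np.repeat(vol_b, factor, axis=2)
    assert up_a.shape == up_b.shape, "volumes must cover the same grid"
    return 0.5 * (up_a + up_b)

# e.g. a 4x64x64 axial stack and a 64x64x4 orthogonal stack -> 64x64x64
iso = fuse_orthogonal_volumes(np.zeros((4, 64, 64)), np.zeros((64, 64, 4)), 16)
```

A true super-resolution reconstruction would instead solve an inverse problem coupling both acquisitions, rather than interpolating each independently.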

    Image Analysis for Spine Surgery: Data-Driven Detection of Spine Instrumentation & Automatic Analysis of Global Spinal Alignment

    Spine surgery is a therapeutic modality for treatment of spine disorders, including spinal deformity, degeneration, and trauma. Such procedures benefit from accurate localization of surgical targets, precise delivery of instrumentation, and reliable validation of surgical objectives – for example, confirming that surgical implants are delivered as planned and that desired changes to the global spinal alignment (GSA) are achieved. Recent advances in surgical navigation have helped to improve the accuracy and precision of spine surgery, including intraoperative imaging integrated with real-time tracking and surgical robotics. This thesis develops two methods for improved image-guided surgery using image-analytic techniques. The first provides a means for automatic detection of pedicle screws in intraoperative radiographs – for example, to streamline intraoperative assessment of implant placement. The algorithm achieves a precision and recall of 0.89 and 0.91, respectively, with localization accuracy within ~10 mm. The second comprises two algorithms for automatic assessment of GSA in computed tomography (CT) or cone-beam CT (CBCT) images, providing a means to quantify changes in spinal curvature and reduce the variability in GSA measurement associated with manual methods. The algorithms demonstrate GSA estimates with 93.8% of measurements within a 95% confidence interval of manually defined truth. Such methods support the goals of safe, effective spine surgery and provide a means for more quantitative intraoperative quality assurance. In turn, the ability to quantitatively assess instrument placement and changes in GSA could represent important elements of retrospective analysis of large image datasets, improved clinical decision support, and improved patient outcomes.
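Alignment metrics of this kind are commonly expressed as angles between vertebral endplate lines; a minimal 2D Cobb-style angle computation might look as follows (illustrative only: the thesis's GSA algorithms operate on 3D CT/CBCT, and this simple formula is an assumption):

```python
import numpy as np

def cobb_angle_deg(endplate_upper, endplate_lower):
    """Angle (degrees) between two vertebral endplate lines, each given as a
    2D direction vector, in the style of a Cobb alignment measurement.
    The absolute value makes the result independent of vector orientation."""
    u = np.asarray(endplate_upper, dtype=float)
    v = np.asarray(endplate_lower, dtype=float)
    cos = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

# endplate of the upper end vertebra is horizontal, the lower tilts 45 degrees
angle = cobb_angle_deg((1.0, 0.0), (1.0, 1.0))  # 45.0
```

Automating GSA then reduces to localizing the endplate lines (or vertebral landmarks) in the image, after which the angle computation itself is trivial.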

    Navigation with Local Sensors in Surgical Robotics


    Instrument Pose Estimation Using Registration for Otobasis Surgery

    The clinical outcome of several Minimally Invasive Surgeries (MIS) depends heavily on the accuracy of intraoperative pose estimation of the surgical instrument from intraoperative X-rays. The estimation consists of finding the tool in a given set of X-rays and extracting the data necessary to recreate the tool's pose for further navigation, so incorrect estimation has severe consequences. Although state-of-the-art MIS literature has exploited image registration as a tool for instrument pose estimation, a lack of practical considerations in previous study designs renders their conclusions ineffective from a clinical standpoint. One major issue of such studies is the lack of ground truth in clinical data, as there is no direct way of measuring the ground-truth pose and indirect estimation accumulates error. A systematic way to overcome this problem is to generate Digitally Reconstructed Radiographs (DRRs); however, such a procedure generates data that are free from measurement errors (e.g. noise, number of projections), rendering claims of registration performance inconclusive. Generalization of registration performance across different instruments with different Degrees of Freedom (DoF) has not been studied either. By marrying a rigorous study design involving several clinical scenarios with, for example, several optimizers, metrics and other parameters for image registration, this paper bridges this gap effectively. Although the pose estimation error scales inversely with instrument size, we show that image registration generalizes well across different instruments and DoF. In particular, it is shown that increasing the number of X-ray projections can reduce the pose estimation error significantly across instruments, which might lead to the acquisition of several X-rays for pose estimation in a clinical workflow.
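The registration-based pose estimation described above can be sketched as an optimize-over-pose loop that compares rendered DRRs against the measured X-ray. Everything here is a simplified stand-in: a toy 2-DoF translation-only "renderer", a sum-of-squared-differences metric, and an exhaustive search in place of the 6-DoF optimizers and metrics compared in the paper:

```python
import numpy as np

def render_drr(pose, template):
    """Toy DRR: shift a 2D instrument template by an in-plane pose (dx, dy).
    A real pipeline would ray-cast through a CT volume for a 6-DoF pose."""
    dx, dy = pose
    return np.roll(np.roll(template, dx, axis=0), dy, axis=1)

def register(xray, template, search=range(-3, 4)):
    """Exhaustive 2-DoF search minimizing sum-of-squared-differences
    between the rendered DRR and the measured X-ray image."""
    best_pose, best_cost = None, np.inf
    for dx in search:
        for dy in search:
            cost = np.sum((render_drr((dx, dy), template) - xray) ** 2)
            if cost < best_cost:
                best_pose, best_cost = (dx, dy), cost
    return best_pose

template = np.zeros((16, 16))
template[8, 8] = 1.0                                      # toy "instrument"
xray = np.roll(np.roll(template, 2, axis=0), -1, axis=1)  # true pose (2, -1)
pose = register(xray, template)                           # recovers (2, -1)
```

Using several X-ray projections, as the paper suggests, amounts to summing the metric over all views before picking the best pose, which constrains the out-of-plane components that a single projection leaves ambiguous.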