42 research outputs found

    A Study of Image-based C-arm Tracking Using Minimal Fiducials

    Image-based tracking of the c-arm remains a critical and challenging problem for many clinical applications, since numerous computer-assisted procedures rely on its accuracy for subsequent planning, registration, and reconstruction tasks. In this thesis, a variety of approaches are presented to improve current c-arm tracking methods and devices for intra-operative procedures. The first approach presents a novel two-dimensional fiducial comprising a set of coplanar conics, together with an improved single-image pose estimation algorithm that addresses segmentation errors using a mathematical equilibration approach. Simulation results show an improvement in the mean rotation and translation errors by factors of 4 and 1.75, respectively, as a result of using the proposed algorithm. Experiments on real data, obtained by imaging a precisely machined model consisting of three coplanar ellipses, retrieve pose estimates that are in good agreement with those obtained by a ground-truth optical tracker. This two-dimensional fiducial can easily be placed under the patient, allowing a wide field of view for the motion of the c-arm. The second approach applies learning-based techniques to two-view geometry. A demonstrative algorithm simultaneously tackles the matching and segmentation of features extracted from pairs of acquired images. The corrected features can then be used to retrieve the epipolar geometry, which ultimately provides pose parameters using a one-dimensional fiducial. The problem of match refinement for epipolar geometry estimation is formulated in a reinforcement-learning framework. Experiments demonstrate the ability both to reject false matches and to fix small localization errors in the segmentation of true noisy matches in a minimal number of steps. The third approach presents a feasibility study for an approach that entirely eliminates the use of tracking fiducials.
It relies only on preoperative data to initialize a point-based model that is subsequently used to iteratively estimate the pose and the structure of the point-like intraoperative implants using three to six images simultaneously. This method is tested in the framework of prostate brachytherapy, in which preoperative data, including planned 3D locations for a large number of point-like implants called seeds, are usually available. Simultaneous pose estimation of the c-arm for each image and localization of the seeds is studied in a simulation environment. Results indicate mean reconstruction errors of less than 1.2 mm for noisy plans of 84 seeds or fewer, attained when the mean 3D error introduced to the plan by added Gaussian noise is less than 3.2 mm.
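The abstract does not spell out the epipolar-geometry step of the second approach. As a generic illustration of how a fundamental matrix is recovered from corrected feature matches, here is a minimal normalized 8-point sketch in Python/NumPy; this is standard textbook material, not the thesis code, and the function names are illustrative:

```python
import numpy as np

def hartley_normalize(pts):
    # Translate points to zero mean and scale them so the mean
    # distance from the origin is sqrt(2) (Hartley normalization).
    c = pts.mean(axis=0)
    d = np.linalg.norm(pts - c, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_8point(x1, x2):
    """Normalized 8-point estimate of F with x2h^T F x1h = 0
    for matched pixel coordinates x1, x2 of shape (N, 2)."""
    p1, T1 = hartley_normalize(x1)
    p2, T2 = hartley_normalize(x2)
    # Each match contributes one linear constraint on the 9 entries of F.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                     # undo the normalization
    return F / np.linalg.norm(F)
```

In the thesis setting, the role of the learning-based refinement is precisely to clean up the matches before a step like this, since the linear estimate degrades quickly with false or poorly localized correspondences.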

    Interlandmark measurements from lodox statscan images with application to femoral neck anteversion assessment

    Clinicians often take measurements between anatomical landmarks on X-ray radiographs for diagnosis and treatment planning, for example in orthopaedics and orthodontics. X-ray images, however, project overlapping three-dimensional internal structures onto a two-dimensional plane during image formation. Depth information is therefore lost and measurements do not truly reflect spatial relationships. The main aim of this study was to develop an inter-landmark measurement tool for the Lodox Statscan digital radiography system. X-ray stereophotogrammetry was applied to Statscan images to enable three-dimensional point localization for inter-landmark measurement using two-dimensional radiographs. This technique requires images of the anatomical region of interest to be acquired from different perspectives, as well as a suitable calibration tool to map image coordinates to real-world coordinates. The Statscan is suited to the technique because it is capable of axial rotations for multi-view imaging. Three-dimensional coordinate reconstructions and inter-landmark measurements were taken using a planar object and a dry pelvis specimen in order to assess intra-observer measurement accuracy, reliability and precision. The system yielded average (X, Y, Z) coordinate reconstruction accuracy of (0.08, 0.12, 0.34) mm and resultant coordinate reconstruction accuracy within 0.4 mm (range 0.3 – 0.6 mm). Inter-landmark measurements were within 2 mm for lengths and 1.8° for angles, with average accuracies of 0.4 mm (range 0.0 – 2.0 mm) and 0.3° (range 0.0° – 1.8°) respectively. The results also showed excellent overall precision of (0.5 mm, 0.1°) and were highly reliable when all landmarks were completely visible in both images. Femoral neck anteversion measurement on Statscan images was also explored using 30 dry right adult femurs, in order to assess the feasibility of the algorithm for a clinical application.
For this investigation, four methods were tested to determine the optimal landmarks for measurement, and the measurement process involved the calculation of virtual landmarks. The method that yielded the best results produced all measurements within 1° of reference values; the measurements were highly reliable, with very good precision within 0.1°. The average accuracy was within 0.4° (range 0.1° – 0.8°). In conclusion, X-ray stereophotogrammetry enables accurate, reliable and precise inter-landmark measurements for the Lodox Statscan X-ray imaging system. The machine may therefore be used as an inter-landmark measurement tool for routine clinical applications.
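The stereophotogrammetric reconstruction described above amounts to triangulating each landmark from two calibrated views and then measuring 3D distances between the reconstructed points. A minimal linear (DLT) triangulation sketch, assuming known 3x4 projection matrices for the two views; the matrices and function names are illustrative, not from the study:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: (x, y) pixel coordinates
    of the same landmark in each view."""
    # Each view gives two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

def interlandmark_distance(P1, P2, a1, a2, b1, b2):
    # Reconstruct two landmarks and measure the 3D distance between them.
    A = triangulate(P1, P2, a1, a2)
    B = triangulate(P1, P2, b1, b2)
    return np.linalg.norm(A - B)
```

The calibration tool mentioned in the abstract is what supplies the projection matrices; the Statscan's axial rotation provides the second perspective.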

    Towards Image-Guided Pediatric Atrial Septal Defect Repair

    Congenital heart disease occurs in 107.6 out of 10,000 live births, with Atrial Septal Defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated the use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important, as many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy provides a larger window of opportunity for expressing the damaging effects of ionizing radiation, and epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering between multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). This software system serves as the core technology by which a system for reducing radiation exposure to pediatric patients was developed.
The bulk of the initial work in this research endeavour was the development of KNIFE, which went through countless iterations before arriving at its current state, as per the established feature requirements. Secondly, since this work involved the use of captured medical images in an IGS software suite, a brief analysis of the physics behind the images was conducted. Through this aspect of the work, the intrinsic parameters (principal point and focal length) of the fluoroscope were quantified using a 3D grid calibration phantom. A second grid phantom was traversed through the fluoroscopic imaging volume of image intensifier (II) and flat-panel based systems at 2 cm intervals, building a scatter field of the volume to demonstrate pincushion and 'S' distortion in the images. The effect of projection distortion on the images was assessed by measuring the fiducial registration error (FRE) of each point used in two different registration techniques; both methods utilized ordinary Procrustes analysis, but the second used a projection matrix built from the fluoroscope's calculated intrinsic parameters. A case study was performed to test whether the projection registration outperforms the rigid transform alone. Using the knowledge generated, we were able to successfully design and complete mock clinical procedures using cardiac phantom models. These mock trials initially used a single point to represent catheter location, but this was eventually replaced with a full shape model that offered numerous advantages. At the conclusion of this work, a novel protocol for conducting image-guided ASD procedures was developed. Future work would involve the construction of novel EM-tracked tools, phantom models for other vascular diseases, and finally clinical integration and use.
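Both registration techniques above build on ordinary Procrustes analysis and are scored by the per-point fiducial registration error. A minimal sketch of the rigid variant (Arun's SVD method) under the assumption of paired 3D fiducial coordinates; the names are illustrative, not KNIFE's API:

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B,
    i.e. ordinary Procrustes analysis without scaling (Arun's method).
    A, B: (N, 3) arrays of paired fiducial coordinates."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def fiducial_registration_error(A, B, R, t):
    # Per-fiducial residual after applying the registration to A.
    return np.linalg.norm(A @ R.T + t - B, axis=1)
```

The projection-based variant described in the abstract would additionally push the fiducials through a projection matrix built from the calibrated intrinsics before comparing them; that step is specific to the thesis and is not reproduced here.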

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, initially examining the technology needed to analyze the scene: vision sensors. A novel endoscope for autonomous surgical task execution, combining a standard stereo camera with a depth sensor, is presented in the first part of this thesis. This solution introduces several key advantages, such as the possibility of reconstructing 3D structure at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. In the second part of the thesis, the problem of 3D reconstruction and the algorithms currently in use are addressed. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy. Starting from the ORB-SLAM algorithm, we have modified the architecture to make it usable in an anatomical environment by registering pre-operative information about the intervention to the map obtained from SLAM.
Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
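One way a semantic-segmentation step can reject dynamic features is to discard keypoints that land on pixels labeled as instruments before they reach the SLAM front end. A minimal sketch under that assumption; the function and data layout are hypothetical, not the thesis implementation:

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that lie on static anatomy.
    keypoints: (N, 2) float array of (x, y) pixel coordinates,
               e.g. ORB keypoint locations
    dynamic_mask: (H, W) bool array from a segmenter, True where a
                  pixel belongs to a dynamic object such as a tool"""
    xy = np.round(keypoints).astype(int)
    h, w = dynamic_mask.shape
    # Drop keypoints outside the image, then those on dynamic pixels.
    in_bounds = (xy[:, 0] >= 0) & (xy[:, 0] < w) & \
                (xy[:, 1] >= 0) & (xy[:, 1] < h)
    keep = in_bounds.copy()
    keep[in_bounds] &= ~dynamic_mask[xy[in_bounds, 1], xy[in_bounds, 0]]
    return keypoints[keep]
```

Filtering at this stage keeps moving instruments from corrupting both the camera-pose estimate and the tissue map, which is the stated purpose of the segmentation addition.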


    Marker-free surgical navigation of rod bending using a stereo neural network and augmented reality in spinal fusion

    The instrumentation of spinal fusion surgeries includes pedicle screw placement and rod implantation. While several surgical navigation approaches have been proposed for pedicle screw placement, less attention has been devoted to guiding the patient-specific adaptation of the rod implant. We propose a marker-free and intuitive Augmented Reality (AR) approach to navigate the bending process required for rod implantation. A stereo neural network is trained end-to-end from the stereo video streams of the Microsoft HoloLens to determine the locations of corresponding pedicle screw heads. From the digitized screw head positions, the optimal rod shape is calculated, translated into a set of bending parameters, and used to guide the surgeon with a novel navigation approach. In the AR-based navigation, the surgeon is guided step by step in the use of the surgical tools to achieve an optimal result. We have evaluated the performance of our method on human cadavers against two benchmark methods, conventional freehand bending and marker-based bending navigation, in terms of bending time and rebending maneuvers. We achieved an average bending time of 231 s with 0.6 rebending maneuvers per rod, compared to 476 s (3.5 rebendings) and 348 s (1.1 rebendings) obtained by the freehand and marker-based benchmarks, respectively.
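In its simplest form, the translation from digitized screw head positions into bending parameters could reduce to segment lengths and bend angles along the rod. A minimal sketch of that idea, purely illustrative; the paper's actual rod-shape optimization is more involved than connecting the screw heads with straight segments:

```python
import numpy as np

def bending_parameters(screw_heads):
    """Turn digitized pedicle-screw head positions into simple bending
    parameters: segment lengths and the bend angle (degrees) at each
    interior screw. screw_heads: (N, 3) positions ordered along the rod."""
    segs = np.diff(screw_heads, axis=0)          # vectors between screws
    lengths = np.linalg.norm(segs, axis=1)
    angles = []
    for a, b in zip(segs[:-1], segs[1:]):
        # Angle between consecutive segments via the normalized dot product.
        c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
    return lengths, angles
```

A straight rod yields all-zero angles; each nonzero angle marks a point where the bending tool must be applied, which is the kind of step-by-step parameter the AR guidance presents to the surgeon.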