7 research outputs found

    Fluoroscopic Navigation for Robot-Assisted Orthopedic Surgery

    Robot-assisted orthopedic surgery has gained increasing attention due to its improved accuracy and stability in minimally invasive interventions compared to a surgeon's manual operation. An effective navigation system, which estimates the intraoperative tool-to-tissue pose relationship to guide the robotic surgical device, is critical. However, most existing navigation systems use fiducial markers, such as bone-pin markers, to close the calibration loop, which requires a clear line of sight and is not ideal for patients. This dissertation presents fiducial-free, fluoroscopic image-based navigation pipelines for three robot-assisted orthopedic applications: femoroplasty, core decompression of the hip, and transforaminal lumbar epidural injections. We propose custom-designed, image intensity-based 2D/3D registration algorithms for pose estimation of bone anatomies, including the femur and spine, and of a rigid surgical tool and a flexible continuum manipulator. We performed system calibration and integration into a surgical robotic platform, and we validated the navigation system's performance in comprehensive simulation and ex vivo cadaveric experiments. Our results suggest the feasibility of applying the proposed navigation methods to robot-assisted orthopedic applications. We also investigated machine learning approaches that can benefit medical image analysis, automate navigation components, or address registration challenges. We present a synthetic X-ray data generation pipeline, SyntheX, which enables large-scale machine learning model training. SyntheX was used to train feature detection models for the pelvis anatomy and the continuum manipulator, which in turn initialize the registration pipelines. Finally, we propose a projective spatial transformer module that learns a convex shape similarity function and extends the registration capture range. We believe that our image-based navigation solutions can benefit and inspire related robot-assisted orthopedic system designs and eventually be used in the operating room to improve patient outcomes.
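
    As a rough illustration of the intensity-based 2D/3D registration idea described above, the toy sketch below recovers a rigid pose (three rotation angles only) by rendering a parallel-projection digitally reconstructed radiograph (DRR) of a synthetic volume and maximizing normalized cross-correlation against a simulated "fluoroscopic" image. The make_drr renderer, the ellipsoid phantom, and the optimizer settings are illustrative assumptions, not the dissertation's actual pipeline.

        # Toy intensity-based 2D/3D registration sketch (illustrative only).
        import numpy as np
        from scipy.ndimage import rotate
        from scipy.optimize import minimize

        def make_drr(volume, angles_deg):
            """Parallel-projection DRR: rotate the volume, then integrate along one axis."""
            rotated = volume
            for axis_pair, angle in zip([(0, 1), (0, 2), (1, 2)], angles_deg):
                rotated = rotate(rotated, angle, axes=axis_pair, reshape=False, order=1)
            return rotated.sum(axis=0)

        def ncc(a, b):
            """Normalized cross-correlation between two images."""
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float((a * b).mean())

        # Synthetic "bone": a bright ellipsoid inside a 32^3 volume.
        grid = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
        volume = ((grid[0] / 0.4) ** 2 + (grid[1] / 0.7) ** 2 + (grid[2] / 0.3) ** 2 < 1.0).astype(float)

        true_pose = np.array([8.0, -5.0, 3.0])      # ground-truth rotations (degrees)
        fixed_image = make_drr(volume, true_pose)   # stands in for the intraoperative X-ray

        def cost(pose):
            return -ncc(make_drr(volume, pose), fixed_image)   # maximize similarity

        result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead",
                          options={"xatol": 0.1, "fatol": 1e-4})
        print("estimated rotations (deg):", np.round(result.x, 2))

    In practice, the capture range of such an optimization is limited, which is why the abstract describes learned feature detection (via SyntheX) for initialization and a projective spatial transformer for a more convex similarity landscape.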

    A Multi-physics Planning Paradigm for Robot Assisted Orthopaedic Surgery

    Osteoporosis, or severe reduction in bone mineral density, is a disease that primarily affects elderly people. Osteoporotic hip fracture rates increase exponentially with age in both men and women. In addition to the high mortality rate for those sustaining such fractures, fewer than half of the survivors return to their pre-fracture quality of daily living. Augmentation of the proximal femur with polymethylmethacrylate (PMMA) bone cement (femoroplasty) has been identified as a potential preventive approach to reduce the risk of fracture. Femoroplasty, however, is associated with a risk of thermal damage, as well as cement leakage or blockage of the blood supply when large volumes of PMMA are introduced inside the bone. Several recent studies have proposed injection strategies that reduce the injection volume in simulation. This thesis describes the methods and tools developed for multi-physics planning and execution of femoroplasty. To this end, computational models are developed to simulate how bone augmentation affects the biomechanical properties of the bone. These models were used to plan femoroplasty for cadaveric specimens and showed the superiority of plan-based augmentation over generic injection strategies. Experimental tests confirmed the simulation findings and showed a significant increase in fracture-related biomechanical properties of the augmented specimens compared to those left intact. In addition to the biomechanical studies for femoroplasty, a heat-transfer model was developed to estimate bone temperatures during augmentation. Furthermore, a curved injection strategy was introduced and validated in simulation. These developments and modeling capabilities can be extended to various augmentation surgeries, including vertebroplasty and core decompression.
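
    The heat-transfer modeling mentioned above can be pictured with a minimal explicit finite-difference sketch: a 1D slab of bone receives heat from an exothermic PMMA region and diffuses it toward boundaries held at body temperature. The material constants, geometry, and source term below are rough placeholders chosen only for illustration, not the thesis's calibrated model.

        # Minimal 1D heat-diffusion sketch for cement curing (illustrative only).
        import numpy as np

        alpha = 0.5e-6        # thermal diffusivity of bone, m^2/s (placeholder value)
        dx, dt = 1e-3, 0.5    # 1 mm grid, 0.5 s steps (satisfies dt < dx**2 / (2 * alpha))
        n, steps = 60, 600    # 6 cm domain, 5 minutes of simulated curing

        T = np.full(n, 37.0)              # body temperature everywhere
        q = np.zeros(n)
        q[25:35] = 0.05                   # exothermic cement region, deg C per second (placeholder)

        for _ in range(steps):
            lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
            T = T + dt * (alpha * lap + q)   # explicit update of dT/dt = alpha * d2T/dx2 + q
            T[0] = T[-1] = 37.0              # boundaries held at body temperature

        print("peak temperature (deg C): %.1f" % T.max())

    A planning model of this kind would compare the predicted peak temperature against a thermal-damage threshold when selecting injection volumes and locations.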

    Robustness and Accuracy of Feature-Based Single Image 2-D–3-D Registration Without Correspondences for Image-Guided Intervention


    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally invasive robot-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon’s experience and subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only fuses virtual medical information through augmented reality but also provides tool localization and other dynamically updated intraoperative behavior to present enhanced depth feedback and information to the surgeon. These techniques for guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially for visualization. Our software design principles yield an inherently information-driven, modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance for additional extensions in robotic surgery.
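
    The augmented-reality overlay step described above ultimately reduces to projecting registered 3D plan geometry into the endoscope camera frame. The sketch below uses a standard pinhole model; the intrinsics, the CBCT-to-camera transform, and the plan points are made-up example values, not the system's actual calibration or rendering pipeline.

        # Pinhole-projection sketch for overlaying registered plan points (illustrative).
        import numpy as np

        K = np.array([[800.0,   0.0, 320.0],     # assumed endoscope intrinsics (fx, fy, cx, cy)
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])

        # Example rigid CBCT-to-camera transform (rotation about the y-axis plus translation).
        theta = np.deg2rad(10.0)
        R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
        t = np.array([5.0, -2.0, 120.0])         # mm; camera ~12 cm from the target

        plan_points_cbct = np.array([[ 0.0, 0.0,  0.0],   # e.g., a target and two margin points
                                     [ 4.0, 1.0, -2.0],
                                     [-3.0, 2.0,  1.0]])

        def project(points_cbct):
            """Map CBCT-frame points into the camera frame and project to pixel coordinates."""
            cam = points_cbct @ R.T + t          # apply the rigid registration result
            uvw = cam @ K.T                      # homogeneous image coordinates
            return uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> pixel (u, v)

        print(np.round(project(plan_points_cbct), 1))

    The projected pixels are then drawn on the live endoscope video, and tool localization updates the overlay as the scene changes.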

    ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY

    Robot-assisted surgery offers improved accuracy, precision, safety, and workflow for a variety of surgical procedures spanning different surgical contexts (e.g., neurosurgery, pulmonary interventions, orthopaedics). These systems can assist with implant placement, drilling, bone resection, and biopsy while reducing human errors (e.g., hand tremor and limited dexterity) and easing the workflow of such tasks. Furthermore, such systems can reduce radiation dose to the clinician in fluoroscopically guided procedures, since many robots can perform their task in the imaging field of view (FOV) without the surgeon. Robot-assisted surgery requires (1) a preoperative plan defined relative to the patient that instructs the robot to perform a task, (2) intraoperative registration of the patient to transform the planning data into the intraoperative space, and (3) intraoperative registration of the robot to the patient to guide the robot to execute the plan. However, despite the operational improvements achieved using robot-assisted surgery, geometric inaccuracies and significant workflow challenges associated with (1)-(3) impede widespread adoption. This thesis aims to address these challenges by using image registration to plan and guide robot-assisted surgical (RAS) systems and thereby encourage greater adoption of robotic assistance across surgical contexts (in this work, spinal neurosurgery, pulmonary interventions, and orthopaedic trauma). The proposed methods are also compatible with diverse imaging and robotic platforms (including low-cost systems) to improve the accessibility of RAS systems for a wide range of hospital and use settings. This dissertation advances important components of image-guided, robot-assisted surgery, including: (1) automatic target planning using statistical models and surgeon-specific atlases for application in spinal neurosurgery; (2) intraoperative registration and guidance of a robot to the planning data using 3D-2D image registration (i.e., an “image-guided robot”) for assisting pelvic orthopaedic trauma; (3) advanced methods for intraoperative registration of planning data in deformable anatomy for guiding pulmonary interventions; and (4) extension of image-guided robotics to a piecewise-rigid, multi-body context in which the robot directly manipulates anatomy for assisting ankle orthopaedic trauma.
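
    Requirements (1)-(3) above amount to composing a chain of rigid transforms so that a target planned in preoperative (CT) coordinates can be expressed in the robot base frame. The toy sketch below chains two 4x4 homogeneous transforms; the frame names and numeric values are illustrative assumptions, not outputs of the dissertation's registration methods.

        # Rigid transform chain for an image-guided robot (illustrative sketch).
        import numpy as np

        def rigid(rz_deg, tx, ty, tz):
            """Build a 4x4 homogeneous transform: rotation about z, then translation (mm)."""
            a = np.deg2rad(rz_deg)
            T = np.eye(4)
            T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]]
            T[:3, 3] = [tx, ty, tz]
            return T

        T_patient_ct = rigid(5.0, 12.0, -4.0, 30.0)        # e.g., from 3D-2D registration
        T_robot_patient = rigid(-90.0, 250.0, 100.0, 0.0)  # e.g., from robot-patient registration

        target_ct = np.array([20.0, 35.0, 110.0, 1.0])     # planned entry point in CT frame (mm)
        target_robot = T_robot_patient @ T_patient_ct @ target_ct

        print("target in robot base frame (mm):", np.round(target_robot[:3], 1))

    Errors in either transform propagate directly to the executed target, which is why the geometric accuracy of each registration step matters for adoption.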

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades and continues to transform surgical interventions, enabling safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of this approach is that motions during surgery, whether rigid motion of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation uses intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during the intervention and thereby improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
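
    The statistical shape models in (i) can be pictured with a small PCA sketch: corresponding landmark sets from several pre-aligned training shapes are decomposed into a mean shape plus principal modes of variation, and new plausible shapes are generated from mode weights. The random training landmarks below are synthetic stand-ins, and pose variation and trajectory atlases are omitted; this is not the dissertation's actual model.

        # PCA-based statistical shape model sketch (illustrative, synthetic data).
        import numpy as np

        rng = np.random.default_rng(0)
        n_shapes, n_landmarks = 20, 50
        base = rng.normal(size=(n_landmarks, 3))                     # a common underlying shape
        training = np.stack([base + 0.05 * rng.normal(size=base.shape)  # per-subject variation
                             for _ in range(n_shapes)])
        X = training.reshape(n_shapes, -1)                           # (20, 150) data matrix

        mean_shape = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        modes = Vt[:5]                                               # first 5 modes of variation
        stddevs = S[:5] / np.sqrt(n_shapes - 1)                      # per-mode standard deviations

        # Instantiate a new plausible shape one standard deviation along the first mode.
        weights = np.zeros(5)
        weights[0] = 1.0 * stddevs[0]
        new_shape = (mean_shape + weights @ modes).reshape(n_landmarks, 3)
        print("synthesized landmark array shape:", new_shape.shape)

    Fitting such a model to preoperative CT amounts to optimizing the mode weights (plus a pose) so the instantiated shape matches the patient, which supports both automatic segmentation and atlas-based trajectory planning.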

    Personalized Hip and Knee Joint Replacement

    This open access book describes and illustrates the surgical techniques, implants, and technologies used for the personalized implantation of hip and knee components. This new and flourishing treatment philosophy offers important benefits over conventional systematic techniques, including component positioning appropriate to the individual anatomy, improved surgical reproducibility and prosthetic performance, and a reduction in complications. The techniques described in the book aim to reproduce the patient’s native anatomy and physiological joint laxity, thereby improving prosthetic hip/knee kinematics and functional outcomes in the quest for the forgotten joint. They include kinematically aligned total knee/total hip arthroplasty, partial knee replacement, and hip resurfacing. The relevance of available and emerging technological tools for these personalized approaches is also explained, with coverage of, for example, robotics, computer-assisted surgery, and augmented reality. Contributions from surgeons who are considered world leaders in diverse fields of this novel surgical philosophy make this open access book invaluable to a wide readership, from trainees at all levels to consultants practicing lower limb surgery.