
    ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY

    Robot-assisted surgery offers improved accuracy, precision, safety, and workflow for a variety of surgical procedures spanning different surgical contexts (e.g., neurosurgery, pulmonary interventions, orthopaedics). These systems can assist with implant placement, drilling, bone resection, and biopsy while reducing human errors (e.g., hand tremor and limited dexterity) and easing the workflow of such tasks. Furthermore, such systems can reduce radiation dose to the clinician in fluoroscopically guided procedures, since many robots can perform their task in the imaging field-of-view (FOV) without the surgeon. Robot-assisted surgery requires (1) a preoperative plan defined relative to the patient that instructs the robot to perform a task, (2) intraoperative registration of the patient to transform the planning data into the intraoperative space, and (3) intraoperative registration of the robot to the patient to guide the robot in executing the plan. However, despite the operational improvements achieved using robot-assisted surgery, geometric inaccuracies and significant workflow challenges associated with (1)-(3) impede widespread adoption. This thesis aims to address these challenges by using image registration to plan and guide robot-assisted surgical (RAS) systems and thereby encourage greater adoption of robotic assistance across surgical contexts (in this work, spinal neurosurgery, pulmonary interventions, and orthopaedic trauma). The proposed methods are also compatible with diverse imaging and robotic platforms (including low-cost systems) to improve the accessibility of RAS systems for a wide range of hospital and use settings.
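The three requirements above amount to a chain of coordinate transformations carrying the preoperative plan into the robot's frame. As an illustrative sketch only (not the dissertation's implementation, and with hypothetical transform values), the composition can be expressed with 4x4 homogeneous matrices:

```python
import numpy as np

def rigid(rz_deg=0.0, t=(0.0, 0.0, 0.0)):
    """4x4 homogeneous rigid transform: rotation about z plus translation."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T

# Hypothetical registrations: plan-to-patient (step 2) and patient-to-robot (step 3).
T_patient_plan = rigid(rz_deg=5.0, t=(2.0, -1.0, 0.5))
T_robot_patient = rigid(rz_deg=-30.0, t=(100.0, 20.0, 0.0))

# A planned target point defined in the preoperative plan (step 1), in mm.
p_plan = np.array([10.0, 0.0, 5.0, 1.0])

# Compose the registrations to express the planned target in robot coordinates.
p_robot = T_robot_patient @ T_patient_plan @ p_plan
print(np.round(p_robot[:3], 2))
```

Errors in either registration compound through this chain, which is why the geometric accuracy of steps (2) and (3) governs the accuracy of the executed plan.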
This dissertation advances important components of image-guided, robot-assisted surgery, including: (1) automatic target planning using statistical models and surgeon-specific atlases for application in spinal neurosurgery; (2) intraoperative registration and guidance of a robot to the planning data using 3D-2D image registration (i.e., an “image-guided robot”) for assisting pelvic orthopaedic trauma; (3) advanced methods for intraoperative registration of planning data in deformable anatomy for guiding pulmonary interventions; and (4) extension of image-guided robotics to a piecewise-rigid, multi-body context in which the robot directly manipulates anatomy for assisting ankle orthopaedic trauma.
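The 3D-2D registration in component (2) typically works by simulating radiographs (digitally reconstructed radiographs, DRRs) from the preoperative 3D image and searching for the pose that best matches the intraoperative 2D image. A toy sketch under strong simplifying assumptions (parallel-beam projection, in-plane translation only, exhaustive search over a similarity metric — not the dissertation's actual method):

```python
import numpy as np

def make_volume(shape=(32, 32, 32)):
    """Synthetic CT-like volume: a bright ellipsoid on a dark background."""
    z, y, x = np.indices(shape)
    c = np.array(shape) / 2.0
    r = ((z - c[0]) / 10.0) ** 2 + ((y - c[1]) / 8.0) ** 2 + ((x - c[2]) / 6.0) ** 2
    return (r < 1.0).astype(float)

def drr(volume, ty=0, tx=0):
    """Parallel-beam 'DRR': translate the volume in-plane, then integrate along z."""
    shifted = np.roll(np.roll(volume, ty, axis=1), tx, axis=2)
    return shifted.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation, a common 3D-2D similarity metric."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_3d2d(volume, fluoro, search=4):
    """Exhaustive search over in-plane translations maximizing NCC."""
    best_pose, best_score = None, -np.inf
    for ty in range(-search, search + 1):
        for tx in range(-search, search + 1):
            s = ncc(drr(volume, ty, tx), fluoro)
            if s > best_score:
                best_pose, best_score = (ty, tx), s
    return best_pose, best_score

volume = make_volume()
fluoro = drr(volume, ty=3, tx=-2)  # simulated intraoperative fluoroscopy frame
(ty, tx), score = register_3d2d(volume, fluoro)
print(ty, tx)  # recovers the applied shift (3, -2)
```

Real systems optimize a full 6-degree-of-freedom rigid pose with ray-cast DRRs and gradient-free or gradient-based optimizers rather than a brute-force grid, but the structure (simulate, compare, update pose) is the same.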

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of such an approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions and thereby improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of registration models (statistical models, physics-based models, and deep learning-based models).
For orthopaedic pelvic trauma surgery, the dissertation includes work encompassing: (i) a series of statistical models capturing shape and pose variations of one or more pelvic bones together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
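To make the Demons approach concrete: each iteration computes a displacement update proportional to the image difference along the fixed image's intensity gradient, smooths that update for regularity, and warps the moving image. The following is a minimal 2D sketch of a Thirion-style Demons loop on synthetic images — an illustration of the general technique, not the physics-based variants developed in the dissertation:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing; regularizes the Demons update field."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)

def demons_update(fixed, moving):
    """Thirion-style force: u = (f - m) * grad(f) / (|grad f|^2 + (f - m)^2)."""
    gy, gx = np.gradient(fixed)
    diff = fixed - moving
    denom = gy**2 + gx**2 + diff**2
    denom[denom == 0] = 1.0
    return gaussian_blur(diff * gy / denom), gaussian_blur(diff * gx / denom)

def warp(img, uy, ux):
    """Backward warp with bilinear interpolation."""
    yy, xx = np.indices(img.shape, dtype=float)
    ys = np.clip(yy + uy, 0, img.shape[0] - 1)
    xs = np.clip(xx + ux, 0, img.shape[1] - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, img.shape[0] - 1), np.minimum(x0 + 1, img.shape[1] - 1)
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def blob(cy, cx, shape=(32, 32), sigma=4.0):
    yy, xx = np.indices(shape, dtype=float)
    return np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))

fixed = blob(16, 16)    # stand-in for the intraoperative image
moving = blob(17, 16)   # same structure, displaced (stand-in for the preoperative image)
uy, ux = np.zeros_like(fixed), np.zeros_like(fixed)
for _ in range(10):     # iterate: compute force on current residual, accumulate
    dy, dx = demons_update(fixed, warp(moving, uy, ux))
    uy, ux = uy + dy, ux + dx
mse_before = np.mean((moving - fixed)**2)
mse_after = np.mean((warp(moving, uy, ux) - fixed)**2)
print(mse_after < mse_before)  # the estimated field reduces the image difference
```

Physics-based variants replace the generic Gaussian regularization with a biomechanical model of tissue, and deep learning variants predict the displacement field in a single forward pass; both retain the same fixed/moving/field structure shown here.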