
    Towards Biomechanics-Aware Design of a Steerable Drilling Robot for Spinal Fixation Procedures with Flexible Pedicle Screws

    Towards reducing the failure rate of spinal fixation surgical procedures in osteoporotic patients, we propose a unique biomechanics-aware framework for the design of a novel concentric tube steerable drilling robot (CT-SDR). The framework leverages a patient-specific finite element (FE) biomechanics model, developed from Quantitative Computed Tomography (QCT) scans of the patient's vertebra, to calculate a biomechanically optimal and feasible drilling and implantation trajectory. The FE output then serves as a design requirement for the design and evaluation of the CT-SDR. Balancing the flexibility needed to create the curved optimal trajectories obtained by the FE module against the strength required to avoid buckling while drilling through a hard simulated bone material, we showed that the CT-SDR can reliably recreate this drilling trajectory with errors between 1.7-2.2%.
    Comment: 6 pages, 7 figures; accepted for publication at the 2023 International Symposium on Medical Robotics.
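    The abstract does not define how the 1.7-2.2% trajectory error is computed; one plausible reading is the mean deviation between the planned and executed drilling paths, expressed as a percentage of the planned path length. A minimal sketch under that assumption (the function name and metric are illustrative, not from the paper):

    ```python
    import numpy as np

    def trajectory_error_percent(planned, executed):
        """Mean point-to-point deviation between planned and executed
        trajectories (N x 3 arrays of matched points), as a percentage
        of the total planned path length. Hypothetical metric."""
        planned = np.asarray(planned, dtype=float)
        executed = np.asarray(executed, dtype=float)
        deviations = np.linalg.norm(planned - executed, axis=1)
        path_length = np.sum(np.linalg.norm(np.diff(planned, axis=0), axis=1))
        return 100.0 * deviations.mean() / path_length

    # Example: a straight 50 mm planned path vs. an executed path offset by 1 mm
    planned = np.stack([np.linspace(0, 50, 11), np.zeros(11), np.zeros(11)], axis=1)
    executed = planned + np.array([0.0, 1.0, 0.0])
    err = trajectory_error_percent(planned, executed)  # -> 2.0 (% of path length)
    ```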

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally invasive robot-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images), which benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, mapping preoperative plans onto the surgical scene is conducted as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon's experience and subject to inconsistency.

    To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system with a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. The guidance afforded by this system not only fuses virtual medical information through augmented reality but also provides tool localization and other dynamically updated intraoperative behavior, presenting enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality.

    The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance for additional extensions in robotic surgery.
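    The core of the overlay step described above is mapping a landmark from registered CBCT space into the endoscope image. A minimal sketch of that projection, assuming a rigid camera-to-CT transform from the registration step and a standard pinhole camera model (all names, matrices, and values here are illustrative, not the dissertation's actual implementation):

    ```python
    import numpy as np

    def project_to_endoscope(point_ct, T_cam_from_ct, K):
        """Project a CBCT-space landmark into endoscope pixel coordinates.
        point_ct: 3-vector in CT space; T_cam_from_ct: 4x4 rigid transform
        (from registration); K: 3x3 camera intrinsics."""
        p_cam = T_cam_from_ct @ np.append(point_ct, 1.0)  # CT frame -> camera frame
        uvw = K @ p_cam[:3]                               # pinhole projection
        return uvw[:2] / uvw[2]                           # pixel (u, v)

    # Illustrative intrinsics: 800 px focal length, principal point (320, 240)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[2, 3] = 100.0  # landmark ends up 100 mm in front of the camera
    uv = project_to_endoscope(np.zeros(3), T, K)  # -> [320., 240.], image center
    ```

    In a full AR pipeline this projection runs per frame, with T_cam_from_ct updated from endoscope tracking so overlays stay locked to the anatomy.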

    The radiological investigation of musculoskeletal tumours : chairperson's introduction
