
    ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY

    Robot-assisted surgery offers improved accuracy, precision, safety, and workflow for a variety of surgical procedures spanning different surgical contexts (e.g., neurosurgery, pulmonary interventions, orthopaedics). These systems can assist with implant placement, drilling, bone resection, and biopsy while reducing human errors (e.g., hand tremors and limited dexterity) and easing the workflow of such tasks. Furthermore, such systems can reduce radiation dose to the clinician in fluoroscopically-guided procedures since many robots can perform their task in the imaging field-of-view (FOV) without the surgeon. Robot-assisted surgery requires (1) a preoperative plan defined relative to the patient that instructs the robot to perform a task, (2) intraoperative registration of the patient to transform the planning data into the intraoperative space, and (3) intraoperative registration of the robot to the patient to guide the robot to execute the plan. However, despite the operational improvements achieved using robot-assisted surgery, there are geometric inaccuracies and significant challenges to workflow associated with (1-3) that impact widespread adoption. This thesis aims to address these challenges by using image registration to plan and guide robot-assisted surgical (RAS) systems to encourage greater adoption of robotic assistance across surgical contexts (in this work, spinal neurosurgery, pulmonary interventions, and orthopaedic trauma). The proposed methods will also be compatible with diverse imaging and robotic platforms (including low-cost systems) to improve the accessibility of RAS systems for a wide range of hospital and use settings.
This dissertation advances important components of image-guided, robot-assisted surgery, including: (1) automatic target planning using statistical models and surgeon-specific atlases for application in spinal neurosurgery; (2) intraoperative registration and guidance of a robot to the planning data using 3D-2D image registration (i.e., an “image-guided robot”) for assisting pelvic orthopaedic trauma; (3) advanced methods for intraoperative registration of planning data in deformable anatomy for guiding pulmonary interventions; and (4) extension of image-guided robotics in a piecewise rigid, multi-body context in which the robot directly manipulates anatomy for assisting ankle orthopaedic trauma.
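The core geometric step described above, transforming preoperative planning data into the intraoperative space once patient registration is estimated, can be illustrated with a minimal sketch. This is not the thesis implementation; the 4x4 transform here is a hypothetical placeholder for the output of a real 3D-2D registration step, and all coordinates are made up for illustration.

```python
import numpy as np

def apply_rigid_transform(T, points):
    """Map Nx3 planning points (mm) through a 4x4 homogeneous rigid transform."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # to homogeneous coords
    return (T @ pts_h.T).T[:, :3]                               # back to Cartesian

# Hypothetical registration result: a pure 5 mm translation along x,
# standing in for a transform estimated from intraoperative imaging.
T_intraop_from_preop = np.eye(4)
T_intraop_from_preop[0, 3] = 5.0

# Hypothetical planned target points defined on the preoperative image.
plan_points = np.array([[0.0, 0.0, 0.0],
                        [10.0, 20.0, 30.0]])

intraop_points = apply_rigid_transform(T_intraop_from_preop, plan_points)
```

In a real pipeline, the same transform (or its inverse, composed with a robot-to-patient registration) would be used to drive the robot to the planned targets.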

    Evaluation of 3D C-arm fluoroscopy versus diagnostic CT for deep brain stimulation stereotactic registration and post-operative lead localization

    Introduction: DBS efficacy depends on accuracy. CT-MRI fusion is established for both stereotactic registration and electrode placement verification. The desire to streamline DBS workflows, reduce operative time, and minimize patient transfers has increased interest in portable imaging modalities such as the Medtronic O-arm® and mobile CT. However, these remain expensive and bulky. 3D C-arm fluoroscopy (3DXT) units are a smaller and less costly alternative, albeit incompatible with traditional frame-based localization and without useful soft tissue resolution. We aimed to compare fusion of 3DXT and CT with pre-operative MRI to evaluate whether 3DXT-MRI fusion alone is sufficient for accurate registration and reliable targeting verification. We further assessed DBS targeting accuracy using a 3DXT workflow and compared radiation dosimetry between modalities. Methods: Patients underwent robot-assisted DBS implantation using a workflow incorporating 3DXT, which we describe. Two intra-operative 3DXT spins were performed for registration and accuracy verification, followed by conventional CT post-operatively. Post-operative 3DXT and CT images were independently fused to the same pre-operative MRI sequence and co-ordinates generated for comparison. Registration accuracy was compared to 15 consecutive controls who underwent CT-based registration. Radial targeting accuracy was calculated and radiation dosimetry recorded. Results: Data were obtained from 29 leads in 15 consecutive patients. 3DXT registration accuracy was significantly superior to CT, with mean error 0.22 ± 0.03 mm (p < 0.0001). Mean Euclidean electrode tip position variation for CT-to-MRI versus 3DXT-to-MRI fusion was 0.62 ± 0.40 mm (range 0.0 mm–1.7 mm). In comparison, direct CT-to-3DXT fusion showed electrode tip Euclidean variance of 0.23 ± 0.09 mm. Mean radial targeting accuracy assessed on 3DXT was 0.97 ± 0.54 mm versus 1.15 ± 0.55 mm on CT, a difference that was not statistically significant (p = 0.30).
Mean patient radiation doses were around 80% lower with 3DXT versus CT (p < 0.0001). Discussion: Mobile 3D C-arm fluoroscopy can be safely incorporated into DBS workflows for both registration and lead verification. For registration, the limited field of view requires frameless transient fiducials, but the approach is highly accurate. For lead position verification based on MRI co-registration, we estimate there is around a 0.4 mm discrepancy between lead position seen on 3DXT versus CT when corrected for brain shift. This is similar to that described in O-arm® or mobile CT series. For units where logistical or financial considerations preclude the acquisition of a cone beam CT or mobile CT scanner, our data support portable 3D C-arm fluoroscopy as an acceptable alternative with significantly lower radiation exposure.
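The "Euclidean electrode tip position variation" reported above is the standard point-to-point distance between the same lead tip localized on two different image fusions. A minimal sketch of that computation follows; the coordinates are hypothetical values in millimetres, not data from the study.

```python
import numpy as np

# Hypothetical lead-tip coordinates (mm) for two leads, localized once via
# CT-to-MRI fusion and once via 3DXT-to-MRI fusion.
tips_ct_mri = np.array([[12.1, -8.4, 3.2],
                        [11.8, -8.9, 3.5]])
tips_3dxt_mri = np.array([[12.5, -8.1, 3.0],
                          [12.0, -8.6, 3.4]])

def euclidean_variation(a, b):
    """Per-lead Euclidean distance between paired tip localizations."""
    return np.linalg.norm(a - b, axis=1)

per_lead = euclidean_variation(tips_ct_mri, tips_3dxt_mri)
mean_variation = per_lead.mean()  # summary statistic as reported in such studies
```

The study's 0.62 ± 0.40 mm figure is the mean ± SD of exactly this per-lead distance across all 29 leads.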

    Medical robotics: where we come from, where we are and where we could go

    This short note presents a viewpoint about medical robotics.

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, which include increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Robot ontologies for sensor- and image-guided surgery

    Robots and robotics are becoming more complex and flexible, due to technological advancement, improved sensing capabilities and machine intelligence. Service robots target a wide range of applications, relying on advanced Human–Robot Interaction. Medical robotics is becoming a leading application area within this field, and the number of surgical, rehabilitation and hospital assistance robots is rising rapidly. However, the complexity of the medical environment has been a major barrier preventing wider use of robotic technology, so that mostly teleoperated, human-in-the-loop control solutions have emerged so far. Providing smarter and better medical robots requires a systematic approach to describing and translating human processes for the robots. It is believed that ontologies can bridge human cognitive understanding and robotic reasoning (machine intelligence). Besides, ontologies serve as a tool and method to assess the added value robotic technology brings into the medical environment. The purpose of this paper is to identify relevant ontology research in medical robotics and to review the state of the art. It focuses on the surgical domain; fundamental terminology and interactions are described for two example applications in neurosurgery and orthopaedics.

    Augmented navigation

    Spinal fixation procedures have the inherent risk of causing damage to vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used, and guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient anatomy based on radiographic images. Navigation can be extended by the addition of a robotic arm that replaces the surgeon’s hand to increase accuracy. Another line of surgical aids is tissue sensing equipment that recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue sensing technology in spinal navigation to improve precision and prevent surgical errors. The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm could provide correct suggestions for 86% of all pedicles—or 95% when cases with extremely altered anatomy were excluded. Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system harboring the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale.
The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path. Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing higher technical accuracy compared to non-robotic solutions. Paper V investigated the use of a tissue sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertions. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%]. In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI and tissue sensing technology to conventional navigation solutions were studied.
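The sensitivity and specificity figures quoted for the DRS breach-warning classifier follow the standard confusion-matrix definitions. The sketch below shows that computation; the counts are invented for illustration and are not the study's data.

```python
# Standard definitions: sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
# where "positive" here would mean an impending cortical breach.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of true breaches correctly flagged
    specificity = tn / (tn + fp)  # fraction of safe insertions correctly passed
    return sensitivity, specificity

# Hypothetical confusion-matrix counts for a breach-detection classifier.
sens, spec = sensitivity_specificity(tp=49, fn=1, tn=49, fp=1)
```

With these made-up counts both metrics come out at 0.98, matching the order of performance the thesis reports for its machine learning models.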