
    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and to reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 2020.
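    To make the projective-geometry step concrete, here is a minimal sketch, not the authors' implementation: it assumes tracked 2D feature correspondences between the reference frame and the current frame, fits a homography with RANSAC, and warps the manually defined scan waypoints so the trajectory follows the tissue motion. All names are illustrative.

```python
# Hedged sketch: homography-based update of a scan trajectory defined on a
# reference frame, assuming (N, 2) float32 arrays of tracked feature positions.
import numpy as np
import cv2

def update_trajectory(ref_pts, cur_pts, waypoints):
    """ref_pts, cur_pts: (N, 2) feature positions in the reference and
    current frames; waypoints: (M, 2) scan trajectory points defined on the
    reference frame. Returns the waypoints mapped into the current frame."""
    H, _ = cv2.findHomography(ref_pts, cur_pts, cv2.RANSAC, 3.0)
    pts = waypoints.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

    A full system would map the updated image-space trajectory back onto the recovered 3D surface before commanding the robotic arm.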

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    The development of image-guided interventional systems has grown rapidly in recent years. These new systems have become an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly, high-risk patients. The placement of the stented aortic valve prosthesis is crucial and is typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed. The assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area of valve implantation is automatically estimated from these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks and target area of implantation onto the fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in the fluoroscopic images to guide the surgeon in placing the prosthesis in the estimated target area of implantation. An interactive graphical user interface was developed for the surgeon to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room. The maximum displacement errors were small for both the dynamic overlay of the aortic mesh models and the tracking of the prosthesis, and lay within clinically accepted ranges. High success rates of the assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
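    As a rough illustration of the template-based tracking step, the sketch below matches a pigtail-catheter template within a search window of each fluoroscopic frame and returns the displacement to apply to the mesh overlay. The function and the search-window logic are assumptions for illustration, not the described system.

```python
# Hedged sketch: template matching of a catheter patch in a fluoroscopic
# frame, returning the overlay shift. Assumes the search window fits the frame.
import cv2

def track_catheter(frame, template, prev_xy, search=40):
    """frame, template: uint8 grayscale images; prev_xy: (x, y) of the
    template match in the previous frame. Returns the new (x, y) and the
    shift to apply to the overlaid aortic mesh model."""
    h, w = template.shape
    x0 = max(prev_xy[0] - search, 0)
    y0 = max(prev_xy[1] - search, 0)
    roi = frame[y0:y0 + h + 2 * search, x0:x0 + w + 2 * search]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    new_xy = (x0 + max_loc[0], y0 + max_loc[1])
    shift = (new_xy[0] - prev_xy[0], new_xy[1] - prev_xy[1])
    return new_xy, shift
```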

    Sensorless Motion Planning for Medical Needle Insertion in Deformable Tissues

    Minimally invasive medical procedures such as biopsies, anesthesia drug injections, and brachytherapy cancer treatments require inserting a needle to a specific target inside soft tissues. This is difficult because needle insertion displaces and deforms the surrounding soft tissues, causing the target to move during the procedure. To facilitate physician training and preoperative planning for these procedures, we develop a needle insertion motion planning system based on an interactive simulation of needle insertion in deformable tissues and numerical optimization to reduce placement error. We describe a 2-D physically based, dynamic simulation of needle insertion that uses a finite-element model of deformable soft tissues and models needle cutting and frictional forces along the needle shaft. The simulation offers guarantees on stability for mesh modifications and achieves interactive, real-time performance on a standard PC. Using texture mapping, the simulation provides visualization comparable to the ultrasound images the physician would see during the procedure. We use the simulation as a component of a sensorless planning algorithm that uses numerical optimization to compute needle insertion offsets that compensate for tissue deformations. We apply the method to radioactive seed implantation during permanent-seed prostate brachytherapy to minimize seed placement error.
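    A toy sketch of the sensorless-planning idea follows: numerical optimization searches for the insertion offset that minimizes the simulated placement error. The simulate_insertion callable stands in for the paper's FEM-based needle simulation and is an assumption here, as are the bounds.

```python
# Hedged sketch: 1-D offset planning by minimizing simulated placement error.
from scipy.optimize import minimize_scalar

def plan_offset(simulate_insertion, target_depth):
    """simulate_insertion(offset) -> achieved seed depth; we minimize the
    distance between the achieved depth and the intended target."""
    err = lambda offset: abs(simulate_insertion(offset) - target_depth)
    res = minimize_scalar(err, bounds=(-10.0, 10.0), method="bounded")
    return res.x  # offset (mm) compensating for tissue deformation

# Toy stand-in: deformation pushes the seed 2 mm deeper than commanded.
print(plan_offset(lambda off: off + 2.0, target_depth=0.0))  # approx. -2.0
```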

    Complementary Situational Awareness for an Intelligent Telerobotic Surgical Assistant System

    Robotic surgical systems have contributed greatly to the advancement of Minimally Invasive Surgeries (MIS). More specifically, telesurgical robots have provided enhanced dexterity to surgeons performing MIS procedures. However, current teleoperated robotic systems have only limited situational awareness of the patient anatomy and surgical environment that would typically be available to a surgeon in open surgery. Although the endoscopic view enhances the visualization of the anatomy, perceptual understanding of the environment and anatomy is still lacking due to the absence of sensory feedback. In this work, these limitations are addressed by developing a computational framework to provide Complementary Situational Awareness (CSA) in a surgical assistant. The framework aims to improve the human-robot relationship by providing elaborate guidance and sensory feedback capabilities for the surgeon in complex MIS procedures. Unlike traditional teleoperation, it enables the user to telemanipulate the situational model in a virtual environment and uses that information to command the slave robot with appropriate admittance gains and environmental constraints. Simultaneously, the situational model is updated based on the interaction of the slave robot with the task-space environment. Developing such a system to provide real-time situational awareness, however, requires that many technical challenges be met. Continuous palpation primitives are required to estimate intraoperative organ information, and intraoperative surface information must be estimated in real time while the organ is being palpated or scanned. The model of the task environment needs to be updated in near real time using the estimated organ geometry, so that the force feedback applied to the surgeon's hand corresponds to the actual location of the model. This work presents a real-time framework that meets these requirements and provides situational awareness of the environment in the task space. Visual feedback is also provided so that the surgeon or developer can view near-video-frame-rate updates of the task model. All these functions are executed in parallel and require synchronized data exchange. The system is highly portable and can be incorporated into any existing telerobotic platform with minimal overhead.
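    A minimal sketch of the admittance-with-constraints idea, under stated assumptions: the commanded slave velocity is the measured interaction force scaled by an admittance gain, with the component along the estimated surface normal removed so the tool respects the environmental constraint. Gains and names are illustrative, not taken from the thesis.

```python
# Hedged sketch: one admittance-control step constrained to the estimated
# organ surface of the situational model.
import numpy as np

def admittance_step(x, f_meas, surface_normal, gain=0.002, dt=0.001):
    """x: tool position (3,); f_meas: measured interaction force (3,) in N;
    surface_normal: normal of the situational model at x."""
    v = gain * f_meas                        # admittance: force -> velocity
    n = surface_normal / np.linalg.norm(surface_normal)
    v -= np.dot(v, n) * n                    # constrain motion to the surface
    return x + v * dt                        # next commanded position
```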

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for assessing the accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps of the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and visualization. We provide a detailed analysis of the measures of accuracy, robustness, reliability, and usability that need to be met during the evaluation of a spinal IGS framework. Although this review focuses on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.
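    As one example of the accuracy measures such evaluations report, the sketch below computes target registration error (TRE) for landmarks mapped by an estimated rigid CT-to-iUS transform. The inputs are assumed for illustration; the review itself prescribes no specific code.

```python
# Hedged sketch: per-landmark and mean target registration error (TRE)
# after applying an estimated rigid transform (R, t) to CT landmarks.
import numpy as np

def target_registration_error(R, t, ct_landmarks, ius_landmarks):
    """R: (3, 3) rotation, t: (3,) translation mapping CT to iUS space;
    landmark arrays: (N, 3). Returns per-landmark TRE and its mean,
    in the same units as the inputs."""
    mapped = ct_landmarks @ R.T + t
    tre = np.linalg.norm(mapped - ius_landmarks, axis=1)
    return tre, tre.mean()
```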

    Image Guidance in Telemanipulator Assisted Urology Surgery

    This thesis outlines the development of an image-guided surgery system, intended for use in da Vinci assisted radical prostatectomy but more generally applicable to laparoscopic urology surgery. We defined the key performance parameter of the system as the accuracy of overlaying modelled anatomy onto the surgical scene. This thesis is primarily concerned with determining the system accuracy based on an analysis of the system's components. A common error measure was defined for all system components: an on-screen error (measured in pixels) based on the error in projecting a single point lying near the apex of the prostate, with the endoscope in a typical surgical pose. In this case the projected point was approximately 200 mm from the endoscope lens. An intraoperative coordinate system is first defined as the coordinate system of an optical tracking system used to track the endoscope. The MRI image of the patient is transformed into the intraoperative coordinate system. Prior to surgery the endoscope is calibrated, and during surgery the endoscope is tracked, defining a transform from the coordinates of the optical tracking system to the endoscope screen. This transform is used to project the MRI image onto the endoscope video display. The early part of the thesis describes a novel algorithm for registering MRI to ultrasound images of the bone, which was used to put the MRI image into the intraoperative coordinate system. Using this algorithm avoids the need for fiducial markers. The table below shows the errors (as on-screen pixel RMS) due to using this algorithm; an approximate value, as RMS distance error at the prostate apex point, is also included.
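    A simplified sketch of such an on-screen error measure, assuming a pinhole camera model: project a point approximately 200 mm from the endoscope lens with the nominal and a perturbed tracking transform, and report the pixel displacement. The intrinsics and the perturbation are illustrative values, not the thesis's calibration.

```python
# Hedged sketch: on-screen projection error (pixels) for a point ~200 mm
# from the lens, under a small tracking-translation perturbation.
import numpy as np

def project(K, T, p_world):
    """K: (3, 3) intrinsics; T: (4, 4) world-to-camera; p_world: (3,)."""
    p_cam = (T @ np.append(p_world, 1.0))[:3]
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
T = np.eye(4)
p = np.array([0.0, 0.0, 200.0])                      # point 200 mm from lens
T_err = T.copy(); T_err[:3, 3] += [0.5, 0.5, 0.0]    # 0.5 mm tracking error
print(np.linalg.norm(project(K, T, p) - project(K, T_err, p)))  # pixels
```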

    Investigating Ultrasound-Guided Autonomous Assistance during Robotic Minimally Invasive Surgery

    Despite it being over twenty years since the first introduction of robotic surgical systems into common surgical practice, they are still far from widespread across healthcare systems, surgical disciplines and procedures. At the same time, the systems in use act as mere tele-manipulators with motion scaling and have yet to make use of the immense potential of their sensory data for providing autonomous assistance during surgery or performing tasks themselves in a semi-autonomous fashion. Equally, the potential of intracorporeal imaging, particularly ultrasound (US), for improved tumour localisation during surgery remains largely unused. Aside from cost factors, this has to do with the necessity of adequate training for scan interpretation and the difficulty of handling a US probe near the surgical site. Additionally, the potential for automation being explored in extracorporeal US using serial manipulators has not yet translated into ultrasound-enabled autonomous assistance in a surgical robotic setting. Motivated by this research gap, this work explores means to enable autonomous intracorporeal ultrasound in a surgical robotic setting. Based around the da Vinci Research Kit (dVRK), it first develops a surgical robotics platform that allows precise evaluation of the robot's performance using infrared (IR) tracking technology. Building on this initial work, it then explores the possibility of providing autonomous ultrasound guidance during surgery. To this end, it develops and assesses means to improve kinematic accuracy despite manipulator backlash, as well as to achieve adequate probe positioning with respect to the tissue surface and anatomy. Founded on the acquired anatomical information, this thesis explores the integration of a second robotic arm and its use for autonomous assistance. Starting with an autonomously acquired tumour scan, the setup is extended and methods are devised to enable the autonomous marking of margined tumour boundaries on the tissue surface, both in a phantom and in an ex-vivo experiment on porcine liver. Moving towards increased autonomy, a novel minimally invasive High Intensity Focused Ultrasound (HIFUS) transducer is integrated into the robotic setup, including a sensorised, water-filled membrane for sensing interaction forces with the tissue surface. For this purpose an extensive material characterisation is carried out, exploring different surface material pairings. Finally, the proposed system, including trajectory planning and a hybrid force-position control scheme, is evaluated in a benchtop ultrasound phantom trial.
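    A minimal hybrid force/position control sketch, under stated assumptions: position is servoed in the plane tangent to the tissue surface while contact force is regulated along the surface normal, as one might when pressing a transducer against tissue. The gains and names are illustrative, not the thesis's controller.

```python
# Hedged sketch: one hybrid force/position step in a decoupled task frame.
import numpy as np

def hybrid_step(x, x_des, f_meas, f_des, normal,
                kp=1.0, kf=0.0005, dt=0.001):
    """x, x_des: tool and desired positions (3,); f_meas, f_des: measured
    and desired normal contact forces (N); normal: surface normal at x."""
    n = normal / np.linalg.norm(normal)
    e_pos = x_des - x
    v_pos = kp * (e_pos - np.dot(e_pos, n) * n)   # position: tangent plane
    v_force = kf * (f_des - f_meas) * n           # force: along the normal
    return x + (v_pos + v_force) * dt             # next commanded position
```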

    Mixed-reality visualization environments to facilitate ultrasound-guided vascular access

    Ultrasound-guided needle insertions at the site of the internal jugular vein (IJV) are routinely performed to access the central venous system. Even so, ultrasound-guided insertions maintain high rates of carotid artery puncture, as clinicians rely on 2D information to perform a 3D procedure. The limitations of 2D ultrasound guidance motivated the research question: “Do 3D ultrasound-based environments improve IJV needle insertion accuracy?” We addressed this by developing advanced surgical navigation systems based on tracked surgical tools and ultrasound with various visualizations. Point-to-line ultrasound calibration enables the use of tracked ultrasound. We automated the fiducial localization required for this calibration method such that fiducials can be automatically localized within 0.25 mm of the manual equivalent. The point-to-line calibration obtained with both manual and automatic localizations produced average normalized distance errors of less than 1.5 mm from point targets. Another calibration method was developed that registers an optical tracking system and the VIVE Pro head-mounted display (HMD) tracking system with sub-millimetre and sub-degree accuracy compared to ground-truth values. This co-calibration enabled the development of an HMD needle navigation system, in which the calibrated ultrasound image and tracked models of the needle, needle trajectory, and probe were visualized in the HMD. In a phantom experiment, 31 clinicians had a 96% success rate using the HMD system compared to 70% for the ultrasound-only approach (p = 0.018). We developed a machine-learning-based vascular reconstruction pipeline that automatically returns accurate 3D reconstructions of the carotid artery and IJV given sequential tracked ultrasound images. This reconstruction pipeline was used to develop a surgical navigation system in which tracked models of the needle, needle trajectory, and the 3D z-buffered vasculature from a phantom were visualized in a common coordinate system on a screen. This system improved insertion accuracy and resulted in a 100% success rate compared to 70% under ultrasound guidance (p = 0.041) across 20 clinicians in the phantom experiment. Overall, accurate calibrations and machine learning algorithms enable the development of advanced 3D ultrasound systems for needle navigation, both in an immersive first-person perspective and on a screen, illustrating that 3D US environments outperform the 2D ultrasound guidance used clinically.
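    To illustrate the geometry behind a point-to-line ultrasound calibration, the sketch below computes the residual for one fiducial: the perpendicular distance between the fiducial mapped into tracker space and the 3D line (for example, a taut wire) it should lie on. The solver that minimizes these residuals over the calibration transform is omitted, and all names are assumptions.

```python
# Hedged sketch: point-to-line residual for one ultrasound fiducial.
import numpy as np

def point_to_line_residual(T_cal, p_img, line_origin, line_dir):
    """T_cal: (4, 4) image-to-tracker transform; p_img: (3,) fiducial in
    image coordinates (z = 0 plane, scaled to mm); the target line is
    parameterized as line_origin + s * line_dir."""
    p = (T_cal @ np.append(p_img, 1.0))[:3]
    d = line_dir / np.linalg.norm(line_dir)
    v = p - line_origin
    return np.linalg.norm(v - np.dot(v, d) * d)   # perpendicular distance
```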

    Image-guided and adaptive radiation therapy with 3D ultrasound imaging
