11 research outputs found

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 202
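    The trajectory update described in this abstract can be illustrated with a small sketch. This is not the authors' implementation; it assumes, for simplicity, a locally planar tissue patch whose frame-to-frame motion is captured by a 3x3 homography H estimated from tracked features:

```python
import numpy as np

def update_trajectory(trajectory_ref, H):
    """Map scan waypoints defined in the reference frame into the
    current frame with a 3x3 homography H (projective geometry).

    trajectory_ref: (N, 2) array of pixel coordinates.
    """
    pts = np.hstack([trajectory_ref, np.ones((len(trajectory_ref), 1))])
    warped = pts @ H.T                      # homogeneous mapping
    return warped[:, :2] / warped[:, 2:3]   # perspective divide
```

    Re-running this mapping every frame with a freshly estimated H keeps the waypoints attached to the moving tissue without any motion model learned in advance.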

    Robotic Platforms for Ultrasound Diagnostics and Treatment

    Medical imaging introduced the greatest paradigm change in the history of modern medicine, and ultrasound (US) in particular is becoming the most widespread imaging modality. The integration of digital imaging into the surgical domain opens new frontiers in diagnostics and intervention, and the combination with robotics leads to improved accuracy and targeting capabilities. This paper reviews the state of the art in US-based robotic platforms, identifying the main research and clinical trends and reviewing current capabilities and limitations. The focus of the study includes non-autonomous US-based systems, US-based automated robotic navigation systems, and US-guided autonomous tools. These areas outline future development, projecting a swarm of new applications in the computer-assisted surgical domain.

    Towards Autonomous Robotic Minimally Invasive Ultrasound Scanning and Vessel Reconstruction on Non-Planar Surfaces

    Autonomous robotic ultrasound (US) scanning has been the subject of research for more than two decades. However, little work has been done to apply this concept in a minimally invasive setting, in which accurate force sensing is generally not available and robot kinematics are unreliable due to the tendon-driven, compliant robot structure. As a result, the adequate orientation of the probe towards the tissue surface remains unknown, and the anatomy reconstructed from the scan may become highly inaccurate. In this work we present solutions to both of these challenges: an attitude sensor fusion scheme for improved kinematic sensing, and a visual, deep-learning-based algorithm to establish and maintain contact between the organ surface and the US probe. We further introduce a novel scheme to estimate the centre line of a vascular structure and orient the probe perpendicular to it. Our approach enables, for the first time, autonomous scanning across a non-planar surface and navigation along an anatomical structure with a robotically guided minimally invasive US probe. Our experiments on a vessel phantom with a convex surface confirm a significant improvement of the reconstructed curved vessel geometry, with our approach strongly reducing the mean positional error and variance. In the future, our approach could help identify vascular structures more effectively, pave the way towards semi-autonomous assistance during partial hepatectomy, and potentially reduce procedure length and complication rates.
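    The attitude sensor fusion idea can be sketched with a textbook complementary filter. This is an illustrative stand-in, not the paper's actual scheme, and the gain alpha is a made-up parameter:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: trust the integrated gyro rate at high
    frequencies and the accelerometer-derived angle at low
    frequencies, compensating slow gyro drift."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

    Fusion of this kind is attractive in a tendon-driven robot precisely because it does not rely on joint encoders, which are corrupted by backlash and cable compliance.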

    Wearable AR and 3D Ultrasound: Towards a Novel Way to Guide Surgical Dissections

    Nowadays, ultrasound (US) is increasingly chosen as the imaging modality for both diagnostic and interventional applications, owing to its safety, low footprint, and low cost. The combination of this imaging modality with wearable augmented reality (AR) systems, such as head-mounted displays (HMDs), comes forward as a breakthrough technological solution, as it allows hands-free interaction with the augmented scene, an essential requirement for the execution of high-precision manual tasks, such as in surgery. In this study we propose the integration of an AR navigation system (HMD plus dedicated platform) with a 3D US imaging system to guide a dissection task that requires maintaining safety margins with respect to unexposed anatomical or pathological structures. For this purpose, a standard scalpel was sensorised to provide real-time feedback on the position of the instrument during the execution of the task. The accuracy of the system was quantitatively assessed in two experimental studies: a targeting experiment, which revealed a median error of 2.53 mm in estimating the scalpel-to-target distance, and a preliminary user study simulating a dissection task that requires reaching a predefined distance from an occult lesion. The results of the second experiment showed that the system can guide a dissection task with a mean accuracy of 0.65 mm and a mean angular error between the ideal and actual cutting planes of 2.07°. The results encourage further studies to fully exploit the potential of wearable AR and intraoperative US imaging to accurately guide deep surgical tasks, such as the excision of non-palpable breast tumors while ensuring optimal margin clearance.
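    The two accuracy metrics reported here, scalpel-to-target distance and the angle between the ideal and actual cutting planes, are standard geometric quantities. A minimal sketch, with hypothetical point and normal inputs rather than data from the study:

```python
import numpy as np

def tip_to_target_distance(tip, target):
    """Euclidean distance between the tracked scalpel tip and the target."""
    return float(np.linalg.norm(np.asarray(tip, float) - np.asarray(target, float)))

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes given their normals.
    abs() makes the result independent of normal orientation."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```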

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases of the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is shown on a display spatially separate from the one that shows the laparoscopic video. Reasoning about the geometry of hidden targets therefore requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with a camera-centric coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy with which target locations in space are perceived.
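    At its core, bringing US pixels into a camera-centric frame amounts to scaling pixels by the image spacing and composing calibrated rigid transforms. A simplified sketch; the transform T_cam_image and the spacings are placeholder assumptions, not values or notation from the thesis:

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    ph = np.append(np.asarray(p, float), 1.0)
    return (T @ ph)[:3]

def us_pixel_to_camera(u, v, sx, sy, T_cam_image):
    """Scale a US pixel (u, v) by the image spacing (sx, sy, e.g. mm/px),
    place it on the image plane (z = 0), and map it into the
    camera-centric frame through the calibrated transform."""
    p_img = np.array([u * sx, v * sy, 0.0])
    return transform_point(T_cam_image, p_img)
```

    In practice T_cam_image is itself a chain of calibrated transforms (probe tracking and US calibration), and its accuracy bounds are exactly what the thesis investigates.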

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots. Six levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All commercially available platforms for robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could introduce multiple benefits, such as decreasing surgeons' workload and fatigue and pursuing a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample and intelligent information from the system will enhance the surgical outcome and reflect positively on both patients and society. Three main aspects are required to introduce automation into surgery: the surgical robot must move with high precision, have motion-planning capabilities and understand the surgical scene. Besides these main factors, depending on the type of surgery, other aspects might play a fundamental role, such as compliance and stiffness. This thesis addresses three technological challenges encountered when trying to achieve the aforementioned goals, in the specific case of robot-object interaction. First, how to overcome the inaccuracy of cable-driven systems when executing fine and precise movements. Second, how to plan different tasks in dynamically changing environments. Lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task.
To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: learning from demonstration to pick and place a surgical object, and a gradient-based approach to trigger a smoother object-repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis focuses on developing a simulation environment in which multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments showed that automation of the pick-and-place task for different surgical objects is possible. The robot successfully picked up a suturing needle autonomously, positioned a surgical device for intraoperative ultrasound scanning, and manipulated soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the resulting algorithms to generalise over different environmental conditions and different patients.

    Investigating Ultrasound-Guided Autonomous Assistance during Robotic Minimally Invasive Surgery

    Despite it being over twenty years since robotic surgical systems were first introduced into common surgical practice, they are still far from widespread across healthcare systems, surgical disciplines and procedures. At the same time, the systems in use act as mere tele-manipulators with motion scaling and have yet to exploit the immense potential of their sensory data to provide autonomous assistance during surgery or to perform tasks in a semi-autonomous fashion. Equally, the potential of intracorporeal imaging, particularly ultrasound (US), for improved tumour localisation during surgery remains largely unused. Aside from cost factors, this has to do with the necessity of adequate training for scan interpretation and the difficulty of handling a US probe near the surgical site. Additionally, the potential for automation being explored in extracorporeal US with serial manipulators does not yet translate into ultrasound-enabled autonomous assistance in a surgical robotic setting. Motivated by this research gap, this work explores means to enable autonomous intracorporeal ultrasound in a surgical robotic setting. Built around the da Vinci Research Kit (dVRK), it first develops a surgical robotics platform that allows precise evaluation of the robot's performance using infrared (IR) tracking technology. Building on this initial work, it then explores the possibility of providing autonomous ultrasound guidance during surgery. To this end, it develops and assesses means to improve kinematic accuracy despite manipulator backlash, as well as to achieve adequate probe positioning with respect to the tissue surface and anatomy. Founded on the acquired anatomical information, this thesis explores the integration of a second robotic arm and its use for autonomous assistance.
Starting with an autonomously acquired tumour scan, the setup is extended and methods are devised to enable the autonomous marking of margined tumour boundaries on the tissue surface, both in a phantom and in an ex-vivo experiment on porcine liver. Moving towards increased autonomy, a novel minimally invasive High Intensity Focused Ultrasound (HIFUS) transducer is integrated into the robotic setup, including a sensorised, water-filled membrane for sensing interaction forces with the tissue surface. For this purpose, an extensive material characterisation is carried out, exploring different surface material pairings. Finally, the proposed system, including trajectory planning and a hybrid force-position control scheme, is evaluated in a benchtop ultrasound phantom trial.
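    A hybrid force-position scheme of the kind mentioned can be sketched as a single proportional control step; the decomposition and the gains below are illustrative assumptions, not the thesis's controller:

```python
import numpy as np

def hybrid_force_position_step(x, x_des, f_meas, f_des, n, kp=0.5, kf=0.001):
    """One control step: the position error is served only in the
    tangent plane of the tissue surface, while motion along the surface
    normal n is commanded from the force error (pressing further in
    when the measured contact force is below the desired one)."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    e_pos = np.asarray(x_des, float) - np.asarray(x, float)
    e_tan = e_pos - np.dot(e_pos, n) * n          # tangential position error
    dx = kp * e_tan + kf * (f_des - f_meas) * n   # force error drives normal motion
    return np.asarray(x, float) + dx
```

    Splitting the task space this way lets the probe track a scan path laterally while regulating contact force against the membrane independently.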

    Visual Tracking of Instruments in Minimally Invasive Surgery

    Reducing access trauma has been a focal point for modern surgery, and tackling the challenges that arise from new operating techniques and instruments is an exciting and open area of research. The lack of awareness and control that comes with indirect manipulation and visualization has created a need to augment the surgeon's understanding and perception of how their instruments interact with the patient's anatomy, but current methods of achieving this are inaccurate and difficult to integrate into the surgical workflow. Visual methods have the potential to recover the position and orientation of the instruments directly in the reference frame of the observing camera, without the need to introduce additional hardware into the operating room or perform complex calibration steps. This thesis explores how this problem can be solved by fusing coarse region features with fine-scale point features to recover both the rigid and articulated degrees of freedom of laparoscopic and robotic instruments, using only images provided by the surgical camera. Extensive experiments on different image features are used to determine suitable representations for reliable and robust pose estimation. Using this information, a novel framework is presented which estimates 3D pose with a region-matching scheme while using frame-to-frame optical flow to account for challenges due to symmetry in the instrument design. The kinematic structure of articulated robotic instruments is also used to track the movement of the head and claspers. The robustness of this method was evaluated on calibrated ex-vivo images and in-vivo sequences, and comparative studies were performed with state-of-the-art kinematics-assisted tracking methods.
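    Pose updates from tracked point features, of the kind combined here with region matching, commonly reduce to a least-squares rigid alignment of point sets. A generic 2D Kabsch sketch as a building block, not the thesis's full 3D articulated tracker:

```python
import numpy as np

def rigid_pose_2d(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    estimated from matched feature points (2D Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

    Chaining such estimates frame-to-frame (with the correspondences supplied by optical flow) gives an incremental pose track that region features can then anchor against drift.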