
    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care from sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    sCAM: An Untethered Insertable Laparoscopic Surgical Camera Robot

    Fully insertable robotic imaging devices represent a promising future of minimally invasive laparoscopic vision. Emerging research efforts in this field have resulted in several proof-of-concept prototypes. One common drawback of these designs derives from their clumsy tethering wires, which not only cause operational interference but also reduce camera mobility. Meanwhile, these insertable laparoscopic cameras are manipulated without any pose information or haptic feedback, which results in open-loop motion control and raises concerns about surgical safety caused by inappropriate use of force. This dissertation proposes, implements, and validates an untethered insertable laparoscopic surgical camera (sCAM) robot. Contributions presented in this work include: (1) feasibility of an untethered fully insertable laparoscopic surgical camera, (2) camera-tissue interaction characterization and force sensing, (3) pose estimation, visualization, and feedback with the sCAM, and (4) robotic-assisted closed-loop laparoscopic camera control. Borrowing the principle of spherical motors, camera anchoring and actuation are achieved through transabdominal magnetic coupling in a stator-rotor manner. To avoid the tethering wires, laparoscopic vision and control communication are realized with dedicated wireless links based on onboard power. A non-invasive indirect approach is proposed to provide real-time camera-tissue interaction force measurement, which, assisted by camera-tissue interaction modeling, predicts stress distribution over the tissue surface. Meanwhile, the camera pose is remotely estimated and visualized using complementary filtering based on onboard motion sensing. Facilitated by the force measurement and pose estimation, robotic-assisted closed-loop control has been realized in a double-loop control scheme with shared autonomy between surgeons and the robotic controller. The sCAM has brought robotic laparoscopic imaging one step further toward less invasiveness and more dexterity. Initial ex vivo test results have verified functions of the implemented sCAM design and the proposed force measurement and pose estimation approaches, demonstrating the technical feasibility of a tetherless insertable laparoscopic camera. Robotic-assisted control has shown its potential to free surgeons from low-level intricate camera manipulation workload and improve precision and intuitiveness in laparoscopic imaging.
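    The abstract above mentions estimating the camera pose with complementary filtering over onboard motion sensing. The following Python snippet is a minimal single-axis sketch of that general technique only; the function name, the fixed blend factor, and the gyro/accelerometer inputs are illustrative assumptions, not details taken from the sCAM work.

```python
import numpy as np

def complementary_filter_step(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """One update of a single-axis complementary filter (illustrative sketch).

    Fuses a gyroscope rate (fast but drifting) with an accelerometer tilt
    estimate (noisy but drift-free). The single-axis simplification and the
    parameter values are assumptions for illustration.
    """
    # Propagate the previous estimate by integrating the gyro rate
    pitch_gyro = pitch_prev + gyro_rate * dt
    # Recover a gravity-referenced pitch angle from the accelerometer
    ax, ay, az = accel
    pitch_accel = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    # Blend: trust the gyro at high frequency, the accelerometer at low frequency
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
```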

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered and a feature-based method is proposed to estimate the motion of the tissue in real-time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for Ultrasound tissue scanning. Since the framework does not rely on information from the Ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 202
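    As a rough illustration of the projective-geometry step described above (re-projecting a trajectory defined on a reference frame onto the moving tissue), here is a minimal Python/OpenCV sketch. It assumes matched 2D feature points and a planar homography, a simplification of the paper's full 3D scene reconstruction, and every name in it is hypothetical.

```python
import cv2
import numpy as np

def update_scan_trajectory(ref_points, cur_points, ref_traj):
    """Warp a scanning trajectory from a reference frame to the current frame.

    ref_points, cur_points: Nx2 arrays of matched tissue feature locations.
    ref_traj: Mx2 array of trajectory waypoints defined on the reference frame.
    Assumes the tracked surface patch is roughly planar (illustrative only).
    """
    # Fit a homography to the tracked features, rejecting outliers with RANSAC
    H, _ = cv2.findHomography(ref_points.astype(np.float32),
                              cur_points.astype(np.float32),
                              cv2.RANSAC, 3.0)
    # Re-project every waypoint of the desired trajectory onto the current frame
    warped = cv2.perspectiveTransform(
        ref_traj.reshape(-1, 1, 2).astype(np.float32), H)
    return warped.reshape(-1, 2)
```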

    Skill-based human-robot cooperation in tele-operated path tracking

    This work proposes a shared-control tele-operation framework that adapts its cooperative properties to the estimated skill level of the operator. It is hypothesized that different aspects of an operator's performance in executing a tele-operated path tracking task can be assessed through conventional machine learning methods using motion-based and task-related features. To identify performance measures that capture motor skills linked to the studied task, an experiment is conducted where users new to tele-operation practice towards motor skill proficiency over 7 training sessions. A set of classifiers is then learned from the acquired data and selected features, which can generate a skill profile that comprises estimations of the user's various competences. Skill profiles are exploited to modify the behavior of the assistive robotic system accordingly, with the objective of enhancing user experience by preventing unnecessary restriction of skilled users. A second experiment is implemented in which novice and expert users execute the path tracking on different pathways while being assisted by the robot according to their estimated skill profiles. Results validate the skill estimation method and hint at the feasibility of shared-control customization in tele-operated path tracking.
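    As a sketch of how a skill profile might be learned from motion-based and task-related features, the snippet below fits a generic scikit-learn classifier to placeholder data. The feature set, labels, and choice of a random forest are illustrative assumptions, not the classifiers or features used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per trial, columns such as mean path
# deviation, movement smoothness, and completion time; labels are coarse
# skill levels (0 = novice, 1 = proficient). All values are placeholders.
rng = np.random.default_rng(0)
X = rng.random((70, 5))               # e.g. 7 sessions x 10 users, 5 features
y = rng.integers(0, 2, size=70)       # placeholder skill labels

# Fit a generic classifier and report cross-validated accuracy
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```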

    Artificial intelligence surgery: how do we get to autonomous actions in surgery?

    Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and the potential of autonomous actions in procedures such as interventional radiology, endoscopy and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly only examples of tele-manipulation, for surgeons to get to more autonomously functioning robots, haptics is perhaps not the most important aspect. If the goal is for robots to ultimately become more and more independent, perhaps research should not focus on the concept of haptics as it is perceived by humans; the focus should instead be on haptics as it is perceived by robots/computers. This article discusses aspects of ML, DL, CV and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and advances that will enable us to get to more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious and timely manner.

    Robot-Assisted Minimally Invasive Surgery-Surgical Robotics in the Data Age

    Telesurgical robotics, as a technical solution for robot-assisted minimally invasive surgery (RAMIS), has become the first domain within medicosurgical robotics to achieve true global clinical adoption. Its relative success (still at a low single-digit percentage of total market penetration) is rooted in its particular human-in-the-loop control, in which the trained surgeon is always kept responsible for the clinical outcome achieved by the robot-actuated invasive tools. Nowadays, this paradigm is challenged by the need for improved surgical performance, traceability, and safety reaching beyond human capabilities. Partially due to the technical complexity and the financial burden, the adoption of telesurgical robotics is still far from reaching its full potential. Apart from the absolutely market-dominating da Vinci surgical system, there are already 60+ emerging RAMIS robot types, of which 15 have already achieved some form of regulatory clearance. This article aims to connect technological advancement with the principles of commercialization, particularly looking at engineering components that are under development and have the potential to bring significant advantages to clinical practice. Current RAMIS robots often do not exceed the functionalities deriving from their mechatronics, due to the lack of data-driven assistance and smart human–machine collaboration. Computer assistance is gradually gaining more significance within emerging RAMIS systems. Enhanced manipulation capabilities, refined sensors, advanced vision, task-level automation, smart safety features, and data integration together mark the inception of a new era in telesurgical robotics, infiltrated by machine learning (ML) and artificial intelligence (AI) solutions. Observing other domains, it is evident that a key requirement of robust AI is good-quality data, derived from proper data acquisition and sharing, which allows ML-based solutions to be built in real time. Emerging RAMIS technologies are reviewed from both a historical and a future perspective.

    Portable dVRK: an augmented V-REP simulator of the da Vinci Research Kit

    The da Vinci Research Kit (dVRK) is a first-generation da Vinci robot repurposed as a research platform and coupled with software and controllers developed by research users. A fairly wide community already shares the dVRK (32 systems in 28 sites worldwide). Access to the robotic system for training surgeons and for developing new surgical procedures, tools and control modalities is still difficult due to limited availability and high maintenance costs. The development of simulation tools provides a low-cost, easy and safe alternative to the use of the real platform for preliminary research and training activities. The Portable dVRK, described in this work, is based on a V-REP simulator of the dVRK patient-side and endoscopic camera manipulators, which are controlled through two haptic interfaces and a 3D viewer, respectively. The V-REP simulator is augmented with a physics engine that allows rendering the interaction of newly developed tools with soft objects. Full integration in the ROS control architecture makes the simulator flexible and easy to interface with other devices. Several scenes have been implemented to illustrate the performance and potential of the developed simulator.
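    For readers unfamiliar with the ROS side of such a setup, the snippet below sketches how a node might stream joint commands to a simulated patient-side manipulator. The topic name, message type, and joint ordering follow a common dVRK ROS convention but are assumptions here, and should be checked against the Portable dVRK's actual interface.

```python
#!/usr/bin/env python
# Illustrative sketch only: topic name and joint layout are assumptions,
# not taken from the Portable dVRK documentation.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('psm_joint_commander')
pub = rospy.Publisher('/dvrk/PSM1/set_position_joint', JointState, queue_size=1)

rate = rospy.Rate(50)  # 50 Hz command stream
cmd = JointState()
# Assumed 6-joint PSM layout: yaw, pitch, insertion (metres), roll, wrist pitch, wrist yaw
cmd.position = [0.0, 0.0, 0.12, 0.0, 0.0, 0.0]

while not rospy.is_shutdown():
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()
```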

    Extreme Telesurgery
