Soft Tissue Simulation Environment to Learn Manipulation Tasks in Autonomous Robotic Surgery
Reinforcement Learning (RL) methods have demonstrated promising results for the automation of subtasks in surgical robotic systems. Since many trial-and-error attempts are required to learn the optimal control policy, RL agent training can be performed in simulation, and the learned behavior can then be deployed in real environments. In this work, we introduce an open-source simulation environment providing support for position-based dynamics soft-body simulation and state-of-the-art RL methods. We demonstrate the capabilities of the proposed framework by training an RL agent based on Proximal Policy Optimization on fat tissue manipulation for tumor exposure during a nephrectomy procedure. Leveraging a preliminary optimization of the simulation parameters, we show that our agent is able to learn the task on a virtual replica of the anatomical environment. The learned behavior is robust to changes in the initial end-effector position. Furthermore, we show that the learned policy can be directly deployed on the da Vinci Research Kit, which is able to execute the trajectories generated by the RL agent. The proposed simulation environment represents an essential component for the development of next-generation robotic systems in which interaction with the deformable anatomical environment is involved.
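The Proximal Policy Optimization objective at the core of the agent above can be illustrated with its clipped surrogate loss. A minimal NumPy sketch (the clipping coefficient `eps=0.2` is the common default, not a value taken from the paper):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Mean clipped surrogate objective of PPO.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: advantage estimate for each sampled action
    """
    unclipped = ratio * advantage
    # Clipping removes the incentive to move the policy too far
    # from the one that collected the data.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()
```

In practice this objective is maximized by gradient ascent on the policy parameters; the sketch only shows the scalar loss computation.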
UnityFlexML: Training Reinforcement Learning Agents in a Simulated Surgical Environment
Sim-to-real Deep Reinforcement Learning (DRL) has shown promise in subtask automation for surgical robotic systems, since it allows all the trial-and-error attempts needed to learn the optimal control policy to be performed safely. However, a realistic simulation environment is essential to guarantee direct transfer of the learned policy from the simulated to the real system. In this work, we introduce UnityFlexML, an open-source framework providing support for soft-body simulation and state-of-the-art DRL methods. We demonstrate that a DRL agent can be successfully trained within UnityFlexML to manipulate deformable fat tissue for tumor exposure during a nephrectomy procedure. Furthermore, we show that the learned policy can be directly deployed on the da Vinci Research Kit, which is able to execute the trajectories generated by the DRL agent. The proposed framework represents an essential component for the development of autonomous robotic systems in which interaction with the deformable anatomical environment is involved.
Deep Reinforcement Learning in Surgical Robotics: Enhancing the Automation Level
Surgical robotics is a rapidly evolving field that is transforming the
landscape of surgeries. Surgical robots have been shown to enhance precision,
minimize invasiveness, and alleviate surgeon fatigue. One promising area of
research in surgical robotics is the use of reinforcement learning to enhance
the automation level. Reinforcement learning is a type of machine learning that
involves training an agent to make decisions based on rewards and punishments.
This literature review aims to comprehensively analyze existing research on
reinforcement learning in surgical robotics. The review identified various
applications of reinforcement learning in surgical robotics, including
pre-operative, intra-body, and percutaneous procedures; surveyed representative
studies; and compared their methodologies and results. The findings show that
reinforcement learning has great potential to improve the autonomy of surgical
robots: it can teach robots to perform complex surgical tasks, such as suturing
and tissue manipulation, and can also improve their accuracy and precision,
making them more effective at performing surgeries.
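The reward-and-punishment learning loop described in this review can be made concrete with the classic tabular Q-learning update, a generic textbook sketch rather than an algorithm from any of the reviewed papers:

```python
import numpy as np

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on the value table q[state, action]."""
    # Move the estimate toward reward + discounted best next value;
    # positive rewards reinforce the action, negative ones punish it.
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])
    return q
```

Surgical applications replace the table with deep networks, but the same temporal-difference principle underlies the DRL methods surveyed here.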
Surgical Gym: A high-performance GPU-based platform for reinforcement learning with surgical robots
Recent advances in robot-assisted surgery have resulted in progressively more
precise, efficient, and minimally invasive procedures, sparking a new era of
robotic surgical intervention. This enables doctors, in collaborative
interaction with robots, to perform traditional or minimally invasive surgeries
with improved outcomes through smaller incisions. Recent efforts are working
toward making robotic surgery more autonomous, which has the potential to
reduce the variability of surgical outcomes and lower complication rates. Deep
reinforcement learning methodologies offer scalable solutions for surgical
automation, but their effectiveness relies on extensive data acquisition due to
the absence of prior knowledge about how to accomplish tasks successfully.
Because simulated data collection is computationally intensive, previous works
have focused on making existing algorithms more efficient. In this work, we
focus instead on making the simulator more efficient, making training data far
more accessible than previously possible. We introduce Surgical Gym, an
open-source, high-performance platform for surgical robot learning where both
the physics simulation and reinforcement learning occur directly on the GPU. We
demonstrate training times between 100x and 5000x faster than those of previous
surgical learning platforms. The code is available at
https://github.com/SamuelSchmidgall/SurgicalGym
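The speedups above come largely from stepping thousands of environments as one batched tensor operation instead of looping over them. A toy CPU/NumPy analogue of that vectorized stepping pattern (illustrative only, not the Surgical Gym API):

```python
import numpy as np

class VectorizedPointEnv:
    """Toy batched environment: N point agents move toward a goal.

    All N environments are advanced with a single array operation,
    mirroring how GPU-resident simulators batch physics and rollouts.
    """

    def __init__(self, num_envs, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.pos = rng.uniform(-1.0, 1.0, size=(num_envs, dim))
        self.goal = np.zeros((num_envs, dim))

    def step(self, actions):
        # One vectorized update for every environment at once.
        self.pos = self.pos + actions
        dist = np.linalg.norm(self.pos - self.goal, axis=1)
        reward = -dist            # shaped reward: closer is better
        done = dist < 0.05
        return self.pos, reward, done
```

On a GPU the same structure is expressed with device-resident tensors, so no per-environment Python loop or host-device transfer is needed during training.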
Autonomous Soft Tissue Retraction Using Demonstration-Guided Reinforcement Learning
In the context of surgery, robots can provide substantial assistance by
performing small, repetitive tasks such as suturing, needle exchange, and
tissue retraction, thereby enabling surgeons to concentrate on more complex
aspects of the procedure. However, existing surgical task learning mainly
pertains to rigid body interactions, whereas the advancement towards more
sophisticated surgical robots necessitates the manipulation of soft bodies.
Previous work focused on tissue phantoms for soft tissue task learning, which
can be expensive and pose an entry barrier to research. Simulation
environments present a safe and efficient way to learn surgical tasks before
their application to actual tissue. In this study, we create a Robot Operating
System (ROS)-compatible physics simulation environment with support for both
rigid and soft body interactions within surgical tasks. Furthermore, we
investigate the soft tissue interactions facilitated by the patient-side
manipulator of the da Vinci surgical robot. Leveraging the pybullet physics
engine, we simulate kinematics and establish anchor points to guide the robotic
arm when manipulating soft tissue. We investigate the performance of
demonstration-guided reinforcement learning (RL) algorithms in comparison to
traditional RL algorithms. Our in silico trials demonstrate
a proof-of-concept for autonomous surgical soft tissue retraction. The results
corroborate the feasibility of learning soft body manipulation through the
application of reinforcement learning agents. This work lays the foundation for
future research into the development and refinement of surgical robots capable
of managing both rigid and soft tissue interactions. Code is available at
https://github.com/amritpal-001/tissue_retract
Comment: 10 pages, 5 figures, MICCAI 2023 conference (AECAI workshop)
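The demonstration-guided idea can be sketched as blending an RL loss with a behavior-cloning term that pulls the policy toward demonstrated actions. This is an illustrative form; the paper's exact objective may differ:

```python
import numpy as np

def demo_guided_loss(policy_actions, demo_actions, rl_loss, bc_weight=0.5):
    """Blend an RL loss with a behavior-cloning (BC) penalty.

    The BC term is the mean squared error between the policy's actions
    and the demonstrated actions on the same states; bc_weight trades
    off imitation against reward maximization.
    """
    bc_loss = np.mean((policy_actions - demo_actions) ** 2)
    return rl_loss + bc_weight * bc_loss
```

Annealing `bc_weight` toward zero over training is a common variant, letting demonstrations bootstrap exploration early without constraining the final policy.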
DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects
Applications in fields ranging from home care to warehouse fulfillment to
surgical assistance require robots to reliably manipulate the shape of 3D
deformable objects. Analytic models of elastic, 3D deformable objects require
numerous parameters to describe the potentially infinite degrees of freedom
present in determining the object's shape. Previous attempts at performing 3D
shape control rely on hand-crafted features to represent the object shape and
require training of object-specific control models. We overcome these issues
through the use of our novel DeformerNet neural network architecture, which
operates on a partial-view point cloud of the manipulated object and a point
cloud of the goal shape to learn a low-dimensional representation of the object
shape. This shape embedding enables the robot to learn a visual servo
controller that computes the desired robot end-effector action to iteratively
deform the object toward the target shape. We demonstrate both in simulation
and on a physical robot that DeformerNet reliably generalizes to object shapes
and material stiffness not seen during training. Crucially, using DeformerNet,
the robot successfully accomplishes three surgical sub-tasks: retraction
(moving tissue aside to access a site underneath it), tissue wrapping (a
sub-task in procedures like aortic stent placements), and connecting two
tubular pieces of tissue (a sub-task in anastomosis).
Comment: Submitted to IEEE Transactions on Robotics (T-RO). 18 pages, 25
figures. arXiv admin note: substantial text overlap with arXiv:2110.0468
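The embedding-then-servo structure of the approach can be sketched with a crude hand-rolled stand-in for the learned shape descriptor (centroid plus principal singular values of the point cloud, not the actual DeformerNet network):

```python
import numpy as np

def shape_embedding(points):
    """Toy shape descriptor: centroid + singular values of the cloud."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # SVD gives a compact, rotation-aware summary of the cloud's extent,
    # standing in for the learned low-dimensional shape embedding.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return np.concatenate([centroid, s])

def servo_action(current_cloud, goal_cloud, gain=0.1):
    """Visual-servo step: move the end effector along the embedding error."""
    err = shape_embedding(goal_cloud) - shape_embedding(current_cloud)
    return gain * err[:3]   # act on the centroid components only
```

The real controller maps the embedding error through a learned network to an end-effector action; the proportional gain here is only a placeholder for that mapping.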
Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning
Intraluminal procedures have opened up a new sub-field of minimally invasive surgery in which flexible instruments navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. This thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in the field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while still allowing the surgeon to retain overall control of the procedure. This approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation, producing robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
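A safety framework that blocks risky actions can be sketched as a runtime shield: the policy's proposed action is executed only if it passes an admissibility check, otherwise a known-safe fallback is used. The velocity-norm check and fallback here are illustrative assumptions, not the thesis's formal method:

```python
def within_limits(action, max_norm=0.5):
    # Toy admissibility check: bound the instrument's commanded velocity.
    return sum(a * a for a in action) ** 0.5 <= max_norm

def shielded_action(proposed, fallback=(0.0, 0.0, 0.0), check=within_limits):
    # Safety shield: execute the policy's action only if the check
    # admits it; otherwise fall back to a known-safe action (here: stop).
    return proposed if check(proposed) else fallback
```

Formal-guarantee frameworks replace the toy check with verified constraints (e.g. reachability or barrier conditions), but the shield structure is the same.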
Patient-specific simulation for autonomous surgery
An Autonomous Robotic Surgical System (ARSS) has to interact with a complex anatomical environment that deforms and whose properties are often uncertain. In this context, an ARSS can benefit from the availability of a patient-specific simulation of the anatomy. For example, simulation can provide a safe and controlled environment for the design, testing and validation of autonomous capabilities. Moreover, it can be used to generate large amounts of patient-specific data that can be exploited to learn models and/or tasks. The aim of this thesis is to investigate the different ways in which simulation can support an ARSS and to propose solutions that facilitate its adoption in robotic surgery. We first address all the phases needed to create such a simulation, from model choice in the pre-operative phase, based on the available knowledge, to its intra-operative update to compensate for inaccurate parametrization. We propose to rely on deep neural networks trained with synthetic data both to generate a patient-specific model and to design a strategy that updates the model parametrization directly from intra-operative sensor data. Afterwards, we test how simulation can assist the ARSS, both for task learning and during task execution. We show that simulation can be used to efficiently train approaches that require multiple interactions with the environment, compensating for the risk of acquiring data on real surgical robotic systems. Finally, we propose a modular framework for autonomous surgery that includes deliberative functions to handle real anatomical environments with uncertain parameters. The integration of a personalized simulation proves fundamental both for optimal task planning and for enhancing and monitoring real execution.
The contributions presented in this thesis have the potential to introduce significant step changes in the development and actual performance of autonomous robotic surgical systems, bringing them closer to applicability in real clinical conditions.
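Intra-operative model updating can be illustrated in miniature: fit a simulation parameter so that simulated deformation matches sensor observations. The single-stiffness linear model below is a toy stand-in for the thesis's neural-network-based strategy:

```python
import numpy as np

def calibrate_stiffness(forces, observed_disp):
    """Closed-form least-squares fit of one stiffness parameter k.

    Assumes the toy linear model disp = force / k and picks the k that
    minimizes the squared error against intra-operative observations:
    k* = sum(f^2) / sum(f * d).
    """
    return np.sum(forces ** 2) / np.sum(forces * observed_disp)
```

Real soft-tissue models have many coupled parameters and no closed form, which is why the thesis learns the update strategy from synthetic data instead.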