Bootstrapping Robotic Skill Learning With Intuitive Teleoperation: Initial Feasibility Study
Robotic skill learning has been increasingly studied, but collecting demonstrations is more challenging than collecting images/videos in computer vision or text in natural language processing. This paper presents a skill learning paradigm that uses intuitive teleoperation devices to generate high-quality human demonstrations efficiently for data-driven robotic skill learning. Building on a reliable teleoperation interface, the da Vinci Research Kit (dVRK) master, a system called dVRK-Simulator-for-Demonstration (dS4D) is proposed in this paper. Experiments on various manipulation tasks show the system's effectiveness and its efficiency advantages over other interfaces. Using the collected data for policy learning has been investigated, which verifies the initial feasibility. We believe the proposed paradigm can facilitate robot learning driven by high-quality demonstrations that are efficient to generate.
Comment: 10 pages, 4 figures, accepted by ISER202
Robot Assisted Object Manipulation for Minimally Invasive Surgery
Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems, with decision-making capabilities, are not yet available.
In 2017, a structure to classify the research efforts toward autonomy achievable with surgical robots was proposed by Yang et al. Six different levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All the commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons' workload and fatigue and pursuing a consistent quality of procedures. Ultimately, allowing surgeons to interpret the rich information provided by the system will enhance the surgical outcome and reflect positively on both patients and society. Three main capabilities are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities, and understand the surgical scene. Besides these main factors, depending on the type of surgery, other aspects might play a fundamental role, such as compliance and stiffness. This
thesis addresses three technological challenges encountered when trying to achieve
the aforementioned goals, in the specific case of robot-object interaction. First,
how to overcome the inaccuracy of cable-driven systems when executing fine and
precise movements. Second, planning different tasks in dynamically changing environments.
Lastly, how the understanding of a surgical scene can be used to solve
more than one manipulation task.
To address the first challenge, a control scheme relying on accurate calibration is
implemented to execute the pick-up of a surgical needle. Regarding the planning of
surgical tasks, two approaches are explored: one is learning from demonstration to
pick and place a surgical object, and the second is using a gradient-based approach
to trigger a smoother object repositioning phase during intraoperative procedures.
Finally, to improve scene understanding, this thesis focuses on developing a simulation
environment where multiple tasks can be learned based on the surgical scene
and then transferred to the real robot. Experiments proved that automation of the pick-and-place task for different surgical objects is possible. The robot was able to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the developed algorithms to generalise across different environmental conditions and different patients.
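The thesis does not detail its gradient-based repositioning method here; as a generic, hypothetical illustration of the idea, one can define a quadratic cost between the current and target object pose and follow its negative gradient, which produces a smooth approach rather than an abrupt jump (all names below are illustrative, not from the thesis):

```python
import numpy as np

def smooth_reposition(pose, target, lr=0.2, steps=50):
    """Illustrative gradient descent on the cost 0.5 * ||pose - target||^2.

    Following the negative gradient (target - pose) shrinks the error by a
    constant factor per step, yielding an exponentially smooth trajectory.
    """
    path = [pose.copy()]
    for _ in range(steps):
        grad = pose - target          # gradient of the quadratic cost
        pose = pose - lr * grad       # descent step toward the target
        path.append(pose.copy())
    return np.array(path)
```

With a small learning rate the intermediate waypoints of `path` can be fed to a motion controller as a repositioning trajectory.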
Accelerating Surgical Robotics Research: A Review of 10 Years With the da Vinci Research Kit
Robotic-assisted surgery is now well-established in clinical practice and has
become the gold standard clinical treatment option for several clinical
indications. The field of robotic-assisted surgery is expected to grow
substantially in the next decade with a range of new robotic devices emerging
to address unmet clinical needs across different specialities. A vibrant
surgical robotics research community is pivotal for conceptualizing such new
systems as well as for developing and training the engineers and scientists to
translate them into practice. The da Vinci Research Kit (dVRK), an academic and
industry collaborative effort to re-purpose decommissioned da Vinci surgical
systems (Intuitive Surgical Inc, CA, USA) as a research platform for surgical
robotics research, has been a key initiative for addressing a barrier to entry
for new research groups in surgical robotics. In this paper, we present an
extensive review of the publications that have been facilitated by the dVRK
over the past decade. We classify research efforts into different categories
and outline some of the major challenges and needs for the robotics community
to maintain this initiative and build upon it.
Autonomous Surgical Robotics at Task and Subtask Levels
The revolution of minimally invasive procedures has had a significant influence on surgical practice, opening the way to laparoscopic surgery and then evolving into robotic surgery. Teleoperated master-slave robots, such as the da Vinci Surgical System, have become a standard of care over the last few decades, performing over a million procedures per year worldwide.
Many believe that the next big step in the evolution of surgery is partial automation, which would ease the cognitive load on the surgeon and allow them to pay more attention to the critical parts of the intervention. A partial and sequential introduction and increase of autonomous capabilities could provide a safe path towards Surgery 4.0. Unfortunately, autonomy in this environment, consisting mostly of soft organs, faces grave difficulties. In this chapter, the current research directions of subtask automation in surgery are presented, introducing the recent advances in motion planning, perception, and human-machine interaction, along with the limitations of task-level autonomy.
Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation
In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is
required for subsurface visualisation to characterise the state of the tissue.
However, scanning of large tissue surfaces in the presence of deformation is a
challenging task for the surgeon. Recently, robot-assisted local tissue
scanning has been investigated for motion stabilisation of imaging probes to
facilitate the capturing of good quality images and reduce the surgeon's
cognitive load. Nonetheless, these approaches require the tissue surface to be
static or deform with periodic motion. To eliminate these assumptions, we
propose a visual servoing framework for autonomous tissue scanning, able to
deal with free-form tissue deformation. The 3D structure of the surgical scene
is recovered and a feature-based method is proposed to estimate the motion of
the tissue in real-time. A desired scanning trajectory is manually defined on a
reference frame and continuously updated using projective geometry to follow
the tissue motion and control the movement of the robotic arm. The advantage of
the proposed method is that it does not require the learning of the tissue
motion prior to scanning and can deal with free-form deformation. We deployed
this framework on the da Vinci surgical robot using the da Vinci Research Kit
(dVRK) for Ultrasound tissue scanning. Since the framework does not rely on
information from the Ultrasound data, it can be easily extended to other
probe-based imaging modalities.
Comment: 7 pages, 5 figures, ICRA 202
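The abstract above describes updating a manually defined scanning trajectory with projective geometry so that it follows the tracked tissue motion. A minimal sketch of that idea, assuming a planar projective model: estimate a homography from feature correspondences between the reference frame and the current frame, then warp the reference trajectory into the current frame (function names are illustrative, not the paper's implementation):

```python
import numpy as np

def estimate_homography(ref_pts, cur_pts):
    """Estimate the 3x3 homography H mapping ref_pts -> cur_pts via DLT.

    Needs at least 4 point correspondences; the solution is the null vector
    of the stacked constraint matrix, recovered from the SVD.
    """
    A = []
    for (x, y), (u, v) in zip(ref_pts, cur_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize the projective scale

def warp_trajectory(H, traj):
    """Map 2D trajectory points from the reference frame to the current frame."""
    pts = np.hstack([traj, np.ones((len(traj), 1))])   # homogeneous coordinates
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean
```

Re-estimating `H` from tracked features each frame and re-warping the trajectory is what lets the desired scan path deform with the tissue without a prior motion model.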
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own will, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
Task Dynamics of Prior Training Influence Visual Force Estimation Ability During Teleoperation
The lack of haptic feedback in Robot-assisted Minimally Invasive Surgery
(RMIS) is a potential barrier to safe tissue handling during surgery. Bayesian
modeling theory suggests that surgeons with experience in open or laparoscopic
surgery can develop priors of tissue stiffness that translate to better force
estimation abilities during RMIS compared to surgeons with no experience. To
test if prior haptic experience leads to improved force estimation ability in
teleoperation, 33 participants were assigned to one of three training
conditions: manual manipulation, teleoperation with force feedback, or
teleoperation without force feedback, and learned to tension a silicone sample
to a set of force values. They were then asked to perform the tension task, and
a previously unencountered palpation task, to a different set of force values
under teleoperation without force feedback. Compared to the teleoperation
groups, the manual group had higher force error in the tension task outside the
range of forces they had trained on, but showed better speed-accuracy functions
in the palpation task at low force levels. This suggests that the dynamics of
the training modality affect force estimation ability during teleoperation,
with the prior haptic experience accessible if formed under the same dynamics
as the task.
Comment: 12 pages, 8 figures
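The Bayesian modeling idea invoked above can be made concrete with a standard textbook sketch (not the study's analysis): model the prior of tissue stiffness as a Gaussian and fuse it with a noisy sensory observation, so the posterior is a precision-weighted average in which a well-formed prior dominates a noisy cue.

```python
def fuse_gaussian(prior_mean, prior_var, obs, obs_var):
    """Posterior of a Gaussian prior combined with one Gaussian observation.

    Precision-weighted average: a tight (low-variance) prior pulls the
    estimate toward prior experience, while a noisy observation is discounted.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var
```

Under this view, a prior formed under different dynamics than the task corresponds to a biased `prior_mean`, which degrades rather than improves the fused force estimate.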
Complementary Situational Awareness for an Intelligent Telerobotic Surgical Assistant System
Robotic surgical systems have contributed greatly to the advancement of Minimally Invasive Surgeries (MIS). More specifically, telesurgical robots have provided enhanced dexterity to surgeons performing MIS procedures. However, current robotic teleoperated systems have only limited situational awareness of the patient anatomy and surgical environment that would typically be available to a surgeon in an open surgery. Although the endoscopic view enhances the visualization of the anatomy, perceptual understanding of the environment and anatomy is still lacking due to the absence of sensory feedback.
In this work, these limitations are addressed by developing a computational framework to provide Complementary Situational Awareness (CSA) in a surgical assistant. This framework aims at improving the human-robot relationship by providing elaborate guidance and sensory feedback capabilities for the surgeon in complex MIS procedures. Unlike traditional teleoperation, this framework enables the user to telemanipulate the situational model in a virtual environment and uses that information to command the slave robot with appropriate admittance gains and environmental constraints. Simultaneously, the situational model is updated based on interaction of the slave robot with the task space environment.
However, developing such a system to provide real-time situational awareness requires that many technical challenges be met. To estimate intraoperative organ information, continuous palpation primitives are required. Intraoperative surface information needs to be estimated in real-time while the organ is being palpated/scanned. The model of the task environment needs to be updated in near real-time using the estimated organ geometry so that the force feedback applied to the surgeon's hand corresponds to the actual location of the model. This work presents a real-time framework that meets these requirements to provide situational awareness of the environment in the task space. Further, visual feedback is provided so that the surgeon/developer can view near-video-frame-rate updates of the task model. All these functions execute in parallel and require synchronized data exchange. The system is highly portable and can be incorporated into any existing telerobotic platform with minimal overhead.
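The commanding of the slave robot "with appropriate admittance gains and environmental constraints" mentioned above can be illustrated with a minimal admittance law plus a virtual-fixture constraint. This is a generic sketch, not the CSA framework's controller, and all names are hypothetical:

```python
import numpy as np

def admittance_step(pos, force, gain, dt, plane_normal, plane_offset):
    """One update of a simple admittance law: velocity = gain * force.

    The commanded position is then projected so it never crosses the
    virtual-fixture plane n . x >= d, a stand-in for the environmental
    constraints derived from the situational model.
    """
    vel = gain * force                      # admittance: force maps to velocity
    new_pos = pos + vel * dt
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(n, new_pos) - plane_offset
    if depth < 0:                           # crossed into the forbidden region
        new_pos = new_pos - depth * n       # project back onto the plane
    return new_pos
```

Raising `gain` makes the slave feel compliant to the operator's applied force; the projection step is the simplest form of constraint enforcement and would be replaced by the updated task-space model in a full system.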