Fast and Reliable Autonomous Surgical Debridement with Cable-Driven Robots Using a Two-Phase Calibration Procedure
Automating precision subtasks such as debridement (removing dead or diseased
tissue fragments) with Robotic Surgical Assistants (RSAs) such as the da Vinci
Research Kit (dVRK) is challenging due to inherent non-linearities in
cable-driven systems. We propose and evaluate a novel two-phase coarse-to-fine
calibration method. In Phase I (coarse), we place a red calibration marker on
the end effector and let it randomly move through a set of open-loop
trajectories to obtain a large sample set of camera pixels and internal robot
end-effector configurations. This coarse data is then used to train a Deep
Neural Network (DNN) to learn the coarse transformation bias. In Phase II
(fine), the bias from Phase I is applied to move the end-effector toward a
small set of specific target points on a printed sheet. For each target, a
human operator manually adjusts the end-effector position by direct contact
(not through teleoperation) and the residual compensation bias is recorded.
This fine data is then used to train a Random Forest (RF) to learn the fine
transformation bias. Subsequent experiments suggest that without calibration,
position errors average 4.55mm. Phase I reduces the average error to 2.14mm,
and the combination of Phase I and Phase II reduces it to 1.08mm. We
apply these results to debridement of raisins and pumpkin seeds as fragment
phantoms. Using an endoscopic stereo camera with standard edge detection,
experiments with 120 trials achieved average success rates of 94.5%, exceeding
prior results with much larger fragments (89.4%) and achieving a speedup of
2.1x, decreasing time per fragment from 15.8 seconds to 7.3 seconds. Source
code, data, and videos are available at
https://sites.google.com/view/calib-icra/. Comment: Final version for ICRA 2018.
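A minimal sketch of the two-phase structure this abstract describes, not the authors' code: a linear least-squares fit stands in for the Phase I DNN, and a nearest-neighbour residual lookup stands in for the Phase II Random Forest; the bias model and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground truth: commanded position -> actual position with a
# systematic cable-driven bias (unknown to the robot; assumed affine here).
true_bias = lambda p: 0.05 * p + np.array([1.0, -0.5])  # mm

# Phase I (coarse): random open-loop motions give (command, observed) pairs.
commands = rng.uniform(-50, 50, size=(200, 2))
observed = commands + true_bias(commands)

# Fit observed - commands = [commands, 1] @ W (affine stand-in for the DNN).
X = np.hstack([commands, np.ones((len(commands), 1))])
W, *_ = np.linalg.lstsq(X, observed - commands, rcond=None)
coarse_bias = lambda p: np.hstack([p, 1.0]) @ W

# Phase II (fine): a few manually corrected targets give residual biases.
targets = rng.uniform(-50, 50, size=(10, 2))
residuals = true_bias(targets) - np.array([coarse_bias(t) for t in targets])

def corrected_command(goal):
    """Subtract the coarse bias, then the residual of the nearest fine target."""
    i = np.argmin(np.linalg.norm(targets - goal, axis=1))
    return goal - coarse_bias(goal) - residuals[i]

goal = np.array([10.0, 20.0])
cmd = corrected_command(goal)
err = np.linalg.norm(goal - (cmd + true_bias(cmd)))  # residual position error
```

Under this toy bias the two-phase correction brings the error well below a millimetre; the real system's non-linearities are of course what motivate the DNN and RF.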
Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature
© 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4. With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation and amalgamation of haptic feedback technology into RS, which will permit the operating surgeon on the console to receive haptic information on the type of tissue being operated on. The main advantage of this is to allow the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving the application and implementation of haptic feedback technology for the operating surgeon on the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and a need for new developments in haptic augmentation for robot-mediated surgery, with the aim of improving patient care and robotic surgical technology further. Peer reviewed.
Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation
In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is
required for subsurface visualisation to characterise the state of the tissue.
However, scanning of large tissue surfaces in the presence of deformation is a
challenging task for the surgeon. Recently, robot-assisted local tissue
scanning has been investigated for motion stabilisation of imaging probes to
facilitate the capturing of good quality images and reduce the surgeon's
cognitive load. Nonetheless, these approaches require the tissue surface to be
static or deform with periodic motion. To eliminate these assumptions, we
propose a visual servoing framework for autonomous tissue scanning, able to
deal with free-form tissue deformation. The 3D structure of the surgical scene
is recovered and a feature-based method is proposed to estimate the motion of
the tissue in real-time. A desired scanning trajectory is manually defined on a
reference frame and continuously updated using projective geometry to follow
the tissue motion and control the movement of the robotic arm. The advantage of
the proposed method is that it does not require the learning of the tissue
motion prior to scanning and can deal with free-form deformation. We deployed
this framework on the da Vinci surgical robot using the da Vinci Research Kit
(dVRK) for Ultrasound tissue scanning. Since the framework does not rely on
information from the Ultrasound data, it can be easily extended to other
probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 2020.
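The core geometric step this abstract describes can be sketched as follows (a hypothetical minimal version, not the authors' framework): a scanning trajectory defined once on a reference frame is re-projected into the current frame with an estimated homography, so the scan path follows the moving tissue.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image points through a 3x3 homography (projective transfer)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Desired scan path drawn once on the reference frame (pixel coordinates,
# values illustrative).
scan_path_ref = np.array([[100.0, 200.0], [150.0, 200.0], [200.0, 200.0]])

# Hypothetical per-frame tissue-motion estimate: here a pure translation.
H_t = np.array([[1.0, 0.0, 12.0],
                [0.0, 1.0, -7.0],
                [0.0, 0.0,  1.0]])

scan_path_now = apply_homography(H_t, scan_path_ref)  # updated path to track
```

In practice the homography (or a richer deformation model) would come from the feature-based tissue-motion estimator, and the updated path would be fed to the robot controller each frame.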
daVinciNet: Joint Prediction of Motion and Surgical State in Robot-Assisted Surgery
This paper presents a technique to concurrently and jointly predict the
future trajectories of surgical instruments and the future state(s) of surgical
subtasks in robot-assisted surgeries (RAS) using multiple input sources. Such
predictions are a necessary first step towards shared control and supervised
autonomy of surgical subtasks. Minute-long surgical subtasks, such as suturing
or ultrasound scanning, often have distinguishable tool kinematics and visual
features, and can be described as a series of fine-grained states with
transition schematics. We propose daVinciNet - an end-to-end dual-task model
for robot motion and surgical state predictions. daVinciNet performs concurrent
end-effector trajectory and surgical state predictions using features extracted
from multiple data streams, including robot kinematics, endoscopic vision, and
system events. We evaluate our proposed model on an extended Robotic
Intra-Operative Ultrasound (RIOUS+) imaging dataset collected on a da Vinci Xi
surgical system and the JHU-ISI Gesture and Skill Assessment Working Set
(JIGSAWS). Our model achieves up to 93.85% short-term (0.5s) and 82.11%
long-term (2s) state prediction accuracy, as well as 1.07mm short-term and
5.62mm long-term trajectory prediction error. Comment: Accepted to IROS 2020.
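A shape-level sketch of the dual-task idea, not the daVinciNet architecture: features from several streams over a history window feed two heads, one regressing a future end-effector position and one classifying the next surgical state. Stream widths, window length, and the untrained linear heads are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D_kin, D_vis, D_evt = 30, 16, 128, 4   # window length and stream widths
n_states = 8                              # hypothetical state vocabulary size

kin = rng.normal(size=(T, D_kin))         # robot kinematics stream
vis = rng.normal(size=(T, D_vis))         # endoscopic vision features
evt = rng.normal(size=(T, D_evt))         # system-event features

# Concatenate the streams per timestep, then flatten the whole window.
x = np.concatenate([kin, vis, evt], axis=1).reshape(-1)

W_traj = rng.normal(size=(x.size, 3)) * 0.01        # untrained stand-in weights
W_state = rng.normal(size=(x.size, n_states)) * 0.01

traj_pred = x @ W_traj                    # regression head: future (x, y, z)
state_pred = int(np.argmax(x @ W_state))  # classification head: next state
```

The real model replaces the flattened window and linear heads with learned sequence encoders, but the two-head input/output contract is the same.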
Magnetic Surgical Instruments for Robotic Abdominal Surgery.
This review looks at the implementation of magnetic-based approaches in surgical instruments for abdominal surgeries. As abdominal surgical techniques advance toward minimizing surgical trauma, surgical instruments are enhanced to support this objective through the exploration of magnetic-based systems. With this design approach, surgical devices are given the capability to be fully inserted intraabdominally to achieve access to all abdominal quadrants, without the conventional rigid-link connection with the external unit. A variety of intraabdominal surgical devices are anchored, guided, and actuated by external units, with power and torque transmitted across the abdominal wall through magnetic linkage. This addresses many constraints encountered by conventional laparoscopic tools, such as loss of triangulation, the fulcrum effect, and loss or lack of dexterity for surgical tasks. Design requirements drawn from clinical considerations, which aid the successful development of magnetic surgical instruments, are also discussed.
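A back-of-the-envelope sketch (not taken from the review) of the coupling that anchors an intraabdominal unit across the abdominal wall: the attractive force between two coaxial, aligned magnetic dipoles. The magnet moments and wall thickness are illustrative assumptions.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def coaxial_dipole_force(m1, m2, d):
    """F = 3*mu0*m1*m2 / (2*pi*d^4) for aligned coaxial dipoles, in newtons."""
    return 3 * MU0 * m1 * m2 / (2 * math.pi * d**4)

# Assumed moments ~5 A*m^2 (small NdFeB magnets) across a 30 mm abdominal wall.
f = coaxial_dipole_force(5.0, 5.0, 0.03)
```

The steep 1/d^4 falloff is why wall thickness dominates the design envelope of such systems: doubling the separation cuts the anchoring force sixteen-fold.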
Modeling, Analysis, Force Sensing and Control of Continuum Robots for Minimally Invasive Surgery
This dissertation describes the design, modeling, and application of continuum robots for surgical applications, specifically parallel continuum robots (PCRs) and concentric tube manipulators (CTMs). The introduction of robotics into surgical applications has allowed for a greater degree of precision, less invasive access to more remote surgical sites, and user-intuitive interfaces with enhanced vision systems. The most recent developments have been in the space of continuum robots, whose flexible structures create an inherent safety factor when in contact with fragile tissues. The design challenges that exist involve balancing the size and strength of the manipulators, controlling the manipulators over long transmission pathways, and incorporating force sensing and feedback from the manipulators to the user.
Contributions presented in this work include: (1) prototyping, design, force sensing, and force control investigations of PCRs, and (2) prototyping of a concentric tube manipulator for use in a standard colonoscope. A general kinetostatic model is presented for PCRs along with identification of multiple physical constraints encountered in design and construction. Design considerations and manipulator capabilities are examined in the form of matrix metrics and ellipsoid representations. Finally, force sensing and control are explored and experimental results are provided showing the accuracy of force estimates based on actuation force measurements and control capabilities.
An overview of the design requirements, manipulator construction, analysis, and experimental results is provided for a CTM used as a tool manipulator in a traditional colonoscope. Currently, tools used in colonoscopic procedures are straight and exit the front of the scope with 1 DOF of operation (jaws of a grasper, tightening of a loop, etc.). This research shows that with a CTM deployed, the dexterity of these tools can be increased dramatically, improving the accuracy of tool operation, ease of use, and safety of the overall procedure. The prototype investigated in this work allows for multiple tools to be used during a single procedure. Experimental results show the feasibility and advantages of the newly-designed manipulators.
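For a single bending section, the constant-curvature assumption common in continuum-robot modeling (a standard simplification, not the dissertation's full kinetostatic model) gives a closed-form tip position from curvature, bending-plane angle, and arc length:

```python
import math

def tip_position(kappa, phi, L):
    """Tip of a circular arc of length L bent with curvature kappa, with the
    bending plane rotated by phi about the base tangent (z) axis."""
    if abs(kappa) < 1e-9:                      # straight-tube limit
        return (0.0, 0.0, L)
    r = (1.0 - math.cos(kappa * L)) / kappa    # in-plane displacement
    return (r * math.cos(phi), r * math.sin(phi), math.sin(kappa * L) / kappa)
```

For example, bending a 100 mm section through a quarter circle (kappa*L = pi/2) places the tip equal distances along and across the base tangent, and kappa -> 0 recovers the straight tube.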
A spherical joint robotic end-effector for the Expanded Endoscopic Endonasal Approach
The endonasal transsphenoidal approach allows surgeons to access the pituitary gland through the natural orifice of the nose. Recently, surgeons have also described an Expanded Endoscopic Endonasal Approach (EEEA) for the treatment of other tumours around the base of the brain. However, operating in this way with non-articulated tools is technically very difficult and not widely adopted. The goal of this study is to develop an articulated end-effector for a novel handheld robotic tool for the EEEA. We present the design and implementation of a 3.6mm diameter, three degrees-of-freedom, tendon-driven robotic end-effector that, contrary to rigid instruments which operate under a fulcrum, will give the surgeon the ability to reach areas on the surface of the brain that were previously inaccessible. We model the end-effector kinematics in simulation to study the theoretical workspace it can achieve, prior to implementing a test-bench device to validate the efficacy of the end-effector. We find a promising repeatability of 0.42mm for the proposed robotic end-effector, with an effective workspace limit of ±30°, which is greater than that of conventional neurosurgical tools. Additionally, although the tool's end-effector has a small enough diameter to operate through the narrow nasal access path and the constrained workspace of the EEEA, it showcased promising structural integrity and was able to support approximately a 6N load, despite a large deflection angle; limiting this deflection is the subject of future work. These preliminary results indicate the end-effector is a promising first step towards developing appropriate handheld robotic instrumentation to drive EEEA adoption.
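A rough forward-kinematics sketch of a spherical-joint tip under the ±30° limits reported above; the distal segment length and the pan/tilt rotation convention are illustrative assumptions, not the paper's model.

```python
import math

LIMIT = math.radians(30.0)  # joint limits matching the reported workspace
TOOL_LEN = 0.02             # 20 mm distal segment, assumed

def tip(pan, tilt):
    """Tip of the distal segment after tilt (about x) then pan (about y)."""
    if not (-LIMIT <= pan <= LIMIT and -LIMIT <= tilt <= LIMIT):
        raise ValueError("outside joint limits")
    # Rotate the +z axis by Rx(tilt) then Ry(pan).
    x = math.sin(pan) * math.cos(tilt)
    y = -math.sin(tilt)
    z = math.cos(pan) * math.cos(tilt)
    return (TOOL_LEN * x, TOOL_LEN * y, TOOL_LEN * z)
```

Sampling pan and tilt over the ±30° range with such a model is one way to visualise the theoretical workspace cone before building a test bench.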
Visual servoing of a robotic endoscope holder based on surgical instrument tracking
We propose an image-based control for a robotic endoscope holder during laparoscopic surgery. Our aim is to provide more comfort to the practitioner during surgery by automatically positioning the endoscope at his request. To do so, we propose to maintain one or more instruments roughly at the center of the laparoscopic image through different command modes. The originality of this method relies on the direct use of the endoscopic image and the absence of artificial markers added to the instruments. The application is validated on a test bench with a commercial robotic endoscope holder.
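A hypothetical centering law in the spirit of this abstract, not the authors' controller: a proportional image-based rule drives the tracked instrument tip toward the image centre by commanding endoscope pan/tilt rates. Image size, gain, and the pixel-to-rate mapping are assumptions.

```python
import numpy as np

IMG_W, IMG_H = 640, 480
GAIN = 0.002  # rad/s per pixel of error, assumed

def pan_tilt_rates(tip_px):
    """Pixel error of the tracked instrument tip -> pan/tilt velocity command."""
    err = np.array(tip_px, dtype=float) - np.array([IMG_W / 2, IMG_H / 2])
    return -GAIN * err  # negative feedback: drift the tip toward the centre

w = pan_tilt_rates((420, 180))  # tip right of and above centre
```

A full image-based visual servoing scheme would use the interaction matrix of the image features rather than this scalar gain, but the feedback structure is the same.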
An adaptive and fully automatic method for estimating the 3D position of bendable instruments using endoscopic images
Background. Flexible bendable instruments are key tools for performing
surgical endoscopy. Being able to measure the 3D position of such instruments
can be useful for various tasks, such as controlling automatically robotized
instruments and analyzing motions. Methods. We propose an automatic method to
infer the 3D pose of a single bending section instrument, using only the images
provided by a monocular camera embedded at the tip of the endoscope. The
proposed method relies on colored markers attached onto the bending section.
The image of the instrument is segmented using a graph-based method and the
corners of the markers are extracted by detecting the color transition along
Bézier curves fitted on edge points. These features are accurately located
and then used to estimate the 3D pose of the instrument using an adaptive model
that takes into account the mechanical play between the instrument and
its housing channel. Results. The feature extraction method provides good
localization of marker corners in images of an in vivo environment despite
sensor saturation due to strong lighting. The RMS error on the estimation of
the tip position of the instrument for laboratory experiments was 2.1, 1.96,
and 3.18 mm in the x, y, and z directions, respectively. Qualitative analysis in the
case of in vivo images shows the ability to correctly estimate the 3D position
of the instrument tip during real motions. Conclusions. The proposed method
provides an automatic and accurate estimation of the 3D position of the tip of
a bendable instrument in realistic conditions, where standard approaches fail.
Comment: The International Journal of Medical Robotics and Computer Assisted
Surgery, John Wiley & Sons, Inc., 2017.
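One step from this abstract can be sketched generically: fitting a quadratic Bézier curve to extracted edge points (chord-length parameterisation assumed), giving the curve along which colour transitions would then be searched. This is a minimal stand-in, not the authors' implementation.

```python
import numpy as np

def fit_quadratic_bezier(pts):
    """Least-squares control points for B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2."""
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)]) / d.sum()  # chord-length params
    A = np.stack([(1 - t) ** 2, 2 * t * (1 - t), t ** 2], axis=1)
    ctrl, *_ = np.linalg.lstsq(A, pts, rcond=None)
    return ctrl

def eval_bezier(ctrl, t):
    """Evaluate the quadratic Bezier at parameter values t (array-like)."""
    t = np.asarray(t, dtype=float)[:, None]
    return (1 - t) ** 2 * ctrl[0] + 2 * t * (1 - t) * ctrl[1] + t ** 2 * ctrl[2]

# Example: fit to edge-like points sampled from a parabola-shaped arc.
u = np.linspace(0.0, 1.0, 20)
edge_pts = np.stack([100.0 * u, 160.0 * u * (1.0 - u)], axis=1)
ctrl = fit_quadratic_bezier(edge_pts)
```

In the paper's pipeline the fitted curve sits on the segmented bending section, and marker corners are localised by scanning colour transitions along it.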