2017 Robotic Instrument Segmentation Challenge
In mainstream computer vision and machine learning, public datasets such as
ImageNet, COCO and KITTI have helped drive enormous improvements by enabling
researchers to understand the strengths and limitations of different algorithms
via performance comparison. However, this type of approach has had limited
translation to problems in robot-assisted surgery, as this field has never
established the same level of common datasets and benchmarking methods. In 2015
a sub-challenge was introduced at the EndoVis workshop in which a set of robotic
images was provided with annotations generated automatically from robot
forward kinematics. However, there were issues with this dataset due to the
limited background variation, lack of complex motion and inaccuracies in the
annotation. In this work we present the results of the 2017 challenge on
robotic instrument segmentation which involved 10 teams participating in
binary, parts, and type-based segmentation of articulated da Vinci robotic
instruments.
ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools
Real-time tool segmentation from endoscopic videos is an essential part of
many computer-assisted robotic surgical systems and of critical importance in
robotic surgical data science. We propose two novel deep learning architectures
for automatic segmentation of non-rigid surgical instruments. Both methods take
advantage of automated deep-learning-based multi-scale feature extraction while
trying to maintain an accurate segmentation quality at all resolutions. The two
proposed methods encode the multi-scale constraint inside the network
architecture. The first proposed architecture enforces it by cascaded
aggregation of predictions and the second proposed network does it by means of
a holistically-nested architecture where the loss at each scale is taken into
account for the optimization process. As the proposed methods are for real-time
semantic labeling, both have a reduced number of parameters. We propose the
use of parametric rectified linear units for semantic labeling in these small
architectures to increase the regularization ability of the design and maintain
the segmentation accuracy without overfitting the training sets. We compare the
proposed architectures against state-of-the-art fully convolutional networks.
We validate our methods using existing benchmark datasets, including ex vivo
cases with phantom tissue and different robotic surgical instruments present in
the scene. Our results show a statistically significant improvement in Dice
Similarity Coefficient over previous instrument segmentation methods. We
analyze our design choices and discuss the key drivers for improving accuracy.
Comment: Paper accepted at IROS 2017.
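To make the multi-scale constraint concrete, the sketch below shows holistically-nested deep supervision in the spirit described above: a side output is predicted at every scale and each contributes to the training loss, with PReLU activations. This is a minimal PyTorch sketch with illustrative layer sizes, not ToolNet's actual architecture.

    # Minimal sketch (not the authors' code): deep supervision with a
    # per-scale side output and loss, using PReLU activations.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HolisticSegNet(nn.Module):
        def __init__(self, channels=(16, 32, 64)):
            super().__init__()
            self.stages, self.sides = nn.ModuleList(), nn.ModuleList()
            in_ch = 3
            for c in channels:
                self.stages.append(nn.Sequential(
                    nn.Conv2d(in_ch, c, 3, padding=1),
                    nn.PReLU(),                     # parametric ReLU, as in the paper
                    nn.MaxPool2d(2)))
                self.sides.append(nn.Conv2d(c, 1, 1))  # per-scale side prediction
                in_ch = c

        def forward(self, x):
            h, w = x.shape[-2:]
            side_logits = []
            for stage, side in zip(self.stages, self.sides):
                x = stage(x)
                # Upsample each side output to input resolution for supervision.
                side_logits.append(F.interpolate(side(x), size=(h, w),
                                                 mode="bilinear", align_corners=False))
            return side_logits

    def multi_scale_loss(side_logits, target):
        # target: float binary mask of shape (N, 1, H, W).
        # Summing per-scale losses implements the "loss at each scale" constraint.
        return sum(F.binary_cross_entropy_with_logits(s, target) for s in side_logits)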
Real-time 3D Tracking of Articulated Tools for Robotic Surgery
In robotic surgery, tool tracking is important for providing safe tool-tissue
interaction and facilitating surgical skills assessment. Despite recent
advances in tool tracking, existing approaches are faced with major
difficulties in real-time tracking of articulated tools. Most algorithms are
tailored for offline processing with pre-recorded videos. In this paper, we
propose a real-time 3D tracking method for articulated tools in robotic
surgery. The proposed method is based on the CAD model of the tools as well as
robot kinematics to generate online part-based templates for efficient 2D
matching and 3D pose estimation. A robust verification approach is incorporated
to reject outliers in 2D detections, which is then followed by fusing inliers
with robot kinematic readings for 3D pose estimation of the tool. The proposed
method has been validated with phantom data, as well as ex vivo and in vivo
experiments. The results derived clearly demonstrate the performance advantage
of the proposed method when compared to the state-of-the-art.Comment: This paper was presented in MICCAI 2016 conference, and a DOI was
linked to the publisher's versio
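The fusion step can be sketched schematically: 2D detections are compared against part positions predicted from the robot kinematics, outliers are rejected by their reprojection residual, and the surviving inliers correct the kinematic estimate. The sketch below assumes a pinhole camera and a simple 2D offset correction; it illustrates the idea rather than reproducing the paper's implementation.

    # Schematic sketch of inlier verification and kinematics fusion
    # (simplifying assumptions: pinhole camera, 2D offset correction).
    import numpy as np

    def project(points_3d, K):
        """Pinhole projection of Nx3 camera-frame points with intrinsics K."""
        uv = (K @ points_3d.T).T
        return uv[:, :2] / uv[:, 2:3]

    def fuse_detections(kin_parts_3d, detections_2d, K, thresh_px=15.0):
        predicted = project(kin_parts_3d, K)       # where kinematics says parts are
        residuals = detections_2d - predicted
        inliers = np.linalg.norm(residuals, axis=1) < thresh_px  # reject outliers
        if not inliers.any():
            return predicted, inliers              # fall back to pure kinematics
        # Illustrative fusion: mean inlier offset applied as a pose correction.
        return predicted + residuals[inliers].mean(axis=0), inliers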
Visual-Kinematics Graph Learning for Procedure-agnostic Instrument Tip Segmentation in Robotic Surgeries
Accurate segmentation of surgical instrument tip is an important task for
enabling downstream applications in robotic surgery, such as surgical skill
assessment, tool-tissue interaction and deformation modeling, as well as
surgical autonomy. However, this task is very challenging due to the small
sizes of surgical instrument tips, and significant variance of surgical scenes
across different procedures. Although much effort has been made on visual-based
methods, existing segmentation models still suffer from low robustness and are
thus not usable in practice. Fortunately, kinematics data from the robotic system
can provide a reliable prior for instrument location, which is consistent regardless
of different surgery types. To make use of such multi-modal information, we
propose a novel visual-kinematics graph learning framework to accurately
segment the instrument tip across various surgical procedures. Specifically, a
graph learning framework is proposed to encode relational features of
instrument parts from both image and kinematics. Next, a cross-modal
contrastive loss is designed to incorporate robust geometric prior from
kinematics to image for tip segmentation. We have conducted experiments on a
private paired visual-kinematics dataset including multiple procedures, i.e.,
prostatectomy, total mesorectal excision, fundoplication and distal gastrectomy
on cadavers, and distal gastrectomy on a porcine model. The leave-one-procedure-out
cross validation demonstrated that our proposed multi-modal segmentation method
significantly outperformed current image-based state-of-the-art approaches,
exceeding them by 11.2% in Dice on average.
Comment: Accepted to IROS 202
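The cross-modal contrastive idea can be sketched as an InfoNCE-style loss in which the image embedding of each instrument part is pulled toward the kinematics embedding of the same part and pushed away from the others. The paper's exact loss and graph encoder may differ; the following is only an illustration.

    # Minimal sketch of a cross-modal contrastive (InfoNCE-style) loss.
    import torch
    import torch.nn.functional as F

    def cross_modal_contrastive(img_emb, kin_emb, temperature=0.07):
        """img_emb, kin_emb: (N, D) embeddings of N instrument parts, row-aligned."""
        img = F.normalize(img_emb, dim=1)
        kin = F.normalize(kin_emb, dim=1)
        logits = img @ kin.t() / temperature       # (N, N) similarity matrix
        labels = torch.arange(img.size(0), device=img.device)
        # Symmetric loss: matched image/kinematics pairs are the positives.
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))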
Automated pick-up of suturing needles for robotic surgical assistance
Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate
cancer that involves complete or nerve-sparing removal of the prostate tissue that
contains cancer. After removal, the bladder neck is then sutured directly
to the urethra. This procedure is called urethrovesical anastomosis
and is one of the most dexterity demanding tasks during RALP. Two suturing
instruments and a pair of needles are used in combination to perform a running
stitch during urethrovesical anastomosis. While robotic instruments provide
enhanced dexterity to perform the anastomosis, it is still highly challenging
and difficult to learn. In this paper, we present a vision-guided needle
grasping method for automatically grasping the needle that has been inserted
into the patient prior to anastomosis. We aim to automatically grasp the
suturing needle in a position that avoids hand-offs and immediately enables the
start of suturing. The full grasping process can be broken down into: a needle
detection algorithm; an approach phase where the surgical tool moves closer to
the needle based on visual feedback; and a grasping phase through path planning
based on observed surgical practice. Our experimental results show examples of
successful autonomous grasping that has the potential to simplify and decrease
the operational time in RALP by automating a small component of urethrovesical
anastomosis.
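The three-stage pipeline can be sketched as a simple visual-servoing loop: detect the needle, move the tool toward it under visual feedback, then trigger a grasp once within tolerance. Every callable below is a hypothetical placeholder for the paper's detector, robot interface and planner, and the proportional control law is an illustrative choice.

    # Toy sketch of detect -> approach -> grasp; all callables are
    # hypothetical placeholders, not the authors' API.
    import numpy as np

    def approach_and_grasp(detect_needle, get_tool_pos, move_tool, grasp,
                           gain=0.4, tol_mm=2.0, max_iters=200):
        for _ in range(max_iters):
            error = detect_needle() - get_tool_pos()  # 3D error from visual feedback
            if np.linalg.norm(error) < tol_mm:
                grasp()                               # grasping phase (planned path)
                return True
            move_tool(gain * error)                   # proportional approach step
        return False                                  # did not converge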
Gesture Recognition in Robotic Surgery: a Review
OBJECTIVE: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines the open questions and future research directions.
METHODS: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time series analysis and data modelling.
RESULTS: A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly less well than supervised approaches.
CONCLUSION: The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include the detection and forecasting of gesture-specific errors and anomalies.
SIGNIFICANCE: This paper is a comprehensive and structured analysis of surgical gesture recognition methods aiming to summarize the status of this rapidly evolving field.
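As one concrete illustration of the supervised temporal models the review covers, the sketch below defines a small temporal convolutional network mapping a kinematic feature sequence to per-frame gesture labels. The feature and class counts are JIGSAWS-like assumptions, not values prescribed by the review.

    # Minimal sketch of a per-frame gesture classifier over kinematics.
    import torch
    import torch.nn as nn

    class GestureTCN(nn.Module):
        def __init__(self, in_feats=76, n_classes=15, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_feats, hidden, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=5, padding=4, dilation=2),
                nn.ReLU(),
                nn.Conv1d(hidden, n_classes, kernel_size=1))  # per-frame logits

        def forward(self, x):              # x: (batch, time, features)
            return self.net(x.transpose(1, 2)).transpose(1, 2)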
Surgical Tool Segmentation with Pose-Informed Morphological Polar Transform of Endoscopic Images
This paper presents a tool-pose-informed variable-center morphological polar transform to enhance segmentation of endoscopic images. The representation, while not lossless, transforms rigid tool shapes into morphologies that are consistently more rectangular and may be more amenable to image segmentation networks. The proposed method was evaluated using the U-Net convolutional neural network, with the input endoscopic images represented in one of four coordinate formats: (1) the original rectangular image representation; (2) the morphological polar coordinate transform; (3) the proposed variable-center transform about the tool-tip pixel; and (4) the proposed variable-center transform about the tool vanishing-point pixel. Previous work relied on the observations that endoscopic images typically exhibit unused border regions with content in the shape of a circle (since the image sensor is designed to be larger than the image circle to maximize available visual information in the constrained environment) and that the region of interest (ROI) is most often near the endoscopic image center. That work sought an intelligent method for, given an input image, carefully selecting between methods (1) and (2) for the best image segmentation prediction. In this extension, the image-center reference constraint for the polar transformation in method (2) is relaxed via the development of a variable-center morphological transformation. The choice of transform center leads to different spatial distributions of image loss, and the center location can be informed by the robot kinematic model and endoscopic image data. In particular, this work examines the tool tip and the tool vanishing point on the image plane as candidate centers. Experiments were conducted for each of the four image representations using a dataset of 8360 endoscopic images from real sinus surgery. Segmentation performance was evaluated with standard metrics, and some insight into the effects of loss and tool location on performance is provided. Overall, the results are promising, showing that selecting a transform center based on tool shape features using the proposed method can improve segmentation performance.
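The variable-center transform itself is straightforward to prototype: with OpenCV, the polar warp can be taken about any kinematics-informed pixel, such as the tool tip, rather than the image center. A minimal sketch, with an illustrative output size and a radius chosen to cover the whole frame from the selected center:

    # Minimal sketch of a variable-center polar transform via cv2.warpPolar.
    import cv2
    import numpy as np

    def variable_center_polar(image, center_xy, out_size=(512, 512)):
        h, w = image.shape[:2]
        cx, cy = center_xy                 # e.g. tool-tip pixel from kinematics
        # Radius large enough to cover the whole frame from the chosen center.
        corners = np.array([[0, 0], [w, 0], [0, h], [w, h]], dtype=float)
        max_radius = np.linalg.norm(corners - [cx, cy], axis=1).max()
        return cv2.warpPolar(image, out_size, (float(cx), float(cy)),
                             max_radius, cv2.WARP_POLAR_LINEAR)

    # Usage: polar = variable_center_polar(frame, tool_tip_px), then segment.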
Combining Differential Kinematics and Optical Flow for Automatic Labeling of Continuum Robots in Minimally Invasive Surgery
The segmentation of continuum robots in medical images can be of interest for analyzing surgical procedures or for controlling them. However, the automatic segmentation of continuous and flexible shapes is not an easy task. On the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, techniques based on deep learning have shown interesting capabilities but need many manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images which requires no prior on the instrument's visual appearance and no manual annotation of images. The method relies on the combination of kinematic and differential kinematic models of the robot with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, from optical flow and from the robot encoders is optimized using particle swarm optimization, yielding estimated parameters of the pose of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both on benchtop acquisitions and on an in vivo video. The results show the ability of the technique to correctly segment the instruments without a prior, even in challenging conditions. The obtained segmentations can be used for several applications, for instance providing automatic labels for machine learning techniques.
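The stochastic optimization stage can be sketched with a bare-bones particle swarm optimizer over the pose parameters. The cost function passed in stands for the paper's term aggregating image evidence, optical flow and encoder readings; the hyper-parameters and search bounds are illustrative assumptions.

    # Bare-bones particle swarm optimization over pose parameters.
    import numpy as np

    def pso(cost, dim, n_particles=40, iters=100, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pbest_cost.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            costs = np.array([cost(p) for p in x])
            better = costs < pbest_cost
            pbest[better], pbest_cost[better] = x[better], costs[better]
            gbest = pbest[pbest_cost.argmin()]
        return gbest, pbest_cost.min()

    # Usage with a placeholder cost: pose, c = pso(lambda p: float(p @ p), dim=6)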