Methods and Tools for Objective Assessment of Psychomotor Skills in Laparoscopic Surgery
Training and assessment paradigms for laparoscopic surgical skills are evolving from traditional mentor–trainee tutorship towards structured, more objective and safer programs. Accreditation of surgeons requires reaching a consensus on the metrics and tasks used to assess their psychomotor skills. Ongoing development of tracking systems and software solutions has allowed for the expansion of novel training and assessment means in laparoscopy. The current challenge is to adapt these systems for inclusion within training programs and to exploit their possibilities for evaluation purposes. This paper describes the state of the art in research on measuring and assessing psychomotor laparoscopic skills. It gives an overview of tracking systems as well as of the metrics and the advanced statistical and machine learning techniques employed for evaluation purposes. The latter have the potential to serve as an aid in deciding on a surgeon's competence level, an important aspect when the accreditation of surgeons in particular, and patient safety in general, are considered. The prospects of these methods and tools make them complementary means for the assessment of surgical motor skills, especially in the early stages of training. Successful examples such as the Fundamentals of Laparoscopic Surgery should help drive a paradigm change towards structured curricula based on objective parameters. These may improve the accreditation of new surgeons, as well as optimize their already overloaded training schedules.
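To make the notion of objective motion metrics concrete, the following minimal sketch computes two psychomotor measures commonly derived from tracked instrument-tip positions: total path length and mean tip speed. The data, function names and sampling interval are illustrative assumptions, not part of any specific assessment system discussed above.

```python
import numpy as np

def path_length(positions):
    """Total 3-D distance travelled by the instrument tip (mm)."""
    steps = np.diff(positions, axis=0)           # displacement between samples
    return float(np.linalg.norm(steps, axis=1).sum())

def mean_speed(positions, dt):
    """Average tip speed over the recording (mm/s), given sample period dt."""
    return path_length(positions) / (dt * (len(positions) - 1))

# A straight 10 mm move sampled at 5 points, 0.1 s apart (hypothetical data).
tip = np.array([[0, 0, 0], [2.5, 0, 0], [5, 0, 0], [7.5, 0, 0], [10, 0, 0]], float)
print(path_length(tip))      # 10.0
print(mean_speed(tip, 0.1))  # 25.0
```

Shorter paths and steadier speeds for the same task are typically read as greater economy of motion, one of the objective parameters such curricula rely on.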
Comparative evaluation of instrument segmentation and tracking methods in minimally invasive surgery
Intraoperative segmentation and tracking of minimally invasive instruments is a prerequisite for computer- and robot-assisted surgery. Since additional hardware such as tracking systems or the robot encoders is cumbersome and lacks accuracy, surgical vision is evolving as a promising technique for segmenting and tracking the instruments using only the endoscopic images. However, what has been missing so far are common image data sets for consistent evaluation and benchmarking of algorithms against each other. This paper presents a comparative validation study of different vision-based methods for instrument segmentation and tracking in the context of robotic as well as conventional laparoscopic surgery. The contribution of the paper is twofold: we introduce a comprehensive validation data set that was provided to the study participants, and we present the results of the comparative validation study. Based on these results, we conclude that modern deep learning approaches outperform other methods in instrument segmentation tasks, but the results are still not perfect. Furthermore, we show that merging the results from different methods actually significantly increases accuracy in comparison to the best stand-alone method. On the other hand, the results of the instrument tracking task show that this remains an open challenge, especially in difficult scenarios in conventional laparoscopic surgery.
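The finding that fusing the outputs of several methods beats the best stand-alone method can be illustrated with a minimal sketch: a per-pixel majority vote over binary instrument masks, scored with the Dice coefficient. The masks and helper names below are hypothetical, intended only to make the evaluation procedure concrete.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def fuse_majority(masks):
    """Pixel-wise strict majority vote over binary masks from several methods."""
    stack = np.stack(masks).astype(int)
    return stack.sum(axis=0) * 2 > len(masks)

# Hypothetical ground truth and three imperfect single-method predictions.
gt = np.array([[1, 1, 0], [0, 1, 0]])
m1 = np.array([[1, 1, 0], [0, 0, 0]])  # misses one instrument pixel
m2 = np.array([[1, 1, 1], [0, 1, 0]])  # one false positive
m3 = np.array([[1, 0, 0], [0, 1, 1]])  # one miss, one false positive
fused = fuse_majority([m1, m2, m3])
print(dice(m2, gt))     # best stand-alone: ~0.857
print(dice(fused, gt))  # fused: 1.0
```

Here the vote cancels the uncorrelated errors of the individual masks, mirroring the study's observation that fusion outperforms the best single method.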
ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools
Real-time tool segmentation from endoscopic videos is an essential part of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science. We propose two novel deep learning architectures for automatic segmentation of non-rigid surgical instruments. Both methods take advantage of automated deep-learning-based multi-scale feature extraction while trying to maintain an accurate segmentation quality at all resolutions. The two proposed methods encode the multi-scale constraint inside the network architecture: the first enforces it by cascaded aggregation of predictions, and the second does so by means of a holistically-nested architecture where the loss at each scale is taken into account during optimization. As the proposed methods are intended for real-time semantic labeling, both have a reduced number of parameters. We propose the use of parametric rectified linear units for semantic labeling in these small architectures to increase the regularization ability of the design and maintain segmentation accuracy without overfitting the training sets. We compare the proposed architectures against state-of-the-art fully convolutional networks. We validate our methods using existing benchmark datasets, including ex vivo cases with phantom tissue and different robotic surgical instruments present in the scene. Our results show a statistically significant improvement in Dice Similarity Coefficient over previous instrument segmentation methods. We analyze our design choices and discuss the key drivers for improving accuracy.
Comment: Paper accepted at IROS 201
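The parametric rectified linear unit (PReLU) the abstract relies on is a standard activation: identity for positive inputs, a learnable slope for non-positive ones. A minimal NumPy sketch follows; the slope value 0.25 is an illustrative choice, since in a trained network the slope is a learned per-channel parameter.

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: x for x > 0, a * x otherwise (a is learned in practice)."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x, 0.25))  # [-0.5  -0.125  0.  1.  3.]
```

Unlike a plain ReLU, negative inputs still carry a (scaled) gradient, which is one way a small architecture can retain expressiveness without adding many parameters.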
Artificial intelligence and automation in endoscopy and surgery
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient’s anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures, leading to computer-assisted interventions that enable better navigation, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
2017 Robotic Instrument Segmentation Challenge
In mainstream computer vision and machine learning, public datasets such as ImageNet, COCO and KITTI have helped drive enormous improvements by enabling researchers to understand the strengths and limitations of different algorithms via performance comparison. However, this type of approach has had limited translation to problems in robot-assisted surgery, as this field has never established the same level of common datasets and benchmarking methods. In 2015, a sub-challenge was introduced at the EndoVis workshop where a set of robotic images was provided with annotations generated automatically from the robot's forward kinematics. However, this dataset had issues due to limited background variation, a lack of complex motion and inaccuracies in the annotation. In this work we present the results of the 2017 challenge on robotic instrument segmentation, which involved 10 teams participating in binary, parts and type based segmentation of articulated da Vinci robotic instruments.
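For the parts- and type-based segmentation tracks described above, performance is commonly summarized with a per-class intersection-over-union. The sketch below uses hypothetical label maps, and the metric choice is illustrative rather than the challenge's official protocol.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-union for each label id in a multi-class mask."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

# Hypothetical 2x3 label maps: 0 = background, 1 = shaft, 2 = wrist.
gt   = np.array([[0, 1, 1], [2, 2, 0]])
pred = np.array([[0, 1, 2], [2, 2, 0]])
print(per_class_iou(pred, gt, 3))  # [1.0, 0.5, 0.666...]
```

Reporting the score per class (rather than a single pooled number) exposes exactly which instrument parts or types a method confuses.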
Concurrent Segmentation and Localization for Tracking of Surgical Instruments
Real-time instrument tracking is a crucial requirement for various computer-assisted interventions. In order to overcome problems such as specular reflections and motion blur, we propose a novel method that takes advantage of the interdependency between localization and segmentation of the surgical tool. In particular, we reformulate 2D instrument pose estimation as heatmap regression and thereby enable a concurrent, robust and near real-time regression of both tasks via deep learning. As demonstrated by our experimental results, this modeling leads to significantly better performance than directly regressing the tool position, and allows our method to outperform the state of the art on a Retinal Microsurgery benchmark and the MICCAI EndoVis Challenge 2015.
Comment: I. Laina and N. Rieke contributed equally to this work. Accepted to MICCAI 201
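The heatmap-regression reformulation can be sketched as follows: the 2D tool position is encoded as a Gaussian heatmap (the target a network would regress), and decoded back by taking the argmax. The rendering and decoding helpers below are an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def make_heatmap(shape, center, sigma):
    """Render a 2-D Gaussian centred on the instrument-tip coordinate (x, y)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode(heatmap):
    """Recover the (x, y) tool position as the heatmap's argmax."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return x, y

hm = make_heatmap((64, 64), center=(40, 12), sigma=3.0)
print(decode(hm))  # (40, 12)
```

Compared with regressing raw coordinates, the heatmap target spreads the supervision over a neighbourhood of pixels, which is one reason this formulation tends to be more robust to blur and reflections.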
Vision-based and marker-less surgical tool detection and tracking: a review of the literature
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome the challenges arising from the indirect eye-to-hand coordination that MIS imposes, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature on vision-based and marker-less surgical tool detection. The paper makes three primary contributions: (1) identification and analysis of the data-sets used for developing and testing detection algorithms; (2) an in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings; and (3) an analysis of the validation techniques employed to obtain detection performance results and to establish comparisons between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords “surgical tool detection”, “surgical tool tracking”, “surgical instrument detection” and “surgical instrument tracking”, limiting results to the years 2000 to 2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.