Temporal Segmentation of Surgical Sub-tasks through Deep Learning with Multiple Data Sources
Many tasks in robot-assisted surgeries (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires a real-time estimation of the states in the FSMs. The objective of this work is to estimate the current state of the surgical task based on the actions performed or events occurring as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including the Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a superior frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
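The fusion of kinematics, vision, and event sources can be illustrated with a minimal late-fusion sketch: per-source state scores for one frame are combined with fixed weights and the highest-scoring FSM state wins. This is illustrative only — Fusion-KVE learns the multi-source representation jointly, and the state names, score vectors, and weights below are hypothetical.

```python
# Hypothetical late-fusion sketch for frame-wise surgical state
# estimation. Each source (kinematics, vision, events) provides a
# score per FSM state; scores are averaged with fixed weights and
# the argmax state is returned. Not the actual Fusion-KVE model.

STATES = ["pick_needle", "insert_needle", "pull_suture", "idle"]

def fuse_frame(kin_scores, vis_scores, evt_scores,
               weights=(0.4, 0.4, 0.2)):
    """Weighted late fusion of per-source state scores for one frame."""
    fused = [
        weights[0] * k + weights[1] * v + weights[2] * e
        for k, v, e in zip(kin_scores, vis_scores, evt_scores)
    ]
    return STATES[max(range(len(fused)), key=fused.__getitem__)]

# One frame where vision and events agree on "pull_suture" while
# kinematics weakly favours "pick_needle":
kin = [0.5, 0.2, 0.2, 0.1]
vis = [0.1, 0.1, 0.7, 0.1]
evt = [0.0, 0.1, 0.8, 0.1]
print(fuse_frame(kin, vis, evt))  # → pull_suture
```

Weighting sources differently per state is one way such a model can exploit the observation above that sources differ in strength across levels of granularity.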
Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease
We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.

Comment: Presented at the Deep Learning in Medical Image Analysis Workshop, MICCAI 201
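The recursive evolution idea — start from an incomplete segmentation and repeatedly apply a "next step" update — can be sketched in one dimension. The update rule below (grow the mask into bright neighbouring pixels) is a hand-written stand-in for the learned recurrent network, and all names and values are illustrative.

```python
# Toy 1-D sketch of iterative segmentation refinement: each step
# proposes the next segmentation from the image and the current mask.
# The paper learns this evolution as a recurrent neural network; the
# hand-coded region-growing update here is illustrative only.

def evolve_step(image, mask, threshold=0.5):
    """One refinement step: add bright pixels adjacent to the mask."""
    new = list(mask)
    for i, on in enumerate(mask):
        if not on:
            continue
        for j in (i - 1, i + 1):
            if 0 <= j < len(image) and image[j] > threshold:
                new[j] = 1
    return new

def segment(image, seed, steps=5):
    """Unrolled 'recurrent' evolution from an incomplete seed mask."""
    mask = seed
    for _ in range(steps):
        mask = evolve_step(image, mask)
    return mask

image = [0.1, 0.8, 0.9, 0.7, 0.2, 0.9]   # bright region at indices 1-3
seed  = [0, 0, 1, 0, 0, 0]               # incomplete initial mask
print(segment(image, seed))  # → [0, 1, 1, 1, 0, 0]
```

Note that the isolated bright pixel at index 5 is never reached: the evolution only extends the existing segmentation, which mirrors how intermediate steps move an inaccurate input toward the final answer rather than re-segmenting from scratch.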