Label-based Optimization of Dense Disparity Estimation for Robotic Single Incision Abdominal Surgery
Minimally invasive surgical techniques have led to novel approaches such as Single Incision Laparoscopic Surgery (SILS), which reduces post-operative infections and patient recovery time, improving surgical outcomes. However, the new techniques also pose new challenges to surgeons: during SILS, visualization of the surgical field is limited by the endoscope field of view, and access to the target area is constrained by the fact that all instruments have to be inserted through a single port.
In this context, intra-operative navigation and augmented reality based on pre-operative images have the potential to enhance SILS procedures by providing the information necessary to increase the intervention accuracy and safety. Problems arise when structures of interest change their pose or deform with respect to pre-operative planning, as usually happens in soft tissue abdominal surgery. This requires online estimation of the deformations to correct the pre-operative plan, which can be done, for example, through methods of depth estimation from stereo endoscopic images (3D reconstruction). The denser the reconstruction, the more accurate the deformation identification can be.
This work presents an algorithm for 3D reconstruction of soft tissue, focusing on the refinement of the disparity map in order to obtain an accurate and dense point map. This algorithm is part of an assistive system for intra-operative guidance and safety supervision in robotic abdominal SILS.
Results show that, compared with state-of-the-art CPU implementations, our method yields a 24% higher percentage of valid pixels while providing comparable accuracy. Future research will focus on a real-time implementation of the proposed algorithm, potentially based on a hybrid CPU-GPU processing framework.
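The abstract does not describe the disparity algorithm itself. As a rough illustration of the principle behind disparity estimation from a rectified stereo pair (not the authors' refined method; the function name and parameters are illustrative), a naive sum-of-absolute-differences block-matching sketch could look like:

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=16, win=3):
    """Naive SAD block matching on rectified grayscale images: for each
    pixel in the left image, search along the same row of the right image
    for the window with the lowest sum of absolute differences."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    pad = win // 2
    L = np.pad(left.astype(np.float32), pad, mode="edge")
    R = np.pad(right.astype(np.float32), pad, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real pipelines add cost aggregation, left-right consistency checks, and sub-pixel refinement, which is where the "refinement of the disparity map" mentioned above comes in.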
Virtual Assistive System for Robotic Single Incision Laparoscopic Surgery
Single Incision Laparoscopic Surgery (SILS) reduces the trauma of large wounds, decreasing post-operative infections, but introduces technical difficulties for the surgeon, who has to deal with at least three instruments in a single incision. These drawbacks can be mitigated by introducing robotic arms inside the abdominal cavity, but difficulties remain in visualizing the surgical field, which is limited by the endoscope field of view.
This work aims at developing a system that improves the information available to the surgeon and enhances vision during a robotic SILS. In the pre-operative phase, segmentation and surface rendering of the organs allow the surgeon to plan the surgery. During the intra-operative phase, the run-time information (tool and endoscope poses) and the pre-operative information (3D models of the organs) are combined in a virtual environment. A point-based rigid registration of the virtual abdomen onto the real patient creates a connection between reality and virtuality, and the camera-image plane calibration makes the pose of the endoscopic view known at run time.
The results show that a small set of 4 points (the minimal number that would be used in a real procedure), used for both the camera-image plane calibration and the registration between the real and virtual models of the abdomen, is enough to provide calibration/registration accuracy within the requirements.
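The point-based rigid registration step above is commonly solved in closed form with the Kabsch/SVD method; the abstract does not state which solver the authors use, so the following is a minimal sketch under that assumption (names are illustrative):

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) such that Q_i ~ R @ P_i + t,
    computed with the Kabsch/SVD method. P, Q are (n, 3) arrays of
    corresponding points (e.g. the 4 fiducials mentioned in the text)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

With 4 non-coplanar points, as in the procedure described above, the transform is fully determined and the residual can serve as the registration-accuracy measure.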
Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy
Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability, and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: Results showed that the confidence measure had a significant influence on the classification accuracy, and MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data from our experiments will be released as the first in vivo MI dataset upon publication of this paper.
Comment: 7 pages, 6 images, 2 tables
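The abstract does not specify the exact dispersion measure used to derive confidence from the class probabilities; one common choice, assumed here purely for illustration, is one minus the normalized Shannon entropy:

```python
import numpy as np

def classification_confidence(probs):
    """Confidence from the dispersion of class probabilities:
    1 - normalized Shannon entropy. A peaked distribution (low
    dispersion) yields a value near 1; a uniform one near 0."""
    p = np.asarray(probs, dtype=np.float64)
    p = p / p.sum()                          # normalize to a distribution
    eps = 1e-12                              # avoid log(0)
    H = -(p * np.log(p + eps)).sum()
    H_max = np.log(len(p))                   # entropy of the uniform case
    return 1.0 - H / H_max
```

Tagging decisions could then be kept only when this value exceeds a threshold, which is the mechanism by which a confidence measure can raise mean accuracy as reported above.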