Automatic Workflow for Narrow-Band Laryngeal Video Stitching
In narrow band (NB) laryngeal endoscopy, the clinician usually positions the endoscope near the tissue for a correct inspection of possible vascular pattern alterations, indicative of laryngeal malignancies. The video is usually reviewed many times to refine the diagnosis, resulting in loss of time since the salient frames of the video are mixed with blurred, noisy, and redundant frames caused by the endoscope movements. The aim of this work is to provide the clinician with a single larynx panorama, obtained through an automatic frame selection strategy that discards non-informative frames. Anisotropic diffusion filtering was exploited to lower the noise level while encouraging the selection of meaningful image features, and a feature-based stitching approach was carried out to generate the panorama. The frame selection strategy, tested on six pathological NB endoscopic videos, was compared with standard strategies, such as uniform and random sampling, showing higher performance of the subsequent stitching procedure, both visually, in terms of vascular structure preservation, and numerically, through a blur estimation metric.
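A minimal Python/OpenCV sketch of such a selection-and-stitching pipeline is given below; the Laplacian-variance blur score, the edge-preserving filter standing in for the paper's anisotropic diffusion, and all thresholds are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a blur-aware frame selection and stitching pipeline
# (illustrative only; thresholds and filters are assumptions, not the
# authors' exact workflow).
import cv2

def blur_score(frame):
    """Variance of the Laplacian: low values indicate blurred frames."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_frames(video_path, blur_threshold=60.0, step=5):
    """Keep every `step`-th frame whose sharpness exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    selected, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0 and blur_score(frame) > blur_threshold:
            # Edge-preserving smoothing as a stand-in for anisotropic diffusion.
            selected.append(cv2.edgePreservingFilter(frame))
        idx += 1
    cap.release()
    return selected

def stitch(frames):
    """Feature-based stitching with OpenCV's high-level Stitcher."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, panorama = stitcher.stitch(frames)
    return panorama if status == cv2.Stitcher_OK else None
```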
Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery
Purpose: Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon’s maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces. It can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows more accurate deformation identification and facilitates the registration process.
Methods: Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images.
Results: The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure (1) increases the number of reconstructed points by up to 43% and (2) does not significantly affect the accuracy of the 3D reconstructions.
Conclusion: Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their real-time applicability, but it can be greatly reduced with a GPU implementation.
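For illustration, the sketch below shows how block-matching disparity estimation can be combined with SLIC super-pixel hole filling in Python (OpenCV and scikit-image); the median-based filling rule and all parameters are assumptions, not the paper's exact Method 1 or Method 2.

```python
# Minimal sketch of block-matching disparity estimation with super-pixel
# hole filling (illustrative; parameters and the median-based filling rule
# are assumptions, not the paper's exact methods).
import cv2
import numpy as np
from skimage.segmentation import slic

def disparity_block_matching(left_bgr, right_bgr, num_disp=64, block=15):
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels
    disp[disp <= 0] = np.nan  # mark invalid matches as holes
    return disp

def refine_with_superpixels(left_bgr, disp, n_segments=800):
    """Fill disparity holes with the median disparity of their super-pixel."""
    labels = slic(left_bgr, n_segments=n_segments, compactness=10, start_label=0)
    refined = disp.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        valid = refined[mask]
        valid = valid[~np.isnan(valid)]
        if valid.size:
            hole = mask & np.isnan(refined)
            refined[hole] = np.median(valid)
    return refined
```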
Long Term Safety Area Tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery
Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications can still affect the outcome of the procedure, such as intra-operative bleeding. One of the causes is accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent this kind of injury would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long-term robustness, (ii) adaptation to environment/object variation and (iii) real-time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Area, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework that combines an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) SA out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur and (viii) smoke. A Bayesian inference-based approach is used to detect failure of the tracker, based on online context information. A Model Update Strategy (MUpS) is also proposed to improve SA re-detection after failures, taking into account changes in the appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets, representing ex-vivo organs and in-vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time (≃ 4 min, containing the aforementioned events) with high precision (0.7) and recall (0.8) values, and with a recovery time after a failure between 1 and 8 frames in the worst case.
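The sketch below illustrates the general idea of combining optical-flow tracking of a safety area with a failure check and template-based re-detection; the simple lost-point heuristic and matching threshold stand in for the paper's Bayesian failure detection and MUpS and are illustrative assumptions only.

```python
# Minimal sketch of safety-area tracking with a simple failure check and
# template-based re-detection (illustrative; the Bayesian failure detection
# and Model Update Strategy of the paper are replaced by simple heuristics).
import cv2
import numpy as np

class SafetyAreaTracker:
    def __init__(self, first_gray, roi):
        x, y, w, h = roi
        self.template = first_gray[y:y + h, x:x + w].copy()
        self.pts = cv2.goodFeaturesToTrack(self.template, 100, 0.01, 5)
        self.pts = self.pts.reshape(-1, 2) + np.float32([x, y])
        self.prev = first_gray

    def update(self, gray):
        """Track SA feature points frame to frame; re-detect on failure."""
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            self.prev, gray, self.pts.reshape(-1, 1, 2), None)
        nxt, status = nxt.reshape(-1, 2), status.ravel().astype(bool)
        if 1.0 - status.mean() > 0.5:      # crude failure detection
            self.redetect(gray)
        else:
            self.pts = nxt[status]
        self.prev = gray
        return self.pts

    def redetect(self, gray):
        """Re-locate the safety area by template matching after a failure."""
        res = cv2.matchTemplate(gray, self.template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(res)
        if score > 0.6:                    # assumed confidence threshold
            h, w = self.template.shape
            corners = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], 100, 0.01, 5)
            if corners is not None:
                self.pts = corners.reshape(-1, 2) + np.float32([x, y])
```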
A novel affordable user interface for robotic surgery training: design, development and usability study
Introduction: The use of robotic systems in the surgical domain has become groundbreaking for patients and surgeons over the last decades. While the annual number of robotic surgical procedures continues to increase rapidly, it is essential to provide the surgeon with innovative training courses along with the standard specialization path. To this end, simulators play a fundamental role. Currently, the high cost of the leading VR simulators limits their accessibility to educational institutions. The challenge lies in balancing high-fidelity simulation with cost-effectiveness; however, few cost-effective options exist for robotic surgery training.
Methods: This paper proposes the design, development and user-centered usability study of an affordable user interface to control a surgical robot simulator. It consists of a cart equipped with two haptic interfaces, a VR visor and two pedals. The simulations were created using Unity, which offers versatility for expanding the simulator to more complex scenes. Intuitive teleoperation control of the simulated robotic instruments is achieved through a high-level control strategy.
Results and Discussion: Its affordability and resemblance to real surgeon consoles make it ideal for implementing robotic surgery training programs in medical schools, enhancing accessibility to a broader audience. This is demonstrated by the results of a usability study involving expert surgeons who use surgical robots regularly, expert surgeons without robotic surgery experience, and a control group. The results of the study, which was based on a traditional Peg-board exercise and a Camera Control task, demonstrate the simulator’s high usability and intuitive control across diverse user groups, including those with limited experience. This offers evidence that this affordable system is a promising solution for expanding robotic surgery training.
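As an illustration of the high-level teleoperation idea, the sketch below maps clutched, motion-scaled increments of a haptic device position onto a simulated instrument; the scaling factor and clutch logic are assumptions, not the simulator's actual control strategy.

```python
# Minimal sketch of a clutched, motion-scaled teleoperation mapping from a
# haptic device pose to a simulated instrument pose (illustrative; the
# scaling factor and clutch logic are assumptions).
import numpy as np

class TeleopMapping:
    def __init__(self, motion_scale=0.3):
        self.scale = motion_scale          # master-to-slave motion scaling
        self.prev_master = None            # last master position (clutch reference)
        self.instrument = np.zeros(3)      # simulated instrument tip position [m]

    def update(self, master_pos, clutch_pressed):
        """Apply scaled master increments to the instrument while clutched in."""
        master_pos = np.asarray(master_pos, dtype=float)
        if clutch_pressed and self.prev_master is not None:
            self.instrument += self.scale * (master_pos - self.prev_master)
        self.prev_master = master_pos
        return self.instrument

# Example: move the master 10 mm along x with the clutch engaged.
mapping = TeleopMapping(motion_scale=0.3)
mapping.update([0.00, 0.0, 0.0], clutch_pressed=True)
print(mapping.update([0.01, 0.0, 0.0], clutch_pressed=True))  # -> [0.003 0. 0.]
```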
EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding
In abdominal surgery, intraoperative bleeding is one of the major complications that affect the outcome of minimally invasive surgical procedures. One of the causes is accidental damage to arteries or veins, and one of the possible risk factors falls on the surgeon’s skills. This paper presents the development and application of an Enhanced Vision System for Robotic Surgery (EnViSoRS), based on a user-defined Safety Volume (SV) tracking to minimize the risk of intraoperative bleeding. It aims at enhancing the surgeon’s capabilities by providing Augmented Reality (AR) assistance toward the protection of vessels from injury during the execution of surgical procedures with a robot. The core of the framework consists of (i) a hybrid tracking algorithm (LT-SAT tracker) that robustly follows a user-defined Safety Area (SA) in the long term; (ii) a dense soft tissue 3D reconstruction algorithm, necessary for the computation of the SV; (iii) AR features for visualization of the SV to be protected and of a graphical gauge indicating the current distance between the instruments and the reconstructed surface. EnViSoRS was integrated with a commercial robotic surgical system (the dVRK system) for testing and validation. The experiments aimed at demonstrating the accuracy, robustness, performance, and usability of EnViSoRS during the execution of a simulated surgical task on a liver phantom. Results show an overall accuracy in accordance with surgical requirements (<5 mm), and high robustness in the computation of the SV in terms of precision and recall of its identification. The optimization strategy implemented to speed up the computational time is also described and evaluated, providing an AR feature update rate of up to 4 fps, without impacting the real-time visualization of the stereo endoscopic video. Finally, qualitative results regarding the system usability indicate that the proposed system integrates well with the commercial surgical robot and has indeed potential to offer useful assistance during real surgeries.
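The distance gauge concept can be illustrated with a short sketch that queries the minimum distance between the instrument tip and the reconstructed surface; the synthetic surface, units and 5 mm warning threshold are assumptions for illustration only.

```python
# Minimal sketch of the "distance gauge" idea: the minimum distance between
# the instrument tip and the reconstructed tissue surface (illustrative;
# point values and the 5 mm warning threshold are assumptions).
import numpy as np
from scipy.spatial import cKDTree

def instrument_surface_distance(tip_xyz, surface_points):
    """Return the distance (in the surface's units) from the tip to the
    closest reconstructed surface point."""
    tree = cKDTree(surface_points)
    distance, _ = tree.query(np.asarray(tip_xyz, dtype=float))
    return distance

# Example with a synthetic planar surface (z = 0) and a tip 4 mm above it.
surface = np.column_stack([np.random.rand(5000, 2) * 50.0, np.zeros(5000)])  # mm
d = instrument_surface_distance([25.0, 25.0, 4.0], surface)
print(f"tip-to-surface distance: {d:.1f} mm", "WARNING" if d < 5.0 else "ok")
```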
Dual Robot Collaborative System for Autonomous Venous Access Based on Ultrasound and Bioimpedance Sensing Technology
Accurate needle insertion is an important task in many medical procedures. This paper studies the case of an autonomous needle insertion system for central venous access, which is a risky and challenging procedure involving the simultaneous manipulation of an ultrasound probe and of a catheterization needle. The goal of this medical operation is to provide access to a deep central vein, which is a key step in cardiovascular treatments or for the administration of drugs and treatments for cancer or infections. Accordingly, in this work we propose an autonomous dual-arm system for central venous access. The system is composed of two Franka robotic arms that are precisely co-registered and collaborate to achieve accurate needle insertion by combining ultrasound and bioimpedance sensing to ensure robust deep vessel visualization and venipuncture detection. The proposed system performance is evaluated on a phantom trainer through experiments simulating jugular vein access for cardiac catheterization purposes. Quantitative results show the system is able to autonomously scan the area of interest, localize the vein and perform autonomous needle insertion with high accuracy and a placement error below 1.7 mm, proving the potential of the technology for real clinical use.
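A minimal sketch of how bioimpedance sensing can signal venipuncture is given below: blood typically has a much lower impedance than surrounding tissue, so a sustained drop below a threshold can mark vessel entry; the threshold and window length are illustrative assumptions, not the system's calibrated values.

```python
# Minimal sketch of venipuncture detection from a bioimpedance stream
# (threshold and window length are illustrative assumptions).
import numpy as np

def detect_venipuncture(impedance_ohm, threshold_ohm=400.0, window=5):
    """Return the first sample index where `window` consecutive impedance
    readings fall below the threshold, or None if no puncture is detected."""
    below = np.asarray(impedance_ohm) < threshold_ohm
    for i in range(len(below) - window + 1):
        if below[i:i + window].all():
            return i
    return None

# Example: impedance falls as the needle passes from tissue into the vein.
signal = [900, 870, 860, 840, 500, 380, 360, 350, 340, 345]
print(detect_venipuncture(signal))  # -> 5 with the assumed threshold
```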
EndoAbS dataset: Endoscopic abdominal stereo image dataset for benchmarking 3D stereo reconstruction algorithms
Background: 3D reconstruction algorithms are of fundamental importance for augmented reality applications in computer-assisted surgery. However, few datasets of endoscopic stereo images with associated 3D surface references are currently openly available, preventing the proper validation of such algorithms. This work presents a new and rich dataset of endoscopic stereo images (EndoAbS dataset). Methods: The dataset includes (i) endoscopic stereo images of phantom abdominal organs, (ii) a 3D organ surface reference (RF) generated with a laser scanner and (iii) camera calibration parameters. A detailed description of the generation of the phantom and the camera–laser calibration method is also provided. Results: An estimation of the overall error in creation of the dataset is reported (camera–laser calibration error 0.43 mm) and the performance of a 3D reconstruction algorithm is evaluated using EndoAbS, resulting in an accuracy error in accordance with state-of-the-art results (<2 mm). Conclusions: The EndoAbS dataset contributes to increasing the number and variety of openly available datasets of surgical stereo images, including a highly accurate RF and different surgical conditions.
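The sketch below shows one way a user of the dataset might apply the provided calibration parameters to rectify a stereo pair before benchmarking a reconstruction algorithm; the parameter names and handling are assumptions, not the EndoAbS data format.

```python
# Minimal sketch of stereo rectification from calibration parameters
# (illustrative; variable names and layout are assumptions, not the
# EndoAbS file format).
import cv2

def rectify_pair(left, right, K1, D1, K2, D2, R, T):
    """Rectify a stereo pair given intrinsics, distortion and extrinsics."""
    size = left.shape[1], left.shape[0]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    maps_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    maps_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, maps_l[0], maps_l[1], cv2.INTER_LINEAR)
    right_r = cv2.remap(right, maps_r[0], maps_r[1], cv2.INTER_LINEAR)
    # Q reprojects disparities to 3D, so reconstructions can be compared
    # against the laser-scanned surface reference (RF).
    return left_r, right_r, Q
```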
Vision-Guided Autonomous Robotic Electrical Bio-Impedance Scanning System for Abnormal Tissue Detection
In cancer surgery, the correct localization of cancerous tissue is critical for the success of the procedure. Specifically, the surgical goal is to remove the malignant tissue completely while preserving the maximum amount of healthy tissue. Unfortunately, limited technologies are available for intra-operative assessment of tumor margins in Robotic Minimally Invasive Surgery (RMIS). This work presents a vision-based autonomous robotic scanning system for abnormal tissue detection. A stereo vision system, including dense 3D surface reconstruction and tissue tracking, is used to control the scanning motion to compensate for tissue motion and deformation. A high-level control strategy allows a robot arm to autonomously scan the tissue with a monopolar forceps that measures the Electrical Bio-Impedance (EBI) of the tissue. The acquired information is used to discriminate cancerous from healthy tissue in real time, and it is presented as an impedance map overlaid on the endoscopic video using Augmented Reality (AR). The proposed system performance is evaluated on ex-vivo animal tissues, through experiments simulating the identification and localization of hepatic cancer for resection surgery. Quantitative results show that the system is able to autonomously scan the area of interest and provide a tissue characterization with an accuracy above 82%.
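The impedance-map overlay can be sketched as projecting the 3D scan points into the endoscopic image and coloring them by an impedance threshold; the camera parameters, threshold value and color coding below are illustrative assumptions.

```python
# Minimal sketch of overlaying an impedance map on the endoscopic view
# (camera parameters, threshold and color coding are assumptions).
import cv2
import numpy as np

def overlay_impedance_map(frame, points_3d, impedances, K, dist,
                          rvec, tvec, threshold=1500.0):
    """Draw scanned points on the frame: red = below threshold (suspicious),
    green = above threshold (assumed healthy)."""
    pts_2d, _ = cv2.projectPoints(np.asarray(points_3d, dtype=np.float32),
                                  rvec, tvec, K, dist)
    out = frame.copy()
    for (u, v), z in zip(pts_2d.reshape(-1, 2), impedances):
        color = (0, 0, 255) if z < threshold else (0, 255, 0)
        cv2.circle(out, (int(round(u)), int(round(v))), 4, color, -1)
    return out
```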