
    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
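
    As a concrete illustration of one family of optical techniques surveyed here, the sketch below shows passive stereo surface reconstruction with OpenCV: a dense disparity map computed from a rectified stereo pair is reprojected to per-pixel 3D coordinates. The frame file names, matcher settings, and reprojection matrix Q are placeholder assumptions, not values from the paper.

    # Minimal sketch, assuming a calibrated and rectified stereo laparoscope pair.
    import cv2
    import numpy as np

    left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder frames
    right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching yields a dense disparity map of the tissue surface.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point (x16)

    # Q is the 4x4 reprojection matrix from stereo rectification (identity used as a stand-in).
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z) in camera coordinates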

    ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools

    Real-time tool segmentation from endoscopic videos is an essential part of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science. We propose two novel deep learning architectures for automatic segmentation of non-rigid surgical instruments. Both methods take advantage of automated deep-learning-based multi-scale feature extraction while trying to maintain an accurate segmentation quality at all resolutions. The two proposed methods encode the multi-scale constraint inside the network architecture. The first proposed architecture enforces it by cascaded aggregation of predictions, and the second does so by means of a holistically-nested architecture where the loss at each scale is taken into account in the optimization process. As the proposed methods are intended for real-time semantic labeling, both have a reduced number of parameters. We propose the use of parametric rectified linear units for semantic labeling in these small architectures to increase the regularization ability of the design and maintain segmentation accuracy without overfitting the training sets. We compare the proposed architectures against state-of-the-art fully convolutional networks. We validate our methods using existing benchmark datasets, including ex vivo cases with phantom tissue and different robotic surgical instruments present in the scene. Our results show a statistically significant improvement in Dice Similarity Coefficient over previous instrument segmentation methods. We analyze our design choices and discuss the key drivers for improving accuracy. Comment: Paper accepted at IROS 201
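
    The toy PyTorch sketch below illustrates the holistically-nested idea described in the abstract, not the authors' actual ToolNet: a small encoder with PReLU activations produces side outputs at several scales, each upsampled to full resolution and supervised by its own loss term. The layer sizes and loss choice are illustrative assumptions.

    # Toy multi-scale segmentation network with per-scale (holistically-nested) supervision.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleSegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.PReLU())
            self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.PReLU())
            self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.PReLU())
            # One 1x1 "side" classifier per scale (binary tool-vs-background logits).
            self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])

        def forward(self, x):
            feats, h = [], x
            for block in (self.block1, self.block2, self.block3):
                h = block(h)
                feats.append(h)
            # Upsample every side output to input resolution so each scale can be supervised.
            return [F.interpolate(s(f), size=x.shape[2:], mode="bilinear", align_corners=False)
                    for s, f in zip(self.side, feats)]

    def multi_scale_loss(side_logits, target):
        # Sum of per-scale binary cross-entropy losses over all side outputs.
        return sum(F.binary_cross_entropy_with_logits(logits, target) for logits in side_logits)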

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, which include increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities for interventions that are too difficult for, or go beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Comparative evaluation of instrument segmentation and tracking methods in minimally invasive surgery

    Intraoperative segmentation and tracking of minimally invasive instruments is a prerequisite for computer- and robotic-assisted surgery. Since additional hardware such as tracking systems or the robot encoders is cumbersome and lacks accuracy, surgical vision is evolving as a promising technique for segmenting and tracking the instruments using only the endoscopic images. However, what has been missing so far are common image data sets for consistent evaluation and benchmarking of algorithms against each other. This paper presents a comparative validation study of different vision-based methods for instrument segmentation and tracking in the context of robotic as well as conventional laparoscopic surgery. The contribution of the paper is twofold: we introduce a comprehensive validation data set that was provided to the study participants, and we present the results of the comparative validation study. Based on these results, we conclude that modern deep learning approaches outperform other methods in instrument segmentation tasks, but the results are still not perfect. Furthermore, we show that merging results from different methods significantly increases accuracy in comparison to the best stand-alone method. On the other hand, the results of the instrument tracking task show that this is still an open challenge, especially during difficult scenarios in conventional laparoscopic surgery.
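
    Two ingredients of such a validation study lend themselves to a short sketch: the Dice similarity coefficient commonly used to score segmentations, and a simple fusion of binary masks from several methods by per-pixel majority vote. The fusion rule here is an assumption for illustration; the study's actual merging strategy is not reproduced.

    # Dice scoring and majority-vote fusion of binary segmentation masks (illustrative).
    import numpy as np

    def dice(pred, gt, eps=1e-7):
        # Dice similarity coefficient between two binary masks.
        pred, gt = pred.astype(bool), gt.astype(bool)
        return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

    def majority_vote(masks):
        # A pixel is labeled as instrument if more than half of the methods agree.
        stack = np.stack([m.astype(np.uint8) for m in masks], axis=0)
        return stack.sum(axis=0) > (len(masks) / 2)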

    Laparoscopic Video Analysis for Training and Image Guided Surgery

    Automatic analysis of minimally invasive surgical video has the potential to drive new solutions that address the need for safe and reproducible training programs, objective and transparent evaluation systems, and navigation tools to assist surgeons and improve patient safety. Surgical video is an always-available source of information that can be used without any additional intrusive hardware in the operating room. This paper focuses on surgical video analysis methods and techniques. It describes the authors' contributions in two key aspects: the 3D reconstruction of the surgical field, and the segmentation and tracking of tools and organs based on laparoscopic video images. Results are given to illustrate the potential of this field of research, such as the calculation of the 3D position and orientation of a tool from its 2D image, or the translation of a preoperative resection plan into a hepatectomy surgical procedure using the shading information of the image. Further research efforts are required to develop these technologies and harness all the valuable information available in any video-based surgery.
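
    One of the results mentioned, recovering a tool's 3D position and orientation from its 2D image, can be sketched as a perspective-n-point (PnP) solve once 2D-3D correspondences on the instrument are available. The model points, detected image points, and camera intrinsics below are placeholder values, not data from the paper.

    # Tool pose from 2D-3D correspondences via PnP (placeholder geometry and intrinsics).
    import cv2
    import numpy as np

    object_points = np.array([[0, 0, 0], [0, 0, 50], [5, 0, 0], [0, 5, 0],
                              [5, 5, 25], [0, 5, 50]], dtype=np.float32)   # tool model points (mm)
    image_points = np.array([[320, 240], [340, 180], [330, 242], [322, 230],
                             [335, 205], [326, 196]], dtype=np.float32)    # detections in the frame (px)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # camera intrinsics

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)         # 3x3 rotation: tool orientation in camera coordinates
        print("tool position (mm):", tvec.ravel())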

    Tactile Sensing System for Lung Tumour Localization during Minimally Invasive Surgery

    Video-assisted thoracoscopic surgery (VATS) is becoming a prevalent method for lung cancer treatment. However, VATS suffers from the inability to accurately relay haptic information to the surgeon, often making tumour localization difficult. This limitation was addressed by the design of a tactile sensing system (TSS) consisting of a probe with a tactile sensor and interfacing visualization software. In this thesis, TSS performance was tested to determine the feasibility of implementing the system in VATS. This was accomplished through a series of ex vivo experiments in which the tactile sensor was calibrated, the visualization software was modified to present haptic information visually to the user, and TSS performance was compared against human and robotic palpation methods and conventional VATS instruments. It was concluded that the device offers the possibility of providing to the surgeon the haptic information lost during surgery, thereby mitigating one of the current limitations of VATS.
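
    The sensor calibration step described above can be sketched as a least-squares fit mapping raw tactile readings to a reference force. The readings, reference forces, and linear model below are illustrative assumptions, not the thesis' calibration data or procedure.

    # Linear calibration of a tactile sensor against a reference force (made-up data).
    import numpy as np

    raw_readings = np.array([102.0, 210.0, 330.0, 455.0, 580.0])   # sensor output (counts)
    reference_force = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # e.g. from a load cell (N)

    # Fit force = gain * reading + offset with ordinary least squares.
    A = np.vstack([raw_readings, np.ones_like(raw_readings)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, reference_force, rcond=None)

    def reading_to_force(reading):
        # Convert a raw tactile reading to an estimated contact force in newtons.
        return gain * reading + offset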

    Image-Fusion for Biopsy, Intervention, and Surgical Navigation in Urology
