64 research outputs found

    Re-localisation of microscopic lesions in their macroscopic context for surgical instrument guidance

    Optical biopsies interrogate microscopic structure in vivo with a 2mm diameter miniprobe placed in contact with the tissue for detection of lesions and assessment of disease progression. After detection, instruments are guided to the lesion location for a new optical interrogation, for treatment, or for tissue excision during the same or a future examination. As the optical measurement can be considered a point source of information at the surface of the tissue of interest, accurate guidance can be difficult. A method for re-localisation of the sampling point is, therefore, needed. The method presented in this thesis has been developed for biopsy site re-localisation during a surveillance examination of Barrett’s Oesophagus. The biopsy site, invisible macroscopically during conventional endoscopy, is re-localised in the target endoscopic image using epipolar lines derived from its locations given by the tip of the miniprobe visible in a series of reference endoscopic images. A confidence region can be drawn around the re-localised biopsy site from its uncertainty, which is derived analytically. This thesis also presents a method to improve the accuracy of the epipolar lines derived for the biopsy site re-localisation using an electromagnetic tracking system. Simulations and tests on patient data identified the cases in which the analytical uncertainty is a good approximation of the confidence region and showed that biopsy sites can be re-localised with accuracies better than 1mm. Studies on a phantom and on excised porcine tissue demonstrated that an electromagnetic tracking system contributes to more accurate epipolar lines and re-localised biopsy sites for endoscope displacements greater than 5mm. The re-localisation method can be applied to images acquired during different endoscopic examinations, and may also be useful for pulmonary applications. Finally, it can be combined with a Magnetic Resonance scanner that can steer cells to the biopsy site for tissue treatment.
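The geometric step at the heart of this abstract — mapping a biopsy site seen in a reference image to an epipolar line in the target image — can be sketched as follows. The fundamental matrix F below is a toy value chosen so the result is easy to check, not one estimated from real endoscopic data.

```python
import numpy as np

def epipolar_line(F, x_ref):
    """Map a point x_ref = (u, v) in the reference image to its epipolar
    line l = F @ x in the target image. The line is returned as (a, b, c)
    with a*u + b*v + c = 0, normalised so a^2 + b^2 = 1; then
    |a*u + b*v + c| is a point-to-line distance in pixels, which is what
    a confidence region around the re-localised site would be built from."""
    x = np.array([x_ref[0], x_ref[1], 1.0])
    l = F @ x
    return l / np.hypot(l[0], l[1])

# Toy fundamental matrix for a pure horizontal camera translation:
# epipolar lines are then horizontal (v' = v).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
a, b, c = epipolar_line(F, (120.0, 80.0))
# The target-image line is -v + 80 = 0, i.e. the same row as the probe tip.
```

In the thesis the biopsy site is constrained by several such lines, one per reference image in which the miniprobe tip is visible; their intersection region gives the re-localised site.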

    Dense feature correspondence for video-based endoscope three-dimensional motion tracking

    This paper presents an improved video-based endoscope tracking approach based on dense feature correspondence. Current video-based methods often fail to track the endoscope motion due to low-quality endoscopic video images. To address such failure, we use image texture information to boost the tracking performance. A local image descriptor, DAISY, is introduced to efficiently detect dense texture or feature information from endoscopic images. After establishing dense feature correspondences, we compute relative motion parameters between the previous and current endoscopic images via epipolar geometric analysis. By initializing with the relative motion information, we perform 2-D/3-D or video-volume registration and determine the current endoscope pose with six-degrees-of-freedom (6DoF) position and orientation parameters. We evaluate our method on clinical datasets. Experimental results demonstrate that our proposed method outperforms state-of-the-art approaches. The tracking error was significantly reduced from 7.77 mm to 4.78 mm. © 2014 IEEE
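The epipolar geometric analysis the paper relies on can be illustrated with the epipolar constraint itself: for a relative motion (R, t) between two frames, every correspondence must satisfy x2ᵀ E x1 = 0 with E = [t]× R. The rotation and translation values below are hypothetical, chosen only to mimic a small endoscope movement.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative motion between two frames: a 3-degree rotation
# about the y-axis plus a mostly-forward translation.
theta = np.deg2rad(3.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.01, 0.0, 1.0])
E = skew(t) @ R  # essential matrix encoding the relative motion

# Project random 3D points into both (normalised) camera frames and
# verify the epipolar constraint x2^T E x1 = 0 that dense DAISY
# correspondences are assumed to satisfy.
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(100, 3))
x1 = X / X[:, 2:3]                 # first-frame normalised image coords
Xc2 = (R @ X.T).T + t              # points in the second camera frame
x2 = Xc2 / Xc2[:, 2:3]
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
```

In practice the paper goes the other way: E is estimated from the correspondences and then decomposed into R and t to initialise the 2-D/3-D registration.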

    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive and non-ergonomic steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods: A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included.
    Results: Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
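Of the two techniques the review names, lumen centralization is the simpler to sketch: the lumen typically appears as the darkest region of the frame, so the tip is steered towards the centroid of the darkest pixels. This is a minimal illustration under that assumption, not any specific algorithm from the reviewed papers.

```python
import numpy as np

def lumen_steering_offset(gray):
    """Simplest form of lumen centralisation: steer towards the centroid
    of the darkest pixels. Returns (dx, dy) in pixels from the image
    centre; a controller would translate this into bending commands."""
    thresh = np.percentile(gray, 5)       # keep the darkest ~5% of pixels
    ys, xs = np.nonzero(gray <= thresh)
    h, w = gray.shape
    return xs.mean() - w / 2.0, ys.mean() - h / 2.0

# Synthetic frame: bright mucosa with a dark lumen region up and to the left.
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[40:140, 60:160] = 10
dx, dy = lumen_steering_offset(frame)
# dx < 0 and dy < 0: steer up and to the left, towards the lumen.
```

A real implementation would also have to cope with the bubbles, motion blur, and specular highlights the review mentions, typically by temporal filtering and outlier rejection rather than a single-frame threshold.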

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
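Among the optical techniques such a review covers, passive stereo is the most common: after rectification, depth follows from disparity as Z = f·B/d. The focal length and baseline below are assumed, illustrative values for a small stereo laparoscope, not figures from the paper.

```python
import numpy as np

# Assumed intrinsics for a rectified stereo laparoscope (illustrative only):
f = 570.0   # focal length in pixels
B = 4.0     # stereo baseline in mm

def depth_from_disparity(d):
    """Convert disparities (pixels) to depth (mm) via Z = f * B / d.
    Zero or negative disparity means no valid match, so depth is NaN."""
    d = np.asarray(d, dtype=float)
    Z = np.full_like(d, np.nan)
    valid = d > 0
    Z[valid] = f * B / d[valid]
    return Z

disp = np.array([45.6, 22.8, 0.0])   # example disparities in pixels
Z = depth_from_disparity(disp)       # -> [50.0, 100.0, nan] mm
```

The hard part in laparoscopy, as the review discusses, is obtaining reliable disparities at all on smooth, wet, specular tissue — which is why active methods (structured light, time-of-flight) are also reviewed.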

    A Testbed for Design and Performance Evaluation of Visual Localization Technique inside the Small Intestine

    Wireless video capsule endoscopy (VCE) plays an increasingly important role in assisting clinical diagnoses of gastrointestinal (GI) diseases. It provides a non-invasive way to examine the entire small intestine, where other conventional endoscopic instruments can barely reach. Existing examination systems for the VCE cannot track the location of an endoscopic capsule, which prevents the physician from identifying the exact location of the disease. During the examination, which lasts up to eight hours, the video capsule continuously takes images at a frame rate of up to six frames per second, so it is possible to extract motion information from the content of the image sequence. Many attempts have been made to develop computer vision algorithms to detect the motion of the capsule based on the small changes in consecutive video frames and then track the location of the capsule. However, validation of those algorithms has become a challenging topic because conducting experiments on the human body is extremely difficult due to individual differences and legal issues. In this thesis, two validation approaches for motion tracking of the VCE are presented in detail. One approach is to build a physical testbed with a plastic pipe and an endoscopy camera; the other is to build a virtual testbed by creating a three-dimensional virtual small intestine model and simulating the motion of the capsule. Based on the virtual testbed, a physiological factor, intestinal contraction, has been studied in terms of its influence on visual-based localization algorithms, and a geometric model for measuring the amount of contraction is proposed and validated via the virtual testbed. Empirical results have made contributions in support of the performance evaluation of other research on visual-based localization algorithms for VCE.
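One common way to extract motion from consecutive capsule frames, as evaluated on such testbeds, is to estimate the inter-frame translation directly. A classical technique for this (illustrative here, not necessarily the one used in the thesis) is phase correlation, where the normalised cross-power spectrum of a shifted image pair peaks at the shift.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame a to
    frame b via phase correlation."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to negative shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

# Synthetic mucosa-like texture, shifted by a known amount between frames.
rng = np.random.default_rng(1)
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(5, -3), axis=(0, 1))
shift = phase_correlation_shift(frame1, frame2)   # recovers (5, -3)
```

A testbed like the ones described provides exactly what this needs for validation: ground-truth capsule displacement to compare the recovered shifts against, including under simulated intestinal contraction.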

    Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy

    In neurosurgery, as a result of brain shift, the preoperative patient models used as an intraoperative reference change. A meaningful use of the preoperative virtual models during the operation requires a model update. The NEAR project, Neuroendoscopy towards Augmented Reality, describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality and triangulation modules.
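The triangulation module mentioned here can be sketched with the standard linear (DLT) method: each view of a point contributes two rows to a homogeneous system whose null vector is the 3D point. The camera intrinsics and poses below are hypothetical values, not NEAR's calibration.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation from two projection matrices and one
    image correspondence; the 3D point is the null vector of A."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical endoscope poses: identity, and a 10 mm sideways shift.
K = np.array([[500.0, 0.0, 160.0],
              [0.0, 500.0, 120.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

X_true = np.array([5.0, -2.0, 60.0])                 # point in mm
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)              # recovers X_true
```

With the highly distorted lenses NEAR targets, image points would first have to be undistorted using the calibration model before this linear step applies.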

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, beginning with the technology needed to analyze the scene: vision sensors. A novel endoscope for autonomous surgical task execution is presented in the first part of this thesis, combining a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing 3D surfaces at a greater distance than traditional endoscopes allow. Then the problem of hand-eye calibration is tackled, which unites the vision system and the robot in a single reference frame, increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is to have real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering pre-operative information about the intervention to the map obtained from SLAM. Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
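The final improvement the abstract describes — using semantic segmentation to keep dynamic features out of the SLAM map — reduces, at its core, to masking keypoints by segmentation label before tracking. The labels and coordinates below are a toy illustration, not the thesis's actual network output.

```python
import numpy as np

def filter_static_keypoints(keypoints, seg_mask, dynamic_labels):
    """Keep only keypoints that fall on static anatomy. seg_mask is a
    per-pixel label image (e.g. from a semantic segmentation network);
    keypoints landing on labels in dynamic_labels (instruments, moving
    tissue) are discarded before SLAM tracking and mapping."""
    kept = []
    for (u, v) in keypoints:
        if seg_mask[int(round(v)), int(round(u))] not in dynamic_labels:
            kept.append((u, v))
    return kept

# Toy labels: 0 = static tissue, 1 = surgical instrument (dynamic).
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 50:90] = 1                   # instrument region
kps = [(10.0, 10.0), (60.0, 30.0), (70.0, 80.0)]
static_kps = filter_static_keypoints(kps, mask, dynamic_labels={1})
# The keypoint at (60, 30) lies on the instrument and is dropped.
```

In an ORB-SLAM-style pipeline this filter would sit between feature extraction and data association, so that moving instruments neither corrupt the map nor bias the camera pose estimate.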