
    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive and non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. Methods: A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria; of 135 papers analyzed, 26 were ultimately included. Results: Navigation is often based on visual information, i.e., steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, research should focus on low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms that can handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
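    As an illustration of the lumen-centralization technique named in the results above, here is a minimal sketch assuming the lumen appears as the darkest connected region of the frame; the function name, percentile threshold, and blur size are illustrative choices, not values from the reviewed papers.

```python
import cv2
import numpy as np

def lumen_steering_offset(frame_bgr, dark_percentile=5):
    """Estimate a steering offset toward the lumen (illustrative sketch).

    Assumes the lumen is the darkest region in the image, a common
    heuristic in the lumen-centralization literature. The percentile
    threshold is an assumed value, not one from the reviewed papers.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)  # suppress specular noise
    thresh = np.percentile(gray, dark_percentile)
    mask = (gray <= thresh).astype(np.uint8)
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:
        return None  # no dark region found (e.g., wall contact, bubbles)
    cx = moments["m10"] / moments["m00"]
    cy = moments["m01"] / moments["m00"]
    h, w = gray.shape
    # Offset of the lumen centroid from the image center, normalized to
    # [-1, 1]; a controller would bend the tip to drive this toward zero.
    return (2 * cx / w - 1, 2 * cy / h - 1)
```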

    Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    © 2015 American Association of Physicists in Medicine. Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework based on an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use particle swarm optimization, a stochastic evolutionary computation algorithm, to fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by premature convergence and their evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observations to boost the particle swarm optimization, and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, giving the enhanced algorithm its advantageous performance. Results: The experimental results demonstrate that the proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the framework was about 3.0 mm and 5.6°, whereas the previous methods achieved at least 3.9 mm and 7.0°. The average position and orientation smoothness of the method was 1.0 mm and 1.6°, significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The proposed framework greatly reduced the guidance errors from (4.3 mm, 7.8°) to (3.0 mm, 5.6°) compared to state-of-the-art methods.
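    A minimal sketch of the kind of particle-swarm fusion the abstract describes: each particle is a candidate endoscope pose, scored by a fitness that combines agreement with the electromagnetic sensor and image similarity against a CT-derived rendering. All hyperparameters, names, and the fitness shape are illustrative assumptions, not the authors' implementation (which additionally injects the current observation into the swarm and adapts the evolutionary factors).

```python
import numpy as np

def pso_localize(fitness, x0, n_particles=30, iters=50,
                 w=0.7, c1=1.5, c2=1.5, sigma=2.0, rng=None):
    """Generic particle swarm optimizer over endoscope pose (sketch).

    fitness(pose) scores a candidate pose (higher is better), e.g. by
    combining agreement with the EM sensor reading and similarity
    between the endoscopic frame and a virtual rendering from CT --
    the multimodal fusion idea in the abstract. All defaults here are
    assumed values, not the paper's.
    """
    rng = rng or np.random.default_rng()
    x0 = np.asarray(x0, dtype=float)
    dim = x0.size
    pos = x0 + sigma * rng.standard_normal((n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Example fitness fusing modalities (illustrative weights and helpers):
# fitness = lambda pose: -np.linalg.norm(pose[:3] - em_position) \
#                        + 0.5 * image_similarity(render(pose), frame)
```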

    Image Registration to Map Endoscopic Video to Computed Tomography for Head and Neck Radiotherapy Patients

    The purpose of this work was to explore the feasibility of registering endoscopic video to radiotherapy treatment plans for patients with head and neck cancer without physical tracking of the endoscope during the examination. Endoscopy-CT registration would provide a clinical tool that could enhance the treatment planning process and enable new methods for studying the incidence of radiation-related toxicity. Endoscopic video frames were registered to CT by optimizing virtual endoscope placement to maximize the similarity between the frame and the virtual image. Virtual endoscopic images were rendered using a polygonal mesh created by segmenting the airways of the head and neck with a density threshold. The optical properties of the virtual endoscope were matched to a calibrated model of the real endoscope. A novel registration algorithm was developed that takes advantage of physical constraints on the endoscope to effectively search the airways of the head and neck for the desired virtual endoscope coordinates. This algorithm was tested on rigid phantoms with embedded point markers and protruding bolus material. In these tests, the median registration accuracy was 3.0 mm for point measurements and 3.5 mm for surface measurements. The algorithm was also tested on four endoscopic examinations of three patients, in which it achieved a median registration accuracy of 9.9 mm. The uncertainties caused by the non-rigid anatomy of the head and neck and by differences in patient positioning between endoscopic examinations and CT scans were examined by taking repeated measurements after placing the virtual endoscope in surface meshes created from different CT scans. Non-rigid anatomy introduced errors on the order of 1-3 mm; patient positioning had a larger impact, introducing errors on the order of 3.5-4.5 mm. Endoscopy-CT registration in the head and neck is thus possible, but large registration errors were found in patients, and the uncertainty analyses suggest a lower limit of 3-5 mm on achievable accuracy. Further development is required to reach an accuracy suitable for clinical use.
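    The core of the registration described above is an optimization of virtual endoscope placement against an image-similarity metric. A minimal sketch follows, assuming a render_fn that rasterizes the segmented airway mesh with the calibrated endoscope model; normalized cross-correlation and Nelder-Mead are illustrative stand-ins for the similarity metric and the constrained airway search actually developed in the work.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_frame(frame, render_fn, pose0):
    """Find the virtual endoscope pose whose rendering best matches `frame`.

    render_fn(pose) is assumed to render the airway mesh from the
    viewpoint pose = (x, y, z, yaw, pitch, roll) with the calibrated
    endoscope optics. Plain Nelder-Mead is used here purely as an
    illustrative local optimizer; the paper's algorithm additionally
    constrains the search to the airway.
    """
    objective = lambda pose: -ncc(render_fn(pose), frame)
    result = minimize(objective, pose0, method="Nelder-Mead",
                      options={"xatol": 0.5, "fatol": 1e-4})
    return result.x
```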

    Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy

    In neurosurgery, as a result of brain shift, the preoperative patient models used as an intraoperative reference change during the operation. Meaningful use of the preoperative virtual models during the operation therefore requires a model update. The NEAR project, Neuroendoscopy towards Augmented Reality, describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality, and triangulation modules.
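    The NEAR calibration model itself is not reproduced here, but as a rough sketch of calibrating a highly distorted endoscope lens, OpenCV's fisheye (equidistant) model can serve as a stand-in; the helper below assumes checkerboard detections supplied in the shapes cv2.fisheye.calibrate expects.

```python
import cv2
import numpy as np

def calibrate_endoscope(object_points, image_points, image_size):
    """Calibrate a wide-angle endoscope lens (illustrative sketch).

    Uses OpenCV's fisheye model as a stand-in for the NEAR project's
    custom high-distortion model, which this sketch does not reproduce.
    object_points / image_points are per-view checkerboard detections.
    """
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC
             | cv2.fisheye.CALIB_FIX_SKEW)
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        object_points, image_points, image_size, K, D, flags=flags)
    return rms, K, D  # reprojection error, intrinsics, distortion
```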

    Endoscopic Motion Estimation using Video and CT

    Functional Endoscopic Sinus Surgery (FESS) is a surgical procedure that otolaryngologists have adopted to treat sinus diseases. Aiming for accurate treatment and fewer complications, surgeons are usually guided by an endoscopic navigation system when performing the surgery. State-of-the-art navigation systems report a submillimeter positioning error, which significantly reduces intraoperative time and improves surgical outcomes. Navigating an endoscope is similar to Visual Odometry (VO) or Simultaneous Localization and Mapping (SLAM), both of which require estimating camera poses and motion in an unknown environment. Feature-based methods and direct methods are the two common approaches to VO and visual SLAM for motion estimation, but both have drawbacks: feature detection and extraction are usually computationally expensive, while direct methods suffer from local optima. One recent alternative, the semi-direct (or hybrid) method, overcomes these drawbacks by applying the optimization used in direct methods to selected features. In this work, we introduce a novel endoscopic navigation system for FESS that uses both a pre-scanned CT model and 2D endoscope video. The system is able to texture-map the CT model in real time for visualization and to refine the pose estimate of the endoscope from different prior estimates.
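    A minimal sketch of the semi-direct idea mentioned above: direct photometric optimization restricted to sparse selected features. The project helper and data layout are assumptions for illustration; a real system (e.g., SVO-style) would use Gauss-Newton with bilinear interpolation and a coarse-to-fine pyramid.

```python
import numpy as np

def photometric_residuals(pose, points_3d, intensities_ref, frame, project):
    """Residuals for a semi-direct alignment step (illustrative sketch).

    Each 3D point carries the reference intensity observed at its
    feature location; we compare it with the intensity sampled where
    the candidate `pose` reprojects the point into the new frame.
    `project(pose, X)` is an assumed pinhole projection helper.
    """
    residuals = []
    h, w = frame.shape
    for X, i_ref in zip(points_3d, intensities_ref):
        u, v = project(pose, X)
        if 0 <= u < w and 0 <= v < h:
            residuals.append(float(frame[int(v), int(u)]) - i_ref)
    return np.asarray(residuals)
```

    A pose estimate is then obtained by minimizing the squared residuals over the pose parameters, for example with scipy.optimize.least_squares.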

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting with the technology needed to analyze the scene: vision sensors. A novel endoscope for autonomous surgical task execution is presented in the first part of this thesis; it combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM. Once the SLAM algorithm was shown to be usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
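    One straightforward way to realize the dynamic-feature filtering described above is to mask feature detection with a semantic segmentation of moving content. The sketch below assumes an externally produced dynamic_mask (nonzero where instruments or deforming tissue are labeled) and uses ORB as in ORB-SLAM; it illustrates the idea, not the thesis's actual pipeline.

```python
import cv2

def static_orb_features(frame_gray, dynamic_mask, n_features=1000):
    """Detect ORB features restricted to static regions (sketch).

    dynamic_mask: uint8 image from an assumed segmentation network,
    nonzero where content is dynamic. ORB detection is masked to the
    static complement so that dynamic features stay out of tracking.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    static_mask = cv2.bitwise_not(dynamic_mask)  # 255 where static
    keypoints, descriptors = orb.detectAndCompute(frame_gray, static_mask)
    return keypoints, descriptors
```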