
    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed toward wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive, non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods: A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
    Results: Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
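    The lumen-centralization idea mentioned above can be sketched in a few lines: the lumen appears as the darkest region of the endoscopic image, so a controller steers the tip toward the centroid of the dark pixels. This is an illustrative sketch only; the 25% intensity threshold and the synthetic frame are assumptions, not taken from any of the surveyed systems.

```python
import numpy as np

def lumen_center_offset(gray):
    """Steering cue for lumen centralization: the lumen shows up as the
    darkest image region, so steer toward the centroid of the dark pixels.
    The 25% intensity threshold is an illustrative choice."""
    thresh = gray.min() + 0.25 * (gray.max() - gray.min())
    ys, xs = np.nonzero(gray <= thresh)
    h, w = gray.shape
    # Normalized offset of the dark-region centroid from the image center;
    # a controller would bend the endoscope tip to drive this toward zero.
    return (xs.mean() - w / 2) / (w / 2), (ys.mean() - h / 2) / (h / 2)

# Synthetic frame: bright mucosa (200) with a dark lumen patch (10)
# in the upper-left quadrant, so the cue should point up and left.
frame = np.full((200, 200), 200.0)
frame[20:100, 20:100] = 10.0
dx, dy = lumen_center_offset(frame)
```

    A real steering loop would additionally need to reject dark artifacts such as shadows and bubbles before trusting the centroid.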

    Design of a Teleoperated Robotic Bronchoscopy System for Peripheral Pulmonary Lesion Biopsy

    Bronchoscopy with transbronchial biopsy is a minimally invasive and effective method for early lung cancer intervention. Robot-assisted bronchoscopy offers improved precision, spatial flexibility, and a reduced risk of cross-infection. This paper introduces a novel teleoperated robotic bronchoscopy system and a three-stage procedure designed for robot-assisted bronchoscopy. The robotic mechanism enables clinical practice similar to traditional bronchoscopy, augmented by the control of a novel variable-stiffness catheter for tissue sampling. A rapid prototype of the robotic system has been fully developed and validated through in-vivo experiments. The results demonstrate the potential of the proposed robotic bronchoscopy system and variable-stiffness catheter to enhance accuracy and safety during bronchoscopy procedures.

    Real-Time Quantitative Bronchoscopy

    The determination of motion within a sequence of images remains one of the fundamental problems in computer vision after more than 30 years of research. Despite this work, there have been relatively few applications of these techniques to practical problems outside the fields of robotics and video encoding. In this paper, we present continuing work to apply optical flow and egomotion recovery to the problem of measuring and navigating through the airway using a bronchoscope during a standard procedure, without the need for any additional data, localization systems, or other external components. The current implementation uses a number of techniques to provide a range of numerical measurements and estimations to physicians in real time, using standard computer hardware.
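    As a toy illustration of the egomotion idea, forward motion down the airway produces an expanding (diverging) optical flow field, so the mean flow divergence is a crude forward-motion cue. The synthetic radial flow field and the 0.02 expansion rate below are assumptions for demonstration, not the paper's estimator.

```python
import numpy as np

def flow_divergence(u, v):
    """Mean divergence of a dense flow field (u, v). Positive divergence
    means the flow expands outward, the signature of forward camera motion
    down the airway; negative means withdrawal. A simple egomotion cue."""
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    return float((du_dx + dv_dy).mean())

# Synthetic flow expanding radially about the image center, as produced
# by forward motion toward the lumen; expected divergence 0.02 + 0.02.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
u = 0.02 * (xs - w / 2)
v = 0.02 * (ys - h / 2)
div = flow_divergence(u, v)
```

    In practice the flow field would come from a dense optical-flow estimator applied to consecutive bronchoscope frames, and the divergence would be calibrated against known airway geometry to yield metric speed.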

    Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration

    Objective: The use of patient-specific models for surgical simulation requires photorealistic rendering of 3D structure and surface properties. For bronchoscope simulation, this requires augmenting virtual bronchoscope views generated from 3D tomographic data with patient-specific bronchoscope videos. To facilitate matching of video images to the geometry extracted from 3D tomographic data, this paper presents a new pq-space-based 2D/3D registration method for camera pose estimation in bronchoscope tracking.
    Methods: The proposed technique involves the extraction of surface normals for each pixel of the video images by using a linear local shape-from-shading algorithm derived from the unique camera/lighting constraints of the endoscopes. The resultant pq-vectors are then matched to those of the 3D model by differentiation of the z-buffer. A similarity measure based on angular deviations of the pq-vectors is used to provide a robust 2D/3D registration framework. Localization of tissue deformation is considered by assessing the temporal variation of the pq-vectors between subsequent frames.
    Results: The accuracy of the proposed method was assessed by using an electromagnetic tracker and a specially constructed airway phantom. Preliminary in vivo validation of the proposed method was performed on a matched patient bronchoscope video sequence and 3D CT data. Comparison to existing intensity-based techniques was also made.
    Conclusion: The proposed method does not involve explicit feature extraction and is relatively immune to illumination changes. The temporal variation of the pq distribution also permits the identification of localized deformation, which offers an effective way of excluding such areas from the registration process.
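    The angular-deviation similarity can be sketched as follows: lift each (p, q) surface gradient to a 3D normal (-p, -q, 1), normalize it, and average the cosine of the angle between the video-derived and model-derived normals. This is a hedged sketch of the general idea; the exact measure and any weighting used in the paper may differ.

```python
import numpy as np

def pq_similarity(pq_video, pq_model, eps=1e-9):
    """Mean cosine of the angle between per-pixel pq-vectors (surface
    gradients) from the video and from the rendered 3D model. Both inputs
    have shape (H, W, 2); a value near 1 indicates well-aligned normals."""
    def normals(pq):
        # Lift (p, q) gradients to 3D surface normals (-p, -q, 1).
        n = np.concatenate([-pq, np.ones((*pq.shape[:-1], 1))], axis=-1)
        return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
    cos = np.sum(normals(pq_video) * normals(pq_model), axis=-1)
    return float(cos.mean())
```

    A pose optimizer would render the 3D model at a candidate camera pose, differentiate the z-buffer to obtain model pq-vectors, and maximize this similarity against the shape-from-shading pq-vectors of the video frame.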

    BronchoTrack: Airway Lumen Tracking for Branch-Level Bronchoscopic Localization

    Localizing the bronchoscope in real time is essential for ensuring intervention quality. However, most existing methods struggle to balance speed and generalization. To address these challenges, we present BronchoTrack, an innovative real-time framework for accurate branch-level localization, encompassing lumen detection, tracking, and airway association. To achieve real-time performance, we employ a benchmark lightweight detector for efficient lumen detection. We are the first to introduce multi-object tracking to bronchoscopic localization, mitigating temporal confusion in lumen identification caused by rapid bronchoscope movement and complex airway structures. To ensure generalization across patient cases, we propose a training-free detection-airway association method based on a semantic airway graph that encodes the hierarchy of bronchial tree structures. Experiments on nine patient datasets demonstrate BronchoTrack's localization accuracy of 85.64%, while accessing up to the 4th generation of airways. Furthermore, we tested BronchoTrack in an in-vivo animal study using a porcine model, where it successfully localized the bronchoscope into the 8th-generation airway. Experimental evaluation underscores BronchoTrack's real-time performance with both satisfying accuracy and generalization, demonstrating its potential for clinical applications.
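    A minimal, hypothetical sketch of a training-free airway-graph association: each branch of a semantic bronchial-tree graph lists its child bronchi, and detected lumens are assigned to children by their left-to-right image order. The graph contents, branch names, and the ordering rule below are illustrative assumptions, not BronchoTrack's actual method.

```python
# Hypothetical semantic airway graph: each branch maps to its child
# bronchi (a tiny slice of the bronchial tree, names illustrative).
AIRWAY_GRAPH = {
    "trachea": ["left_main", "right_main"],
    "left_main": ["LUL", "LLL"],
    "right_main": ["RUL", "bronchus_intermedius"],
}

def associate(current_branch, lumen_centroids_x):
    """Training-free association sketch: map detected lumens (given by
    the x-coordinates of their centroids) to the children of the current
    branch by sorted left-to-right order. Returns {detection index ->
    branch name}; surplus detections beyond the known children are dropped."""
    children = AIRWAY_GRAPH.get(current_branch, [])
    order = sorted(range(len(lumen_centroids_x)),
                   key=lambda i: lumen_centroids_x[i])
    return {i: children[rank] for rank, i in enumerate(order)
            if rank < len(children)}
```

    The real system additionally tracks lumens across frames, so an association survives rapid scope motion instead of being recomputed independently per frame.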

    Visual Odometry for Quantitative Bronchoscopy Using Optical Flow

    Optical flow, the extraction of motion from a sequence of images or a video stream, has been extensively researched since the late 1970s, but has been applied to the solution of few practical problems. To date, the main applications have been within fields such as robotics, motion compensation in video, and 3D reconstruction. In this paper we present the initial stages of a project to extract valuable information on the size and structure of the lungs using only the visual information provided by a bronchoscope during a typical procedure. The initial implementation provides a real-time estimation of the motion of the bronchoscope through the patient's airway, as well as a simple means of estimating the cross-sectional area of the airway.
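    A minimal stand-in for the flow-based motion estimate is phase correlation, which recovers the dominant integer translation between consecutive frames from their cross-power spectrum. This is an illustrative sketch of frame-to-frame motion recovery, not the authors' implementation.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover the dominant integer (dy, dx) translation of frame b
    relative to frame a via phase correlation: normalize the cross-power
    spectrum and take the peak of its inverse FFT."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold wrap-around peaks back to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift a random frame by (3, 5) pixels and recover it.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))
shift = phase_correlation_shift(frame_a, frame_b)
```

    Real bronchoscope motion is dominated by expansion along the view axis rather than pure translation, so a full visual-odometry pipeline uses dense flow and an egomotion model rather than a single global shift.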

    Observation-driven adaptive differential evolution and its application to accurate and smooth bronchoscope three-dimensional motion tracking

    This paper proposes an observation-driven adaptive differential evolution algorithm that fuses bronchoscopic video sequences, electromagnetic sensor measurements, and computed tomography images for accurate and smooth bronchoscope three-dimensional motion tracking. Currently, an electromagnetic tracker with a position sensor fixed at the bronchoscope tip is commonly used to estimate bronchoscope movements. The large tracking error from directly using sensor measurements, which may be heavily deteriorated by patient respiratory motion and the magnetic field distortion of the tracker, limits clinical applications. How to effectively use sensor measurements for precise and stable bronchoscope electromagnetic tracking remains challenging. We exploit an observation-driven adaptive differential evolution framework to address this challenge and boost tracking accuracy and smoothness. Two points distinguish our framework from other adaptive differential evolution methods: (1) the current observation, including sensor measurements and bronchoscopic video images, is used in the mutation equation and the fitness computation, respectively; and (2) the mutation factor and the crossover rate are determined adaptively on the basis of the current image observation. The experimental results demonstrate that our framework provides much more accurate and smooth bronchoscope tracking than state-of-the-art methods. Our approach reduces the tracking error from 3.96 mm to 2.89 mm, improves the tracking smoothness from 4.08 mm to 1.62 mm, and increases the visual quality from 0.707 to 0.741.
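    The adaptive-DE idea can be sketched as a standard DE/rand/1/bin loop whose mutation factor F is driven by the current best fitness, standing in for the paper's image-observation-driven adaptation. The quadratic fitness (a stand-in for the real image-similarity term), the adaptation rule for F, the fixed crossover rate, and all constants below are illustrative assumptions.

```python
import numpy as np

def de_track(fitness, dim=3, pop_size=20, iters=150, seed=0):
    """DE/rand/1/bin sketch with an observation-driven mutation factor:
    F shrinks toward 0.4 as the best fitness improves, so steps get
    smaller as the estimate locks on. (For brevity the donor indices are
    not excluded from matching i, unlike textbook DE.)"""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        m = fit.min()
        F = 0.4 + 0.5 * m / (1.0 + m)   # adaptive mutation factor
        CR = 0.9                        # fixed here; adapted in the paper
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            f = fitness(trial)
            if f < fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], float(fit.min())

# Toy "tracking" problem: recover a hidden 3D position from a quadratic
# error, standing in for the image-based fitness of a bronchoscope pose.
target = np.array([1.0, -2.0, 0.5])
best, err = de_track(lambda x: float(np.sum((x - target) ** 2)))
```

    In the paper, each individual encodes a candidate 6-DoF bronchoscope pose, the mutation is seeded with the electromagnetic sensor measurement, and the fitness compares the real video frame against a virtual rendering at that pose.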