
    Semiautomated Skeletonization of the Pulmonary Arterial Tree in Micro-CT Images

    We present a simple and robust approach that uses planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Two three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel, and the computer replaces the first point with this newly calculated point. A second back-projection is then calculated around the second point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point; the darkest pixel within this second reconstruction is determined, and the computer replaces the second point with its XYZ coordinates. Following a vector based on a moving average of previously determined three-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full width at half maximum algorithm. On all subsequent vessels the process works the same way, except that at each point the distances between the current point and all previously determined points along other vessels are computed; if a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
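    The abstract describes a concrete tracking loop, so the following is a minimal Python sketch of the per-point operations it names (darkest-pixel center refinement, moving-average stepping direction, full-width-at-half-maximum diameter estimation, and the distance-based branch test). All function names, array layouts, and the assumption that the back-projection sub-image and intensity profile are already available as NumPy arrays are illustrative, not taken from the paper.

```python
import numpy as np

def refine_center(backprojection, plane_origin, plane_u, plane_v):
    """Replace a selected point with the darkest pixel of the unfiltered
    back-projection computed orthogonal to the current vessel direction."""
    # Darkest pixel = most x-ray absorption, assumed to be the vessel center.
    iy, ix = np.unravel_index(np.argmin(backprojection), backprojection.shape)
    # Map the 2D pixel back into 3D using the plane origin and basis vectors.
    return plane_origin + ix * plane_u + iy * plane_v

def step_direction(centerline_points, window=5):
    """Moving average of recent centerline points gives the tracking vector."""
    pts = np.asarray(centerline_points[-window:])
    assert len(pts) >= 2, "need at least two centerline points"
    direction = np.diff(pts, axis=0).mean(axis=0)
    return direction / np.linalg.norm(direction)

def estimate_diameter(profile, spacing):
    """Full width at half maximum of an intensity profile across the vessel."""
    profile = profile - profile.min()
    above = np.where(profile >= profile.max() / 2.0)[0]
    return (above[-1] - above[0]) * spacing

def is_branch(point, other_vessel_points, estimated_diameter):
    """Assume a branch when the current point comes closer to a previously
    traced vessel than the estimated vessel diameter."""
    dists = np.linalg.norm(np.asarray(other_vessel_points) - point, axis=1)
    return bool(np.any(dists < estimated_diameter))
```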

    Towards automated visual flexible endoscope navigation

    Background: The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. Methods: A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included. Results: Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Conclusions: Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
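    Of the two visual techniques the review identifies, lumen centralization is the simpler to illustrate: the lumen generally appears as the darkest region of the endoscopic image, so the offset between its centroid and the image center can drive the steering. The sketch below is a generic illustration under that assumption, not any specific system from the reviewed literature; the function name and the 5% dark-pixel fraction are made up for the example.

```python
import numpy as np

def lumen_steering_offset(gray_frame, dark_fraction=0.05):
    """Return (dx, dy) pointing from the image center toward the lumen."""
    # Take the darkest few percent of pixels as the lumen candidate region.
    threshold = np.percentile(gray_frame, dark_fraction * 100)
    ys, xs = np.nonzero(gray_frame <= threshold)
    if xs.size == 0:
        return 0.0, 0.0  # no clear lumen; leave steering unchanged
    h, w = gray_frame.shape
    return xs.mean() - w / 2.0, ys.mean() - h / 2.0
```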

    Algorithm for Video Summarization of Bronchoscopy Procedures

    Background: The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods: The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. The authors therefore developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. The system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results: The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions: The paper focuses on the challenge of generating summaries of bronchoscopy video recordings.
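    One of the exclusion criteria the abstract names, the detection of blurry "non-informative" frames, can be sketched with a standard sharpness measure (variance of the Laplacian). The code below illustrates only that single criterion; the published system combines several criteria (branching views, pathological lesions) whose actual classifiers are not shown here, and the threshold value and function names are assumptions for the example.

```python
import cv2

def is_informative(frame_bgr, blur_threshold=100.0):
    """Return False for frames too blurred to be of diagnostic value."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= blur_threshold

def summarize(video_path, step=5):
    """Keep the indices of every step-th informative frame as a crude abstract."""
    capture = cv2.VideoCapture(video_path)
    kept, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0 and is_informative(frame):
            kept.append(index)
        index += 1
    capture.release()
    return kept
```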

    Maximal Contrast Adaptive Region Growing for CT Airway Tree Segmentation

    In this paper we propose a fully self-assessed adaptive region growing airway segmentation algorithm. We rely on a standardized and self-assessed region-based approach to deal with varying imaging conditions. Initialization of the algorithm requires prior knowledge of the trachea location, which can be provided either by manual seeding or by automatic trachea detection in upper airway tree image slices. The optimal parameters are selected internally using a measure of the varying contrast of the growing region. Extensive validation is provided for a set of 20 chest CT scans. Our method exhibits very low leakage into the lung parenchyma, so even though the smaller airways are not obtained from the region growing, our fully automatic technique can provide robust and accurate initialization for other methods.
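    As a rough illustration of threshold-adaptive region growing from a trachea seed (not the authors' exact contrast-based self-assessment), the sketch below raises the growing threshold until the segmented volume suddenly "explodes", a common proxy for leakage into the parenchyma. The threshold range, the explosion factor, and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def grow_airway(ct_hu, seed, thresholds=range(-950, -600, 25), explosion=2.0):
    """Return the largest airway mask grown before leakage is detected.

    ct_hu : 3D array of Hounsfield units
    seed  : (z, y, x) voxel index inside the tracheal lumen
    """
    previous_mask, previous_count = None, None
    for t in thresholds:
        candidate = ct_hu < t                   # air-like voxels
        labeled, _ = label(candidate)           # 3D connected components
        seed_label = labeled[tuple(seed)]
        if seed_label == 0:                     # seed not air-like yet; relax further
            continue
        mask = labeled == seed_label            # component containing the seed
        count = int(mask.sum())
        if previous_count is not None and count > explosion * previous_count:
            return previous_mask                # sudden growth suggests leakage
        previous_mask, previous_count = mask, count
    return previous_mask
```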

    AeroPath: An airway segmentation benchmark dataset with challenging pathology

    To improve the prognosis of patients suffering from pulmonary diseases such as lung cancer, early diagnosis and treatment are crucial. The analysis of CT images is invaluable for diagnosis, whereas high-quality segmentation of the airway tree is required for intervention planning and live guidance during bronchoscopy. Recently, the Multi-domain Airway Tree Modeling (ATM'22) challenge released a large dataset, both enabling the training of deep-learning-based models and bringing a substantial improvement to the state of the art for the airway segmentation task. However, the ATM'22 dataset includes few patients with severe pathologies affecting the airway tree anatomy. In this study, we introduce a new public benchmark dataset (AeroPath), consisting of 27 CT images from patients with pathologies ranging from emphysema to large tumors, with corresponding trachea and bronchi annotations. Second, we present a multiscale fusion design for automatic airway segmentation. Models were trained on the ATM'22 dataset, tested on the AeroPath dataset, and further evaluated against competitive open-source methods. The same performance metrics as used in the ATM'22 challenge were used to benchmark the different considered approaches. Lastly, an open web application was developed to easily test the proposed model on new data. The results demonstrated that our proposed architecture predicted topologically correct segmentations for all the patients included in the AeroPath dataset. The proposed method is robust and able to handle various anomalies, down to at least the fifth airway generation. In addition, the AeroPath dataset, featuring patients with challenging pathologies, will contribute to the development of new state-of-the-art methods. The AeroPath dataset and the web application are made openly available.
    Comment: 13 pages, 5 figures, submitted to Scientific Report
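    For context on how such benchmarks are scored, the snippet below shows one commonly reported overlap metric, the Dice similarity coefficient, computed between a predicted airway mask and a reference annotation. It is a generic illustration only; the ATM'22 challenge additionally reports tree-topology metrics that are not reproduced here.

```python
import numpy as np

def dice(prediction, reference):
    """Dice similarity between two binary 3D masks."""
    prediction = np.asarray(prediction, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    intersection = np.logical_and(prediction, reference).sum()
    total = prediction.sum() + reference.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total
```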

    Optimizing parameters of an open-source airway segmentation algorithm using different CT images.

    Background: Computed tomography (CT) helps physicians locate and diagnose pathological conditions. In some conditions, an airway segmentation method which facilitates reconstruction of the airway from chest CT images can greatly help in the assessment of lung diseases. Many efforts have been made to develop airway segmentation algorithms, but methods are usually not optimized to be reliable across different CT scan parameters. Methods: In this paper, we present a simple and reliable semi-automatic algorithm which can segment tracheal and bronchial anatomy using the open-source 3D Slicer platform. The method is based on a region growing approach in which the trachea and the right and left bronchi are cropped and segmented independently using three different thresholds. The algorithm and its parameters have been optimized to be efficient across different CT scan acquisition parameters. The performance of the proposed method has been evaluated on EXACT'09 cases and local clinical cases, as well as on a breathing pig lung phantom using multiple scans and changing parameters. In particular, to investigate multiple scan parameters, reconstruction kernel, radiation dose and slice thickness have been considered. Volume, branch count, branch length and leakage presence have been evaluated. A new method for leakage evaluation has been developed, and the correlation between segmentation metrics and CT acquisition parameters has been considered. Results: All the considered cases have been segmented successfully, with good results in terms of leakage presence. Results on clinical data are comparable to other teams’ methods, as obtained by evaluation against the EXACT'09 challenge, whereas results obtained from the phantom prove the reliability of the method across multiple CT platforms and acquisition parameters. As expected, slice thickness is the parameter affecting the results the most, whereas reconstruction kernel and radiation dose do not seem to particularly affect airway segmentation. Conclusion: The system represents the first open-source airway segmentation platform. The quantitative evaluation approach presented represents the first repeatable system evaluation tool for like-for-like comparison between different airway segmentation platforms. Results suggest that the algorithm is stable across multiple CT platforms and acquisition parameters and can be considered a starting point for the development of a complete airway segmentation algorithm.
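    The evaluation described above pairs segmentation metrics with acquisition parameters. A hedged sketch of that kind of analysis is shown below: segmented volume from a binary mask and voxel spacing, and a Pearson correlation between one acquisition parameter (slice thickness) and that metric across scans. Function names and inputs are illustrative, not the paper's own code, and the leakage scoring method is not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

def airway_volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation given (z, y, x) voxel spacing in mm."""
    voxel_volume = float(np.prod(spacing_mm))
    return np.asarray(mask, dtype=bool).sum() * voxel_volume

def parameter_correlation(slice_thicknesses_mm, volumes_mm3):
    """Pearson correlation between slice thickness and segmented volume."""
    r, p_value = pearsonr(slice_thicknesses_mm, volumes_mm3)
    return r, p_value
```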

    The paranasal sinuses: three-dimensional reconstruction, photo-realistic imaging, and virtual endoscopy

    Background: The purpose of the study was to create computer-aided design models of the paranasal sinuses (frontal, maxillary, and sphenoid) and to perform virtual endoscopy (VE) on them using the virtual reality modelling language technique. Materials and methods: The Visible Human dataset was used as the input imaging data. The Surfdriver software package was applied to these images to reconstruct the paranasal sinuses as three-dimensional (3D) computer-aided design models. These models were post-processed in Cinema 4D to perform photo-realistic imaging and VE of the paranasal sinuses. Results: The volumes of the maxillary sinuses were 24747.89 mm³ on the right and 29008.78 mm³ on the left. As for the sphenoidal sinuses, an enormous variation was seen between the right and left cavities: they measured 1995.90 mm³ on the right and 10228.93 mm³ on the left, while the frontal sinuses measured 20805.67 mm³ on the right and 18048.85 mm³ on the left. The largest sinus by volume was the left maxillary sinus; the largest by surface area was the right frontal sinus, calculated as 6002.73 mm². Our methodological outcomes showed that the Surfdriver and Cinema 4D pair could be reliably used for 3D reconstruction, photo-realistic imaging, and the creation of 3D virtual environments from serial sections of anatomical structures. Conclusions: This technique allows students, researchers, and surgeons to perform noninvasive visualisation, simulation, and precise quantitative measurements of internal structures of the body. It was developed as a complementary tool for endoscopic surgery and could be especially preferable for patients who cannot tolerate flexible or rigid endoscopy.
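    The volume and surface-area figures quoted above can, in principle, be reproduced from any binary segmentation of a sinus. The sketch below is a generic illustration using voxel counting for volume and a marching-cubes mesh for surface area; it is not the Surfdriver/Cinema 4D pipeline used in the study, and the function name and inputs are assumptions.

```python
import numpy as np
from skimage import measure

def sinus_measurements(mask, spacing_mm):
    """Return (volume in mm^3, surface area in mm^2) for a binary sinus mask."""
    volume = np.asarray(mask, dtype=bool).sum() * float(np.prod(spacing_mm))
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=tuple(spacing_mm))
    area = measure.mesh_surface_area(verts, faces)
    return volume, area
```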