
    Image-Based Bronchial Anatomy Codification for Biopsy Guiding in Video Bronchoscopy

    Bronchoscopy examinations allow biopsy of pulmonary nodules with minimal risk for the patient. Even for experienced bronchoscopists, it is difficult to guide the bronchoscope to the most distal lesions and obtain an accurate diagnosis. This paper presents an image-based codification of the bronchial anatomy for bronchoscopy biopsy guiding. The 3D anatomy of each patient is codified as a binary tree with nodes representing bronchial levels and edges labeled by their position in images projecting the 3D anatomy from a set of branching points. The paths from the root to the leaves provide a codification of navigation routes with spatially consistent labels that match the anatomy observed in video bronchoscopy explorations. We evaluate our labeling approach as a guiding system in terms of the number of bronchial levels correctly codified and the number of label-based instructions correctly supplied, using generalized mixed models and computer-generated data. Results obtained for three independent observers prove the consistency and reproducibility of our guiding system. We trust that our codification based on the viewer's projection might be used as a foundation for the navigation process in Virtual Bronchoscopy systems.
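    The binary-tree codification described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the node names are invented, and the plain "L"/"R" edge labels stand in for the paper's image-projection-based labels. Each root-to-leaf path then yields a route code.

    ```python
    # Minimal sketch of route codification over a binary airway tree.
    # Node names and the L/R edge labels are illustrative assumptions.

    class BronchialNode:
        def __init__(self, name, left=None, right=None):
            self.name = name
            self.left = left    # child reached via the edge labeled "L"
            self.right = right  # child reached via the edge labeled "R"

    def route_codes(node, prefix=""):
        """Yield (leaf_name, label_string) for every root-to-leaf path."""
        if node.left is None and node.right is None:
            yield node.name, prefix
            return
        if node.left is not None:
            yield from route_codes(node.left, prefix + "L")
        if node.right is not None:
            yield from route_codes(node.right, prefix + "R")

    # Three bronchial levels: trachea -> main bronchi -> next branching.
    tree = BronchialNode("trachea",
            left=BronchialNode("left main",
                  left=BronchialNode("upper lobe"),
                  right=BronchialNode("lower lobe")),
            right=BronchialNode("right main",
                  left=BronchialNode("upper lobe"),
                  right=BronchialNode("intermedius")))

    for leaf, code in route_codes(tree):
        print(code, "->", leaf)
    ```

    A route code such as "LR" reads as a sequence of instructions consistent across levels: take the left branch, then the right branch.
    
    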

    Generative localisation with uncertainty estimation through video-CT data for bronchoscopic biopsy

    Robot-assisted endobronchial intervention requires accurate localisation based on both intra- and pre-operative data. Most existing methods achieve this by registering 2D videos with 3D CT models according to a defined similarity metric with local features. Instead, we formulate bronchoscopic localisation as learning-based global localisation using deep neural networks. The proposed network consists of two generative architectures and one auxiliary learning component. The cycle generative architecture bridges the domain variance between real bronchoscopic videos and virtual views derived from pre-operative CT data, so that the proposed approach can be trained on a large number of generated virtual images but deployed on real images. The auxiliary learning architecture leverages complementary relative pose regression to constrain the search space, ensuring consistent global pose predictions. Most importantly, the uncertainty of each global pose is obtained through variational inference by sampling within the learned underlying probability distribution. Detailed validation results demonstrate that the method achieves accurate localisation with reasonable uncertainty, and indicate its potential clinical value.
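    The uncertainty-by-sampling idea in this abstract can be illustrated with a toy sketch: draw repeated samples of a pose coordinate from a learned Gaussian (parameterized by a mean and log-variance, as is common in variational inference) and summarize their spread as the uncertainty. All numbers here are invented; the real method samples full 6-DoF poses from a learned distribution inside a deep network.

    ```python
    # Hedged sketch: Monte Carlo summary of one pose coordinate drawn
    # from a learned Gaussian. mean/log_var values are made up.

    import math
    import random
    import statistics

    def sample_pose(mean, log_var, n=1000, seed=0):
        """Sample n estimates of a pose coordinate; return (estimate, uncertainty)."""
        rng = random.Random(seed)
        sigma = math.exp(0.5 * log_var)  # std dev from log-variance
        samples = [rng.gauss(mean, sigma) for _ in range(n)]
        return statistics.mean(samples), statistics.stdev(samples)

    # e.g. depth along the airway in millimetres (illustrative values)
    est, unc = sample_pose(mean=12.5, log_var=-2.0)
    print(f"pose estimate ~ {est:.2f} mm, uncertainty ~ {unc:.2f} mm")
    ```

    A wide spread (large `unc`) flags a pose prediction the downstream system should treat with caution, which is the clinical value of reporting uncertainty alongside the estimate.
    
    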

    BronchoX: bronchoscopy exploration software for biopsy intervention planning

    Other: ACCIO Tecniospring TECSPR17-1-0045

    Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). A VB software involves locating a PL and calculating a route, starting from the trachea, to reach it. Selecting a VB software can be a complex process, and there is no consensus in the medical software development community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software to plan biopsy interventions that generates physician-readable instructions to reach PLs. The authors' solution is open source, multiplatform, and extensible for future functionalities, designed by their multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported results focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guiding system to reach PLs. To this end, 40 different virtual planning paths were created to guide physicians to the distal bronchioles. The results show that BronchoX is functional software and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea.
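    The route-calculation step described above (from the trachea to a PL, rendered as readable instructions) can be sketched as a graph search over a segmented airway tree. This is an illustrative stand-in, not BronchoX's actual algorithm; the airway graph and segment names are invented.

    ```python
    # Illustrative route computation: breadth-first search from the
    # trachea through a toy airway adjacency graph, then turn the path
    # into human-readable instructions. Not BronchoX's real algorithm.

    from collections import deque

    airways = {  # parent segment -> child segments (invented example)
        "trachea": ["right main bronchus", "left main bronchus"],
        "right main bronchus": ["RUL", "bronchus intermedius"],
        "bronchus intermedius": ["RML", "RLL"],
        "left main bronchus": ["LUL", "LLL"],
    }

    def route_to(target, root="trachea"):
        """Return the branch sequence from root to target, or None."""
        queue = deque([[root]])
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for child in airways.get(path[-1], []):
                queue.append(path + [child])
        return None

    path = route_to("RML")
    for here, nxt in zip(path, path[1:]):
        print(f"At {here}, advance into {nxt}")
    ```

    In a real system the target would be the airway segment nearest the lesion, selected from the CT segmentation rather than named by hand.
    
    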

    Towards Robot Autonomy in Medical Procedures Via Visual Localization and Motion Planning

    Robots performing medical procedures with autonomous capabilities have the potential to positively affect patient care and healthcare system efficiency. These benefits can be realized by autonomous robots facilitating novel procedures, increasing operative efficiency, standardizing intra- and inter-physician performance, democratizing specialized care, and focusing the physician's time on subtasks that best leverage their expertise. However, enabling medical robots to act autonomously in a procedural environment is extremely challenging. The deforming and unstructured nature of the environment, the lack of features in the anatomy, and sensor size constraints, coupled with the millimeter-level accuracy required for safe medical procedures, introduce a host of challenges not faced by robots operating in structured environments such as factories or warehouses. Robot motion planning and localization are two fundamental abilities for enabling robot autonomy. Motion planning methods compute a sequence of safe and feasible motions for a robot to accomplish a specified task, where safe and feasible are defined by constraints with respect to the robot and its environment. Localization methods estimate the position and orientation of a robot in its environment. Developing such methods for medical robots that overcome the unique challenges in procedural environments is critical for enabling medical robot autonomy. In this dissertation, I developed and evaluated motion planning and localization algorithms towards robot autonomy in medical procedures. A majority of my work was done in the context of an autonomous medical robot built for enhanced lung nodule biopsy. First, I developed a dataset of medical environments spanning various organs and procedures to foster future research into medical robots and automation. I used this data in my own work described throughout this dissertation.
Next, I used motion planning to characterize the capabilities of the lung nodule biopsy robot compared to existing clinical tools, and I highlighted trade-offs in robot design considerations. Then, I conducted a study to experimentally demonstrate the benefits of the autonomous lung robot in accessing otherwise hard-to-reach lung nodules. I showed that the robot enables access to lung regions beyond the reach of existing clinical tools with millimeter-level accuracy sufficient for accessing the smallest clinically operable nodules. Next, I developed a localization method to estimate the bronchoscope's position and orientation in the airways with respect to a preoperatively planned needle insertion pose. The method can be used by robotic bronchoscopy systems and by traditional manually navigated bronchoscopes. The method is designed to overcome challenges with tissue motion and visual homogeneity in the airways. I demonstrated the success of this method in simulated lungs undergoing respiratory motion and showed the method's ability to generalize across patients.
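    The notion of motion planning invoked above (computing a sequence of safe, feasible motions under environment constraints) can be illustrated with a deliberately simple stand-in: A* search on a 2D occupancy grid, where obstacle cells model unsafe regions. Real medical planners operate in continuous, deformable 3D anatomy with kinematic constraints; this sketch only shows the core "safe sequence of motions" idea.

    ```python
    # Toy stand-in for motion planning: A* over a 2D occupancy grid.
    # 1 marks an unsafe cell; the path must stay on 0 cells.

    import heapq

    def astar(grid, start, goal):
        """Return a shortest collision-free path of grid cells, or None."""
        rows, cols = len(grid), len(grid[0])
        frontier = [(0, start, [start])]  # (priority, cell, path so far)
        seen = set()
        while frontier:
            _, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(frontier, (len(path) + h, (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],   # a wall of unsafe cells forces a detour
            [0, 0, 0]]
    path = astar(grid, (0, 0), (2, 0))
    print(path)
    ```

    The planner returns the detour around the unsafe wall; swapping in anatomy-derived constraints and robot kinematics is what separates this toy from the dissertation's planners.
    
    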