
    Open-source virtual bronchoscopy for image guided navigation

    This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early-stage cancer nodules. The open-source platform 3D Slicer was used to create freely available algorithms for virtual bronchoscopy. Firstly, the development of an open-source semi-automatic algorithm for predicting the malignancy of solitary pulmonary nodules is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness), which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared with independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm, which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images, is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate the performance of the algorithm. Different CT scan parameters are investigated, and parameters for successful airway segmentation are optimized. Slice thickness is the parameter with the greatest effect, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope to develop a GPS-like system for navigating within the lungs.
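
The malignancy predictor described above combines segmentation-derived nodule features with patient information into a single likelihood. One common way to combine such features is a logistic model; the sketch below uses made-up feature names and weights purely for illustration, not the model or coefficients from the thesis.

```python
import math

def malignancy_likelihood(features, weights, bias):
    """Combine nodule features into a probability via a logistic model.

    `features` and `weights` are dicts keyed by feature name. The
    weights and bias used below are illustrative placeholders, not
    coefficients from the thesis.
    """
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical example: a larger, spiculated, upper-lobe nodule in a smoker.
features = {
    "diameter_mm": 18.0,      # nodule size from segmentation
    "spiculation": 1.0,       # 1 if the edge is spiculated, else 0
    "upper_lobe": 1.0,        # 1 if located in an upper lobe
    "age_years": 65.0,        # patient information
    "smoker": 1.0,
}
weights = {"diameter_mm": 0.1, "spiculation": 0.8,
           "upper_lobe": 0.6, "age_years": 0.03, "smoker": 0.7}
p = malignancy_likelihood(features, weights, bias=-6.0)
print(f"estimated likelihood of malignancy: {p:.2f}")
```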
Tools for pre-procedural planning and for assisting navigation are provided. Registration between the patient's lungs and the virtually reconstructed airway tree is achieved using a landmark-based approach. To reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a biopsy target on the patient CT image, and pathways from the trachea towards the selected targets are created automatically. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5-DOF sensing that does not provide roll-rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in the pre-clinical environment of a breathing lung model. Animal studies were performed to evaluate the overall system performance.
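
The real-time distance-to-target update described above reduces, in essence, to measuring the remaining length of the planned centreline pathway from the tracked tip position. A minimal sketch, with made-up coordinates and a nearest-point simplification rather than the thesis's implementation:

```python
import math

def path_length(points):
    """Total length of a polyline given as a list of (x, y, z) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def distance_to_target(pathway, tip_position):
    """Remaining distance to the target along a centreline pathway.

    The tip is snapped to the nearest pathway point (a simplification;
    projecting onto segments would be more precise), and the remaining
    polyline length is returned. `pathway` runs from trachea to target.
    """
    nearest = min(range(len(pathway)),
                  key=lambda i: math.dist(pathway[i], tip_position))
    return path_length(pathway[nearest:])

# Illustrative pathway in millimetres (not real CT coordinates).
pathway = [(0, 0, 0), (0, 0, 10), (2, 0, 20), (5, 1, 28)]
print(f"{distance_to_target(pathway, (0.5, 0, 11)):.1f} mm to target")
```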

    Towards Robot Autonomy in Medical Procedures Via Visual Localization and Motion Planning

    Robots performing medical procedures with autonomous capabilities have the potential to positively affect patient care and healthcare system efficiency. These benefits can be realized by autonomous robots facilitating novel procedures, increasing operative efficiency, standardizing intra- and inter-physician performance, democratizing specialized care, and focusing the physician’s time on subtasks that best leverage their expertise. However, enabling medical robots to act autonomously in a procedural environment is extremely challenging. The deforming and unstructured nature of the environment, the lack of features in the anatomy, and sensor size constraints coupled with the millimeter-level accuracy required for safe medical procedures introduce a host of challenges not faced by robots operating in structured environments such as factories or warehouses. Robot motion planning and localization are two fundamental abilities for enabling robot autonomy. Motion planning methods compute a sequence of safe and feasible motions for a robot to accomplish a specified task, where safe and feasible are defined by constraints with respect to the robot and its environment. Localization methods estimate the position and orientation of a robot in its environment. Developing such methods for medical robots that overcome the unique challenges in procedural environments is critical for enabling medical robot autonomy. In this dissertation, I developed and evaluated motion planning and localization algorithms towards robot autonomy in medical procedures. A majority of my work was done in the context of an autonomous medical robot built for enhanced lung nodule biopsy. First, I developed a dataset of medical environments spanning various organs and procedures to foster future research into medical robots and automation. I used this data in my own work described throughout this dissertation.
Next, I used motion planning to characterize the capabilities of the lung nodule biopsy robot compared to existing clinical tools, and I highlighted trade-offs in robot design considerations. Then, I conducted a study to experimentally demonstrate the benefits of the autonomous lung robot in accessing otherwise hard-to-reach lung nodules. I showed that the robot enables access to lung regions beyond the reach of existing clinical tools with millimeter-level accuracy sufficient for accessing the smallest clinically operable nodules. Next, I developed a localization method to estimate the bronchoscope’s position and orientation in the airways with respect to a preoperatively planned needle insertion pose. The method can be used by robotic bronchoscopy systems and by traditional manually navigated bronchoscopes. The method is designed to overcome challenges with tissue motion and visual homogeneity in the airways. I demonstrated the success of this method in simulated lungs undergoing respiratory motion and showed the method’s ability to generalize across patients.
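
Evaluating a localization method against a planned needle-insertion pose comes down to measuring how far the estimated pose is from the planned one, in both position and orientation. A minimal sketch, with poses simplified to a 3-D position plus a unit heading vector and made-up values, not the dissertation's evaluation code:

```python
import math

def pose_error(est_pos, est_dir, plan_pos, plan_dir):
    """Position (mm) and orientation (degrees) error between an
    estimated bronchoscope pose and a planned needle-insertion pose.

    Poses are simplified to a 3-D position plus a unit heading vector;
    a full treatment would compare 6-DOF transforms.
    """
    trans = math.dist(est_pos, plan_pos)
    cosang = sum(a * b for a, b in zip(est_dir, plan_dir))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return trans, ang

# Hypothetical poses: 1 mm offset, heading tilted by 0.1 rad (~5.7 deg).
trans_mm, ang_deg = pose_error(
    (10.0, 4.0, 30.0), (0.0, 0.0, 1.0),
    (10.0, 5.0, 30.0), (0.0, math.sin(0.1), math.cos(0.1)))
print(f"{trans_mm:.1f} mm, {ang_deg:.1f} deg")
```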

    Advances in real-time thoracic guidance systems

    Substantial tissue motion (>1 cm) arises in the thoracic/abdominal cavity due to respiration. There are many clinical applications in which localizing tissue with high accuracy (<1 mm) is important. Potential applications include radiation therapy, radiofrequency ablation, lung/liver biopsies, and brachytherapy seed placement. Recent efforts have made highly accurate sub-mm 3D localization of discrete points available via electromagnetic (EM) position monitoring. Technology from Calypso Medical allows for simultaneous tracking of up to three implanted wireless transponders. Additionally, Medtronic Navigation uses wired electromagnetic tracking to guide surgical tools for image-guided surgery (IGS). Utilizing real-time EM position monitoring, a prototype system was developed to guide a therapeutic linear accelerator to follow a moving target (tumor) within the lung/abdomen. In a clinical setting, electromagnetic transponders would be bronchoscopically implanted into the lung of the patient in or near the tumor. These transponders would affix to the lung tissue in a stable manner and allow real-time position knowledge throughout a course of radiation therapy. During each dose of radiation, the beam is either halted when the target is outside of a given threshold or, in a later study, the beam follows the target in real time based on the EM position monitoring. We present quantitative analysis of the accuracy and efficiency of the radiation therapy tumor tracking system. EM tracking shows promise for IGS applications. Tracking the position of the instrument tip allows for minimally invasive intervention and alleviates the trauma associated with conventional surgery. Current clinical IGS implementations are limited to static targets (e.g., craniospinal, neurological, and orthopedic intervention). We present work on the development of a respiratory-correlated image-guided surgery (RCIGS) system.
In the RCIGS system, target positions are modeled via respiratory-correlated imaging (4DCT) coupled with a breathing surrogate representative of the patient's respiratory phase/amplitude. Once the target position is known with respect to the surrogate, intervention can be performed when the target is in the correct location. The RCIGS system consists of imaging techniques and custom-developed software that give visual and auditory feedback to the surgeon, indicating both the proper location and time for intervention. Presented here are the details of the IGS lung system along with quantitative results of the system accuracy in motion phantom, ex-vivo porcine lung, and human cadaver environments.
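
The gating logic described above can be sketched very simply: a 4DCT-derived model maps the surrogate's respiratory phase to a target position, and intervention (or beam-on) is allowed only when that position is within a tolerance of the planned location. The phase-to-position model and thresholds below are toy values, not measured data:

```python
def target_position(phase, model):
    """Look up the modelled target position for a respiratory phase.

    `model` maps phase bins (0-9, as in a 10-phase 4DCT) to (x, y, z)
    positions in mm; values used here are illustrative only.
    """
    return model[phase]

def gate_open(phase, model, planned, threshold_mm):
    """True when the modelled target is within `threshold_mm` of the
    planned position, i.e. when intervention is allowed."""
    x, y, z = target_position(phase, model)
    px, py, pz = planned
    return ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5 <= threshold_mm

# Toy 1-D cranio-caudal motion over a 10-phase breathing cycle (mm),
# with the target planned at the phase-5 position of the cycle.
model = {p: (0.0, 0.0, 6.0 * abs(p - 5) / 5.0) for p in range(10)}
planned = (0.0, 0.0, 0.0)
open_phases = [p for p in range(10) if gate_open(p, model, planned, 2.0)]
print(open_phases)
```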

    Tracking and Mapping in Medical Computer Vision: A Review

    As computer vision algorithms become more capable, their applications in clinical systems will become more pervasive. These applications include diagnostics such as colonoscopy and bronchoscopy, guiding biopsies, minimally invasive interventions and surgery, automating instrument motion, and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing and applying algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. We then review datasets provided in the field and the clinical needs therein. Next, we delve into the algorithmic side and summarize recent developments, which should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is a large amount of crossover in methods. Finally, we discuss the current state of tracking and mapping methods along with the needs for future algorithms, the needs for quantification, and the viability of clinical applications in the field. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
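
The DRL methods referenced above learn control policies from reward signals rather than hand-coded rules. As a minimal stand-in for that idea, the toy below runs tabular Q-learning (not deep RL, and not the thesis's algorithms) on a one-dimensional "corridor" navigation subtask, where the agent learns to move toward a goal cell:

```python
import random

# Tabular Q-learning on a 1-D corridor: the agent must reach the
# rightmost cell. Real intraluminal systems use deep networks and rich
# state; this only illustrates the reward-driven update rule that
# reinforcement-learning methods share.
N = 6                      # corridor cells 0..5, goal at cell 5
ACTIONS = (-1, +1)         # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N - 1)
    reward = 1.0 if nxt == N - 1 else -0.01   # small cost per move
    return nxt, reward, nxt == N - 1

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                          # training episodes
    s = 0
    done = False
    while not done:
        if random.random() < eps:             # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)   # greedy action per non-goal cell
```

After training, the greedy policy moves right from every cell, since only the goal cell yields positive reward.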

    Integrating Optimization and Sampling for Robot Motion Planning with Applications in Healthcare

    Robots deployed in human-centric environments, such as a person's home in a home-assistance setting or inside a person's body in a surgical setting, have the potential to have a large, positive impact on human quality of life. However, for robots to operate in such environments they must be able to move efficiently while avoiding collisions with obstacles such as objects in the person's home or sensitive anatomical structures in the person's body. Robot motion planning aims to compute safe and efficient motions for robots that avoid obstacles, but home-assistance and surgical robots come with unique challenges that can make this difficult. For instance, many state-of-the-art surgical robots have computationally expensive kinematic models, i.e., it can be computationally expensive to predict their shape as they move. Some of these robots have hybrid dynamics, i.e., they consist of multiple stages that behave differently. Additionally, it can be difficult to plan motions for robots while leveraging real-world sensor data, such as point clouds. In this dissertation, we demonstrate and empirically evaluate methods for overcoming these challenges to compute high-quality and safe motions for robots in home-assistance and surgical settings. First, we present a motion planning method for a continuum, parallel surgical manipulator that accounts for its computationally expensive kinematics. We then leverage this motion planner to optimize its kinematic design, which is chosen prior to a surgical procedure. Next, we present a motion planning method for a 3-stage lung tumor biopsy robot that accounts for its hybrid dynamics, and we evaluate the robot and planner in simulation and in inflated porcine lung tissue. Next, we present a motion planning method for a home-assistance robot that leverages real-world, point-cloud obstacle representations.
We then expand this method to work with a type of continuum surgical manipulator, a concentric tube robot, with point-cloud anatomical representations. Finally, we present a data-driven machine learning method for more accurately estimating the shape of concentric tube robots. By effectively addressing challenges associated with home-assistance and surgical robots operating in human-centric environments, we take steps toward enabling robots to have a positive impact on human quality of life.
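
Sampling-based motion planning against point-cloud obstacles, as discussed above, can be illustrated with a textbook rapidly-exploring random tree (RRT) in 2-D. The sketch below is a generic RRT with a crude point-to-point collision model and a made-up point-cloud "wall", not any of the dissertation's planners:

```python
import math, random

def collides(p, cloud, clearance):
    """A configuration collides if it is within `clearance` of any
    obstacle point (a crude point-cloud collision model)."""
    return any(math.dist(p, q) < clearance for q in cloud)

def rrt(start, goal, cloud, clearance=0.5, step=0.5, iters=4000):
    """Minimal 2-D RRT: grow a tree from `start` by steering toward
    uniform random samples, rejecting nodes near the point cloud.
    Returns the path once a node lands within `step` of `goal`,
    else None."""
    random.seed(1)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new, cloud, clearance):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:        # goal reached: backtrack
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Hypothetical point-cloud "wall" at x = 5 with a gap around y = 5.
cloud = [(5.0, y * 0.5) for y in range(21) if not 8 <= y <= 12]
path = rrt((1.0, 3.0), (9.0, 3.0), cloud)
print("path found:", path is not None)
```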

    Re-localisation of microscopic lesions in their macroscopic context for surgical instrument guidance

    Optical biopsies interrogate microscopic structure in vivo with a 2 mm diameter miniprobe placed in contact with the tissue for detection of lesions and assessment of disease progression. After detection, instruments are guided to the lesion location for a new optical interrogation, for treatment, or for tissue excision during the same or a future examination. As the optical measurement can be considered a point source of information at the surface of the tissue of interest, accurate guidance can be difficult. A method for re-localisation of the sampling point is, therefore, needed. The method presented in this thesis has been developed for biopsy site re-localisation during a surveillance examination of Barrett’s Oesophagus. The biopsy site, invisible macroscopically during conventional endoscopy, is re-localised in the target endoscopic image using epipolar lines derived from its locations given by the tip of the miniprobe visible in a series of reference endoscopic images. A confidence region can be drawn around the re-localised biopsy site from its uncertainty, which is derived analytically. This thesis also presents a method to improve the accuracy of the epipolar lines derived for the biopsy site re-localisation using an electromagnetic tracking system. Simulations and tests on patient data identified the cases in which the analytical uncertainty is a good approximation of the confidence region, and showed that biopsy sites can be re-localised with accuracies better than 1 mm. Studies on a phantom and on excised porcine tissue demonstrated that an electromagnetic tracking system contributes to more accurate epipolar lines and re-localised biopsy sites for an endoscope displacement greater than 5 mm. The re-localisation method can be applied to images acquired during different endoscopic examinations. It may also be useful for pulmonary applications.
Finally, it can be combined with a Magnetic Resonance scanner which can steer cells to the biopsy site for tissue treatment.
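
The epipolar-line construction at the heart of the re-localisation method maps a point in a reference image to a line in the target image via the fundamental matrix. A minimal sketch with a made-up fundamental matrix for a rectified (purely horizontally translated) camera pair, not one estimated from endoscope data:

```python
def epipolar_line(F, point):
    """Epipolar line l' = F x in the target image for a pixel `point`
    in a reference image.

    F is a 3x3 fundamental matrix (nested lists); `point` is (u, v) in
    pixels; returns line coefficients (a, b, c) with a*u' + b*v' + c = 0.
    """
    x = (point[0], point[1], 1.0)
    return tuple(sum(F[i][j] * x[j] for j in range(3)) for i in range(3))

def point_line_distance(line, point):
    """Perpendicular pixel distance from `point` to the epipolar line."""
    a, b, c = line
    return abs(a * point[0] + b * point[1] + c) / (a * a + b * b) ** 0.5

# Toy fundamental matrix for a pure horizontal translation (rectified
# stereo): epipolar lines are then horizontal, v' = v.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
line = epipolar_line(F, (120.0, 80.0))
print(line)                      # (0.0, -1.0, 80.0): the line v' = 80
```

The point-to-line distance is what turns the epipolar constraint into a re-localisation error measure: a candidate biopsy site in the target image should lie close to the lines derived from all reference images.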

    Endoscopic, Anatomic OCT for Imaging and Compliance Measurement of Upper and Central Airways

    Both acute airway injuries such as inhalation injury and prevalent but underdiagnosed diseases such as obstructive sleep apnea (OSA) lead not only to impaired quality of life but also to disability or even death. However, current techniques such as bronchoscopy, computed tomography, and magnetic resonance imaging all have limitations for airway imaging, such as being only semi-quantitative, exposing the patient to ionizing radiation, or requiring long scan times. A modality that provides high-resolution, real-time, safe, and minimally invasive imaging of the airways would be very beneficial in the diagnosis and treatment of airway diseases. Additionally, changes in the biomechanical properties of airway tissues associated with the underlying pathophysiologic status of tissues have not been much explored. Thus, an imaging modality that also has the ability to perform elastography could be valuable in the diagnosis and treatment of inhalation injuries. Optical coherence tomography (OCT) is a rapidly developing imaging modality providing high-resolution and non-invasive imaging of tissue microstructure. To image the upper and central airways of pediatric patients, a specific type of OCT, swept-source anatomic optical coherence tomography (SSaOCT), which has micron-level resolution and an imaging range over 10 mm, is utilized. It allows direct visualization of features on airway walls as well as sub-surface structures such as cartilage and trachealis muscle. Moreover, aOCT together with a pressure catheter can be used to perform anatomic optical coherence elastography (aOCE) and measure airway compliance to predict the regions of the airway wall that are vulnerable to collapse. This provides additional diagnostic information about airways that is not easily achievable with other imaging modalities.
In this dissertation, the design and performance of two custom-built aOCT systems are described, and their ability to accurately measure airway geometry and compliance is investigated. Imaging of phantoms and animal specimens is performed, aOCE-derived compliance is calculated, and the relationship between the compliance measurements and the severity of steam injury is evaluated. Results indicate that aOCT can perform accurate airway imaging as well as assess the compliance of airway tissues. The measured compliance of the airway could potentially be used as an index for grading and assessing the severity of injuries and thus aid in the diagnosis and treatment of airway inhalation injury.
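
A pressure catheter plus aOCT yields lumen cross-sectional area at a series of pressures, and compliance can then be estimated as the slope of area versus pressure. The sketch below uses a least-squares line fit on made-up values; the units and the dissertation's exact compliance definition may differ:

```python
def compliance(areas_mm2, pressures_cmH2O):
    """Estimate airway compliance as the slope of lumen cross-sectional
    area versus intraluminal pressure, via a least-squares line fit.

    Returned units here are mm^2 per cmH2O; both the units and the
    sample values below are illustrative.
    """
    n = len(areas_mm2)
    mp = sum(pressures_cmH2O) / n
    ma = sum(areas_mm2) / n
    num = sum((p - mp) * (a - ma) for p, a in zip(pressures_cmH2O, areas_mm2))
    den = sum((p - mp) ** 2 for p in pressures_cmH2O)
    return num / den

# Hypothetical aOCT area measurements at increasing catheter pressures.
pressures = [0.0, 5.0, 10.0, 15.0, 20.0]          # cmH2O
areas = [42.0, 44.1, 46.0, 47.9, 50.0]            # mm^2 from aOCT rings
print(f"compliance ~ {compliance(areas, pressures):.2f} mm^2/cmH2O")
```

A stiffer (e.g., injured) airway segment would show a smaller slope, which is what makes compliance a candidate index for injury severity.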

    Scene Reconstruction Beyond Structure-from-Motion and Multi-View Stereo

    Image-based 3D reconstruction has become a robust technology for recovering accurate and realistic models of real-world objects and scenes. A common pipeline for 3D reconstruction is to first apply Structure-from-Motion (SfM), which recovers relative poses for the input images and sparse geometry for the scene, and then apply Multi-view Stereo (MVS), which estimates a dense depthmap for each image. While this two-stage process is quite effective in many 3D modeling scenarios, there are limits to what can be reconstructed. This dissertation focuses on three particular scenarios where the SfM+MVS pipeline fails and introduces new approaches to accomplish each reconstruction task. First, I introduce a novel method to recover dense surface reconstructions from endoscopic video. In this setting, SfM can generally provide sparse surface structure, but the lack of surface texture as well as complex, changing illumination often causes MVS to fail. To overcome these difficulties, I introduce a method that utilizes SfM both to guide surface reflectance estimation and to regularize shading-based depth reconstruction. I also introduce models of reflectance and illumination that improve the final result. Second, I introduce an approach for augmenting 3D reconstructions from large-scale Internet photo-collections by recovering the 3D position of transient objects --- specifically, people --- in the input imagery. Since no two images can be assumed to capture the same person in the same location, the typical triangulation constraints enjoyed by SfM and MVS cannot be directly applied. I introduce an alternative method to approximately triangulate people who stood in similar locations, aided by a height distribution prior and visibility constraints provided by SfM. The scale of the scene, gravity direction, and per-person ground-surface normals are also recovered.
Finally, I introduce the concept of using crowd-sourced imagery to create living 3D reconstructions --- visualizations of real places that include dynamic representations of transient objects. A key difficulty here is that SfM+MVS pipelines often poorly reconstruct ground surfaces given Internet images. To address this, I introduce a volumetric reconstruction approach that leverages scene scale and person placements. Crowd simulation is then employed to add virtual pedestrians to the space and bring the reconstruction "to life."
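
The triangulation constraint that transient objects violate can be illustrated with the classic two-ray closest-point construction: with a static point seen from two cameras, the two viewing rays nearly intersect, whereas a person seen in only one image contributes a single ray and no unique depth. A sketch with made-up camera geometry, not code from the dissertation:

```python
def triangulate_midpoint(p1, d1, p2, d2):
    """Approximate triangulation of two viewing rays: return the
    midpoint of the shortest segment between ray p1 + s*d1 and
    ray p2 + t*d2 (the standard closest-point construction)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel rays: no unique answer
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * u for p, u in zip(p1, d1)]    # closest point on ray 1
    q2 = [p + t * v for p, v in zip(p2, d2)]    # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two cameras at made-up positions observing the same scene point.
point = triangulate_midpoint((0, 0, 0), (1, 0, 0), (0, 1, 1), (0, 0, 1))
print(point)                        # [0.0, 0.5, 0.0]
```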