311 research outputs found

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have increasingly become among the most significant challenges facing humanity. At the interdisciplinary frontier of information science, computer technology, electronic engineering, and biomedical engineering, modern engineering methods are being applied to the early diagnosis, treatment, and rehabilitation of diseases such as cancer. This thesis reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging for medical big data; describes the core workflow of advanced surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualization, and interactive software techniques; and summarizes the clinical applications of the various minimally invasive procedures. It also discusses the strengths and weaknesses of surgical navigation technologies in clinical use worldwide and analyzes the latest methods in the field. On this basis, it identifies a trend in minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and high levels of intelligence.
    [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
    X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
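    A core building block of the navigation workflow summarised above is bringing preoperative images and intraoperative tracker data into a common coordinate frame. The sketch below is a minimal, hedged illustration of that step only, not the authors' implementation: it solves least-squares rigid landmark registration between CT-space and tracker-space fiducials using the standard SVD-based (Arun/Kabsch) solution, with invented point values.

```python
import numpy as np

def rigid_register(ct_points, tracker_points):
    """Least-squares rigid alignment (Arun/Kabsch) of paired fiducials.

    ct_points, tracker_points: (N, 3) arrays of corresponding landmark
    positions. Returns rotation R and translation t such that
    tracker ~= R @ ct + t.
    """
    ct_c = ct_points - ct_points.mean(axis=0)
    tr_c = tracker_points - tracker_points.mean(axis=0)
    U, _, Vt = np.linalg.svd(ct_c.T @ tr_c)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tracker_points.mean(axis=0) - R @ ct_points.mean(axis=0)
    return R, t

# Toy check: recover a known pose from noiseless, made-up fiducials (mm).
rng = np.random.default_rng(0)
ct_fiducials = rng.uniform(-50, 50, size=(6, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
tracker_fiducials = ct_fiducials @ R_true.T + t_true
R_est, t_est = rigid_register(ct_fiducials, tracker_fiducials)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

    In a real system the residual fiducial registration error would be checked against a clinical tolerance before the alignment is trusted for guidance.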

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
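    As a toy illustration of DRL-based subtask automation (not the thesis' system, and with an invented one-dimensional environment, features and hyperparameters), the sketch below trains a REINFORCE policy that keeps an instrument tip close to a lumen centreline under random disturbances.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([-1.0, 0.0, 1.0])        # steer left / hold / steer right

def features(offset):
    # Hand-crafted state features for the toy lumen-centring task.
    return np.array([offset, abs(offset), 1.0])

def policy_probs(theta, offset):
    logits = theta @ features(offset)        # theta has shape (3 actions, 3 features)
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def run_episode(theta, steps=30):
    offset, traj = rng.uniform(-2, 2), []
    for _ in range(steps):
        p = policy_probs(theta, offset)
        a = rng.choice(3, p=p)
        new_offset = offset + 0.5 * ACTIONS[a] + rng.normal(0, 0.1)  # unmodelled motion
        traj.append((offset, a, -abs(new_offset)))   # reward: stay near the centreline
        offset = new_offset
    return traj

theta = np.zeros((3, 3))
for _ in range(2000):                        # REINFORCE with return-to-go
    G = 0.0
    for offset, a, r in reversed(run_episode(theta)):
        G = r + 0.99 * G
        p = policy_probs(theta, offset)
        grad = -np.outer(p, features(offset))
        grad[a] += features(offset)          # d log pi(a|s) / d theta
        theta += 0.01 * G * grad

print("mean |offset| after training:",
      np.mean([abs(s) for s, _, _ in run_episode(theta)]))
```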

    Towards Robot Autonomy in Medical Procedures Via Visual Localization and Motion Planning

    Robots performing medical procedures with autonomous capabilities have the potential to positively affect patient care and healthcare system efficiency. These benefits can be realized by autonomous robots facilitating novel procedures, increasing operative efficiency, standardizing intra- and inter-physician performance, democratizing specialized care, and focusing the physician's time on subtasks that best leverage their expertise. However, enabling medical robots to act autonomously in a procedural environment is extremely challenging. The deforming and unstructured nature of the environment, the lack of features in the anatomy, and sensor size constraints, coupled with the millimeter-level accuracy required for safe medical procedures, introduce a host of challenges not faced by robots operating in structured environments such as factories or warehouses. Robot motion planning and localization are two fundamental abilities for enabling robot autonomy. Motion planning methods compute a sequence of safe and feasible motions for a robot to accomplish a specified task, where safe and feasible are defined by constraints with respect to the robot and its environment. Localization methods estimate the position and orientation of a robot in its environment. Developing such methods for medical robots that overcome the unique challenges in procedural environments is critical for enabling medical robot autonomy. In this dissertation, I developed and evaluated motion planning and localization algorithms towards robot autonomy in medical procedures. A majority of my work was done in the context of an autonomous medical robot built for enhanced lung nodule biopsy. First, I developed a dataset of medical environments spanning various organs and procedures to foster future research into medical robots and automation. I used this data in my own work described throughout this dissertation. Next, I used motion planning to characterize the capabilities of the lung nodule biopsy robot compared to existing clinical tools, and I highlighted trade-offs in robot design considerations. Then, I conducted a study to experimentally demonstrate the benefits of the autonomous lung robot in accessing otherwise hard-to-reach lung nodules. I showed that the robot enables access to lung regions beyond the reach of existing clinical tools with millimeter-level accuracy sufficient for accessing the smallest clinically operable nodules. Next, I developed a localization method to estimate the bronchoscope's position and orientation in the airways with respect to a preoperatively planned needle insertion pose. The method can be used by robotic bronchoscopy systems and by traditional manually navigated bronchoscopes. The method is designed to overcome challenges with tissue motion and visual homogeneity in the airways. I demonstrated the success of this method in simulated lungs undergoing respiratory motion and showed the method's ability to generalize across patients.
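    To make the motion-planning side concrete, the following sketch (not the dissertation's planner, which must handle deformable anatomy and continuum-robot kinematics) runs a basic goal-biased RRT in a 2-D workspace with circular obstacles; all geometry and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
OBSTACLES = [((0.5, 0.5), 0.15), ((0.3, 0.75), 0.10)]  # (centre, radius) in a unit square
START, GOAL = np.array([0.1, 0.1]), np.array([0.9, 0.9])
STEP, GOAL_TOL = 0.05, 0.05

def collision_free(p, q, n=10):
    """Check n interpolated points along segment p -> q against all obstacles."""
    for s in np.linspace(0.0, 1.0, n):
        x = (1 - s) * p + s * q
        for centre, radius in OBSTACLES:
            if np.linalg.norm(x - np.asarray(centre)) <= radius:
                return False
    return True

nodes, parents = [START], {0: None}
for _ in range(5000):
    sample = GOAL if rng.random() < 0.1 else rng.random(2)      # goal biasing
    nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
    direction = sample - nodes[nearest]
    new = nodes[nearest] + STEP * direction / (np.linalg.norm(direction) + 1e-12)
    if collision_free(nodes[nearest], new):
        parents[len(nodes)] = nearest
        nodes.append(new)
        if np.linalg.norm(new - GOAL) < GOAL_TOL:
            break

# Walk back from the last added node (the one near the goal if the loop broke early).
path, i = [], len(nodes) - 1
while i is not None:
    path.append(nodes[i])
    i = parents[i]
print("tree size:", len(nodes), "path waypoints:", len(path))
```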

    A comprehensive evaluation of work and simulation based assessment in otolaryngology training

    Introduction: The otolaryngology curriculum requires trainees to show evidence of operative competence before completion of training. The General Medical Council recommended that structured assessment be used throughout training to monitor and guide trainee progression. Despite the reduction in operative exposure and the variation in trainee performance, a 'one size fits all' approach continues to be applied. The number of procedures performed remains the main indicator of competence. Objectives: To analyse the utilisation, reliability and validity of workplace-based assessments in otolaryngology training. To identify, develop and validate a series of simulation platforms suitable for incorporation into the otolaryngology curriculum. To develop a model of interchangeable workplace- and simulation-based assessment that reflects the trainee's trajectory, audit the delivery of training and set milestones for modular learning. Methods: A detailed review of the literature identified a list of procedure-specific assessment tools as well as simulators suitable for use as assessment platforms. A simulation-integrated training programme was piloted and models were tested for feasibility, face, content and construct validity before being incorporated into the North London training programme. The outcomes of workplace- and simulation-based assessments of all core and specialty otolaryngology trainees were collated and analysed. Results: The outcomes of 6535 workplace-based assessments were analysed. The strengths and weaknesses of 4 different assessment tools are highlighted. Validated platforms utilising cadavers, animal tissue, synthetic material and virtual reality simulators were incorporated into the curriculum. 60 trainees and 40 consultants participated in the process and found it of great educational value. Conclusion: Assessment with structured feedback is integral to surgical training. Assessment using validated simulation modules can complement that undertaken in the workplace. The outcomes of structured assessments can be used to monitor and guide trainee trajectory at individual and regional level. The derived learning curves can shape and audit future otolaryngological training.

    Surgical spectral imaging

    Recent technological developments have resulted in the availability of miniaturised spectral imaging sensors capable of operating in the multi- (MSI) and hyperspectral imaging (HSI) regimes. Simultaneous advances in image-processing techniques and artificial intelligence (AI), especially in machine learning and deep learning, have made these data-rich modalities highly attractive as a means of extracting biological information non-destructively. Surgery in particular is poised to benefit from this, as spectrally-resolved tissue optical properties can offer enhanced contrast as well as diagnostic and guidance information during interventions. This is particularly relevant for procedures where inherent contrast is low under standard white light visualisation. This review summarises recent work in surgical spectral imaging (SSI) techniques, taken from PubMed, Google Scholar and arXiv searches spanning the period 2013–2019. New hardware, optimised for use in both open and minimally-invasive surgery (MIS), is described, and recent commercial activity is summarised. Computational approaches to extract spectral information from conventional colour images are reviewed, as tip-mounted cameras become more commonplace in MIS. Model-based and machine learning methods of data analysis are discussed in addition to simulation, phantom and clinical validation experiments. A wide variety of surgical pilot studies are reported, but it is apparent that further work is needed to quantify the clinical value of MSI/HSI. The current trend toward data-driven analysis emphasises the importance of widely-available, standardised spectral imaging datasets, which will aid understanding of variability across organs and patients, and drive clinical translation.
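    One common model-based analysis step in this area is estimating chromophore abundances from a measured pixel spectrum by linear unmixing. The sketch below is illustrative only: the wavelengths and endmember spectra are made up, and the non-negative least-squares solve uses SciPy.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns) sampled at a few wavelengths (nm);
# the values are invented for illustration, not measured absorption data.
wavelengths = np.array([500, 530, 560, 590, 620, 650])
E = np.array([[0.9, 0.2],
              [0.7, 0.4],
              [0.5, 0.6],
              [0.4, 0.5],
              [0.2, 0.3],
              [0.1, 0.1]])                     # shape: (bands, endmembers)

# Simulate a measured pixel spectrum as a non-negative mixture plus noise.
true_abundances = np.array([0.3, 0.6])
measured = E @ true_abundances + 0.01 * np.random.default_rng(0).normal(size=len(wavelengths))

# Non-negative least squares recovers per-pixel abundance estimates.
abundances, residual = nnls(E, measured)
print("estimated abundances:", abundances, "residual norm:", residual)
```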

    Pose Estimation using a C-Arm Fluoroscope

    Clinical methods that are performed using image-related technologies, i.e., image-guided procedures, have registered an increase in popularity over the last two decades. Traditional surgical methods have been replaced by minimally invasive methods in order to decrease overall costs and enhance productivity-related aspects. Modern medical procedures such as bronchoscopy and cardiology are characterized by a focus on minimizing invasive actions, with C-arm X-ray imaging devices playing an important role in the field. Although C-arm fluoroscope systems are a widely used technology to aid navigation in minimally invasive interventions, they often lack quality in the information provided to the surgeon. The two-dimensional information that is obtained is not enough to provide full knowledge of the three-dimensional location of the region of interest, making the establishment of a method able to offer three-dimensional information an essential task. A first step towards this goal was taken by defining a method that estimates the position and orientation (pose) of an object with respect to the C-arm system. In order to run these tests with a C-arm system, its geometry first had to be defined and the system calibrated. The work developed and presented in this thesis focuses on a method that proved sufficiently robust and efficient to provide a solid basis for reaching the main goal: a technique that improves the quality of the information acquired with a C-arm system during an intervention.
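    A standard way to recover an object's pose from a single calibrated projection, in the spirit of the work above but not necessarily this thesis' exact method, is to solve the Perspective-n-Point problem from known fiducial positions and their detected 2-D projections. The sketch below does this with OpenCV on synthetic data; the fiducial layout, intrinsics and ground-truth pose are assumed values.

```python
import numpy as np
import cv2

# Hypothetical calibration phantom: known 3-D fiducial positions (mm) on the object.
object_points = np.array([[0, 0, 0], [40, 0, 0], [40, 40, 0], [0, 40, 0],
                          [20, 20, 15], [10, 30, 15], [30, 10, 25], [5, 5, 30]],
                         dtype=np.float64)

# Idealised pinhole intrinsics for the calibrated fluoroscope (assumed values);
# distortion is taken as already corrected.
K = np.array([[1200.0, 0.0, 512.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Project the fiducials with a known ground-truth pose to create synthetic detections.
rvec_true = np.array([0.10, -0.20, 0.05])
tvec_true = np.array([5.0, -10.0, 900.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Solve PnP to recover the object's pose with respect to the C-arm camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, rvec.ravel(), tvec.ravel())
```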

    Design and fabrication of silicone-based composite tissue-mimicking phantoms for medical training

    Silicone-based composite phantoms were fabricated to be used as education models in medical training. A matrix of silicone formulations was developed to mimic the ultrasonography, mammography, surgical, and microsurgical responses of different human tissues and organs. The performance of two different additives, i) silicone oil and ii) vinyl-terminated poly(dimethylsiloxane) (PDMS), was monitored with the acoustic setup and evaluated by surgeons. Breast cancer is one of the most common types of cancer among women, and early diagnosis significantly improves patient outcomes. Surgeons in training require extensive practice to contribute meaningfully to this outcome. Therefore, breast simulation models containing a skin layer, inner breast tissue, and tumor structures that allow samples to be collected with a biopsy needle were fabricated for use in ultrasonography, along with mammography models for tumor diagnostics and breast oncoplasty models on which surgeons can practice their suturing skills. The development of microsurgical techniques represents a major advance in the treatment of injured peripheral nerves, and with the aid of operating microscopes it is possible to evaluate the severity of neural trauma. Advanced microsurgical skills are essential for the success of microsurgery and, in turn, for the preservation of nerve continuity. With this motivation, a peripheral nerve phantom containing a skin layer, fascia, epineurium, connective tissue, fascicles, and a muscle layer was designed. Herein, we highlight the fabrication of a realistic, durable, accessible, and cost-effective training platform comprising breast ultrasonography, mammography, and oncoplasty models, as well as a peripheral nerve model with complex hierarchical layers. For training purposes, the media closest to reality, fresh cadavers, are hard to obtain due to cost and limited availability. Hence, a variety of synthetic tissues were also designed by optimizing silicone formulations. Surgical simulation models mimicking various human tissues and organs were fabricated, including i) multiple layers of skin, ii) the axilla and axillary lymph nodes, iii) veins, and iv) the isthmus of the thyroid gland, cricoid cartilage, tongue, larynx, esophagus, tracheal rings, and bronchial tree for the tracheostomy and bronchoscopy models.

    Learning-based depth and pose prediction for 3D scene reconstruction in endoscopy

    Colorectal cancer is the third most common cancer worldwide. Early detection and treatment of pre-cancerous tissue during colonoscopy is critical to improving prognosis. However, navigating within the colon and inspecting the endoluminal tissue comprehensively are challenging, and success in both varies with the endoscopist's skill and experience. Computer-assisted interventions in colonoscopy show much promise in improving navigation and inspection. For instance, 3D reconstruction of the colon during colonoscopy could promote more thorough examinations and increase adenoma detection rates, which are associated with improved survival rates. Given the stakes, this thesis seeks to advance the state of research from feature-based traditional methods closer to a data-driven 3D reconstruction pipeline for colonoscopy. More specifically, this thesis explores different methods that improve subtasks of learning-based 3D reconstruction. The main tasks are depth prediction and camera pose estimation. As training data are unavailable, the author, together with her co-authors, proposes and publishes several synthetic datasets and promotes domain adaptation models to improve applicability to real data. We show, through extensive experiments, that our depth prediction methods produce more robust results than previous work. Our pose estimation network trained on our new synthetic data outperforms self-supervised methods on real sequences. Our box embeddings allow us to interpret the geometric relationship and scale difference between two images of the same surface without the need for feature matches that are often unobtainable in surgical scenes. Together, the methods introduced in this thesis help work towards a complete, data-driven 3D reconstruction pipeline for endoscopy.
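    To show the depth-prediction subtask in code, the sketch below trains a deliberately tiny encoder-decoder with an L1 depth loss on random tensors; the architecture, data and hyperparameters are placeholders, not the networks developed in the thesis.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """A minimal encoder-decoder; real monocular-depth networks are far deeper
    and are trained on rendered endoscopy frames with domain adaptation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for (frame, ground-truth depth) pairs from a synthetic dataset.
frames = torch.rand(4, 3, 64, 64)
gt_depth = torch.rand(4, 1, 64, 64)

for step in range(5):
    pred = model(frames)
    loss = torch.mean(torch.abs(pred - gt_depth))   # L1 depth supervision
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    print(f"step {step}: L1 loss = {loss.item():.4f}")
```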

    Open-source virtual bronchoscopy for image guided navigation

    This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early-stage cancer nodules. The open-source platform 3D Slicer was used for creating freely available algorithms for virtual bronchoscopy. Firstly, the development of an open-source semi-automatic algorithm for prediction of solitary pulmonary nodule malignancy is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness) which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared to independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate performance of the algorithm. Different CT scan parameters are investigated and parameters for successful airway segmentation are optimized. Slice thickness is the most influential parameter, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope for the development of a GPS-like system for navigating within the lungs. Tools for pre-procedural planning and for helping with navigation are provided. Registration between the lungs of the patient and the virtually reconstructed airway tree is achieved using a landmark-based approach. In an attempt to reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a target for biopsy on the patient CT image, and pathways from the trachea towards the selected targets are automatically created. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5 DOF sensing that does not provide roll rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in a pre-clinical breathing lung model. Animal studies were performed to evaluate the overall system performance.
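    As a minimal illustration of the airway segmentation step described above (the thesis' 3D Slicer pipeline is more elaborate, with leak handling and parameter optimisation), the sketch below grows an air-like region from a seed placed in the trachea using SimpleITK; the file name, seed index and Hounsfield thresholds are assumptions.

```python
import SimpleITK as sitk

# Load a chest CT volume (placeholder path) and grow a connected region of
# air-like voxels from a seed inside the trachea. A production airway
# segmentation would add leak detection and adaptive thresholds.
ct = sitk.ReadImage("chest_ct.nii.gz")          # hypothetical input file

trachea_seed = (256, 200, 380)                  # voxel index inside the trachea (assumed)
airway = sitk.ConnectedThreshold(ct,
                                 seedList=[trachea_seed],
                                 lower=-1024, upper=-800,   # approximate air HU range
                                 replaceValue=1)

# Cast to a binary mask and lightly close small gaps before surface extraction
# for the 3D rendered airway model.
airway = sitk.Cast(airway, sitk.sitkUInt8)
airway = sitk.BinaryMorphologicalClosing(airway, [2, 2, 2])
sitk.WriteImage(airway, "airway_mask.nii.gz")
```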