1,004 research outputs found

    Dual-camera infrared guidance for computed tomography biopsy procedures

    A CT-guided biopsy is a specialised surgical procedure in which a needle is used to withdraw a tissue or fluid specimen from a lesion of interest, with the needle guided under computed tomography (CT) imaging viewed by a clinician. CT-guided biopsies invariably expose patients and operators to high doses of radiation and are lengthy procedures; the lack of spatial referencing while guiding the needle along the required entry path is among the difficulties currently encountered. This research focuses on addressing two of the challenges clinicians currently face when performing CT-guided biopsy procedures. The first challenge is the lack of spatial referencing during a biopsy procedure, with the requirement for improved accuracy and a reduction in the number of repeated scans. To achieve this, an infrared navigation system was designed and implemented, extending an existing approach to help guide the clinician in advancing the biopsy needle. The extended algorithm computes a scaled estimate of the needle endpoint and assists with navigating the biopsy needle through a dedicated, custom-built graphical user interface. The second challenge was to design and implement a training environment in which clinicians could practise different entry angles and scenarios. A prototype training module was designed and built to provide simulated biopsy procedures and thereby help improve spatial referencing. Various experiments and scenarios were designed and tested to demonstrate the correctness of the algorithm and to provide realistic simulated settings in which operators could practise different entry angles and familiarise themselves with the equipment. A comprehensive survey was also undertaken to investigate the advantages and disadvantages of the system.
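
    The abstract does not spell out the extended endpoint-estimation algorithm, but the idea of a "scaled estimate of the needle endpoint" can be illustrated with a minimal sketch: two infrared markers on the visible needle shaft define its axis, and the hidden tip is extrapolated a known, scaled distance along that axis. The marker layout, names, and dimensions below are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def estimate_needle_tip(marker_a, marker_b, needle_length, marker_offset):
    """Estimate the hidden needle tip from two tracked infrared markers.

    marker_a, marker_b : 3D marker positions on the needle shaft (mm),
        with marker_b the one closer to the patient.
    needle_length : full needle length (mm).
    marker_offset : distance from marker_b to the needle hub (mm).
    """
    axis = marker_b - marker_a
    axis = axis / np.linalg.norm(axis)        # unit vector along the shaft
    # Scale along the shaft direction to reach the unseen tip.
    return marker_b + axis * (needle_length - marker_offset)

# Illustrative example: markers 40 mm apart on a 150 mm needle.
a = np.array([0.0, 0.0, 100.0])
b = np.array([0.0, 0.0, 60.0])
print(estimate_needle_tip(a, b, needle_length=150.0, marker_offset=10.0))
# -> [  0.   0. -80.]
```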

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
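
    Among the reviewed optical techniques, a calibrated stereo laparoscope allows matched image points to be triangulated into 3D surface points. The sketch below shows generic linear (DLT) triangulation of a single correspondence; it is a textbook primitive, not code from the paper, and the camera matrices are illustrative.

```python
import numpy as np

def triangulate_point(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one matched stereo point pair.

    P_left, P_right : 3x4 projection matrices from stereo calibration.
    x_left, x_right : (u, v) pixel coordinates of the same tissue point.
    """
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, vt = np.linalg.svd(A)       # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]

# Illustrative calibration: identical intrinsics, 5 mm horizontal baseline.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])
X_true = np.array([10.0, -4.0, 80.0, 1.0])
xl = (P_l @ X_true)[:2] / (P_l @ X_true)[2]
xr = (P_r @ X_true)[:2] / (P_r @ X_true)[2]
print(triangulate_point(P_l, P_r, xl, xr))   # ~ [10. -4. 80.]
```

Applied densely over matched features between the two views, the same operation yields a point cloud of the exposed tissue surface.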

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have become among the most significant challenges facing humanity. As a cross-disciplinary frontier topic combining information science, computer technology, electronic engineering, and biomedical engineering, this work studies modern engineering methods and explores means for the early diagnosis, treatment, and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces the pre-operative and intra-operative multimodal medical imaging methods behind medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intra-operative real-time navigation schemes, three-dimensional visualisation methods, and interactive software techniques; and summarises the clinical applications of the various minimally invasive surgical approaches. It further discusses the advantages and disadvantages of surgical navigation technologies in clinical use worldwide and analyses the latest technical methods in the field. On this basis, it identifies the trend of minimally invasive surgery towards digitalisation, personalisation, precision, integrated diagnosis and treatment, robotisation, and high intelligence. [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
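
    A step common to the navigation pipelines surveyed above is registering pre-operative image space (e.g., CT) to intra-operative tracker space. Below is a minimal sketch of standard SVD-based paired-point rigid registration (Arun's method); the fiducial data are hypothetical, and the review covers far more sophisticated, deformable alternatives.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration between corresponding point sets.

    source, target : (N, 3) matched fiducials, e.g. points picked in CT
        and the same landmarks touched with a tracked pointer.
    Returns (R, t) with target ~= source @ R.T + t.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, tgt_c - R @ src_c

# Sanity check against a known transform (30 deg about z, small translation).
th = np.radians(30.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
ct = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
R, t = rigid_register(ct, ct @ Rz.T + np.array([5.0, 2.0, 1.0]))
```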

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
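
    To make the tool-to-organ collision problem concrete, a minimal proximity check can treat the instrument shaft as a line segment and report its clearance to a sampled organ surface. The geometry, margin, and names below are assumptions for illustration, not methods from the review.

```python
import numpy as np

def shaft_clearance(shaft_start, shaft_end, surface_points):
    """Minimum distance (mm) from a tool shaft segment to surface samples."""
    d = shaft_end - shaft_start
    # Project each surface point onto the segment, clamped to its ends.
    t = np.clip((surface_points - shaft_start) @ d / (d @ d), 0.0, 1.0)
    closest = shaft_start + t[:, None] * d
    return np.min(np.linalg.norm(surface_points - closest, axis=1))

start = np.array([0.0, 0.0, 0.0])
end = np.array([0.0, 0.0, 100.0])
organ = np.random.default_rng(0).normal([20.0, 0.0, 50.0], 5.0, (1000, 3))
if shaft_clearance(start, end, organ) < 5.0:   # assumed 5 mm safety margin
    print("warning: shaft within safety margin of organ surface")
```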

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, often complemented with real-time intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking/localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, and the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All these uncertainties impact the overall targeting accuracy, which represents the error associated with navigating a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Understanding the overall uncertainty of an IGI system is therefore paramount to the outcome of the intervention, as procedure success entails achieving accuracy tolerances specific to individual procedures. This work focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions. We constructed life-size replica patient-specific kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, and the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools to help reduce and optimize the overall navigation/targeting uncertainty.
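
    The dissertation's simulation code is not reproduced in the abstract, but the Monte Carlo idea can be sketched: perturb the fiducial localizations with noise, re-fit the rigid registration, and accumulate the resulting target registration error (TRE) at a clinical target. The fiducial layout, noise level, and helper names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def register(src, tgt):
    """Compact SVD-based rigid registration returning (R, t)."""
    sc, tc = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (tgt - tc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, tc - R @ sc

# Hypothetical fiducial configuration and target inside a kidney phantom (mm).
fiducials = np.array([[40, 0, 0], [0, 40, 0], [-40, 0, 0], [0, -40, 0]], float)
target = np.array([10.0, 5.0, 30.0])
sigma = 0.5   # assumed per-axis fiducial localization noise (mm)

tre = []
for _ in range(10_000):
    noisy = fiducials + rng.normal(0.0, sigma, fiducials.shape)
    R, t = register(fiducials, noisy)       # transform fitted to noisy picks
    tre.append(np.linalg.norm(R @ target + t - target))
print(f"mean TRE {np.mean(tre):.2f} mm, 95th pct {np.percentile(tre, 95):.2f} mm")
```

Repeating such runs while varying the fiducial configuration is one way to identify layouts that minimize TRE at the target, in the spirit of the optimization described above.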

    Human performance in the task of port placement for biosensor use

    Background: We conducted a study of participants' abilities to place a laparoscopic port for in vivo biosensor use. Biosensors have physical limitations that make port placement crucial to proper data collection. A new port placement algorithm enabled evaluation of port locations, using segmented patient data in a virtual environment. Methods: Port placement scoring algorithms were integrated into an image-guided surgery system. Virtual test scenes were created to evaluate various scenarios encountered during biosensor use. Participants were scored on their ability to choose a port location from which points of interest could be scanned with a biosensor. Participants' scores were also compared to those of a port placement algorithm. Results: The port placement algorithm consistently outscored participants by 10–25%. Participants were inconsistent from trial to trial and from participant to participant. Conclusion: Port placement for biosensor procedures could be improved through training or augmentation. Copyright © 2010 John Wiley & Sons, Ltd.
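
    The paper's scoring algorithm is not detailed in the abstract; one plausible minimal formulation scores a candidate port by the fraction of points of interest that fall within an assumed biosensor working range and viewing cone. Every range, angle, and name below is hypothetical.

```python
import numpy as np

def port_score(port, points, max_range=100.0, max_angle_deg=60.0):
    """Fraction of points of interest scannable from a candidate port.

    A point counts when it lies within the sensor's working range and
    within a cone about the port's mean viewing direction (illustrative
    limits, not those of the paper).
    """
    vecs = points - port
    dists = np.linalg.norm(vecs, axis=1)
    view = vecs.mean(axis=0)
    view = view / np.linalg.norm(view)
    cosines = (vecs / dists[:, None]) @ view
    ok = (dists <= max_range) & (cosines >= np.cos(np.radians(max_angle_deg)))
    return ok.mean()

ports = np.array([[0.0, -150.0, 50.0], [80.0, -120.0, 60.0]])  # candidates (mm)
targets = np.random.default_rng(1).normal([0.0, 0.0, 0.0], 30.0, (50, 3))
best = max(ports, key=lambda p: port_score(p, targets))
```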

    Validation of a wearable augmented reality-based device for maxillary repositioning

    Aim: We present a newly designed, localiser-free, head-mounted system featuring augmented reality (AR) as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation. We also implement a novel and ergonomic strategy designed to present AR information to the operating surgeon (hPnP). Methods: The head-mounted wearable system was developed as a stand-alone, video-based, see-through device in which the visual features were adapted to facilitate maxillofacial bone surgery. The system is designed to display the virtual surgical plan overlaid on the real patient. We implemented a method allowing performance of waferless, AR-assisted maxillary repositioning. In vitro testing was conducted on a physical replica of a human skull, and surgical accuracy was measured by comparing the outcomes with those expected to be achievable in a three-dimensional environment. Data were derived using three levels of surgical planning, of increasing complexity, for nine operators with varying levels of surgical skill. Results: The mean linear error was 1.70±0.51 mm. The axial errors were 0.89±0.54 mm on the sagittal axis, 0.60±0.20 mm on the frontal axis, and 1.06±0.40 mm on the craniocaudal axis. The mean angular errors were 3.13°±1.89° in pitch, 1.99°±0.95° in roll, and 3.25°±2.26° in yaw. No significant difference in error was observed among operators, despite variations in surgical experience. Feedback from surgeons was acceptable; all tests were completed within 15 min, and the tool was considered both comfortable and usable in practice. Conclusion: Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions.
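
    The linear, axial, and angular errors reported above can be computed from planned versus achieved poses along the lines of the sketch below. The axis ordering (sagittal, frontal, craniocaudal) and the Z-Y-X Euler convention are assumptions, not necessarily those used in the study.

```python
import numpy as np

def linear_errors(planned, achieved):
    """Per-landmark Euclidean error and per-axis absolute errors (mm)."""
    diff = achieved - planned
    return np.linalg.norm(diff, axis=1), np.abs(diff)

def angular_errors(R):
    """Pitch/roll/yaw (deg) of a residual rotation, Z-Y-X convention."""
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, roll, yaw

# Illustrative landmark data (mm), not measurements from the study.
planned = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
achieved = planned + np.array([[1.2, -0.4, 0.8],
                               [0.9,  0.5, -1.1],
                               [1.5,  0.2, 0.7]])
lin, axial = linear_errors(planned, achieved)
print(f"linear: {lin.mean():.2f} +/- {lin.std():.2f} mm; "
      f"per-axis means: {axial.mean(axis=0)}")
```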

    Engineering precision surgery: Design and implementation of surgical guidance technologies

    In the quest for precision surgery, this thesis introduces several novel detection and navigation modalities for the localization of cancer-related tissues in the operating room. The engineering efforts have focused on image-guided surgery modalities that use the complementary tracer signatures of nuclear and fluorescence radiation. The first part of the thesis covers the use of "GPS-like" navigation concepts to navigate fluorescence cameras during surgery, based on SPECT images of the patient. The second part introduces several new imaging modalities, such as a hybrid device for 3D freehand fluorescence and freehand SPECT imaging and navigation. Furthermore, to improve the detection of radioactive tracer emissions during robot-assisted laparoscopic surgery, a tethered DROP-IN gamma probe is introduced. The clinical indications used to evaluate the new technologies all focused on sentinel lymph node procedures in urology (i.e., prostate and penile cancer). Nevertheless, all presented techniques are of such a nature that they can be applied to different surgical indications, including sentinel lymph node and tumor-receptor-targeted procedures and localization of the primary tumor and metastatic spread. This will hopefully contribute towards more precise, less invasive, and more effective surgical procedures in the field of oncology.
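
    At its simplest, the "GPS-like" navigation concept reduces to expressing the offset between a tracked fluorescence camera and a SPECT-defined target in the camera's own frame, so it can be rendered as an on-screen distance and arrow. The pose conventions and names in this sketch are assumptions, not the thesis's software.

```python
import numpy as np

def guidance(T_cam, target):
    """Navigation cue for steering a tracked camera toward a target.

    T_cam : 4x4 pose of the camera in the tracking frame (camera-to-tracker).
    target : 3D target position in the tracking frame (e.g. from SPECT).
    Returns (distance_mm, unit direction in the camera frame).
    """
    R, t = T_cam[:3, :3], T_cam[:3, 3]
    offset = target - t                  # vector to target, tracker frame
    direction = R.T @ offset             # same vector, camera frame
    return np.linalg.norm(offset), direction / np.linalg.norm(offset)

T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 200.0]             # camera 200 mm above the origin
dist, arrow = guidance(T, np.array([0.0, 0.0, 0.0]))
print(dist, arrow)                        # 200.0, arrow along -z
```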
