19 research outputs found

    Navigated Ultrasound in Laparoscopic Surgery

    Locally rigid, vessel-based registration for laparoscopic liver surgery

    Purpose: Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. Methods: We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked, 2D LUS images. Using landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. Results: Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration errors (TRE) for two in vivo porcine studies were 3.58 and 2.99 mm, respectively. The locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. Conclusion: In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
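    To make the registration step concrete, the following is a minimal sketch (not the authors' implementation) of locally rigid point-to-line ICP: LUS vessel centre points are matched to their closest points on a densely sampled, piecewise-linear CT centre-line model, a rigid transform is re-estimated by least squares, and the two steps alternate until convergence. Function names and the polyline representation are assumptions.

```python
import numpy as np

def closest_points_on_polyline(points, polyline):
    """For each query point, find its closest point on a piecewise-linear centre line."""
    a, b = polyline[:-1], polyline[1:]            # segment endpoints, shape (M-1, 3)
    ab = b - a
    closest = np.empty_like(points)
    for i, p in enumerate(points):
        t = np.einsum('ij,ij->i', p - a, ab) / np.einsum('ij,ij->i', ab, ab)
        t = np.clip(t, 0.0, 1.0)
        cand = a + t[:, None] * ab                # closest point on every segment
        closest[i] = cand[np.argmin(np.linalg.norm(cand - p, axis=1))]
    return closest

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch algorithm)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_point_to_line(us_points, ct_centreline, iters=30):
    """Iteratively register US vessel centre points to a CT centre-line model."""
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = us_points.copy()
    for _ in range(iters):
        target = closest_points_on_polyline(moved, ct_centreline)
        R, t = rigid_fit(moved, target)
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    residuals = moved - closest_points_on_polyline(moved, ct_centreline)
    rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
    return R_total, t_total, rms
```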

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and the environment have become some of the most pressing problems facing humanity. Drawing on the interdisciplinary frontier where information science, computer technology, electronic engineering and biomedical engineering intersect, modern engineering methods are being investigated to explore the early diagnosis, treatment and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualisation methods and interactive software techniques; and summarises the clinical applications of the various minimally invasive surgical approaches. The advantages and disadvantages of surgical navigation technologies worldwide in clinical use are discussed in depth, and the latest methods in the field are analysed. On this basis, the review argues that minimally invasive surgery is moving towards digitalisation, personalisation, precision, integrated diagnosis and treatment, robotics and a high degree of intelligence. [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canada Foundation for Innovation, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.

    Performance of image guided navigation in laparoscopic liver surgery – A systematic review

    Background: Compared to open surgery, minimally invasive liver resection has improved short term outcomes. It is however technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. Methods: Medline, Embase and Cochrane databases were searched using free text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. Therefore, results are presented in tabulated and narrative format. Results: Out of 2015 articles, 17 pre-clinical and 33 clinical papers met inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8–15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state of the art IGS may be useful as a supplementary navigation tool, especially in small liver lesions that are difficult to locate. They are however not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. Conclusions: Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.
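    As a point of reference for the accuracy figures above, target registration error (TRE) is commonly reported as the Euclidean distance between corresponding target positions after registration and their ground-truth locations. The sketch below shows one such computation; the coordinates in the usage example are purely illustrative.

```python
import numpy as np

def tre_stats(registered_targets, reference_targets):
    """Mean and worst-case target registration error (same units as the inputs, e.g. mm):
    Euclidean distance between each registered target and its ground-truth reference."""
    d = np.linalg.norm(np.asarray(registered_targets, dtype=float)
                       - np.asarray(reference_targets, dtype=float), axis=1)
    return d.mean(), d.max()

# Illustrative coordinates only: mean and worst-case TRE over two targets.
tre_mean, tre_max = tre_stats([[10.2, 4.1, 33.0], [22.5, 8.7, 41.2]],
                              [[11.0, 3.5, 34.1], [23.1, 9.9, 40.0]])
```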

    Registration of ultrasound and computed tomography for guidance of laparoscopic liver surgery

    Laparoscopic Ultrasound (LUS) imaging is a standard tool used for image-guidance during laparoscopic liver resection, as it provides real-time information on the internal structure of the liver. However, LUS probes are difficult to handle and their resulting images hard to interpret. Additionally, some anatomical targets such as tumours are not always visible, making the LUS guidance less effective. To solve this problem, registration between the LUS images and a pre-operative Computed Tomography (CT) scan using information from blood vessels has been previously proposed. By merging these two modalities, the relative position between the LUS images and the anatomy of CT is obtained and both can be used to guide the surgeon. The problem of LUS to CT registration is especially challenging, as besides being a multi-modal registration, the field of view of LUS is significantly smaller than that of CT. Therefore, this problem becomes poorly constrained and typically an accurate initialisation is needed. Also, the liver is highly deformed during laparoscopy, complicating the problem further. So far, the methods presented in the literature are not clinically feasible as they depend on manually set correspondences between both images. In this thesis, a solution for this registration problem that may be more transferable to the clinic is proposed. Firstly, traditional registration approaches comprised of manual initialisation and optimisation of a cost function are studied. Secondly, it is demonstrated that a globally optimal registration without a manual initialisation is possible. Finally, a new globally optimal solution that does not require commonly used tracking technologies is proposed and validated. The resulting approach provides clinical value as it does not require manual interaction in the operating room or tracking devices. Furthermore, the proposed method could potentially be applied to other image-guidance problems that require registration between ultrasound and a pre-operative scan.
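    The "manual initialisation plus optimisation of a cost function" family of approaches mentioned above can be sketched compactly. The example below is an illustration only (not the thesis's method, and not its globally optimal variant): rigid LUS-to-CT alignment is cast as minimisation of the mean squared distance from transformed US vessel points to the CT vessel model, starting from a user-supplied initial pose. Function and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def register_us_to_ct(us_vessel_pts, ct_vessel_pts, x0=np.zeros(6)):
    """Optimise a 6-DOF rigid transform (rotation vector + translation, in mm)
    aligning US vessel points to a CT vessel point cloud."""
    tree = cKDTree(ct_vessel_pts)                 # nearest-neighbour lookup in the CT vessel model

    def cost(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        moved = us_vessel_pts @ R.T + x[3:]
        d, _ = tree.query(moved)                  # distance of each US point to the CT vessels
        return np.mean(d**2)

    res = minimize(cost, x0, method='Powell')     # local optimiser: needs a reasonable initialisation
    return res.x, res.fun
```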

    Real-Time 4D Ultrasound Reconstruction for Image-Guided Intracardiac Interventions

    Image-guided therapy addresses the lack of direct vision associated with minimally invasive interventions performed on the beating heart, but requires effective intraoperative imaging. Gated 4D ultrasound reconstruction using a tracked 2D probe generates a time-series of 3D images representing the beating heart over the cardiac cycle. These images have a relatively high spatial resolution and wide field of view, and ultrasound is easily integrated into the intraoperative environment. This thesis presents a real-time 4D ultrasound reconstruction system incorporated within an augmented reality environment for surgical guidance, whose incremental visualization reduces common acquisition errors. The resulting 4D ultrasound datasets are intended for visualization or registration to preoperative images. A human factors experiment demonstrates the advantages of real-time ultrasound reconstruction, and accuracy assessments performed both with a dynamic phantom and intraoperatively reveal RMS localization errors of 2.5–2.7 mm and 0.8 mm, respectively. Finally, clinical applicability is demonstrated by both porcine and patient imaging.
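    A minimal sketch of gated reconstruction, under the assumption that each 2D frame arrives with a calibrated image-to-world transform and a normalised cardiac phase (e.g. from ECG gating): frames are binned by phase and their pixels scattered into one voxel volume per bin. The data layout and names are assumptions, not the thesis's implementation.

```python
import numpy as np

def gated_reconstruction(frames, n_phases=10, vol_shape=(128, 128, 128), spacing=1.0):
    """Bin tracked 2D ultrasound frames by cardiac phase and scatter their pixels into
    one voxel grid per phase (nearest-neighbour pixel insertion, averaging overlaps).

    frames: iterable of (image[H, W], T_image_to_world[4, 4], cardiac_phase in [0, 1))."""
    volumes = np.zeros((n_phases,) + vol_shape, dtype=np.float32)
    counts = np.zeros_like(volumes)
    for image, T, phase in frames:
        k = int(phase * n_phases) % n_phases                  # cardiac-phase bin
        h, w = image.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])   # homogeneous pixel coordinates
        xyz = (T @ pix)[:3].T / spacing                        # pixels mapped into voxel space
        idx = np.round(xyz).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)), axis=1)
        i, j, l = idx[ok].T
        np.add.at(volumes[k], (i, j, l), image.ravel()[ok])
        np.add.at(counts[k], (i, j, l), 1.0)
    return volumes / np.maximum(counts, 1.0)                   # average overlapping samples
```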

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with the tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the necessary visualization and navigation to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
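    The core bookkeeping behind such a mixed reality environment is transform composition: everything tracked magnetically is re-expressed in the coordinate frame of the pre-operative anatomical model so that the live echo plane and the virtual tool can be rendered in the same scene. The sketch below illustrates this under an assumed T_a_b naming convention (T_a_b maps coordinates in frame b into frame a); it is not the system's actual code.

```python
import numpy as np

def compose(*Ts):
    """Chain 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for T in Ts:
        out = out @ T
    return out

# Assumed inputs:
#   T_model_tracker : pre-operative anatomy registered to the magnetic tracker
#   T_tracker_probe : real-time pose of the sensor mounted on the US transducer
#   T_probe_image   : ultrasound calibration (image plane -> probe sensor)
#   T_tracker_tool  : real-time pose of the sensor on the delivery instrument
def scene_poses(T_model_tracker, T_tracker_probe, T_probe_image, T_tracker_tool):
    """Express the live US image plane and the tracked tool in the anatomy's frame,
    so both can be rendered inside the pre-operative model."""
    T_model_image = compose(T_model_tracker, T_tracker_probe, T_probe_image)
    T_model_tool = compose(T_model_tracker, T_tracker_tool)
    return T_model_image, T_model_tool
```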

    Latency and Distortion Compensation in Augmented Environments Using Electromagnetic Trackers

    Augmented reality (AR) systems are often used to superimpose virtual objects or information on a scene to improve situational awareness. Delays in the display system or inaccurate registration of objects destroy the sense of immersion a user experiences when using AR systems. AC electromagnetic trackers are ideally suited for these applications when combined with head orientation prediction to compensate for display system delays. Unfortunately, these trackers do not perform well in environments that contain conductive or ferrous materials, due to magnetic field distortion, unless expensive calibration techniques are applied. In our work we focus on both the prediction and distortion compensation aspects of this application, developing a "small footprint" predictive filter for display lag compensation and a simplified calibration system for AC magnetic trackers. In the first phase of our study we presented a novel method of tracking angular head velocity from quaternion orientation using an Extended Kalman Filter in both single model (DQEKF) and multiple model (MMDQ) implementations. In the second phase of our work we have developed a new method of mapping the magnetic field generated by the tracker without high precision measurement equipment. This method uses simple fixtures with multiple sensors in a rigid geometry to collect magnetic field data in the tracking volume. We have developed a new algorithm to process the collected data and generate a map of the magnetic field distortion that can be used to compensate for distorted measurement data.
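    As a simplified illustration of the prediction idea (not the DQEKF/MMDQ filters themselves), angular velocity can be estimated from consecutive orientation quaternions and used to extrapolate head orientation forward by the display latency. The constant-velocity assumption and the function names are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def angular_velocity(q_prev, q_curr, dt):
    """Finite-difference angular velocity (rad/s, body frame) between two
    orientation quaternions given in [x, y, z, w] order, dt seconds apart."""
    dR = Rotation.from_quat(q_prev).inv() * Rotation.from_quat(q_curr)
    return dR.as_rotvec() / dt

def predict_orientation(q_curr, omega, latency):
    """Propagate the current orientation forward by the display latency,
    assuming constant angular velocity over that interval."""
    dR = Rotation.from_rotvec(omega * latency)
    return (Rotation.from_quat(q_curr) * dR).as_quat()
```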

    Navigating in Patient Space Using Camera Pose Estimation Relative to the External Anatomy

    Ultrasound probe localization is essential for volumetric imaging with a 2D ultrasound probe, and for establishing a recorded anatomical context for ultrasound-guided surgery and for longitudinal studies. The existing techniques for probe localization, however, require external tracking devices, making them inconvenient for clinical use. In addition, the probe pose is typically measured with respect to a fixed coordinate system independent of the patient's anatomy, making it difficult to correlate ultrasound studies across time. This dissertation concerns the development and evaluation of a novel self-contained ultrasound probe tracking system, which navigates the probe in patient space using camera pose estimation relative to the anatomical context. As the probe moves in patient space, a video camera on the probe is used to automatically identify natural skin features and subdermal cues, and match them with a pre-acquired high-resolution 3D surface map that serves as an atlas of the anatomy. We have addressed the problem of distinguishing rotation from translation by including an inertial navigation system (INS) to accurately measure rotation. Experiments on both a phantom containing an image of human skin (palm) as well as an actual human upper extremity (fingers, palm, and wrist) validate the effectiveness of our approach. We have also developed a real-time 3D interactive visualization system that superimposes the ultrasound data within the anatomical context of the exterior of the patient, to permit accurate anatomic localization of ultrasound data. The combination of the proposed tracking approach and the visualization system may have broad implications for ultrasound imaging, permitting the compilation of volumetric ultrasound data as the 2D probe is moved, as well as comparison of real-time ultrasound scans registered with previous scans from the same anatomical location. In a broader sense, tools that self-locate by viewing the patient's exterior could have broad beneficial impact on clinical medicine.
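    Once 2D skin-feature detections have been matched to 3D points on the surface atlas, the camera-to-atlas localization step can be treated as a standard perspective-n-point problem. The OpenCV-based example below is an illustrative sketch only (feature matching and INS fusion are omitted), with assumed function and variable names.

```python
import numpy as np
import cv2

def camera_pose_from_skin_features(atlas_pts_3d, image_pts_2d, K, dist_coeffs=None):
    """Estimate the probe camera's pose relative to the patient surface atlas from
    matched 2D feature detections and their 3D atlas positions (PnP with RANSAC)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(atlas_pts_3d, dtype=np.float32),
        np.asarray(image_pts_2d, dtype=np.float32),
        K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 rotation matrix
    T_cam_atlas = np.eye(4)                # maps atlas (patient) coordinates into the camera frame
    T_cam_atlas[:3, :3], T_cam_atlas[:3, 3] = R, tvec.ravel()
    return T_cam_atlas
```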