
    Retopologizing MRI and Diffusion Tensor Tractography Datasets for Real-time Interactivity

    Current technology allows MRI and other patient data to be translated into voxel-based 3D models for the purpose of visualization. However, these voxel models are extremely complex and are not suitable for rapid real-time manipulation. For true “on-the-fly” interactivity, polygon-based models must be hand-built using other methods and imported into a game engine. This project develops an algorithm to translate complex datasets into optimized models for real-time interactivity without sacrificing the accuracy of the original imaging modality. A working prototype, ready for integration into game engines, has been built with brain tumor data exported from OsiriX [1] and 3D Slicer [2] via Mudbox [3], retopologized in 3D Coat [4], and re-imported into Maya [5]. White matter tracts detected by Diffusion Tensor Tractography are exported as volume models using 3D Slicer. The model has been integrated into the Unreal Development Kit (UDK) [6] game engine to facilitate real-time interactivity across multiple platforms, including Mac, PC, Apple iOS, Google Android, Xbox 360, and Sony PlayStation. New techniques are being explored to automate and accelerate the process of retopologizing models.
    1. OsiriX – advanced open-source PACS workstation and DICOM viewer. http://www.osirix-viewer.com/
    2. 3D Slicer – a multi-platform, free and open-source software package for visualization and medical image computing. http://www.3dslicer.org
    3. Mudbox – Autodesk® Mudbox™ 3D digital sculpting and digital painting software. http://usa.autodesk.com
    4. 3D Coat – retopologizing and 3D sculpting software. http://3d-coat.com
    5. Maya – Autodesk® Maya® 3D animation software delivers an end-to-end creative workflow with comprehensive tools for animation, modeling, simulation, visual effects, rendering, matchmoving, and compositing on a highly extensible production platform. http://usa.autodesk.com
    6. Unreal Engine 3 – a complete development framework for PCs, iOS, Xbox 360®, and PlayStation® 3, providing a vast array of core technologies, content creation tools, and support infrastructure. http://www.unrealengine.com
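
    The retopologizing pipeline above is manual (Mudbox → 3D Coat → Maya). As a rough illustration of the kind of automated decimation the authors say they are exploring, the following sketch reduces a dense surface mesh extracted from a voxel segmentation using VTK; the file names and the 90% reduction target are assumptions, not values from the abstract.

```python
import vtk

# Dense polygon mesh exported from a voxel segmentation
# (file names are placeholders, not from the project).
reader = vtk.vtkSTLReader()
reader.SetFileName("tumor_surface_dense.stl")

# Decimation: drop ~90% of the triangles while preserving topology,
# approximating what manual retopology achieves for real-time use.
decimate = vtk.vtkDecimatePro()
decimate.SetInputConnection(reader.GetOutputPort())
decimate.SetTargetReduction(0.90)  # keep ~10% of the triangles
decimate.PreserveTopologyOn()

# Recompute normals so the low-poly mesh shades correctly in a game engine.
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(decimate.GetOutputPort())

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(normals.GetOutputPort())
writer.SetFileName("tumor_surface_lowpoly.stl")
writer.Write()
```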

    Toward a real time multi-tissue Adaptive Physics-Based Non-Rigid Registration framework for brain tumor resection

    This paper presents an adaptive non-rigid registration method for aligning pre-operative MRI with intra-operative MRI (iMRI) to compensate for brain deformation during brain tumor resection. This method extends a successful existing Physics-Based Non-Rigid Registration (PBNRR) technique implemented in ITKv4.5. The new method relies on a parallel, adaptive, heterogeneous biomechanical Finite Element (FE) model that accounts for the tissue/tumor removal depicted in the iMRI. In contrast, the existing PBNRR in ITK relies on a homogeneous, static FE model designed for brain shift only (i.e., it is not designed to handle brain tumor resection). As a result, the new method (1) accurately captures the intra-operative deformations associated with the tissue removal due to tumor resection and (2) reduces the end-to-end execution time to within the time constraints imposed by the neurosurgical procedure. The evaluation of the new method is based on 14 clinical cases with: (i) brain shift only (seven cases), (ii) partial tumor resection (two cases), and (iii) complete tumor resection (five cases). The new adaptive method can reduce the alignment error by up to seven and five times compared to rigid registration and ITK's PBNRR, respectively. On average, the alignment error of the new method is reduced by 9.23 mm and 5.63 mm compared to the alignment error of the rigid and ITK PBNRR methods, respectively. Moreover, the total execution time for all the case studies is about 1 min or less on a Linux Dell workstation with 12 Intel Xeon 3.47 GHz CPU cores and 96 GB of RAM.
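
    PBNRR-style methods typically estimate sparse displacements by block matching between the pre-operative and intra-operative images and then regularize them through a linear elastic finite element model, i.e., by solving a system Ku = f for the nodal displacements. The sketch below shows only that solve step with NumPy/SciPy on a stand-in stiffness matrix; the sizes and values are illustrative, and this is not the ITK implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Illustrative sizes: n mesh nodes, 3 displacement DOFs per node.
n_nodes = 100
n_dof = 3 * n_nodes
rng = np.random.default_rng(0)

# Stand-in for the assembled linear elastic stiffness matrix K
# (symmetric positive definite). A real PBNRR assembles K from the
# tetrahedral brain mesh and the tissue material parameters.
A = rng.standard_normal((n_dof, n_dof))
K = csr_matrix(A @ A.T + n_dof * np.eye(n_dof))

# Stand-in for the force vector derived from block-matching
# displacements at feature points, scattered into global DOFs.
f = rng.standard_normal(n_dof)

# Solve K u = f for the regularized nodal displacement field; the
# pre-operative image would then be warped with u (not shown).
u = spsolve(K, f)
print("max nodal displacement:", np.abs(u).max())
```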

    Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery

    Purpose: Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. To this end, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative, real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also revealed key issues that precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. Methods: The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The pose of the bone fragment can then be updated in real time using an optical tracker, enabling the image guidance. Results: Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about 0.88 ± 0.2 mm (phantom) and 1.15 ± 0.8 mm (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error 1.2 ± 0.3 mm, 2 ± 1°). Conclusion: Experiments showed the feasibility of the image registration framework. It was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.
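
    Fiducial-based registration of this kind ultimately reduces to estimating the rigid transform that best aligns corresponding marker points detected in the CT and fluoroscopic images. The sketch below implements the classic SVD (Procrustes) least-squares solution with synthetic point sets; it is the textbook algorithm, not the RAFS code.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform mapping points P onto Q.

    P, Q: (N, 3) arrays of corresponding fiducial centroids,
    e.g. marker positions in CT space and in fluoroscopy space.
    Returns rotation R (3x3) and translation t (3,).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # guard against reflections
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known pose from noisy fiducials.
rng = np.random.default_rng(1)
P = rng.uniform(-50, 50, (6, 3))       # fiducials in CT (mm)
a = np.deg2rad(20)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([5.0, -3.0, 12.0]) + rng.normal(0, 0.1, P.shape)
R, t = rigid_register(P, Q)
print("rotation error (deg):",
      np.degrees(np.arccos((np.trace(R.T @ R_true) - 1) / 2)))
```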

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening the recovery time post-procedure. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking/localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, as well as the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All these uncertainties impact the overall targeting accuracy, which represents the error associated with the navigation of a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Therefore, understanding the overall uncertainty of an IGI system is paramount to the overall outcome of the intervention, as procedure success entails achieving certain accuracy tolerances specific to individual procedures. This work has focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions. We constructed life-size, replica, patient-specific kidney models from pre-operative images using 3D printing and tissue-emulating materials and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, as well as the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific, kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools to help reduce and optimize the overall navigation/targeting uncertainty.
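
    The Monte Carlo analysis described above can be illustrated compactly: perturb the fiducials with a localization-error model, re-run the rigid registration, and accumulate the resulting target registration error (TRE). In the sketch below the fiducial layout, noise level, and target position are invented for illustration.

```python
import numpy as np

def rigid_register(P, Q):
    # Least-squares rigid transform (SVD/Procrustes), as sketched earlier.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
fiducials = rng.uniform(-60, 60, (8, 3))   # invented marker layout (mm)
target = np.array([20.0, -15.0, 40.0])     # invented lesion position (mm)
fle_sigma = 0.5                            # assumed fiducial localization error (mm)

# Monte Carlo: perturb the measured fiducials, register, and observe
# how the error propagates to the target (TRE).
tre = []
for _ in range(10_000):
    measured = fiducials + rng.normal(0.0, fle_sigma, fiducials.shape)
    R, t = rigid_register(fiducials, measured)
    tre.append(np.linalg.norm(R @ target + t - target))
print(f"mean TRE: {np.mean(tre):.3f} mm, "
      f"95th percentile: {np.percentile(tre, 95):.3f} mm")
```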

    Resection of pediatric intracerebral tumors with the aid of intraoperative real-time 3-D ultrasound

    Purpose: Intraoperative ultrasound (IOUS) has become a useful tool employed daily in neurosurgical procedures. In pediatric patients, IOUS offers a radiation-free and safe imaging method. This study aimed to evaluate the use of a new real-time 3-D IOUS technique (RT-3-D IOUS) in our pediatric patient cohort. Material and methods: Over 24 months, RT-3-D IOUS was performed in 22 pediatric patients (8 girls and 14 boys) with various brain tumors. These lesions were localized by a standard navigation system, followed by analyses before, intermittently during, and after neurosurgical resection using the iU22 ultrasound system (Philips, Bothell, USA) connected to the RT-3-D probe (X7-2). Results: In all 22 patients, real-time 3-D ultrasound images of the lesions could be obtained during neurosurgical resection. Based on this imaging method, rapid orientation in the surgical field was achieved, and the resection approach could be planned for all patients. In 18 patients (82%), RT-3-D IOUS revealed a gross total resection with a favorable neurological outcome. Conclusion: RT-3-D IOUS provides the surgeon with advanced orientation at the tumor site via immediate, live, two-plane imaging. However, navigation systems have yet to be combined with RT-3-D IOUS. This combination would further improve intraoperative localization.

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. Progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to “see” and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the necessary visualization and navigation to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect, in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
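
    Guidance environments of this kind hinge on expressing the tracked instruments and the pre-operative models in a common frame by composing rigid transforms reported by the magnetic tracker. A minimal sketch of that transform chaining with 4×4 homogeneous matrices follows; the frame names and hard-coded transform values are placeholders for tracker readings and calibration results.

```python
import numpy as np

def hmat(R, t):
    """4x4 homogeneous transform; T_a_b maps frame-b points into frame a."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Placeholder poses; in practice these come from the magnetic tracker
# (sensor poses) and from probe/image calibration, not constants.
T_tracker_tool = hmat(np.eye(3), [10.0, 0.0, 50.0])   # tool sensor pose
T_tracker_probe = hmat(np.eye(3), [0.0, 20.0, 40.0])  # echo probe sensor pose
T_probe_image = hmat(np.eye(3), [0.0, 0.0, 5.0])      # probe-to-image calibration

# Tool tip in the ultrasound image frame: image <- probe <- tracker <- tool.
T_image_tool = (np.linalg.inv(T_probe_image)
                @ np.linalg.inv(T_tracker_probe)
                @ T_tracker_tool)
tip_in_image = T_image_tool @ np.array([0.0, 0.0, 0.0, 1.0])
print("tool tip in echo-image coordinates (mm):", tip_in_image[:3])
```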

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform the procedure. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the da Vinci Si robotic surgical vision system, whose video streams generate approximately 360 megabytes of data per second, illustrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Real-time processing of this large stream of data on a bedside PC, in a single- or dual-node setup, may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, or 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation, and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the Message Passing Interface (MPI). Our segmentation and registration algorithms achieved acceleration factors of about 2 and 8 times, respectively. To achieve a higher frame rate, we also resized images and reduced the overall processing time. As a result, using a high-speed network to access computing clusters with GPUs and running these algorithms in parallel will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
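
    The three-application design (acquisition, processing, display) with MPI image transfer can be suggested in a few lines of mpi4py. The round-robin dispatch, frame size, and downsampling stub below are assumptions for illustration, not the authors' code.

```python
# Requires at least 2 ranks, e.g.:  mpirun -n 4 python frames_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N_FRAMES = 30  # one second of 1080p video at 30 fps (illustrative)

if rank == 0:
    # Acquisition + display process: round-robin frames to the workers,
    # then collect the processed frames for display.
    reqs = []
    for i in range(N_FRAMES):
        frame = np.zeros((1080, 1920, 3), np.uint8)  # stand-in for capture
        reqs.append(comm.isend(frame, dest=1 + i % (size - 1), tag=i))
    for _ in range(N_FRAMES):
        processed = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
    for r in reqs:
        r.wait()
    for w in range(1, size):
        comm.send(None, dest=w, tag=N_FRAMES)  # shutdown sentinel
else:
    # Worker process: stub "processing" (downsampling) standing in for
    # the segmentation/registration kernels the paper runs on GPUs.
    while True:
        st = MPI.Status()
        frame = comm.recv(source=0, tag=MPI.ANY_TAG, status=st)
        if frame is None:
            break
        comm.send(frame[::2, ::2], dest=0, tag=st.Get_tag())
```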

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
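
    Among the optical techniques such a review covers, passive stereo is the simplest to sketch: compute a disparity map from a rectified laparoscopic stereo pair and convert it to depth via Z = fB/d. The OpenCV snippet below is a generic illustration; the file names, focal length, and baseline are assumed calibration values, not data from the paper.

```python
import cv2
import numpy as np

# Rectified stereo pair from a stereo laparoscope (file names assumed).
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; the parameters are typical starting points.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point

# Depth from disparity: Z = f * B / d, with assumed focal length f (pixels)
# and baseline B (mm) taken from the stereo calibration.
f_px, baseline_mm = 800.0, 4.0
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = f_px * baseline_mm / disparity[valid]
print("median reconstructed tissue depth (mm):", np.median(depth_mm[valid]))
```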

    Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance

    Image-guided therapy procedures under MRI guidance have been a focused research area over the past decade. Over this period, various MRI-guided robotic devices have been developed and used clinically for percutaneous interventions such as prostate biopsy, brachytherapy, and tissue ablation. Though MRI provides better soft-tissue contrast than Computed Tomography and Ultrasound, it poses various challenges, including constrained space, less ergonomic patient access, and limited material choices due to its high magnetic field. Even with advancements in MRI-compatible actuation methods and the robotic devices that use them, most MRI-guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI-guided robotic system for prostate biopsy is presented, comprising an MRI-compatible 4-DOF robotic manipulator, a robot controller, and a control application with a Clinical User Interface (CUI) and surgical planning applications (3D Slicer and RadVision). This system utilizes intraoperative images acquired after each full or partial needle insertion for needle tip localization. The presented system was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. The successful translation of such a system utilizing intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI-guided percutaneous interventions. Robot-assisted, closed-loop intervention could help in accurate positioning and localization of the therapy delivery instrument, improve physician and patient comfort, and allow real-time therapy monitoring. Utilizing real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery. Two applications validating the presented architecture, closed-loop needle steering and MRI-guided brain tumor ablation, are demonstrated under real-time MRI guidance.
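
    A closed-loop needle-steering iteration of the kind validated here can be summarized as: localize the tip in the latest real-time image, compute the error to the target, and command a corrective motion. The loop below is a schematic proportional controller over a simulated tip; the gains, tolerance, and geometry are invented, and it stands in for, rather than reproduces, the BWH system.

```python
import numpy as np

rng = np.random.default_rng(3)
tip = np.array([10.0, 5.0, 80.0])      # simulated needle tip (mm)
target = np.array([12.0, 6.0, 120.0])  # planned target (mm, invented)
tolerance_mm, gain = 1.0, 0.5          # assumed control parameters

for step in range(100):
    # Stand-in for needle-tip localization in the latest real-time MR
    # image (intraoperative feedback with some measurement noise).
    measured = tip + rng.normal(0.0, 0.2, 3)
    error = target - measured
    if np.linalg.norm(error) < tolerance_mm:
        print(f"target reached at step {step}, "
              f"residual {np.linalg.norm(target - tip):.2f} mm")
        break
    # Proportional correction, standing in for the insertion/rotation
    # commands sent to the MRI-compatible manipulator.
    tip = tip + gain * error
```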

    Impact of Soft Tissue Heterogeneity on Augmented Reality for Liver Surgery

    This paper presents a method for real-time augmented reality of internal liver structures during minimally invasive hepatic surgery. Vessels and tumors computed from pre-operative CT scans can be overlaid onto the laparoscopic view for surgical guidance. Compared to current methods, our method is able to locate the in-depth positions of the tumors based on partial three-dimensional liver tissue motion using a real-time biomechanical model. This model makes it possible to properly handle the motion of internal structures even in the case of anisotropic or heterogeneous tissues, as is the case for the liver and many anatomical structures. Experiments conducted on a phantom liver measure the accuracy of the augmentation, while real-time augmentation on an in vivo human liver during actual surgery shows the benefits of such an approach for minimally invasive surgery.
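
    The step the abstract highlights, recovering in-depth tumor positions from partial surface motion, requires propagating observed surface displacements into the volume. The paper does this with a real-time biomechanical model; as a deliberately crude stand-in, the sketch below interpolates the tumor displacement from surface control points with inverse-distance weighting, on invented geometry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented geometry: visible liver surface points and the pre-operative
# tumor centroid, both in millimetres.
surface_pts = rng.uniform(-50.0, 50.0, (200, 3))
tumor = np.array([5.0, -10.0, -30.0])

# Displacements observed on the surface (e.g. from laparoscopic 3D
# reconstruction); here a synthetic smooth field.
surface_disp = 0.1 * surface_pts + np.array([2.0, 0.0, -1.0])

# Inverse-distance weighting: a crude substitute for the paper's
# biomechanical model, which additionally accounts for tissue
# heterogeneity and anisotropy.
d = np.linalg.norm(surface_pts - tumor, axis=1)
w = 1.0 / (d + 1e-6) ** 2
tumor_disp = (w[:, None] * surface_disp).sum(axis=0) / w.sum()

print("estimated in-depth tumor displacement (mm):", tumor_disp.round(2))
print("updated tumor position for the AR overlay:", (tumor + tumor_disp).round(2))
```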