2,384 research outputs found

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. Progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and with virtual models of the delivery instruments, tracked in real time using magnetic tracking technology. As a result, the otherwise context-less echo images can be interpreted within the anatomical context provided by the pre-operative models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies were conducted in vivo in swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the Universal Cardiac Introducer (UCI), the proposed mixed reality environment provided the visualization and navigation needed to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect, in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
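
    As an aside on the mechanics of such guidance: overlaying a magnetically tracked instrument on an ultrasound image amounts to chaining rigid-body transforms between coordinate frames. The following Python/numpy sketch illustrates that chain under hypothetical, made-up calibration values; the transform names, offsets, and tip calibration are placeholders, not the thesis's actual data.

```python
import numpy as np

def to_homogeneous(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical poses reported by the magnetic tracker (identity rotations
# and made-up offsets stand in for live measurements and calibrations).
T_tracker_tool = to_homogeneous(np.eye(3), np.array([10.0, 5.0, 120.0]))  # tool sensor -> tracker
T_tracker_probe = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 100.0]))  # probe sensor -> tracker
T_image_probe = to_homogeneous(np.eye(3), np.zeros(3))                    # probe sensor -> image (calibration)

# Instrument tip expressed in its own sensor frame (from tip calibration).
tip_in_tool = np.array([0.0, 0.0, 35.0, 1.0])

# Chain the frames: tool -> tracker -> probe -> ultrasound image.
T_image_tool = T_image_probe @ np.linalg.inv(T_tracker_probe) @ T_tracker_tool
tip_in_image = T_image_tool @ tip_in_tool
print(tip_in_image[:3])  # where to draw the virtual tool tip on the echo image
```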

    All-Optical 4D In Vivo Monitoring And Manipulation Of Zebrafish Cardiac Conduction

    The cardiac conduction system is vital for the initiation and maintenance of the heartbeat. In recent years, the zebrafish (Danio rerio) has emerged as a promising model organism for studying this specialized system. The embryonic zebrafish heart’s unique accessibility to light microscopy has put it in the focus of many cardiac researchers. However, imaging cardiac conduction in vivo has remained a challenge. Typically, hearts had to be removed from the animal to make them accessible to fluorescent dyes and electrophysiology. Furthermore, no technique provided enough spatial and temporal resolution to study the importance of individual cells in the myocardial network. With the advent of light sheet microscopy, better camera technology, new fluorescent reporters, and advanced image analysis tools, all-optical in vivo mapping of cardiac conduction is now within reach. In the course of this thesis, I developed new methods to image and manipulate cardiac conduction in 4D with cellular resolution in the unperturbed zebrafish heart. Using these methods, I detected the first calcium sparks and revealed the onset of cardiac automaticity in the early heart tube. Furthermore, I visualized the 4D cardiac conduction pattern in the embryonic heart and used it to study component-specific calcium transients. In addition, I tested the robustness of embryonic cardiac conduction under aggravated conditions and found new evidence for the presence of an early ventricular pacemaker system. My results lay the foundation for novel, non-invasive in vivo studies of cardiac function and performance.
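
    To illustrate the kind of analysis such all-optical mapping enables: calcium-transient times can be estimated per cell from fluorescence traces, and mapping them across the myocardium yields the conduction pattern. Below is a minimal Python sketch of transient detection via dF/F0 normalization and peak finding; the thresholds, sampling rate, and synthetic trace are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def transient_times(trace, fs, baseline_frames=20):
    """Estimate calcium-transient times (seconds) from a single-cell
    fluorescence trace sampled at fs Hz, via dF/F0 peak detection."""
    f0 = np.median(trace[:baseline_frames])          # resting fluorescence
    dff = (trace - f0) / f0                          # normalized dF/F0
    # Peaks must be prominent and at least 100 ms apart (heart-rate bound).
    peaks, _ = find_peaks(dff, prominence=0.2, distance=int(0.1 * fs))
    return peaks / fs

# Example: synthetic ~3 Hz beating signal sampled at 200 Hz.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = 1.0 + 0.5 * np.clip(np.sin(2 * np.pi * 3.0 * t), 0, None) ** 4
print(transient_times(trace, fs))
```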

    Interface Design for a Virtual Reality-Enhanced Image-Guided Surgery Platform Using Surgeon-Controlled Viewing Techniques

    An initiative has been taken to develop a VR-guided cardiac interface that displays and delivers information without affecting the surgeons’ natural workflow, while yielding better accuracy and task completion times than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes, and an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for stereoscopic and monoscopic biplanar display conditions, as well as differences between the three types of control modalities. One particular interface prototype demonstrated a significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for prospective development iterations.

    Open-source software in medical imaging: development of OsiriX

    Purpose: Open-source software (OSS) development for medical imaging enables collaboration of individuals and groups to produce high-quality tools that meet user needs. This process is reviewed and illustrated with OsiriX, a fast DICOM viewer for the Apple Macintosh.

    Materials and methods: OsiriX is an OSS application for the Apple Macintosh under Mac OS X v10.4 or higher, specifically designed for navigation and visualization of multimodality and multidimensional images: 2D Viewer, 3D Viewer, 4D Viewer (3D series with a temporal dimension, for example cardiac CT), and 5D Viewer (3D series with temporal and functional dimensions, for example cardiac PET-CT). The 3D Viewer offers all modern rendering modes: multiplanar reconstruction, surface rendering, volume rendering, and maximum intensity projection. All these modes support 4D data and can produce image fusion between two different series (for example, PET-CT). OsiriX was developed using the Apple Xcode development environment and the Cocoa framework as both a DICOM PACS workstation for medical imaging and an image processing software package for medical research (radiology and nuclear imaging), functional imaging, 3D imaging, confocal microscopy, and molecular imaging.

    Results: OsiriX, an open-source program by Antoine Rosset, a radiologist and software developer, was designed specifically for the needs of advanced imaging modalities. The software turns an Apple Macintosh into a DICOM PACS workstation for medical imaging and image processing. OsiriX is distributed free of charge under the GNU General Public License and its source code is available to anyone. This system illustrates how open software development for medical imaging tools can be successfully designed, implemented, and disseminated.

    Conclusion: OSS development can provide useful, cost-effective tools tailored to specific needs and clinical tasks. The integrity and quality assurance of open software developed by a community of users do not follow the traditional conformance and certification required for commercial medical software. However, open software can lead to innovative solutions, designed by users, that are better suited for specific tasks.
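
    Two of the rendering operations named above, maximum intensity projection and PET-CT fusion, are simple to express in array form. The following Python/numpy sketch is a toy illustration of both; OsiriX itself is built on Objective-C/Cocoa, and the data, blend weight, and normalization here are assumptions for demonstration only.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Collapse a 3D volume along one axis, keeping the brightest voxel
    along each ray: the classic MIP rendering mode."""
    return volume.max(axis=axis)

def fuse(ct_image, pet_image, alpha=0.4):
    """Alpha-blend a normalized PET image over a CT image, mimicking
    PET-CT fusion between two co-registered series."""
    def norm(img):
        return (img - img.min()) / (np.ptp(img) + 1e-9)
    return (1 - alpha) * norm(ct_image) + alpha * norm(pet_image)

# Toy volumes standing in for co-registered CT and PET data.
rng = np.random.default_rng(0)
ct = rng.random((64, 256, 256))
pet = rng.random((64, 256, 256))
overlay = fuse(maximum_intensity_projection(ct), maximum_intensity_projection(pet))
print(overlay.shape)  # (256, 256)
```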

    Spatial Orientation in Cardiac Ultrasound Images Using Mixed Reality: Design and Evaluation

    Spatial orientation is an important skill in structural cardiac imaging. Until recently, 3D cardiac ultrasound has been visualized on a flat screen using volume rendering. Mixed reality devices enhance depth perception, spatial awareness, interaction, and integration with the physical world, which can prove advantageous for 3D cardiac ultrasound images. In this work, we describe the design of a system for rendering 4D (3D + time) cardiac ultrasound data as virtual objects and evaluate it for ease of spatial orientation by comparing it with a standard clinical viewing platform in a user study. The user study required eight participants to perform timed tasks and rate their experience. The results showed that virtual objects in mixed reality provided easier spatial orientation and morphological understanding despite lower perceived image quality. Participants familiar with mixed reality were quicker to orient in the tasks. This suggests that familiarity with the environment plays an important role and that, with improved image quality and increased use, mixed reality applications may perform better than conventional 3D echocardiography viewing systems.
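
    One plausible way to turn an ultrasound volume into a virtual object for a mixed reality headset is to extract an iso-surface mesh per time frame, for example with marching cubes. The Python sketch below uses scikit-image for illustration; the paper does not specify this pipeline, and the iso-level and toy volume are assumptions.

```python
import numpy as np
from skimage import measure  # scikit-image

def ultrasound_to_mesh(volume, iso_level):
    """Extract a triangle mesh from one 3D frame of an ultrasound
    sequence so it can be shown as a virtual object.
    Returns vertices (N, 3) and triangle indices (M, 3)."""
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=iso_level)
    return verts, faces

# Toy volume: a bright sphere standing in for a cardiac structure.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)
verts, faces = ultrasound_to_mesh(volume, iso_level=0.5)
print(len(verts), len(faces))
```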

    Dynamic Image Processing for Guidance of Off-pump Beating Heart Mitral Valve Repair

    Compared to conventional open heart procedures, minimally invasive off-pump beating heart mitral valve repair aims to deliver equivalent treatment for mitral regurgitation with reduced trauma and side effects. However, minimally invasive approaches are often limited by the lack of a direct view of surgical targets and/or tools, a challenge compounded by potential movement of the target during the cardiac cycle. For this reason, sophisticated image guidance systems are required to achieve procedural efficiency and therapeutic success. The development of such guidance systems involves many challenges: the system should provide high-quality visualization of both cardiac anatomy and motion, augmented with virtual models of tracked tools and targets; it should be capable of integrating pre-operative images into the intra-operative scenario through registration techniques; its computation must be fast enough to capture the rapid cardiac motion; and it should be cost effective and easily integrated into the standard clinical workflow.

    This thesis develops image processing techniques to address these challenges, aiming to achieve a safe and efficient guidance system for off-pump beating heart mitral valve repair. These techniques fall into two categories, using 3D and 2D image data respectively. When 3D images are accessible, a rapid multi-modal registration approach is proposed to link the pre-operative CT images to the intra-operative ultrasound images. The ultrasound images display the real-time cardiac motion, enhanced by CT data serving as high-quality 3D context with annotated features. I also developed a method to generate synthetic dynamic CT images, aiming to replace real dynamic CT data in such a guidance system and thereby reduce the radiation dose applied to patients. When only 2D images are available, an approach is developed to track the feature of interest, i.e. the mitral annulus, based on bi-plane ultrasound images and a magnetic tracking system. Modern GPU-based parallel computing is employed in most of these approaches to accelerate computation, in order to capture the rapid cardiac motion with the desired accuracy.

    Validation experiments were performed on phantom, animal, and human data. The overall accuracy of registration and feature tracking with respect to the mitral annulus was about 2-3 mm, with a computation time of 60-400 ms per frame, sufficient for one update per cardiac cycle. The results also demonstrated that the synthetic CT images can provide anatomical representations and registration accuracy very similar to those of real dynamic CT images. These results suggest that the approaches developed in this thesis have good potential for a safer and more effective guidance system for off-pump beating heart mitral valve repair.
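
    As a simplified stand-in for the registration step described above: if corresponding landmarks (for example, mitral annulus points) are available in both the pre-operative CT and the intra-operative ultrasound, a least-squares rigid transform can be computed in closed form with the Kabsch/Procrustes method. The thesis's actual multi-modal registration is more involved; this Python sketch only illustrates the point-based core, with synthetic data.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch) between corresponding
    3D landmark sets, e.g. annulus points in pre-operative CT (source)
    and intra-operative ultrasound (target).
    Returns R, t such that target ~= source @ R.T + t."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)  # 3x3 cross-covariance
    U, _S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(1)
src = rng.random((8, 3)) * 50
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
tgt = src @ R_true.T + np.array([5.0, -2.0, 10.0])
R, t = rigid_register(src, tgt)
print(np.allclose(R, R_true), np.round(t, 3))
```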

    Virtual Neonatal Echocardiographic Training System (VNETS)


    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most widely used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) have been widely adopted because of their ability to capture the human body non-invasively. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from head to feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as producing the highest-quality renderings.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow sets of scans capable of capturing anatomical movements, such as a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time; while fMRIs can capture any anatomical data over time, one of the more common uses is capturing brain activity. The fMRI scanning process is typically broken into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic work is typically a one-off solution to a singular medical case or dataset, so any advances are problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices harnesses the power of the increasing number of mobile computational devices used by medical professionals. Support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology through the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges. Each platform relies on its own coding languages, libraries, and hardware support, and there are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries are generally more efficient at application run time, but they require a separate implementation for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates.

    4D volume raycasting also presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution activity scans. Reading the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization; it requires the ability to correctly align and scale the volumes relative to each other, as well as a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications were built to test the feasibility of 4D volume raycasting, one each for desktop, mobile, and immersive virtual reality. Although the backend implementations differ between the three platforms, the raycasting functionality and features are identical, so the same fMRI dataset results in the same 3D visualization independent of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data.

    The prototype applications’ data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.

    Datasets were tested using the 3D structural data alone, the 4D functional data alone, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
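
    To make the rendering model concrete: raycasting accumulates color and opacity front-to-back along each ray through the volume. The Python sketch below shows that compositing loop for a single time step of a 4D NIfTI series, using orthographic rays along one axis. It is a CPU toy under assumed parameters; the nibabel reader, file name, and per-sample opacity are illustrative, and the dissertation's actual implementations are GPU-based, platform-native code.

```python
import numpy as np
import nibabel as nib  # assumed NIfTI reader, not part of the dissertation's C++ codebase

def raycast_frame(volume, step_alpha=0.02):
    """Orthographic front-to-back compositing along the first axis.
    volume: 3D array normalized to [0, 1]; a grayscale transfer function
    maps intensity to emission, with a fixed opacity scale per sample.
    Returns a 2D rendered image."""
    color = np.zeros(volume.shape[1:])
    alpha = np.zeros(volume.shape[1:])
    for sample in volume:                    # one step along every ray
        a = step_alpha * sample              # opacity from intensity
        color += (1.0 - alpha) * a * sample  # emission weighted by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha.min() > 0.99:               # early ray termination
            break
    return color

# Load a 4D fMRI series (x, y, z, t) and render one time step.
img = nib.load("bold.nii.gz")                # hypothetical file name
data = img.get_fdata()
frame = data[..., 0]
frame = (frame - frame.min()) / (np.ptp(frame) + 1e-9)
print(raycast_frame(np.moveaxis(frame, 2, 0)).shape)
```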