
    Rzsweep: A New Volume-Rendering Technique for Uniform Rectilinear Datasets

    A great challenge in the volume-rendering field is to achieve high-quality images in an acceptable amount of time, and there is always a trade-off between speed and quality. Applications where only high-quality images are acceptable often use the ray-casting algorithm, but this method is computationally expensive and typically achieves low frame rates. The work presented here is RZSweep, a new volume-rendering algorithm for uniform rectilinear datasets that gives high-quality images in a reasonable amount of time. In this algorithm, a plane sweeps the vertices of the implicit grid of regular datasets in depth order, projecting all the implicit faces incident on each vertex. The algorithm exploits the inherent properties of rectilinear datasets. RZSweep is an object-order, back-to-front, direct volume rendering, face-projection algorithm for rectilinear datasets using the cell approach. It is a single-processor, serial algorithm. The simplicity of the algorithm allows the use of the graphics pipeline for hardware-assisted projection and also, with minimal modification, a version of the algorithm that is graphics-hardware independent. Lighting, color, and various opacity transfer functions are implemented to add realism to the final images. Finally, an image comparison is made between RZSweep and a 3D texture-based method for volume rendering, using standard image metrics such as Euclidean and geometric differences.
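
    As a hedged illustration of the compositing order described above (not the authors' implementation, which projects the implicit faces incident on each swept vertex), the following minimal C++ sketch shows object-order, back-to-front compositing over a regular grid with the view direction aligned to the z axis; all names are hypothetical.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

    // Hypothetical transfer function: maps a scalar sample to color/opacity.
    static RGBA transfer(uint8_t s) {
        float v = s / 255.0f;
        return {v, v, v, v * 0.05f};
    }

    // Sweep the grid from the farthest slice to the nearest, compositing
    // each sample over the accumulated image with the "over" operator.
    void renderBackToFront(const std::vector<uint8_t>& vol,
                           int nx, int ny, int nz, std::vector<RGBA>& img) {
        img.assign(static_cast<std::size_t>(nx) * ny, RGBA{});
        for (int z = nz - 1; z >= 0; --z)            // depth-ordered sweep
            for (int y = 0; y < ny; ++y)
                for (int x = 0; x < nx; ++x) {
                    std::size_t v = (static_cast<std::size_t>(z) * ny + y) * nx + x;
                    RGBA s = transfer(vol[v]);
                    RGBA& d = img[static_cast<std::size_t>(y) * nx + x];
                    d.r = s.r * s.a + d.r * (1.0f - s.a);
                    d.g = s.g * s.a + d.g * (1.0f - s.a);
                    d.b = s.b * s.a + d.b * (1.0f - s.a);
                    d.a = s.a + d.a * (1.0f - s.a);
                }
    }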

    CAVASS: A Computer-Assisted Visualization and Analysis Software System

    The Medical Image Processing Group at the University of Pennsylvania has been developing (and distributing with source code) medical image analysis and visualization software systems for a long time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available and open source, and it is integrated with toolkits such as the Insight Toolkit and the Visualization Toolkit. CAVASS runs on Windows, Unix, Linux, and Mac from a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides for parallel processing of the more time-consuming algorithms via inexpensive clusters of workstations. Most importantly, CAVASS is directed at the visualization, processing, and analysis of 3-dimensional and higher-dimensional medical imagery, so support for Digital Imaging and Communications in Medicine (DICOM) data and the efficient implementation of algorithms are given paramount importance.
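
    The abstract does not specify how CAVASS distributes work across a cluster; purely as a hedged sketch of the general pattern it describes (static partitioning of a slice range across cluster nodes), assuming MPI and a hypothetical processSlice routine:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int nSlices = 512;                  // hypothetical dataset depth
        int chunk = (nSlices + size - 1) / size;  // slices per node, rounded up
        int begin = rank * chunk;
        int end = (begin + chunk < nSlices) ? begin + chunk : nSlices;

        // Each workstation processes only its own contiguous slice range.
        for (int z = begin; z < end; ++z) {
            // processSlice(z);  // placeholder for the time-consuming algorithm
        }
        std::printf("rank %d handled slices [%d, %d)\n", rank, begin, end);

        MPI_Finalize();
        return 0;
    }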

    From measured physical parameters to the haptic feeling of fabric

    Real-time cloth simulation involves the solution of many computational challenges, particularly in the context of haptic applications, where high frame rates are necessary for obtaining a satisfactory tactile experience. In this paper, we present a real-time cloth simulation system that offers a compromise between a realistic physically-based simulation of fabrics and a haptic application with high requirements in terms of computation speed. We place emphasis on architectural and algorithmic choices for obtaining the best compromise in the context of haptic applications. A first implementation using a haptic device demonstrates the features of the proposed system and leads to the development of new approaches for haptic rendering.
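
    The abstract does not detail the cloth model or integrator; as a generic, hedged sketch of the kind of structure such simulators typically use (a mass-spring network with semi-implicit Euler integration, trading some physical fidelity for the high update rates haptic loops require), with all names hypothetical and unit masses assumed:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    struct Particle { Vec3 pos, vel; float invMass = 1.0f; };  // invMass 0 pins a vertex
    struct Spring { int i, j; float rest, stiffness; };

    // One semi-implicit Euler step: accumulate spring and gravity forces,
    // then update velocity *before* position (more stable than explicit Euler).
    void step(std::vector<Particle>& p, const std::vector<Spring>& springs, float dt) {
        std::vector<Vec3> force(p.size(), Vec3{0.0f, -9.81f, 0.0f});  // gravity, unit mass
        for (const Spring& s : springs) {
            Vec3 d = sub(p[s.j].pos, p[s.i].pos);
            float l = len(d);
            if (l <= 0.0f) continue;
            Vec3 f = mul(d, s.stiffness * (l - s.rest) / l);  // Hooke's law
            force[s.i] = add(force[s.i], f);
            force[s.j] = sub(force[s.j], f);
        }
        for (std::size_t k = 0; k < p.size(); ++k) {
            p[k].vel = add(p[k].vel, mul(force[k], dt * p[k].invMass));
            p[k].pos = add(p[k].pos, mul(p[k].vel, dt));
        }
    }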

    Quantification of processing artifacts in textile composites

    One of the greatest difficulties in developing detailed models of the mechanical response of textile-reinforced composites is an accurate model of the reinforcing elements. In the case of elastic property prediction, the variation of fiber position may not play a critical role in performance. However, when considering highly localized stress events, such as those associated with cracks and holes, the exact position of the reinforcement probably dominates the failure mode. Models were developed for idealized reinforcements, which provide insight into the local behavior. However, even casual observation of micrographic images reveals that the actual material deviates strongly from the idealized models. Some of these deviations and their causes are presented for triaxially braided and three-dimensionally woven textile composites. The modeling steps necessary to accommodate these variations are presented with some examples. Some of the ramifications of not accounting for these discrepancies are also addressed.

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from head to feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as the highest-quality renderer among them.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow sets of scans capable of capturing anatomical movements such as a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time; while fMRIs can capture any anatomical data over time, one of the more common uses is to capture brain activity. The fMRI scanning process is typically broken into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, so any advances are problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile devices like phones and tablets.

    This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology through the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.
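
    To make the raycasting approach concrete, here is a hedged, CPU-side C++ sketch of the per-ray loop (the dissertation's renderers run this on the GPU; the transfer function and sampling here are illustrative placeholders): front-to-back compositing with early ray termination.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

    // Nearest-neighbour sampling for brevity; real renderers interpolate.
    static float sample(const std::vector<uint8_t>& vol,
                        int nx, int ny, int nz, Vec3 p) {
        int x = std::clamp(static_cast<int>(p.x), 0, nx - 1);
        int y = std::clamp(static_cast<int>(p.y), 0, ny - 1);
        int z = std::clamp(static_cast<int>(p.z), 0, nz - 1);
        return vol[(static_cast<std::size_t>(z) * ny + y) * nx + x] / 255.0f;
    }

    // March one ray front-to-back, accumulating color until nearly opaque.
    RGBA castRay(const std::vector<uint8_t>& vol, int nx, int ny, int nz,
                 Vec3 origin, Vec3 dir, float t0, float t1, float dt) {
        RGBA acc;
        for (float t = t0; t < t1 && acc.a < 0.98f; t += dt) {  // early termination
            Vec3 p = {origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
            float v = sample(vol, nx, ny, nz, p);
            float a = v * 0.05f;                 // placeholder transfer function
            float w = a * (1.0f - acc.a);        // front-to-back "under" blend
            acc.r += v * w; acc.g += v * w; acc.b += v * w; acc.a += w;
        }
        return acc;
    }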
    Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support, and there are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at run time, but they require different coding implementations for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data, it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Reading the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes, and visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features were identical, so the same fMRI dataset produced the same 3D visualization independent of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data.
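
    As context for the NIfTI loading contribution described above, here is a minimal, hedged C++ sketch (not the dissertation's loader) that reads just enough of the fixed 348-byte NIfTI-1 header to size the voxel grid; a real loader must also handle the datatype, byte order, and scaling fields.

    #include <cstdint>
    #include <cstdio>

    struct NiftiDims {
        int16_t dim[8];    // dim[0] = rank; dim[1..3] = spatial; dim[4] = time points
        float voxOffset;   // byte offset where the voxel data begins
    };

    bool readNiftiDims(const char* path, NiftiDims& out) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return false;
        int32_t sizeofHdr = 0;
        // 348 is the mandatory NIfTI-1 header size; any other value means a
        // byte-swapped or non-NIfTI-1 file, which this sketch does not handle.
        bool ok = std::fread(&sizeofHdr, 4, 1, f) == 1 && sizeofHdr == 348;
        ok = ok && std::fseek(f, 40, SEEK_SET) == 0        // documented offset of dim[]
                && std::fread(out.dim, sizeof(int16_t), 8, f) == 8
                && std::fseek(f, 108, SEEK_SET) == 0       // documented offset of vox_offset
                && std::fread(&out.voxOffset, sizeof(float), 1, f) == 1;
        std::fclose(f);
        return ok;
    }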
    The prototype applications' data-load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster built from NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets, covering the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement over prior mobile 3D volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms raycast the 4D-only data at real-time frame rates but did not consistently meet 10 fps when rendering the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances should allow consistent real-time raycasting of 4D fMRI data in the near future.

    Modeling and visualization of medical anesthesiology acts

    Dissertation for obtaining the Master's degree in Informatics Engineering (Engenharia Informática). In recent years, medical visualization has evolved from simple 2D images on a light board to 3D computerized images. This shift has enabled doctors to find better ways of planning surgery and diagnosing patients. Although there is a great variety of 3D medical imaging software, it falls short when dealing with anesthesiology acts, for which very little related work has been done. As a consequence, doctors and medical students have had little support for studying the effect of anesthesia in the human body, and we are all aware of how costly setting up medical experiments can be, covering not just medical aspects but ethical and financial ones as well. With this work we hope to contribute to better medical visualization tools in the area of anesthesiology. Doctors, and in particular medical students, should be able to study anesthesiology acts more efficiently: to identify better locations to administer the anesthesia, to study how long the anesthesia takes to affect patients, to relate the effect on patients to the quantity of anesthesia provided, and so on. In this work, we present a medical visualization prototype with three main functionalities: image pre-processing, segmentation, and rendering. The image pre-processing is mainly used to remove noise from images obtained via imaging scanners. In the segmentation stage it is possible to identify relevant anatomical structures using proper segmentation algorithms. As a proof of concept, we focus our attention on the lumbosacral region of the human body, with data acquired via MRI scanners. The segmentation we provide relies mostly on two algorithms: region growing and level sets. The outcome of the segmentation is a 3D model of the anatomical structure under analysis. As for the rendering, the 3D models are visualized using the marching cubes algorithm. The software we have developed also supports time-dependent data, so we could represent the anesthesia flowing through the human body. Unfortunately, we were not able to obtain such data for testing, so we used human lung data to validate this functionality instead.
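
    Since the abstract names its segmentation algorithms, a minimal, hedged C++ sketch of one of them, region growing, may help; this is a generic 6-connected flood fill over an intensity window, not the prototype's implementation.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <queue>
    #include <vector>

    // Grow a region from a seed voxel, accepting every 6-connected neighbour
    // whose intensity lies within [lo, hi]; returns a binary mask.
    std::vector<uint8_t> regionGrow(const std::vector<uint8_t>& vol,
                                    int nx, int ny, int nz,
                                    int sx, int sy, int sz,
                                    uint8_t lo, uint8_t hi) {
        auto idx = [&](int x, int y, int z) {
            return (static_cast<std::size_t>(z) * ny + y) * nx + x;
        };
        std::vector<uint8_t> mask(vol.size(), 0);
        std::queue<std::array<int, 3>> q;
        q.push({sx, sy, sz});                       // seed assumed inside [lo, hi]
        mask[idx(sx, sy, sz)] = 1;
        const int d[6][3] = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
        while (!q.empty()) {
            std::array<int, 3> v = q.front(); q.pop();
            for (const auto& n : d) {
                int x = v[0] + n[0], y = v[1] + n[1], z = v[2] + n[2];
                if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) continue;
                std::size_t i = idx(x, y, z);
                if (mask[i] || vol[i] < lo || vol[i] > hi) continue;
                mask[i] = 1;                        // voxel joins the region
                q.push({x, y, z});
            }
        }
        return mask;
    }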

    New techniques for the scientific visualization of three-dimensional multi-variate and vector fields


    Tool for 3D analysis and segmentation of retinal layers in volumetric SD-OCT images

    With the development of optical coherence tomography in the spectral domain (SD-OCT), it is now possible to quickly acquire large volumes of images. Typically analyzed by a specialist, the processing of these images is quite slow, consisting of the manual marking of features of interest in the retina, including the determination of the position and thickness of its different layers. This process is not consistent: the results depend on the clinician's perception and do not take advantage of the technology, since the volumetric information it provides is ignored. It is therefore of medical and technological interest to process images resulting from OCT technology automatically and in three dimensions. Only then will we be able to collect all the information that these images can give us and thus improve the diagnosis and early detection of eye pathologies. In addition to the 3D analysis, it is also important to develop visualization tools for the 3D data. This thesis proposes to apply 3D graphical processing methods to SD-OCT retinal images in order to segment retinal layers. Also, to analyze the 3D retinal images and the segmentation results, a visualization interface that allows displaying images in 3D and from different perspectives is proposed. The work was based on the Medical Imaging Interaction Toolkit (MITK), which includes other open-source toolkits. For this study, a public database of SD-OCT retinal images is used, containing about 360 volumetric images of healthy and pathological subjects. The software prototype allows the user to interact with the images, apply 3D filters for segmentation and noise reduction, and render the volume. The detection of three surfaces of the retina is achieved through intensity-based edge detection methods, with a mean error in the overall retina thickness of 3.72 ± 0.3 pixels.
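
    The thesis's exact surface detection is not given in this abstract; as a hedged illustration of intensity-based edge detection along OCT A-scans, this C++ sketch marks, for each column of a (pre-smoothed) B-scan, the depth with the strongest dark-to-bright gradient, tracing one retinal surface. Repeating the search with different gradient polarities and depth ranges is one common way to recover further surfaces.

    #include <cstddef>
    #include <vector>

    // For each A-scan (column x) of a width-by-depth B-scan stored row-major,
    // return the depth index with the largest positive vertical gradient.
    std::vector<int> detectSurface(const std::vector<float>& bscan,
                                   int width, int depth) {
        std::vector<int> surface(width, 0);
        for (int x = 0; x < width; ++x) {
            float best = 0.0f;
            for (int z = 1; z < depth; ++z) {
                float g = bscan[static_cast<std::size_t>(z) * width + x]
                        - bscan[static_cast<std::size_t>(z - 1) * width + x];
                if (g > best) { best = g; surface[x] = z; }  // dark-to-bright edge
            }
        }
        return surface;
    }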