
    An approximation to multiple scattering in volumetric illumination towards real-time rendering

    Many volumetric illumination techniques for volume rendering have been developed throughout the years. However, computing multiple scattering with path tracing in real-time applications is still heavily constrained by its inherent complexity and scale. Path tracing with multiple scattering support can produce physically correct results but suffers from noise and low convergence rates. This work proposes a new real-time algorithm that approximates multiple scattering, usually only available in offline rendering production. Our approach exploits the human perceptual system to speed up computation. Given two images, we use a CIE metric stating that the two will be perceived as similar by the human eye if the Euclidean distance between them in CIELAB color space is smaller than 2.3. We use this premise to guide our investigations when changing ray and bounce parameters in our renderer. Our results show that we can reduce from 10⁵ to 10⁴ Samples Per Pixel (SPP) with a negligible perceptual difference between both results, cutting rendering times by a factor of 10 whenever we divide SPP by 10. Similarly, we can reduce the number of bounces from 1000 to 100 with a negligible perceptual difference while reducing rendering times by almost half. We also propose a new real-time algorithm, the Lobe Estimator, that approximates these behaviors and parameters while running twice as fast as the classic Ray Marching technique.
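
    The abstract leaves the implementation of the similarity test implicit; below is a minimal sketch of the CIELAB ΔE < 2.3 check, assuming float RGB images in [0, 1] and mean aggregation over pixels (the function name and the aggregation choice are assumptions, not taken from the paper):

        import numpy as np
        from skimage.color import rgb2lab  # scikit-image

        def perceptually_similar(img_a, img_b, threshold=2.3):
            """Check whether two RGB images (floats in [0, 1], same shape)
            fall under the CIE just-noticeable-difference threshold."""
            # Per-pixel Euclidean distance in CIELAB (Delta E, CIE76).
            delta_e = np.linalg.norm(rgb2lab(img_a) - rgb2lab(img_b), axis=-1)
            # Mean aggregation over pixels is an assumption; the paper may
            # use a different statistic (e.g., the maximum).
            return delta_e.mean() < threshold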

    Combined surface and volumetric occlusion shading

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations combining both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines, or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow rendering those geometric shapes in combination with a context-providing 3D volume, taking into account mutual occlusion between structures represented by the volume or the geometry.
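
    The abstract does not detail the algorithm itself; as background, here is a minimal sketch of the slice-sweeping idea behind the original Directional Occlusion Shading model that the paper extends, where an occlusion buffer is blurred (approximating a cone of incoming light) and attenuated slice by slice. Geometry such as streamline tubes would additionally be rasterized into the per-slice opacities. All names and the uniform-filter cone approximation are assumptions:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def directional_occlusion(alpha, cone_radius=2):
            """alpha: (X, Y, Z) per-voxel opacity, slices swept along +Z
            (assumed to be the sweep direction). Returns the fraction of
            unoccluded light arriving at each voxel."""
            light = np.ones_like(alpha)
            occlusion = np.ones(alpha.shape[:2])  # buffer starts unoccluded
            for z in range(alpha.shape[2]):
                light[..., z] = occlusion  # light reaching this slice
                # Blurring spreads occlusion over a cone; the slice's own
                # opacity then attenuates light for everything behind it.
                occlusion = uniform_filter(occlusion, size=2 * cone_radius + 1)
                occlusion *= 1.0 - alpha[..., z]
            return light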

    Using advanced illumination techniques to enhance realism and perception of volume visualizations

    The use of volumetric data has become increasingly common in recent years, so generating meaningful and comprehensible images from such data is more important than ever. Simulating illumination phenomena is one way to improve the perception and realism of these images. This dissertation examines the effectiveness of existing volumetric illumination models and presents several new techniques and applications for this area of computer graphics. Methods are introduced for simulating the interaction of light and material in the context of volume data. Furthermore, an extensive user study is presented whose goal was to investigate the influence of different existing volumetric illumination models on the viewer. Finally, an application for the rendering and visual analysis of brain data is presented, in which volumetric illumination is employed alongside other novel visualization techniques.

    Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license.
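
    The abstract does not spell out the stochastic sampling scheme; a common building block in Monte Carlo volume rendering of heterogeneous media is delta (Woodcock) tracking for sampling free-flight distances. A minimal, illustrative sketch under that assumption (all names are hypothetical):

        import math
        import random

        def woodcock_track(sigma_t, sigma_max, origin, direction, t_max, rng=random):
            """Sample the distance to the next interaction in a heterogeneous
            medium whose extinction sigma_t(x) is bounded above by sigma_max.
            Returns None if the ray leaves the medium without interacting."""
            t = 0.0
            while True:
                # Tentative free flight through a fictitious homogeneous medium.
                t -= math.log(1.0 - rng.random()) / sigma_max
                if t >= t_max:
                    return None
                x = [o + t * d for o, d in zip(origin, direction)]
                # Accept the collision with probability sigma_t(x) / sigma_max;
                # rejected (null) collisions keep the estimator unbiased.
                if rng.random() < sigma_t(x) / sigma_max:
                    return t

    The accepted distance places the next scattering event along the ray; repeating this per bounce yields the noisy but physically based estimates that the framework converges over time.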

    Performance and quality analysis of convolution-based volume illumination

    Convolution-based techniques for volume rendering are among the fastest in the on-the-fly volumetric illumination category. Such methods, however, are still considerably slower than conventional local illumination techniques. In this paper we describe how to adapt two commonly used strategies for reducing aliasing artifacts, namely pre-integration and supersampling, to such techniques. These strategies can reduce the sampling rate of the lighting information (and thus the number of convolutions), bringing considerable performance benefits. We present a comparative analysis of their effectiveness in offering performance improvements, and analyze the (negligible) differences they introduce when comparing their output to the reference method. These strategies can be highly beneficial in setups where direct volume rendering of continuously streaming data is desired and continuous recomputation of full lighting information is too expensive, or where memory constraints make it preferable not to keep additional precomputed volumetric data in memory. In such situations these strategies make single-pass, convolution-based volumetric illumination models viable for a broader range of applications, and this paper provides practical guidelines for using and tuning such strategies for specific use cases.
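
    As a concrete illustration of the pre-integration strategy discussed above (not the paper's exact formulation), the sketch below numerically integrates color and opacity for every pair of front and back scalar values of a 1D transfer function, so a ray segment can later be shaded with a single table lookup. The table resolution, sub-step count, and opacity rescaling are assumptions:

        import numpy as np

        def build_preintegration_table(tf, n=64, steps=16):
            """tf: callable mapping a scalar in [0, 1] to (r, g, b, a).
            Returns an (n, n, 4) RGBA table indexed by the transfer-function
            bins of a segment's front and back scalar values."""
            table = np.zeros((n, n, 4))
            s = np.linspace(0.0, 1.0, n)
            for i in range(n):
                for j in range(n):
                    color, alpha = np.zeros(3), 0.0
                    for k in range(steps):
                        sv = s[i] + (s[j] - s[i]) * (k + 0.5) / steps
                        r, g, b, a = tf(sv)
                        # Rescale opacity so 'steps' sub-samples span one segment.
                        a_k = 1.0 - (1.0 - a) ** (1.0 / steps)
                        color += (1.0 - alpha) * a_k * np.array([r, g, b])
                        alpha += (1.0 - alpha) * a_k
                    table[i, j] = (*color, alpha)
            return table

    At render time a segment with front and back scalars (sf, sb) is shaded with a single lookup, table[round(sf * (n - 1)), round(sb * (n - 1))], instead of many per-sample transfer-function evaluations.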

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest quality renderer of these methods.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRIs can be used to capture any anatomical data over time, one of the more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform.
    The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery as a single high-resolution anatomical scan and a set of low-resolution scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization, so the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations were required to differ between the three platforms, the raycasting functionality and features were identical, so the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data.

    The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster built on NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.
    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement for mobile volume raycasting, which had previously achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
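
    The thesis itself used custom C++ readers and GPU raycasting; purely as a rough CPU illustration of the NIfTI loading, volume alignment, and multi-volume compositing steps described above, the sketch below uses nibabel and axis-aligned front-to-back compositing as a stand-in for raycasting (file names, the red activity tint, and the uniform per-step opacity are assumptions):

        import nibabel as nib
        import numpy as np
        from scipy.ndimage import zoom

        # Load a high-resolution 3D anatomical scan and a low-resolution 4D
        # functional scan of shape (x, y, z, t). File names are placeholders.
        anat = nib.load("anatomy.nii").get_fdata()
        func = nib.load("activity.nii").get_fdata()

        # Resample one functional time step onto the anatomical grid, then
        # crop any rounding surplus so the volumes align voxel-for-voxel.
        t = 0
        scale = [a / f for a, f in zip(anat.shape, func.shape[:3])]
        func_t = zoom(func[..., t], scale, order=1)
        func_t = func_t[: anat.shape[0], : anat.shape[1], : anat.shape[2]]

        def composite(anat, func_t, step_alpha=0.05):
            """Axis-aligned front-to-back compositing of both volumes."""
            anat_n = anat / anat.max()
            func_n = func_t / (func_t.max() + 1e-9)
            color = np.zeros(anat.shape[:2] + (3,))
            alpha = np.zeros(anat.shape[:2])
            for z in range(anat.shape[2]):
                a_sl, f_sl = anat_n[..., z], func_n[..., z]
                # Gray anatomy with functional activity tinted red.
                c = np.stack([np.clip(a_sl + f_sl, 0, 1), a_sl, a_sl], axis=-1)
                a = step_alpha * np.maximum(a_sl, f_sl)
                color += ((1.0 - alpha) * a)[..., None] * c
                alpha += (1.0 - alpha) * a
            return np.clip(color, 0.0, 1.0)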