
    Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    The field of volume visualization has undergone rapid development in recent years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license.
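    A common building block in MCRT-based volume rendering of this kind is stochastic free-path sampling in a heterogeneous medium, for example Woodcock (delta) tracking. The sketch below only illustrates that general technique and is not taken from the Exposure Render code base; the extinction function, majorant, and parameters are assumptions.

```python
import numpy as np

def woodcock_track(sigma_t, sigma_t_max, origin, direction, t_max, rng):
    """Sample a free-flight distance in a heterogeneous volume via
    Woodcock (delta) tracking. `sigma_t(p)` returns the extinction
    coefficient at point p; `sigma_t_max` is a global upper bound.
    Returns the scattering distance, or None if the ray leaves the volume."""
    t = 0.0
    while True:
        # Tentative step drawn from the homogenized (majorant) medium.
        t -= np.log(1.0 - rng.random()) / sigma_t_max
        if t >= t_max:
            return None  # no interaction inside the volume
        p = origin + t * direction
        # Accept the tentative collision with probability sigma_t / sigma_t_max.
        if rng.random() < sigma_t(p) / sigma_t_max:
            return t

# Illustrative use with an analytic density field standing in for a sampled volume.
rng = np.random.default_rng(0)
sigma_t = lambda p: 2.0 * np.exp(-np.dot(p, p))
d = woodcock_track(sigma_t, 2.0, np.zeros(3), np.array([0.0, 0.0, 1.0]), 10.0, rng)
```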

    Using advanced illumination techniques to enhance realism and perception of volume visualizations

    The use of volumetric data has become increasingly common in recent years, and generating expressive and comprehensible images from these data is therefore more important than ever. Simulating illumination phenomena is one way to improve the perception and realism of such images. This dissertation examines the effectiveness of existing volumetric illumination models and presents several new techniques and applications for this area of computer graphics. Methods are introduced for simulating the interaction of light and material in the context of volume data. Furthermore, a comprehensive user study is presented whose goal was to investigate the influence of different existing volumetric illumination models on the viewer. Finally, an application for the display and visual analysis of brain data is presented, in which volumetric illumination is used alongside other novel visualizations.

    An approximation to multiple scattering in volumetric illumination towards real-time rendering

    Many volumetric illumination techniques for volume rendering have been developed over the years. However, computing multiple scattering with path tracing in real-time applications is still heavily constrained by its inherent complexity and scale. Path tracing with multiple scattering support can produce physically correct results but suffers from noise and low convergence rates. This work proposes a new real-time algorithm to approximate multiple scattering, usually only feasible in offline rendering production, in real time. Our approach exploits the human perceptual system to speed up computation. Given two images, we use a CIE metric stating that the two will be perceived as similar by the human eye if the Euclidean distance between them in CIELAB color space is smaller than 2.3. We use this premise to guide our investigations when changing the ray and bounce parameters in our renderer. Our results show that we can reduce from 10^5 to 10^4 Samples Per Pixel (SPP) with a negligible perceptual difference between the results, allowing us to cut rendering times by a factor of 10 whenever we divide the SPP by 10. Similarly, we can reduce the number of bounces from 1000 to 100 with a negligible perceptual difference while cutting rendering times almost in half. We also propose a new real-time algorithm, the Lobe Estimator, that approximates these behaviors and parameters while performing twice as fast as the classic ray-marching technique.
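    The perceptual criterion quoted above is the CIE76 colour difference: two colours are treated as indistinguishable when their Euclidean distance in CIELAB is below roughly 2.3. A minimal sketch of how such a comparison could be computed for two rendered frames is given below; it assumes sRGB inputs in [0, 1], uses scikit-image for the colour-space conversion, and averages the per-pixel difference, none of which is specified in the thesis.

```python
import numpy as np
from skimage.color import rgb2lab  # conversion library assumed, not named in the thesis

def mean_delta_e76(img_a, img_b):
    """Mean CIE76 colour difference between two sRGB images in [0, 1].
    Values below ~2.3 are commonly treated as imperceptible."""
    lab_a = rgb2lab(img_a)
    lab_b = rgb2lab(img_b)
    # Euclidean distance in CIELAB per pixel, averaged over the image.
    return float(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))

# Hypothetical usage: compare a 10^5-SPP reference against a 10^4-SPP render.
# perceptually_equal = mean_delta_e76(reference, test) < 2.3
```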

    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, generating convenient visualizations may be a difficult task, but it is crucial to correctly convey the relevant information in the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes when diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps to properly understand the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data. This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms are capable of modifying the illumination model and simulating illustrative motifs in real time. In order to enhance local details, which help to better perceive the shape and the surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator (unsharp masking) to modify the lighting. As a result, the overall contrast of the visualization is enhanced by brightening the salient features and darkening the deeper regions of the volume model. The enhancement of depth perception in Direct Volume Rendering is also covered in the thesis. To this end, we propose two algorithms to simulate ambient occlusion: a screen-space technique that uses depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is further enhanced by adding halos around the structures of interest. Maximum Intensity Projection (MIP) images provide a good understanding of the high-intensity features of the data, but lack any contextual information. To enhance depth perception in this case, we present a novel technique based on changing how intensity is accumulated, and we further improve the perception of the spatial arrangement of the displayed structures by adding certain colour cues. The last contribution is a new manipulation tool designed to add contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut. As a result, the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
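    As background for the first contribution, the sketch below shows the generic unsharp-masking idea applied to a lighting buffer, L' = L + k(L - blur(L)), which brightens salient features and darkens smooth or deep regions. It is a minimal illustration of the operator the abstract refers to, not the thesis implementation; the Gaussian blur, kernel width, and strength are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask_lighting(lighting, sigma=2.0, strength=0.6):
    """Sharpen a per-pixel (or per-voxel) lighting buffer:
    L' = L + strength * (L - blur(L)).
    Salient features become brighter, smooth or deep regions darker.
    `sigma` and `strength` are illustrative values, not the thesis's."""
    blurred = gaussian_filter(lighting, sigma=sigma)
    return np.clip(lighting + strength * (lighting - blurred), 0.0, 1.0)
```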

    Perceptual effects of volumetric shading models in stereoscopic desktop-based environments

    Over the years, many shading techniques have been developed to improve the conveying of information in Volume Visualization. Some of these methods, usually referred to as realistic, are supposed to provide better cues for understanding volume data sets. While shading approaches are heavily exploited in traditional monoscopic setups, no previous study has analyzed the effect of these techniques in Virtual Reality. To further explore the influence of shading on the understanding of volume data in such environments, we carried out a user study in a desktop-based stereoscopic setup. The goals of the study were to investigate the impact of well-known shading approaches and the influence of real illumination on depth perception. Participants had to perform three different perceptual tasks when exposed to static visual stimuli. 45 participants took part in the study, giving us 1152 trials for each task. Results show that advanced shading techniques improve depth perception in stereoscopic volume visualization, and that external lighting does not affect depth perception when these shading methods are applied. From these findings, we derive guidelines that may help researchers when selecting illumination models for stereoscopic rendering.

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore convey an impression of being glued onto a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood as real-time applications that reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties; among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects that have not been demonstrated by contemporary publications until now.
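    For context, the Differential Rendering baseline mentioned above composites a virtual object into a camera image by adding the difference between two synthetic renders (with and without the object) to the photograph, so that shadows and colour bleeding carry over to real surfaces. The sketch below shows only that classic compositing step, under assumed image layouts and an assumed object mask; it does not reproduce the delta radiance field method itself.

```python
import numpy as np

def differential_composite(camera, render_with, render_without, obj_mask):
    """Classic differential-rendering composite.
    `camera`, `render_with`, `render_without` are HxWx3 float images in [0, 1];
    `obj_mask` is an HxW boolean mask of virtual-object pixels (assumed inputs).
    Object pixels come from the full synthetic render; background pixels
    receive the difference the object causes, added onto the camera image."""
    delta = render_with - render_without            # light the object adds or removes
    relit_background = np.clip(camera + delta, 0.0, 1.0)
    return np.where(obj_mask[..., None], render_with, relit_background)
```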