An approximation to multiple scattering in volumetric illumination towards real-time rendering
Many volumetric illumination techniques for volume rendering have been developed over the years. However, computing multiple scattering with path tracing in real-time applications remains heavily constrained by its inherent complexity and scale. Path tracing with multiple scattering support can produce physically correct results, but it suffers from noise and low convergence rates. This work proposes a new real-time algorithm that approximates multiple scattering, usually only available in offline rendering production, for real-time use. Our approach exploits the human perceptual system to speed up computation. Given two images, we use a CIE metric stating that the two will be perceived as similar by the human eye if the Euclidean distance between them in CIELAB color space is smaller than 2.3. We use this premise to guide our investigations when changing sample and bounce parameters in our renderer. Our results show that we can reduce from 10⁵ to 10⁴ Samples Per Pixel (SPP) with a negligible perceptual difference between both results, cutting rendering times by a factor of 10 whenever we divide the SPP by 10. Similarly, we can reduce the number of bounces from 1000 to 100 with a negligible perceptual difference while almost halving rendering times. We also propose a new real-time algorithm, Lobe Estimator, that approximates these behaviors and parameters while performing twice as fast as the classic Ray Marching technique.
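The CIE similarity criterion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sRGB-to-CIELAB conversion uses the standard D65 matrices, and aggregating per-pixel distances by their mean is an assumption, since the abstract does not specify how distances are combined over the image.

```python
import math

def srgb_to_linear(c):
    # Inverse sRGB gamma; c in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(rgb):
    # sRGB (0..1) -> XYZ (D65) -> CIELAB
    r, g, b = (srgb_to_linear(c) for c in rgb)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e(rgb1, rgb2):
    # Euclidean distance in CIELAB (Delta E*ab, CIE 1976)
    l1, l2 = rgb_to_lab(rgb1), rgb_to_lab(rgb2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(l1, l2)))

def perceptually_similar(img1, img2, threshold=2.3):
    # Images as lists of RGB tuples; mean per-pixel Delta E under 2.3
    # is treated as "perceived as similar" (aggregation is an assumption).
    dists = [delta_e(p, q) for p, q in zip(img1, img2)]
    return sum(dists) / len(dists) < threshold
```

With this predicate, renders at decreasing SPP can be compared against a high-SPP reference until the 2.3 threshold is exceeded.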
Perceptual effects of volumetric shading models in stereoscopic desktop-based environments
Throughout the years, many shading techniques have been developed to improve how information is conveyed in Volume Visualization. Some of these methods, usually referred to as realistic, are supposed to provide better cues for understanding volume data sets. While shading approaches are heavily exploited in traditional monoscopic setups, no previous study has analyzed the effect of these techniques in Virtual Reality. To further explore the influence of shading on the understanding of volume data in such environments, we carried out a user study in a desktop-based stereoscopic setup. The goals of the study were to investigate the impact of well-known shading approaches and the influence of real illumination on depth perception. Participants had to perform three different perceptual tasks when exposed to static visual stimuli. 45 participants took part in the study, giving us 1152 trials for each task. Results show that advanced shading techniques improve depth perception in stereoscopic volume visualization, and that external lighting does not affect depth perception when these shading methods are applied. From these findings we derive guidelines that may help researchers select illumination models for stereoscopic rendering.
Visually accurate multi-field weather visualization
Weather visualization is a difficult problem because it comprises volumetric multi-field data, and traditional surface-based approaches obscure details of the complex three-dimensional structure of cloud dynamics. Visually accurate volumetric multi-field visualization of storm-scale and cloud-scale data is therefore needed to communicate vital information to weather forecasters effectively and efficiently, improving storm forecasting, atmospheric dynamics models, and weather spotter training. We have developed a new approach to multi-field visualization that uses field-specific, physically based opacity, transmission, and lighting calculations per field for the accurate visualization of storm- and cloud-scale weather data. Our approach extends traditional transfer function approaches to multi-field data and to volumetric illumination and scattering.
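The idea of per-field transfer functions combined during ray compositing can be sketched as follows. The field names ("cloud_water", "rain") and the piecewise-linear transfer functions are illustrative assumptions, not the paper's actual calibration or lighting model.

```python
# Sketch: per-field transfer functions combined during front-to-back
# compositing along one ray. Field names and transfer-function parameters
# are hypothetical, chosen only to illustrate the multi-field extension.

def transfer(value, lo, hi, color, max_opacity):
    """Piecewise-linear transfer function: opacity ramps from lo to hi."""
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return color, t * max_opacity

FIELDS = {
    "cloud_water": lambda v: transfer(v, 0.0, 1.0, (0.9, 0.9, 0.9), 0.4),
    "rain":        lambda v: transfer(v, 0.0, 1.0, (0.2, 0.3, 0.8), 0.7),
}

def composite_ray(samples):
    """Front-to-back over-operator compositing of multi-field samples.

    `samples` is a list of dicts mapping field name -> scalar value."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for sample in samples:
        for name, tf in FIELDS.items():
            c, a = tf(sample.get(name, 0.0))
            for i in range(3):
                color[i] += (1.0 - alpha) * a * c[i]
            alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return tuple(color), alpha
```

Each field contributes its own opacity and color per sample, which is the essential difference from a single-field transfer function.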
Multiple-plane particle image velocimetry using a light-field camera
Planar velocity fields in flows are determined simultaneously on parallel measurement planes by means of a light-field camera manufactured in-house. The planes are defined by illuminating light sheets with constant spacing. Particle positions are reconstructed from a single 2D recording taken by a CMOS camera equipped with a high-quality doublet lens array. The fast refocusing algorithm is based on synthetic-aperture particle image velocimetry (SAPIV). The reconstruction quality is tested via ray tracing of synthetically generated particle fields. The introduced single-camera SAPIV is applied to a convective flow within a measurement volume of 30 x 30 x 50 mm³.
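The refocusing step at the heart of synthetic-aperture reconstruction can be sketched as shift-and-add: each lenslet sub-image is shifted in proportion to its offset from the array center and a depth-dependent slope, then the shifted images are averaged so that particles on the target plane reinforce while others blur out. The geometry below (integer pixel shifts, a flat list of sub-images) is a deliberate simplification of the actual optics.

```python
# Sketch: shift-and-add synthetic-aperture refocusing, the core idea behind
# the SAPIV reconstruction mentioned above. Integer shifts and the simple
# offset model are assumptions for illustration.

def refocus(sub_images, offsets, slope):
    """Average sub-images after depth-dependent shifts.

    sub_images: list of equally sized 2D lists, one per lenslet
    offsets:    (dx, dy) lenslet positions relative to the array center
    slope:      disparity per unit offset for the target focal plane
    """
    h, w = len(sub_images[0]), len(sub_images[0][0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for img, (dx, dy) in zip(sub_images, offsets):
        sx, sy = round(slope * dx), round(slope * dy)
        for y in range(h):
            for x in range(w):
                ty, tx = y + sy, x + sx
                if 0 <= ty < h and 0 <= tx < w:
                    acc[ty][tx] += img[y][x]
                    cnt[ty][tx] += 1
    # Normalize by how many sub-images contributed to each pixel.
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(w)] for y in range(h)]
```

Sweeping `slope` over a range of values refocuses the same single recording onto each illuminated light-sheet plane in turn.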
Inviwo -- A Visualization System with Usage Abstraction Levels
The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, even though such access would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
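The data-flow network abstraction mentioned above can be sketched as a graph of processors with cached, pull-based evaluation. The class names and the evaluation scheme here are illustrative only; they do not mirror Inviwo's actual C++ API.

```python
# Sketch: a minimal data-flow network of processors, the abstraction a
# network editor manipulates. Names and evaluation strategy are hypothetical.

class Processor:
    def __init__(self, name, fn, inputs=()):
        self.name = name            # display name in the network editor
        self.fn = fn                # the processing step
        self.inputs = list(inputs)  # upstream processors (inports)
        self._cache = None
        self._dirty = True

    def invalidate(self):
        # Mark this processor as needing re-evaluation
        # (downstream propagation is omitted for brevity).
        self._dirty = True

    def evaluate(self):
        # Pull-based evaluation: recompute only when dirty, else reuse cache.
        if self._dirty:
            args = [p.evaluate() for p in self.inputs]
            self._cache = self.fn(*args)
            self._dirty = False
        return self._cache

# A toy pipeline: source -> clamp filter -> renderer
source   = Processor("VolumeSource", lambda: [1, 4, 2, 8])
clamp    = Processor("Clamp", lambda v: [min(x, 5) for x in v], [source])
renderer = Processor("Renderer", lambda v: sum(v), [clamp])
```

The point of the layered design is that a developer can work at the level of wiring such processors together, or drop below it to the processing functions themselves, without leaving the system.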