42 research outputs found
Advanced 3D Rendering : Adaptive Caustic Maps with GPGPU
Graphics researchers have long studied real-time caustic rendering. The state-of-the-art technique, Adaptive Caustic Maps, avoids densely sampling photons during a rasterization pass and instead emits photons adaptively in a deferred shading pass. In this project, we present a variation of adaptive caustic maps for real-time rendering of caustics. Our algorithm is conceptually similar to Adaptive Caustic Maps but has a different implementation, based on the general-purpose computing pipeline provided by OpenGL 4.3. Our approach accelerates the photon-splitting process using compute shaders and bypasses various other performance overheads, ultimately speeding up photon generation considerably.
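The adaptive photon-splitting idea can be pictured as a hierarchical refinement loop: start from a coarse grid of emission positions, trace each cell's corners through the specular surface, and split only the cells whose hit points diverge. The sketch below is a hypothetical CPU analogue in Python/NumPy — the `trace` callback, grid resolution, and divergence threshold are illustrative assumptions, not the paper's actual compute-shader implementation:

```python
import numpy as np

def adaptive_emit(trace, res=4, max_level=3, threshold=0.1):
    """Hierarchically refine photon emission positions (illustrative sketch).

    trace(uv) maps an emission position in [0,1]^2 to a hit point; cells whose
    corner hits diverge by more than `threshold` are split into four children,
    mimicking the photon-splitting step of adaptive caustic maps.
    """
    photons = []
    # seed a coarse res x res grid of cells, each stored as (corner_u, corner_v, size)
    cells = [(x / res, y / res, 1.0 / res) for x in range(res) for y in range(res)]
    for level in range(max_level):
        next_cells = []
        for (u, v, s) in cells:
            corners = [trace(np.array([u + du * s, v + dv * s]))
                       for du in (0.0, 1.0) for dv in (0.0, 1.0)]
            spread = max(np.linalg.norm(c - corners[0]) for c in corners)
            if spread > threshold and level < max_level - 1:
                h = s / 2  # hit points diverge: split into four child cells
                next_cells += [(u, v, h), (u + h, v, h),
                               (u, v + h, h), (u + h, v + h, h)]
            else:
                # converged (or out of levels): emit one photon at the cell center
                photons.append(trace(np.array([u + s / 2, v + s / 2])))
        cells = next_cells
    return photons
```

On the GPU, each refinement level would correspond to one compute-shader dispatch appending split cells to a buffer, which is where the speed-up over a rasterization-based traversal comes from.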
Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures
The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects that were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real-time.
Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation that trades off some quality for a 2–3× improvement in rendering time. Tracing all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many of them as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach that leverages ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.
Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimations. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias.
By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently.
We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
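The core of importance-based stochastic light picking can be illustrated with a flat distribution over per-light importance weights: pick one light with probability proportional to its estimated contribution, then divide that light's contribution by the probability, which keeps the single-sample estimator unbiased. The thesis's two-level acceleration structure refines this so only subtrees containing moving lights need rebuilding, but the unbiasedness argument is the same. Everything below (the intensity-over-squared-distance importance, the data layout) is an illustrative assumption, not the thesis's code:

```python
import random

def pick_light(lights, shading_point):
    """Stochastically pick one light in proportion to a cheap importance
    estimate (here: intensity / squared distance), returning the light and
    the probability with which it was chosen. Dividing the light's shading
    contribution by that probability keeps the estimator unbiased."""
    weights = []
    for (pos, intensity) in lights:
        d2 = sum((p - q) ** 2 for p, q in zip(pos, shading_point))
        weights.append(intensity / max(d2, 1e-6))
    total = sum(weights)
    # inverse-CDF sampling over the importance weights
    r = random.uniform(0.0, total)
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if r <= acc:
            return light, w / total
    return lights[-1], weights[-1] / total  # numerical safety fallback
```

A hierarchical (tree) version replaces the linear scan with a root-to-leaf descent, making selection logarithmic in the number of lights — which is what makes thousands of lights tractable per pixel.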
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more
Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU
Shallow waters simulation
Master's dissertation in Informatics Engineering. Realistic simulation and rendering of water in real-time is a challenge within the field of computer graphics, as it
is very computationally demanding. A common simulation approach is to reduce the problem from 3D to 2D by
treating the water surface as a 2D heightfield. When simulating 2D fluids, the Shallow Water Equations (SWE)
are often employed, which work under the assumption that the water's horizontal scale is much greater than its
vertical scale.
Several methods have been developed or adapted to model the SWE, each with its own advantages
and disadvantages. A common solution is to use grid-based methods, either the classic approach
of solving the equations directly on a grid or the Lattice-Boltzmann Method (LBM), which originated in
statistical physics. Particle-based methods have also been used to model the SWE, namely as a variation of
the popular Smoothed-Particle Hydrodynamics (SPH) method.
This thesis presents an implementation for real-time simulation and rendering of a heightfield surface water
volume. The water's behavior is modeled by a grid-based SWE scheme with an efficient single-kernel compute
shader implementation.
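As a rough illustration of what such a simulation step computes, here is a plain NumPy version of one explicit SWE update on a periodic grid, written in conservative form (height plus horizontal momenta). The discretisation choices below are illustrative; the thesis's single-kernel compute shader will differ in scheme and boundary handling:

```python
import numpy as np

def swe_step(h, hu, hv, dt, dx, g=9.81):
    """One explicit update of the 2D shallow water equations on a periodic
    grid (illustrative sketch; the thesis uses a compute shader, this is
    plain NumPy). h is water height; hu, hv are horizontal momenta."""
    def ddx(f):  # central difference in x, periodic boundaries via roll
        return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def ddy(f):  # central difference in y
        return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    eps = 1e-6
    u, v = hu / (h + eps), hv / (h + eps)  # recover velocities
    h_new = h - dt * (ddx(hu) + ddy(hv))   # mass conservation
    # momentum: advection plus hydrostatic pressure term 0.5 * g * h^2
    hu_new = hu - dt * (ddx(hu * u + 0.5 * g * h * h) + ddy(hu * v))
    hv_new = hv - dt * (ddx(hv * u) + ddy(hv * v + 0.5 * g * h * h))
    return h_new, hu_new, hv_new
```

Because the whole update reads the old state and writes a new one, it maps naturally onto a single compute-shader kernel with one thread per grid cell, which is presumably what makes the single-kernel formulation efficient.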
When it comes to visualizing the water volume created by the simulation, a variety of effects
can contribute to its realism and provide visual cues for its motion. In particular, when considering shallow water,
certain features can be highlighted, such as the refraction of the ground below, the corresponding
light attenuation, and the caustic patterns projected onto it.
Using the state produced by the simulation, a water surface mesh is rendered, where a set of visual effects is
explored. First, the water's color is defined as a combination of reflected and transmitted light, using a
Cook-Torrance Bidirectional Reflectance Distribution Function (BRDF) to describe the Sun's reflection. These results
are then enhanced by data from a separate pass, which provides caustic patterns and improved attenuation
computations. Lastly, small-scale details are added to the surface by applying a normal map generated from
noise.
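The Cook-Torrance term for the sun glint can be sketched in isolation. The scalar Python version below uses a GGX distribution, Schlick Fresnel, and Smith-Schlick geometry, which is one common instantiation of Cook-Torrance; the thesis may use different D/F/G choices, and f0 ≈ 0.02 for water at normal incidence is an assumption:

```python
import numpy as np

def cook_torrance_specular(n, v, l, roughness=0.1, f0=0.02):
    """Scalar Cook-Torrance specular term (GGX D, Smith-Schlick G, Schlick F).
    n = surface normal, v = view direction, l = light direction, all pointing
    away from the surface; f0 ~ 0.02 approximates water at normal incidence."""
    n, v, l = (x / np.linalg.norm(x) for x in (n, v, l))
    h = (v + l) / np.linalg.norm(v + l)          # half vector
    nl = max(float(n @ l), 1e-4)
    nv = max(float(n @ v), 1e-4)
    nh = max(float(n @ h), 0.0)
    vh = max(float(v @ h), 1e-4)
    a2 = roughness ** 4                          # GGX: alpha = roughness^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # normal distribution
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))  # shadowing
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5        # Fresnel
    return d * g * f / (4.0 * nl * nv)
```

In the shader, this term would be evaluated per pixel with l set to the sun direction, while the transmitted component handles the refracted ground color and attenuation.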
As part of the work, a thorough evaluation of the developed application is performed, providing a showcase of
the results, insight into some of the parameters and options, and performance benchmarks.
Interactive global illumination on the CPU
Computing realistic physically-based global illumination in real-time remains one
of the major goals in the fields of rendering and visualisation; one that has not
yet been achieved due to its inherent computational complexity. This thesis focuses
on CPU-based interactive global illumination approaches, with the aim of
developing generalisable, hardware-agnostic algorithms. Interactive ray tracing relies
on spatial and cache coherency to achieve interactive rates, which conflicts
with the needs of global illumination solutions, which require a large number of incoherent
secondary rays to be computed. Methods that reduce the total number of
rays that need to be processed, such as selective rendering, were investigated to
determine how best they can be utilised.
The impact that selective rendering has on interactive ray tracing was analysed
and quantified, and two novel global illumination algorithms were developed,
with the structured methodology used presented as a framework. Adaptive
Interleaved Sampling is a generalisable approach that combines interleaved sampling
with adaptive refinement, using efficient component-specific adaptive guidance
methods to drive the computation. Results of up to 11 frames per second
were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of
diffuse interreflections to interactive rates. This approach achieved frame rates
exceeding 9 frames per second for the majority of scenes. Validation of the results
for both approaches showed little perceptual difference when comparing
against a gold-standard path-traced image. Further research into caching led to
the development of a new wait-free data access control mechanism for sharing the
irradiance cache among multiple rendering threads on a shared-memory parallel
system. By not serialising accesses to the shared data structure, the irradiance
values were shared among all the threads without any overhead or contention,
even when reading and writing simultaneously. This new approach achieved efficiencies
between 77% and 92% for 8 threads when calculating static images and animations.
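One way to picture wait-free sharing is "publish immutable snapshots": a writer builds its record off to the side and makes it visible with a single reference store, so readers never take a lock and at worst see a slightly stale cache. The Python sketch below leans on the atomicity of reference assignment under CPython and assumes a single writer thread; the thesis's mechanism is a native shared-memory design for concurrent readers and writers, so treat this purely as an illustration of the idea:

```python
class WaitFreeIrradianceCache:
    """Illustrative sketch of lock-free irradiance sharing (hypothetical;
    assumes a single writer thread and relies on CPython's atomic
    reference assignment). Writers publish a fully built, immutable
    snapshot with one reference store; readers never block and at worst
    observe a slightly stale snapshot."""

    def __init__(self):
        self._records = ()  # immutable tuple of (position, irradiance) records

    def add(self, position, irradiance):
        # build the new snapshot aside, then publish it in one atomic store
        self._records = self._records + ((position, irradiance),)

    def lookup(self, position, radius):
        snapshot = self._records  # single atomic read; no lock taken
        return [e for (p, e) in snapshot
                if sum((a - b) ** 2 for a, b in zip(p, position)) <= radius ** 2]
```

Because readers only ever dereference one pointer and walk immutable data, a slow reader can never block a writer and vice versa, which is the property behind the reported 77–92% parallel efficiency.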
This work demonstrates that, due to the flexibility of the CPU, CPU-based
algorithms remain a valid and competitive choice for achieving global illumination
interactively, and an alternative to the generally brute-force, GPU-centric
algorithms.