Virtual light fields for global illumination in computer graphics
This thesis presents novel techniques for the generation and real-time rendering of globally illuminated
environments with surfaces described by arbitrary materials. Real-time rendering of globally illuminated
virtual environments has long been an elusive goal. Many techniques can compute still images with
full global illumination, and this remains an area of active research. Other techniques deal with only
certain aspects of global illumination in order to speed up computation and thus rendering; these include
radiosity, ray-tracing and hybrid methods. Radiosity, due to its view-independent nature, can easily be
rendered in real-time after the energy equilibrium has been pre-computed and stored. Ray-tracing,
however, is view-dependent and requires substantial computational resources to run in real-time.
Attempts at providing full global illumination at interactive rates include caching methods, fast rendering
from photon maps, light fields, brute-force ray-tracing and GPU-accelerated methods. Currently,
these methods either apply only to special cases, are incomplete, exhibiting poor image quality, and/or
scale badly, such that only modest scenes can be rendered in real-time on current hardware.
The techniques developed in this thesis extend earlier research and provide a novel, comprehensive
framework for storing global illumination in a data structure - the Virtual Light Field (VLF) - that is
suitable for real-time rendering. The techniques trade memory usage and pre-compute time for rapid
rendering. The main weaknesses of the VLF method are targeted in this thesis: the expensive
pre-compute stage, with best-case O(N^2) performance (where N is the number of faces), makes
light propagation impractical for all but simple scenes. This is analysed, and greatly superior alternatives
are presented and evaluated in terms of efficiency and error. Several orders of magnitude improvement
in computational efficiency is achieved over the original VLF method.
A novel propagation algorithm running entirely on the Graphics Processing Unit (GPU) is presented.
It is incremental in that it can resolve visibility along a set of parallel rays in O(N) time, and it can
produce a virtual light field for a moderately complex scene (tens of thousands of faces), with complex illumination
stored in millions of elements, in minutes, and for simple scenes in seconds. It is approximate
but gracefully converges to a correct solution: a linear increase in resolution results in a linear increase in
computation time. Finally, a GPU rendering technique is presented which can render from Virtual Light
Fields at real-time frame rates on high-resolution VR presentation devices such as the CAVE™.
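The core idea of resolving visibility along a bundle of parallel rays in O(N) time is a single rasterization-style sweep over the faces with a shared depth buffer. The sketch below is a minimal CPU illustration of that idea only, not the thesis's GPU implementation; the patch representation and all names are hypothetical.

```python
import numpy as np

def propagate_parallel_rays(faces, res=4):
    """Resolve visibility along a grid of parallel rays (here: along +z)
    with one depth-buffer sweep over the faces -- O(N) in the number of
    faces, as in a rasterizer. Each 'face' is a hypothetical axis-aligned
    square patch (x0, y0, x1, y1, z), for illustration only.
    Returns, per ray cell, the index of the nearest face hit (-1 = miss).
    """
    depth = np.full((res, res), np.inf)       # nearest hit distance per ray
    hit = np.full((res, res), -1, dtype=int)  # index of nearest face per ray
    cell = 1.0 / res
    for i, (x0, y0, x1, y1, z) in enumerate(faces):
        # Rasterize the patch footprint into the ray grid.
        ix0, ix1 = int(x0 / cell), int(np.ceil(x1 / cell))
        iy0, iy1 = int(y0 / cell), int(np.ceil(y1 / cell))
        for ix in range(max(ix0, 0), min(ix1, res)):
            for iy in range(max(iy0, 0), min(iy1, res)):
                if z < depth[iy, ix]:         # depth test: keep nearest hit
                    depth[iy, ix] = z
                    hit[iy, ix] = i
    return hit, depth

faces = [(0.0, 0.0, 1.0, 1.0, 0.8),   # far patch covering the whole grid
         (0.0, 0.0, 0.5, 0.5, 0.3)]   # near patch covering one quadrant
hit, depth = propagate_parallel_rays(faces)
```

The linear cost per direction is what makes sweeping many parallel-ray bundles feasible; repeating the sweep for each discretized direction yields the field of visibility information the VLF stores.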
Real-Time Global Illumination for VR Applications
Real-time global illumination in VR systems enhances scene realism by incorporating soft shadows, reflections of objects in the scene, and color bleeding. The Virtual Light Field (VLF) method enables real-time global illumination rendering in VR. The VLF has been integrated with the Extreme VR system for real-time GPU-based rendering in a Cave Automatic Virtual Environment.
A graphics processing unit based method for dynamic real-time global illumination
Real-time realistic image synthesis for virtual environments has been one of the most actively researched
areas in computer graphics for over a decade. Images that display physically correct illumination of an
environment can be simulated by evaluating a multi-dimensional integral equation, called the rendering
equation, over the surfaces of the environment. Many global illumination algorithms, such as path-tracing,
photon mapping and distributed ray-tracing, can produce realistic images but are generally unable
to cope with dynamic lighting and objects at interactive rates. Simulating physically correctly illuminated
dynamic environments without a substantial pre-processing step remains one of the most challenging
problems.
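For reference, the rendering equation mentioned above is, in Kajiya's standard form (notation is the conventional one, not taken from the abstract):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here $L_o$ is outgoing radiance at surface point $x$ in direction $\omega_o$, $L_e$ is emitted radiance, $f_r$ is the BRDF, $L_i$ is incoming radiance, and the integral runs over the hemisphere $\Omega$ about the surface normal $n$; its recursive, multi-dimensional nature is why evaluating it at interactive rates is hard.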
In this thesis we present a rendering system for dynamic environments by implementing a customized
rasterizer for global illumination entirely on the graphics hardware, the Graphics Processing
Unit (GPU). Our research focuses on a parameterization of a discrete visibility field for efficient indirect-illumination
computation. In order to generate the visibility field, we propose a CUDA-based (Compute
Unified Device Architecture) rasterizer which builds Layered Hit Buffers (LHBs) by rasterizing polygons
into multi-layered structural buffers in parallel. The LHB provides a fast visibility function for any direction
at any point. We propose a cone-approximation solution to resolve an aliasing problem caused by
limited directional discretization. We also demonstrate how to remove structural noise by adopting an
interleaved sampling scheme and a discontinuity buffer. We show that a gathering method, amortized with
a multi-level quasi-Monte Carlo method, can evaluate the rendering equation in real-time.
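A Layered Hit Buffer differs from an ordinary depth buffer in that each ray cell keeps all hit depths, so visibility between any two depths along that direction becomes a lookup rather than a ray cast. The toy CPU model below illustrates that query for a single direction; the class, the 1-D cell indexing, and all names are illustrative assumptions, not the thesis's CUDA data structure.

```python
import bisect

class LayeredHitBuffer:
    """Toy CPU model of a Layered Hit Buffer for ONE ray direction:
    each ray cell stores a sorted list of ALL hit depths along the ray
    (a multi-layered depth buffer), not just the nearest one.
    """
    def __init__(self, n_cells):
        self.layers = [[] for _ in range(n_cells)]

    def rasterize_hit(self, cell, depth):
        # Insert a rasterized hit, keeping the layer list sorted by depth.
        bisect.insort(self.layers[cell], depth)

    def visible(self, cell, d0, d1):
        """True if no stored hit lies strictly between depths d0 and d1,
        i.e. the two points can see each other along this direction."""
        lo, hi = min(d0, d1), max(d0, d1)
        i = bisect.bisect_right(self.layers[cell], lo)
        return i >= len(self.layers[cell]) or self.layers[cell][i] >= hi

lhb = LayeredHitBuffer(4)
for d in (0.2, 0.5, 0.9):       # three surfaces hit along cell 0's ray
    lhb.rasterize_hit(0, d)
```

Because every query is a binary search over a small sorted list, indirect-illumination gathering can test many point pairs per frame without tracing rays through the scene geometry.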
The method enables real-time walk-throughs of a complex virtual environment containing a mixture
of diffuse and glossy reflection, computing multiple indirect bounces on the fly. We show that our method
is capable of simulating fully dynamic environments, including changes of view, materials, lighting and
objects, at interactive rates on commodity-level graphics hardware.