Efficient Many-Light Rendering of Scenes with Participating Media
We present several approaches based on virtual lights that aim to capture the light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
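The abstract does not give an implementation, but the core of any virtual-light method is a gather loop over virtual point lights with the usual clamped geometry term. A minimal sketch (all names hypothetical, diffuse surfaces only):

```python
import math

def shade_with_vpls(p, n, vpls, clamp_dist=0.1):
    """Accumulate diffuse irradiance at point p (with normal n) from
    virtual point lights. Each VPL is (position, normal, flux). The
    squared distance is clamped to bound the singularity near a VPL,
    the classic bias/noise trade-off in many-light rendering."""
    total = 0.0
    for q, nq, flux in vpls:
        d = [q[i] - p[i] for i in range(3)]
        dist2 = max(sum(x * x for x in d), clamp_dist * clamp_dist)
        dist = math.sqrt(dist2)
        w = [x / dist for x in d]                      # direction p -> VPL
        cos_p = max(0.0, sum(n[i] * w[i] for i in range(3)))
        cos_q = max(0.0, -sum(nq[i] * w[i] for i in range(3)))
        total += flux * cos_p * cos_q / (math.pi * dist2)
    return total
```

A visibility test between `p` and each VPL would multiply each term; it is omitted here for brevity.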
Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX
A vital component of photo-realistic image synthesis is the simulation of
indirect diffuse reflections, which still remain a quintessential hurdle that
modern rendering engines struggle to overcome. Real-time applications typically
pre-generate diffuse lighting information offline using radiosity to avoid
performing costly computations at run-time. In this thesis we present a variant
of progressive refinement radiosity that utilizes Nvidia's novel RTX technology
to accelerate the process of form-factor computation without compromising on
visual fidelity. Through a modern implementation built on DirectX 12 we
demonstrate that offloading radiosity's visibility component to RT cores
significantly improves the lightmap generation process and potentially propels
it into the domain of real-time.
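For context, one shooting step of progressive refinement radiosity can be sketched as follows; the form-factor evaluation passed in as a callback is exactly the visibility-dominated component the thesis offloads to RT cores. Field names and the callback signature are hypothetical:

```python
def shoot_iteration(patches, form_factor):
    """One shooting step of progressive refinement radiosity.
    `patches` is a list of dicts with 'area', 'reflectance',
    'radiosity' and 'unshot'; `form_factor(i, j)` returns F_ij
    (geometry times visibility between patches i and j)."""
    # Select the shooter with the most unshot power (B_unshot * area).
    i = max(range(len(patches)),
            key=lambda k: patches[k]['unshot'] * patches[k]['area'])
    shooter = patches[i]
    for j, p in enumerate(patches):
        if j == i:
            continue
        # Radiosity received by patch j from the shooter:
        # dB_j = rho_j * B_unshot_i * F_ij * A_i / A_j
        db = (p['reflectance'] * shooter['unshot']
              * form_factor(i, j) * shooter['area'] / p['area'])
        p['radiosity'] += db
        p['unshot'] += db
    shooter['unshot'] = 0.0
    return i
```

Iterating this step until the largest unshot power falls below a threshold yields the converged lightmap.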
Interactive ray tracing of massive and deformable models
Ray tracing is a fundamental algorithm used for many applications such as computer graphics, geometric simulation, collision detection and line-of-sight computation. Even though the performance of ray tracing algorithms scales with the model complexity, the high memory requirements and the use of static hierarchical structures pose problems with massive models and dynamic data-sets. We present several approaches to address these problems based on new acceleration structures and traversal algorithms. We introduce a compact representation for storing the model and hierarchy while ray tracing triangle meshes that can reduce the memory footprint by up to 80%, while maintaining high performance. As a result, we can ray trace massive models with hundreds of millions of triangles on workstations with a few gigabytes of memory. We also show how to use bounding volume hierarchies for ray tracing complex models with interactive performance. In order to handle dynamic scenes, we use refitting algorithms and also present highly parallel GPU-based algorithms to reconstruct the hierarchies. In practice, our method can construct hierarchies for models with hundreds of thousands of triangles at interactive speeds. Finally, we demonstrate several applications that are enabled by these algorithms. Using deformable BVHs and fast data-parallel techniques, we introduce a geometric sound propagation algorithm that runs on complex deformable scenes interactively and orders of magnitude faster than comparable previous approaches. In addition, we also use these hierarchical algorithms for fast collision detection between deformable models and GPU rendering of shadows on massive models by employing our compact representations for hybrid ray tracing and rasterization.
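The refitting strategy mentioned above keeps the tree topology fixed and only recomputes bounding boxes bottom-up after the vertices deform. A minimal sketch with a hypothetical node layout:

```python
def refit_bvh(nodes, root, triangles):
    """Bottom-up refit of an AABB BVH after vertex deformation,
    keeping the tree topology unchanged. Nodes are dicts: leaves
    hold 'tri' (a triangle index), inner nodes hold 'left'/'right'
    child indices; the refit recomputes 'bmin'/'bmax' on every node.
    Triangles are lists of three (x, y, z) vertices."""
    node = nodes[root]
    if 'tri' in node:                      # leaf: bound its triangle
        vs = triangles[node['tri']]
        node['bmin'] = tuple(min(v[a] for v in vs) for a in range(3))
        node['bmax'] = tuple(max(v[a] for v in vs) for a in range(3))
    else:                                  # inner: merge child bounds
        refit_bvh(nodes, node['left'], triangles)
        refit_bvh(nodes, node['right'], triangles)
        l, r = nodes[node['left']], nodes[node['right']]
        node['bmin'] = tuple(min(l['bmin'][a], r['bmin'][a]) for a in range(3))
        node['bmax'] = tuple(max(l['bmax'][a], r['bmax'][a]) for a in range(3))
```

Refitting is much cheaper than rebuilding but degrades tree quality under large deformations, which is why the thesis pairs it with parallel reconstruction.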
A graphics processing unit based method for dynamic real-time global illumination
Real-time realistic image synthesis for virtual environments has been one of the most actively researched
areas in computer graphics for over a decade. Images that display physically correct illumination of an
environment can be simulated by evaluating a multi-dimensional integral equation, called the rendering
equation, over the surfaces of the environment. Many global illumination algorithms, such as path tracing, photon mapping and distributed ray tracing, can produce realistic images but are generally unable to cope with dynamic lighting and objects at interactive rates. Simulating physically correctly illuminated dynamic environments without a substantial preprocessing step remains one of the most challenging problems.
In this thesis we present a rendering system for dynamic environments by implementing a customized rasterizer for global illumination entirely on the graphics hardware, the Graphics Processing Unit (GPU). Our research focuses on a parameterization of a discrete visibility field for efficient indirect illumination computation. In order to generate the visibility field, we propose a CUDA-based (Compute Unified Device Architecture) rasterizer which builds Layered Hit Buffers (LHB) by rasterizing polygons into multi-layered structural buffers in parallel. The LHB provides a fast visibility function for any direction at any point. We propose a cone approximation to resolve the aliasing caused by the limited directional discretization. We also demonstrate how to remove structured noise by adapting an interleaved sampling scheme and a discontinuity buffer. We show that a gathering method, amortized with a multi-level Quasi-Monte Carlo method, can evaluate the rendering equation in real time.
The method realizes real-time walk-throughs of a complex virtual environment with a mixture of diffuse and glossy reflections, computing multiple indirect bounces on the fly. We show that our method is capable of simulating fully dynamic environments, including changes of view, materials, lighting and objects, at interactive rates on commodity-level graphics hardware.
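The interleaved sampling and discontinuity buffer mentioned above pair naturally: each pixel in a small tile uses a different sample set, and a geometry-aware filter averages the tile back together without blurring across edges. A minimal CPU sketch (buffer layout and thresholds are illustrative, not the thesis's exact scheme):

```python
def interleaved_set(x, y, block=4):
    """Interleaved sampling: pixels in a block x block tile each use
    a different sample set, turning structured noise into
    high-frequency noise that the filter below can remove."""
    return (y % block) * block + (x % block)

def discontinuity_filter(radiance, depth, normal, x, y, block=4,
                         depth_eps=0.05, normal_eps=0.9):
    """Average radiance over the tile containing (x, y), but only
    across pixels on the same surface: large depth gaps or diverging
    normals mark a discontinuity and those neighbours are skipped.
    Buffers are dicts keyed by (x, y) pixel coordinates."""
    tx, ty = (x // block) * block, (y // block) * block
    total, count = 0.0, 0
    for j in range(ty, ty + block):
        for i in range(tx, tx + block):
            if (i, j) not in radiance:
                continue
            same_depth = abs(depth[(i, j)] - depth[(x, y)]) < depth_eps
            dot = sum(a * b for a, b in zip(normal[(i, j)], normal[(x, y)]))
            if same_depth and dot > normal_eps:
                total += radiance[(i, j)]
                count += 1
    return total / count if count else radiance[(x, y)]
```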
Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures
The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects which were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real time.
Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation which trades off some quality for a 2–3× improvement in rendering time. Tracing all the photons, especially when long paths are needed, had become the highest cost; as most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.
Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it as a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently.
We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
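The stochastic light selection described above can be sketched as a walk down a light hierarchy, choosing a child at each node with probability proportional to an importance estimate and accumulating that probability so the final contribution can be divided by it for an unbiased result. The node layout and importance heuristic here are hypothetical simplifications:

```python
import random

def pick_light(node, shading_point, rng=random.random):
    """Walk a light hierarchy, picking a child at each inner node with
    probability proportional to its importance (cluster power over
    squared distance to a representative position). Leaves hold
    'light'; inner nodes hold nested 'left'/'right' dicts; every node
    carries 'power' and 'pos'. Returns (light, selection_pdf)."""
    pdf = 1.0
    while 'light' not in node:
        def importance(n):
            d2 = sum((a - b) ** 2 for a, b in zip(n['pos'], shading_point))
            return n['power'] / max(d2, 1e-6)
        wl, wr = importance(node['left']), importance(node['right'])
        p_left = wl / (wl + wr)
        if rng() < p_left:
            node, pdf = node['left'], pdf * p_left
        else:
            node, pdf = node['right'], pdf * (1.0 - p_left)
    return node['light'], pdf
```

Dividing the chosen light's contribution by the returned pdf keeps the estimator unbiased, which is the property the thesis contrasts with existing biased game-engine selection schemes.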
Efficient shadow map filtering
Shadows provide the human visual system with important cues to sense spatial relationships in the environment we live in. As such, they are an indispensable part of realistic computer-generated imagery. Unfortunately, visibility determination is computationally expensive. Image-based simplifications of the problem such as Shadow Maps perform well with increased scene complexity but produce artifacts in both the spatial and the temporal domain because they lack efficient filtering support. This dissertation presents novel real-time shadow algorithms that enable efficient filtering of Shadow Maps in order to increase image quality and overall coherence characteristics. This is achieved by expressing the shadow test as a sum of products in which the parameters of the shadow test are separated from each other. Ordinary Shadow Maps are then transformed into new, so-called basis images which can, as opposed to Shadow Maps, be linearly filtered. The convolved basis images are equivalent to a pre-filtered shadow test and are used to reconstruct anti-aliased as well as physically plausible all-frequency shadows.
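The separation idea can be illustrated in its simplest one-term form using an exponential basis, where the shadow test step(z - d) is approximated by a(d) * B(z) with a(d) = exp(-c*d) and a filterable basis image B(z) = exp(c*z). This is the scheme known as exponential shadow maps, shown here only to illustrate the factorization, not as the dissertation's exact basis:

```python
import math

def exp_basis(depths, c=40.0):
    """Transform shadow map depths z into a basis image exp(c*z),
    which, unlike raw depths, may be linearly filtered."""
    return [math.exp(c * z) for z in depths]

def box_filter(values):
    """Linear (box) filtering, legal on basis images but meaningless
    on raw shadow map depths."""
    return sum(values) / len(values)

def shadow_test(receiver_depth, filtered_basis, c=40.0):
    """Pre-filtered shadow test: exp(-c*d) * E[exp(c*z)] approximates
    E[step(z - d)], clamped to [0, 1]."""
    return min(1.0, math.exp(-c * receiver_depth) * filtered_basis)
```

Because the basis image is filtered once and reused for every receiver depth, the per-pixel test reduces to a single multiply, which is what makes anti-aliased and softened shadow edges cheap.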
Faster Than Real-Time GPGPU Radiation Pressure Modeling Methods
Solar radiation pressure (SRP) is a significant dynamic force on spacecraft in all orbit regimes. Predicting, accommodating, and either leveraging or canceling its effect is paramount to effective orbit determination, maneuver and mission design. As a result, spacecraft numerical simulation requires computational models which can represent SRP with sufficient accuracy. However, the computationally intense nature of high-fidelity SRP evaluations has typically limited them to offline computations that generate lookup data. Precomputation limits the ability of a spacecraft dynamic simulation to accommodate the myriad time-varying changes to the spacecraft state during a mission.
In the past decade the computer graphics industry has driven the development of highly parallel graphics processing units (GPUs) capable of performing many thousands of floating-point operations in parallel. General-purpose GPU programming (GPGPU) has been leveraged particularly in engineering and the sciences, where the high computational power of parallel GPU hardware presents the opportunity for significant increases in the size and dimension of computational problems now manageable on personal computers.
This dissertation presents two modeling approaches which take advantage of the GPGPU capability of commodity GPU hardware. The first contribution utilizes the Open Graphics Library (OpenGL) graphics application programming interface (API) and the Open Computing Language (OpenCL) GPGPU API to develop a high-geometric-fidelity SRP modeling approach. The OpenGL-CL approach computes SRP-induced force and torque across a detailed spacecraft mesh model, using an OpenGL-OpenCL shared context to pass modeling data between the two APIs. The OpenGL render pipeline is manipulated to render the sun-frame projected surface of the spacecraft into OpenGL texture data objects, and a custom OpenCL parallel reduction kernel then computes the SRP force and torque across the rendered spacecraft geometry. The method achieves faster-than-real-time computation speeds while accommodating spacecraft meshes with many thousands of vertices, arbitrarily articulated components and detailed spacecraft material optical parameters.
The second contribution is a GPU-based parallel ray tracing modeling approach which also exhibits faster-than-real-time evaluation speeds. Techniques and algorithms from the computer graphics discipline are used to develop and implement a method which computes SRP force and torque across a detailed triangulated spacecraft mesh model. Efficient data structures, such as a bounding volume hierarchy (BVH), minimize the computational burden by reducing the ray-surface intersection search space. Accurate ray reflections are computed for complex materials by applying a Quasi-Monte Carlo integration method and importance sampling. Complex material bidirectional reflectance distribution functions (BRDFs) are implemented both as ideal mirror-like specular and Lambertian diffuse models and as microfacet BRDF models. Arbitrary spacecraft articulation is accommodated at run time with no appreciable reduction in computational speed.
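As a point of reference for the physics both contributions evaluate, the widely used flat-plate facet model sums an absorbed/specular/diffuse force term over the sunlit triangles. The sketch below omits the ray-traced self-shadowing and microfacet BRDFs the dissertation adds, and the facet tuple layout is hypothetical:

```python
def srp_force(facets, sun_dir, P=4.56e-6):
    """Total SRP force on a triangulated model using the standard
    flat-plate model. Each facet is (unit_normal, area_m2, rho_spec,
    rho_diff); sun_dir is the unit vector toward the Sun; P is the
    solar radiation pressure at 1 AU in N/m^2. Self-shadowing between
    facets is deliberately ignored in this sketch."""
    F = [0.0, 0.0, 0.0]
    for n, A, rs, rd in facets:
        cos_t = sum(ni * si for ni, si in zip(n, sun_dir))
        if cos_t <= 0.0:        # facet faces away from the Sun
            continue
        for a in range(3):
            # (1 - rho_s) along the sun line, plus the normal-direction
            # term from specular reflection and diffuse re-emission.
            F[a] += -P * A * cos_t * ((1.0 - rs) * sun_dir[a]
                                      + 2.0 * (rs * cos_t + rd / 3.0) * n[a])
    return F
```

The corresponding torque is the cross product of each facet's centroid offset with its force contribution, accumulated in the same loop.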
Both SRP models utilize the latent computing power of the GPU, which exists in the large majority of consumer-grade personal computing systems. Further access to latent computing power is enabled by the development of a software simulation communication middleware called Black Lion (BL). The third contribution of this thesis is the description of a novel software architecture and the design principles applied to the development of the BL software. Black Lion enables the integration of multiple local or distributed heterogeneous applications never intended to run in a cooperative setting. It is shown that BL provides access to more powerful latent personal computing resources by transparently facilitating distributed simulation across multiple simulation nodes and computers.
Finally, this dissertation demonstrates the utility of both modeling methods through two case studies. First, high-fidelity SRP effects are computed for an ongoing asteroid sample return mission, and agreement between the modeled and estimated SRP accelerations is demonstrated. Both SRP modeling approaches make significant use of pre- and post-launch engineering data, and the utility of direct access to a model's physical parameters is demonstrated in an analysis of contributors to possible error between modeled and estimated SRP accelerations. Second, the pairing of fast computational speed with high geometric resolution, in both the OpenGL-CL and ray tracing methods, is demonstrated: each method is employed in the simulation and long-term propagation of realistic multi-layer insulation (MLI) debris object mesh models, and the effect of departing from the typical flat-plate MLI model is investigated.
Real-Time deep image rendering and order independent transparency
In computer graphics some operations can be performed in either object space or image space. Image space computation can be advantageous, especially with the high parallelism of GPUs, improving speed, accuracy and ease of implementation. For many image space techniques the information contained in regular 2D images is limiting. Recent graphics hardware features, namely atomic operations and dynamic memory location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass in what we call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, providing new possibilities in image-based rendering techniques. This thesis investigates deep images and their growing use in real-time image space applications. A focus is new techniques for improving the performance of fundamental operations, including construction, storage, fast fragment sorting and sampling. A core and driving application is order-independent transparency (OIT). A number of deep image sorting improvements are presented, through which an order-of-magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering we look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment connectivity approach. Using these ideas a more computationally complex application is investigated: image-based depth of field (DoF). Deep images are used to provide partial occlusion, and in particular a form of deep image mipmapping allows a fast approximate defocus blur of up to full screen size.
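The OIT resolve step at the heart of this work is conceptually simple: sort each pixel's captured fragment list by depth, then composite back to front with the "over" operator. The per-pixel sort is exactly the operation whose speed-ups the thesis targets. A minimal CPU sketch with a hypothetical fragment layout:

```python
def resolve_pixel(fragments):
    """Resolve one deep-image pixel for transparency: the fragment
    list captured unordered from the rasterizer is sorted by depth
    and blended back to front with the 'over' operator. Fragments
    are (depth, (r, g, b), alpha) tuples; background is black."""
    out = [0.0, 0.0, 0.0]
    # Back to front: farthest fragment first.
    for depth, (r, g, b), a in sorted(fragments, key=lambda f: -f[0]):
        out[0] = r * a + out[0] * (1.0 - a)
        out[1] = g * a + out[1] * (1.0 - a)
        out[2] = b * a + out[2] * (1.0 - a)
    return tuple(out)
```

On the GPU the same logic runs per pixel over the captured fragment lists, which is why fast in-register and shared-memory sorting dominates the overall cost.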