The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system enables rendering complex
scenes at the push of a button, making accurate light transport simulation
widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application, one that has been adopted by companies
across many fields and is in use by many industry professionals today.
The Hierarchical Ray Engine
Due to the success of texture-based approaches, ray casting has lately been confined to performing
preprocessing in realtime applications. Though GPU-based ray casting implementations now outperform
the CPU, they either do not scale well for higher primitive counts or require the costly
construction of spatial hierarchies. We present an improved algorithm based on the Ray Engine
approach, which builds a hierarchy of rays instead of objects, entirely on the graphics card.
By exploiting the coherence between rays when displaying refractive objects or computing caustics,
realtime frame rates are achieved without preprocessing. Thus, the method fills a gap in the
realtime rendering repertoire.
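The core idea of a hierarchy of rays can be illustrated with a conservative bounding-cone test: a tile of coherent rays is enclosed in a cone, and a whole primitive is rejected against the cone before any per-ray intersection runs. The sketch below is a minimal CPU illustration of that principle, not the paper's GPU implementation; the function names and the sphere test are our own assumptions.

```python
import numpy as np

def ray_cone(origins, directions):
    """Enclose a tile of coherent rays in a bounding cone: apex at the
    mean origin, axis along the mean direction, and a spread angle wide
    enough to contain every ray in the tile (directions are unit vectors)."""
    apex = origins.mean(axis=0)
    axis = directions.mean(axis=0)
    axis /= np.linalg.norm(axis)
    # cosine of the largest angle between the axis and any ray in the tile
    cos_spread = float(np.min(directions @ axis))
    return apex, axis, cos_spread

def cone_vs_sphere(apex, axis, cos_spread, center, radius):
    """Conservative rejection test: can any ray in the cone hit the sphere?
    The cone is widened by the angle the sphere subtends from the apex."""
    to_c = center - apex
    dist = np.linalg.norm(to_c)
    if dist <= radius:          # apex inside the sphere: trivially hit
        return True
    cos_to_center = float(to_c @ axis) / dist
    sin_sub = radius / dist     # sine of the sphere's subtended half-angle
    cos_sub = np.sqrt(1.0 - sin_sub * sin_sub)
    sin_spread = np.sqrt(max(0.0, 1.0 - cos_spread * cos_spread))
    # accept iff angle(axis, center) <= spread + subtended, i.e.
    # cos(angle) >= cos(spread + subtended) via the angle-addition formula
    return cos_to_center >= cos_spread * cos_sub - sin_spread * sin_sub
```

If the test fails, every ray in the tile misses the sphere and the per-ray intersection loop can be skipped; only tiles passing the test descend to exact intersections.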
Building synthetic simulated environments for configuring and training multi-camera systems for surveillance applications
Synthetic simulated environments are gaining popularity in the Deep Learning era, as they can alleviate the
effort and cost of two critical tasks in building multi-camera systems for surveillance applications: setting up
the camera system to cover the use cases, and generating the labeled dataset to train the required Deep Neural
Networks (DNNs). However, no simulated environments exist that are ready to solve these tasks for all kinds of
scenarios and use cases. Typically, ad hoc environments are built, which cannot be easily applied to other contexts.
In this work we present a methodology to build synthetic simulated environments with sufficient generality to
be usable in different contexts, with little effort. Our methodology tackles the challenges of appropriately
parameterizing scene configurations, the strategies to randomly generate a wide and balanced range of
situations of interest for training DNNs with synthetic data, and quick image capture from virtual cameras
considering the rendering bottlenecks. We show a practical implementation example for the detection of
incorrectly placed luggage in aircraft cabins, including a qualitative and quantitative analysis of the data
generation process and its influence on DNN training, and the modifications required to adapt it to other
surveillance contexts. This work has received funding from the Clean Sky 2 Joint Undertaking under the European Union's Horizon 2020 research and innovation program under grant agreement No. 865162, SmaCS (https://www.smacs.eu/).
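The balanced random generation of scene configurations described above can be sketched as a parameterized sampler. All parameter names below (camera, seat row, lighting, occupancy) are illustrative placeholders, not the paper's actual parameterization; the point is that the label of interest is balanced by construction rather than left to chance.

```python
import random
from dataclasses import dataclass

@dataclass
class SceneConfig:
    # Hypothetical parameters for the aircraft-cabin example.
    camera_id: int        # which virtual camera captures the frame
    seat_row: int         # where the luggage item is spawned
    misplaced: bool       # label: incorrectly vs correctly placed
    lighting_lux: float   # cabin lighting level
    occupancy: float      # fraction of seats occupied by passengers

def sample_configs(n, n_cameras=4, n_rows=30, seed=0):
    """Draw n scene configurations, enforcing a 50/50 balance on the
    binary label so the synthetic training set is not dominated by
    the 'correctly placed' class."""
    rng = random.Random(seed)
    return [
        SceneConfig(
            camera_id=rng.randrange(n_cameras),
            seat_row=rng.randrange(n_rows),
            misplaced=(i % 2 == 0),   # alternate labels for exact balance
            lighting_lux=rng.uniform(50.0, 500.0),
            occupancy=rng.uniform(0.0, 1.0),
        )
        for i in range(n)
    ]
```

Each sampled configuration would then drive the simulated environment to place assets, set lighting, and trigger an image capture from the selected virtual camera.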
Real-Time Global Illumination for VR Applications
Real-time global illumination in VR systems enhances scene realism by incorporating soft shadows, reflections of objects in the scene, and color bleeding. The Virtual Light Field (VLF) method enables real-time global illumination rendering in VR. The VLF has been integrated with the Extreme VR system for real-time GPU-based rendering in a Cave Automatic Virtual Environment (CAVE).
Interactive display of isosurfaces with global illumination
In many applications, volumetric data sets are examined by displaying isosurfaces: surfaces where the data, or some function of the data, takes on a given value. Interactive applications typically use local lighting models to render such surfaces. This work introduces a method to precompute or lazily compute global illumination to improve interactive isosurface renderings. The precomputed illumination resides in a separate volume and includes direct light, shadows, and interreflections. Using this volume, interactive globally illuminated renderings of isosurfaces become feasible while still allowing dynamic manipulation of lighting, viewpoint, and isovalue.
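The shading step described above amounts to replacing the local lighting term with a lookup into the precomputed illumination volume at the isosurface hit point. A minimal sketch, assuming a scalar illumination volume indexed in voxel coordinates (function names are our own):

```python
import numpy as np

def sample_illumination(volume, p):
    """Trilinearly interpolate a precomputed illumination volume at a
    continuous position p given in voxel coordinates."""
    p = np.clip(p, 0, np.array(volume.shape) - 1.001)
    i0 = np.floor(p).astype(int)
    f = p - i0                       # fractional position inside the cell
    x0, y0, z0 = i0
    x1, y1, z1 = i0 + 1
    c = 0.0
    # weighted sum over the 8 corners of the enclosing voxel cell
    for xi, wx in ((x0, 1 - f[0]), (x1, f[0])):
        for yi, wy in ((y0, 1 - f[1]), (y1, f[1])):
            for zi, wz in ((z0, 1 - f[2]), (z1, f[2])):
                c += wx * wy * wz * volume[xi, yi, zi]
    return c

def shade_isosurface_point(albedo, volume, hit_point):
    """Instead of evaluating a local lighting model, modulate the surface
    albedo by the stored global illumination (direct light, shadows, and
    interreflections baked into the volume)."""
    return albedo * sample_illumination(volume, hit_point)
```

Because the illumination lives in its own volume, the viewpoint, light intensity scaling, and isovalue can change interactively without re-running the global illumination computation.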
A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination
The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when interreflections are considered. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing to precompute the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors reduce to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
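The runtime reduction mentioned above can be sketched as follows: once the multi-bounce tensor has been precomputed and reduced offline to one coefficient vector per pixel over K BRDF basis functions, re-rendering after a BRDF edit is a per-pixel dot product. The shapes and names here are illustrative assumptions; the paper's actual tensors and reduction are more involved.

```python
import numpy as np

def render_brdf_edit(coeffs, brdf_weights):
    """Re-render the image for an edited BRDF.

    coeffs       : (H, W, K) per-pixel coefficients, precomputed offline
                   over K BRDF basis functions (illustrative layout).
    brdf_weights : (K,) weights expressing the edited BRDF in that basis.

    The per-pixel dot product contracts the basis axis, so interactive
    feedback costs O(H * W * K) per edit, independent of scene complexity.
    """
    return coeffs @ brdf_weights   # (H, W, K) @ (K,) -> (H, W)
```

A material edit thus only updates `brdf_weights`; the expensive light transport stays frozen inside `coeffs`.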
Towards Predictive Rendering in Virtual Reality
The pursuit of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials.
The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.