A survey of real-time crowd rendering
In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Beyond LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware, and we address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
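The runtime LoD-selection criteria reviewed above often reduce, in their simplest form, to camera-distance thresholds that pick a character representation. A minimal sketch (thresholds and representation names are illustrative, not taken from any surveyed system):

```python
# Hypothetical sketch of distance-based runtime LoD selection for crowd
# characters. Thresholds and level names are illustrative only.

def select_lod(distance_to_camera: float) -> str:
    """Pick a character representation from the camera distance."""
    if distance_to_camera < 10.0:
        return "full-mesh"   # fully skinned polygonal character
    elif distance_to_camera < 50.0:
        return "low-poly"    # simplified mesh, e.g. with palette skinning
    else:
        return "impostor"    # image-based billboard

crowd = [3.0, 25.0, 120.0]
levels = [select_lod(d) for d in crowd]
```

Real systems typically replace the raw distance with projected screen coverage and hysteresis bands to avoid popping at the thresholds.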
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes at the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
Fragment-History Volumes
Hardware-based triangle rasterization is still the prevalent method for
generating images at real-time interactive frame rates. With the availability
of a programmable graphics pipeline a large variety of techniques are supported
for evaluating lighting and material properties of fragments. However, these
techniques are usually restricted to evaluating local lighting and material
effects. In addition, view-point changes require the complete processing of
scene data to generate appropriate images. Reusing already rendered data in the
frame buffer for a given view point by warping for a new viewpoint increases
navigation fidelity at the expense of introducing artifacts for fragments
previously hidden from the viewer.
We present fragment-history volumes (FHV), a rendering technique based on a
sparse, discretized representation of a 3D scene that emerges from recording
all fragments that pass the rasterization stage in the graphics pipeline. These
fragments are stored in per-pixel or per-octant lists for further processing,
essentially creating an A-buffer. FHVs using per-octant fragment lists are view
independent and allow fast resampling for image generation as well as for using
more sophisticated approaches to evaluate material and lighting properties,
eventually enabling global-illumination evaluation in the standard graphics
pipeline available on current hardware.
We show how FHVs are stored on the GPU in several ways, how they are created,
and how they can be used for image generation at high rates. We discuss results
for different usage scenarios, variations of the technique, and some
limitations.
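The per-pixel fragment lists at the core of an FHV can be sketched in a few lines; this toy CPU version (class and method names are made up, and the paper's GPU storage schemes are not modelled) simply records every rasterized fragment and resolves a pixel from its list:

```python
# Illustrative sketch (not the paper's GPU implementation): collect every
# fragment that survives rasterization into per-pixel lists (an A-buffer),
# then resolve a pixel by picking the nearest fragment from its list.

from collections import defaultdict

class FragmentHistoryBuffer:
    def __init__(self):
        # (x, y) -> list of (depth, color) fragments, unsorted on insert
        self.lists = defaultdict(list)

    def record(self, x, y, depth, color):
        """Append a fragment instead of overwriting, unlike a z-buffer."""
        self.lists[(x, y)].append((depth, color))

    def resolve_nearest(self, x, y, background="bg"):
        frags = self.lists.get((x, y))
        if not frags:
            return background
        # nearest fragment wins, as a plain z-buffer would resolve it;
        # keeping the full list is what enables resampling and re-shading
        return min(frags, key=lambda f: f[0])[1]

buf = FragmentHistoryBuffer()
buf.record(0, 0, 0.7, "red")
buf.record(0, 0, 0.3, "blue")   # closer fragment at the same pixel
```

Because the hidden "red" fragment is retained rather than discarded, a later viewpoint change or shading pass can still consult it.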
Efficient Global Illumination for Morphable Models
We propose an efficient self-shadowing illumination model for Morphable Models. Simulating self-shadowing with ray casting is computationally expensive, which makes it impractical in Analysis-by-Synthesis methods for object reconstruction from single images. Therefore, we propose to learn self-shadowing directly from Morphable Model parameters with a linear model. Radiance transfer functions, used within the precomputed radiance transfer (PRT) framework, are a powerful way to represent self-shadowing. We build on PRT to render deforming objects with self-shadowing at interactive frame rates; such objects can be illuminated efficiently by environment maps represented with spherical harmonics. The result is an efficient global illumination method for Morphable Models that exploits an approximated radiance transfer. We apply the method to fitting Morphable Model parameters to a single image of a face and demonstrate that considering self-shadowing improves shape reconstruction.
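The PRT shading step this method builds on reduces, for diffuse surfaces under a distant environment, to a dot product between a per-vertex transfer vector and the environment map's spherical-harmonic coefficients; the abstract's linear model predicts such transfer data from Morphable Model parameters. A minimal sketch with illustrative coefficient values:

```python
# Sketch of the diffuse PRT shading step: with a distant environment
# projected into spherical harmonics, per-vertex outgoing radiance is a
# dot product of the transfer vector T and the environment coefficients L.
# All coefficient values below are illustrative, not from the paper.

def prt_shade(transfer_coeffs, env_sh_coeffs):
    """Diffuse PRT: radiance = sum_i T_i * L_i."""
    assert len(transfer_coeffs) == len(env_sh_coeffs)
    return sum(t * l for t, l in zip(transfer_coeffs, env_sh_coeffs))

# 9 coefficients = SH bands 0..2, a common choice for diffuse lighting
transfer = [0.8, 0.1, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0]
env      = [1.0, 0.2, 0.2, 0.1,  0.0, 0.0, 0.0, 0.0, 0.0]
radiance = prt_shade(transfer, env)
```

Because shading is linear in the transfer coefficients, a linear model mapping Morphable Model parameters to those coefficients composes into a single matrix-vector product per shading evaluation.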
Ambient occlusion and shadows for molecular graphics
Computer-based visualisations of molecules have been produced since as early as the 1950s to aid researchers in their understanding of biomolecular structures. An important consideration for Molecular Graphics software is the ability to visualise the 3D structure of a molecule clearly.
Recent advances in computer graphics have led to improved rendering capabilities in visualisation tools. Current shading languages allow the inclusion of advanced graphical effects, such as ambient occlusion and shadows, that greatly improve comprehension of the 3D shapes of molecules.
This thesis focuses on finding improved solutions to the real-time rendering of Molecular Graphics on modern computers. Methods for calculating ambient occlusion and both hard and soft shadows are examined and implemented to give the user a more complete experience when navigating large molecular structures.
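Ambient occlusion of the kind examined in the thesis can be illustrated with a toy CPU estimate for a sphere-based molecule model: the occlusion at a surface point is the fraction of sample directions blocked by neighbouring atom spheres. All geometry, names, and sample counts below are made up for illustration; the thesis implements this in shaders:

```python
# Toy ambient-occlusion estimate for a sphere-based molecule model.

import math

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere test: solve |o + t*d - c|^2 = r^2 for t > 0,
    # assuming `direction` is unit length (so the quadratic's a = 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-6

def ambient_occlusion(point, directions, atoms):
    """Fraction of sample directions blocked by neighbouring atoms."""
    hits = sum(
        any(ray_hits_sphere(point, d, c, r) for c, r in atoms)
        for d in directions
    )
    return hits / len(directions)

# A point with one neighbouring atom blocking the +x direction.
atoms = [((3.0, 0.0, 0.0), 1.0)]
dirs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (-1.0, 0.0, 0.0)]
ao = ambient_occlusion((0.0, 0.0, 0.0), dirs, atoms)  # 1 of 4 blocked
```

Production implementations distribute many more directions over the hemisphere around the surface normal and typically precompute or approximate the visibility queries on the GPU.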
Acceleration Techniques for Photo Realistic Computer Generated Integral Images
The research work presented in this thesis has approached the task of accelerating the
generation of photo-realistic integral images produced by integral ray tracing.
Ray tracing is a computationally expensive algorithm that spawns one or more rays
through each pixel of the image into the space containing the scene, and ray
tracing integral images consumes more processing time than rendering normal images.
The unique characteristics of the 3D integral camera model have been analysed, and
it has been shown that different coherency aspects than in normal ray tracing can
be exploited in order to accelerate the generation of photo-realistic integral images.
The image-space coherence has been analysed, describing the relation between rays
and projected shadows in the rendered scene. The shadow cache algorithm has been
adapted to minimise shadow intersection tests in integral ray tracing, since such
tests make up the majority of intersection tests in ray tracing. Novel pixel-tracing
styles are developed specifically for integral ray tracing to improve the image-space
coherence and the performance of the shadow cache algorithm. Using this image-space
coherence between shadows and rays, the generation of photo-realistic integral
images has been accelerated by up to 41%. It has also been shown that applying the
new pixel-tracing styles does not affect the scalability of integral ray tracing
running on parallel computers.
A novel integral reprojection algorithm has been developed through geometrical
analysis of integral image generation in order to use the spatio-temporal coherence
information between integral frames. A new derivation of the integral projection
matrix for projecting points through an axial model of a lenticular lens has been
established. Rapid generation of 3D photo-realistic integral frames has been
achieved, four times faster than normal generation.
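The shadow cache exploits exactly the coherence described above: consecutive shadow rays traced in a coherent pixel order tend to be blocked by the same object, so the previous occluder is re-tested before scanning the whole scene. A minimal sketch (the `Slab` blocker and its `intersects` interface are illustrative stand-ins, not the thesis's implementation):

```python
# Sketch of the classic shadow-cache idea the thesis adapts to integral ray
# tracing: remember the occluder that blocked the previous shadow ray and
# test it first, since coherent pixel-tracing orders make consecutive
# shadow rays likely to hit the same object.

class ShadowCache:
    def __init__(self, objects):
        self.objects = objects
        self.cached = None    # last known occluder
        self.full_tests = 0   # how often we fell back to a full scan

    def in_shadow(self, ray):
        # Fast path: re-test the cached occluder first.
        if self.cached is not None and self.cached.intersects(ray):
            return True
        # Slow path: scan every object, refreshing the cache on a hit.
        self.full_tests += 1
        for obj in self.objects:
            if obj.intersects(ray):
                self.cached = obj
                return True
        self.cached = None
        return False

class Slab:
    """Toy blocker: occludes every shadow ray whose x lies in [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def intersects(self, ray_x):
        return self.lo <= ray_x <= self.hi

cache = ShadowCache([Slab(0, 1), Slab(5, 6)])
# Three coherent rays hit the same blocker; only the first needs a scan.
shadowed = [cache.in_shadow(x) for x in (5.1, 5.2, 5.3, 2.0)]
```

The savings grow with the coherence of the ray order, which is why the thesis's integral-specific pixel-tracing styles directly improve the cache's hit rate.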
A directional occlusion shading model for interactive direct volume rendering
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues that aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
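The image-space occlusion factor can be illustrated with a toy front-to-back slice compositor: each sample is darkened by the opacity already accumulated in a small neighbourhood of pixels in front of it, loosely mimicking a backward-peaked phase function. This 1D-row sketch is an illustration of the general idea only, not the paper's derivation; all values and the neighbourhood radius are made up:

```python
# Toy sketch: front-to-back slice compositing where each sample is shaded
# by an occlusion factor derived from the opacity accumulated in front.

def composite_with_occlusion(slices, radius=1):
    """slices: front-to-back list of rows of (color, alpha) samples.
    Returns the final composited colors for one row of pixels."""
    width = len(slices[0])
    out_color = [0.0] * width
    out_alpha = [0.0] * width
    occlusion = [0.0] * width  # opacity accumulated in front of this slice
    for row in slices:
        # Shade the whole slice against the occlusion buffer first, so
        # samples within one slice do not occlude each other.
        for x, (color, alpha) in enumerate(row):
            lo, hi = max(0, x - radius), min(width, x + radius + 1)
            occ = sum(occlusion[lo:hi]) / (hi - lo)
            shaded = color * (1.0 - occ)  # darken by what is in front
            out_color[x] += (1.0 - out_alpha[x]) * alpha * shaded
            out_alpha[x] += (1.0 - out_alpha[x]) * alpha
        # Then fold this slice's opacities into the occlusion buffer.
        for x, (_, alpha) in enumerate(row):
            occlusion[x] += (1.0 - occlusion[x]) * alpha

    return out_color

front = [(1.0, 0.5), (0.0, 0.0), (0.0, 0.0)]
back  = [(1.0, 0.5), (1.0, 0.5), (1.0, 0.5)]
result = composite_with_occlusion([front, back])
```

In the toy run, the back-slice sample behind the opaque front sample comes out darker than the unoccluded one at the far end of the row, which is the qualitative behaviour the occlusion factor is meant to produce.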