The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
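Of the classic acceleration schemes the survey reviews, frustum culling is the simplest to sketch: bound each character (or group of characters) by a sphere and discard it when the sphere lies entirely outside any plane of the view frustum. The following is a minimal illustrative sketch, not code from any surveyed system; the plane representation (inward-facing normal plus offset) and all names are our assumptions.

```python
def sphere_in_frustum(center, radius, planes):
    """Conservative sphere-vs-frustum visibility test.

    Each plane is ((nx, ny, nz), d) with an inward-facing unit normal,
    so a point p is inside the plane when dot(n, p) + d >= 0.  A sphere
    is culled as soon as its center lies farther than `radius` outside
    any single plane.
    """
    cx, cy, cz = center
    for (nx, ny, nz), d in planes:
        signed_dist = nx * cx + ny * cy + nz * cz + d
        if signed_dist < -radius:
            return False  # fully outside this plane: cull the character
    return True  # potentially visible: submit for rendering


# Example: a single plane keeping the half-space x >= 0.
planes = [((1.0, 0.0, 0.0), 0.0)]
print(sphere_in_frustum((-2.0, 0.0, 0.0), 1.0, planes))   # culled
print(sphere_in_frustum((-0.5, 0.0, 0.0), 1.0, planes))   # still visible
```

For crowds, the same test is typically run per group or per instance before LoD selection, so that animation and skinning costs are only paid for characters that survive culling.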
Applications of computer-graphics animation for motion-perception research
The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, the differences are assumed to be irrelevant to the questions under study, and findings with computer-generated stimuli are assumed to generalize to natural events.
Ergonomics of the Operative Field in Paediatric Minimal Access Surgery
The Variable Reflection Nebula Cepheus A East
We report K'-band imaging observations of the reflection nebula associated
with Cepheus A East covering the time interval from 1990 to 2004. Over this
time the reflection nebula shows variations of flux distribution, which we
interpret as the effect of inhomogeneous and varying extinction in the light
path from the illuminating source HW2 to the reflection nebula. The obscuring
material is located within typical distances of approximately 10 AU from the
illuminating source.

Comment: 22 pages, including 6 figures, accepted for publication in The Astronomical Journal
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
The quality of modern astronomical data, the power of modern computers and
the agility of current image-processing software enable the creation of
high-quality images in a purely digital form. The combination of these
technological advancements has created a new ability to make color astronomical
images. And in many ways it has led to a new philosophy towards how to create
them. A practical guide is presented on how to generate astronomical images
from research data with powerful image-processing programs. These programs use
a layering metaphor that allows for an unlimited number of astronomical
datasets to be combined in any desired color scheme, creating an immense
parameter space to be explored using an iterative approach. Several examples of
image creation are presented.
A philosophy is also presented on how to use color and composition to create
images that simultaneously highlight scientific detail and are aesthetically
appealing. This philosophy is necessary because most datasets do not correspond
to the wavelength range of sensitivity of the human eye. The use of visual
grammar, defined as the elements which affect the interpretation of an image,
can maximize the richness and detail in an image while maintaining scientific
accuracy. By properly using visual grammar, one can imply qualities that a
two-dimensional image intrinsically cannot show, such as depth, motion and
energy. In addition, composition can be used to engage viewers and keep them
interested for a longer period of time. The use of these techniques can result
in a striking image that will effectively convey the science within the image,
to scientists and to the public.

Comment: 104 pages, 38 figures, submitted to A
Improving biomedical image quality with computers
Computerized image enhancement techniques used on biomedical radiographs and photomicrographs.
Monocular tracking of the human arm in 3D: real-time implementation and experiments
We have developed a system capable of tracking a human arm in 3D and in real time. The system is based on a previously developed algorithm for 3D tracking which requires only a monocular view and no special markers on the body. In this paper we describe our real-time system and the insights gained from real-time experimentation.
Real-time Shadows for Gigapixel Displacement Maps
Shadows convey helpful information in scenes. From a scientific visualization standpoint, they help to add data without unnecessary clutter. In video games they add realism and depth. In common graphics pipelines, shadows are difficult to achieve due to the independent and parallel rendering of geometric primitives. Objects require knowledge of each other, and therefore multiple render passes are needed to collect the necessary data; collecting this data comes with its own set of trade-offs. Our research adds shadows to a lunar rendering framework developed by Dr. Robert Kooima. The NASA-collected data contains a multi-gigapixel displacement map describing the lunar topology. This map does not fit entirely into main memory, so out-of-core paging is used to achieve real-time speeds. Current shadow techniques do not attempt to generate occluder data on such a scale, and therefore we have developed a novel approach to fit this situation. Using a chain of pre-processing steps, we analyze the structure of the displacement map and calculate horizon lines at each vertex. This information is saved into several images and used to generate shadows in a single pass, maintaining real-time speeds. The algorithm is even capable of generating soft shadows without extra information or loss of speed. We compare our algorithm with common approaches in the field as well as with two forms of ground truth: one from ray tracing and the other from the gigapixel lunar texture data, which shows real shadows at the time it was collected.
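The horizon-line idea described above can be sketched in one dimension: for each height sample, precompute the steepest elevation angle to any terrain ahead of it in the sun-facing direction, then shadow the sample whenever the sun sits below that stored angle. This is an illustrative simplification (a 1D profile, a brute-force O(n²) scan, and names of our choosing), not the paper's multi-pass, image-encoded implementation.

```python
import math

def horizon_angles(heights, spacing=1.0):
    """Precompute, for each sample of a 1D height profile, the maximum
    elevation angle to any terrain point in the +x (sun-facing)
    direction.  -pi/2 means a completely open horizon."""
    n = len(heights)
    angles = [-math.pi / 2] * n
    for i in range(n):
        for j in range(i + 1, n):
            rise = heights[j] - heights[i]
            run = (j - i) * spacing
            angles[i] = max(angles[i], math.atan2(rise, run))
    return angles

def in_shadow(angles, i, sun_elevation):
    """A sample is shadowed when the sun's elevation angle (radians)
    lies below that sample's precomputed horizon angle."""
    return sun_elevation < angles[i]


# A flat plain with a ridge of height 3 three samples away:
angles = horizon_angles([0.0, 0.0, 0.0, 3.0, 0.0])
print(in_shadow(angles, 0, math.radians(30)))  # low sun: shadowed
print(in_shadow(angles, 0, math.radians(60)))  # high sun: lit
```

In the paper's setting the horizon angles are computed offline over the full displacement map and packed into images, so the runtime shadow test reduces to a single texture fetch and comparison per fragment.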