Towards a High Quality Real-Time Graphics Pipeline
Modern graphics hardware pipelines create photorealistic images with high geometric complexity in real time. The quality is constantly improving and advanced techniques from feature film visual effects, such as high dynamic range images and support for higher-order surface primitives, have recently been adopted. Visual effect techniques have large computational costs and significant memory bandwidth usage. In this thesis, we identify three problem areas and propose new algorithms that increase the performance of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time graphics pipeline, but parts of our research are equally applicable to offline rendering.

Our first focus is texture compression, which is a technique to reduce the memory bandwidth usage. The core idea is to store images in small compressed blocks which are sent over the memory bus and are decompressed on-the-fly when accessed. We present compression algorithms for two types of texture formats. High dynamic range images capture environment lighting with luminance differences over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and give the illusion of high geometric surface detail. Our compression formats are tailored to these texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression logic.

Our second focus is tessellation culling. Culling is a commonly used technique in computer graphics for removing work that does not contribute to the final image, such as completely hidden geometry. By discarding rendering primitives from further processing, substantial arithmetic computations and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation stages, where rendering primitives are subdivided for increased geometric detail. Images with highly detailed models can be synthesized, but the incurred cost is significant.
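The block-based decompression idea described above can be illustrated with a toy fixed-ratio codec (a minimal sketch, not one of the thesis's actual formats; the 4×4 block size, base-plus-offset layout, and function names are assumptions chosen for illustration):

```python
import numpy as np

def compress_block(block):
    """Toy fixed-ratio codec: store a 4x4 luminance block as a base
    value, a scale, and 3-bit per-texel indices (illustrative only)."""
    base = float(block.min())
    scale = max(float(block.max()) - base, 1e-6)
    # Quantize each texel's offset from the base into 3 bits (0..7).
    idx = np.round((block - base) / scale * 7).astype(np.uint8)
    return base, scale, idx

def decompress_block(base, scale, idx):
    """Decompress on the fly when a texel is accessed: cheap, fixed
    arithmetic per texel, as hardware decompression logic requires."""
    return base + idx.astype(np.float32) / 7.0 * scale
```

The key property a hardware format needs, mirrored here, is that every block compresses to the same size, so the address of any block can be computed directly and decompression touches only that block.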
We have devised a simple remapping technique that allows for better tessellation distribution in screen space. Furthermore, we present programmable tessellation culling, where bounding volumes for displaced geometry are computed and used to conservatively test if a primitive can be discarded before tessellation. We introduce a general tessellation culling framework, and an optimized algorithm for rendering of displaced Bézier patches, which is expected to be a common use case for graphics hardware tessellation.

Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic rasterization, a rendering technique where camera effects such as depth of field and motion blur can be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal space and show that stochastic motion blur can be rendered with rather modest pipeline modifications. Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented, which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings us closer to high-quality real-time stochastic rendering.
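The conservative pre-tessellation test can be sketched as follows (an illustrative bounding-sphere variant under assumed conventions, not the thesis's actual algorithm; the plane representation and function names are hypothetical):

```python
import math

def cull_displaced_patch(control_points, max_displacement, plane):
    """Conservatively test whether a displaced patch can be discarded
    before tessellation. Relies on the convex hull property of Bezier
    patches: the undisplaced surface lies within the hull of its
    control points, so a sphere around them, inflated by an upper
    bound on the displacement, bounds the displaced geometry.

    plane is (n, d): a point p is outside when dot(n, p) + d < 0.
    Returns True only when the whole patch is provably outside.
    """
    n, d = plane
    k = len(control_points)
    center = [sum(c[i] for c in control_points) / k for i in range(3)]
    radius = max(math.dist(center, c) for c in control_points)
    radius += max_displacement  # inflate by the displacement bound
    signed = sum(ni * ci for ni, ci in zip(n, center)) + d
    # Safe to cull only if the entire sphere is on the outside.
    return signed < -radius
```

The test may conservatively keep a patch that is in fact invisible, but it never discards a visible one, which is the correctness requirement for culling before the tessellator runs.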
Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs
The idea of computer vision as the Bayesian inverse problem to computer
graphics has a long history and an appealing elegance, but it has proved
difficult to directly implement. Instead, most vision tasks are approached via
complex bottom-up processing pipelines. Here we show that it is possible to
write short, simple probabilistic graphics programs that define flexible
generative models and to automatically invert them to interpret real-world
images. Generative probabilistic graphics programs consist of a stochastic
scene generator, a renderer based on graphics software, a stochastic likelihood
model linking the renderer's output and the data, and latent variables that
adjust the fidelity of the renderer and the tolerance of the likelihood model.
Representations and algorithms from computer graphics, originally designed to
produce high-quality images, are instead used as the deterministic backbone for
highly approximate and stochastic generative models. This formulation combines
probabilistic programming, computer graphics, and approximate Bayesian
computation, and depends only on general-purpose, automatic inference
techniques. We describe two applications: reading sequences of degraded and
adversarially obscured alphanumeric characters, and inferring 3D road models
from vehicle-mounted camera images. Each of the probabilistic graphics programs
we present relies on under 20 lines of probabilistic code, and supports
accurate, approximately Bayesian inferences about ambiguous real-world images.
Comment: The first two authors contributed equally to this work.
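The four-part structure described above — a stochastic scene generator, a renderer, a stochastic likelihood model, and general-purpose inference — can be sketched in miniature (a toy example with an assumed latent scene and a simple Metropolis-style sampler, not the paper's code or renderer):

```python
import math
import random

def scene_prior():
    """Stochastic scene generator: sample a latent scene, here the
    top-left corner of a 2x2 bright square in an 8x8 image."""
    return (random.randrange(6), random.randrange(6))

def render(scene):
    """Deterministic renderer: turn the latent scene into an image."""
    x, y = scene
    img = [[0.0] * 8 for _ in range(8)]
    for i in range(2):
        for j in range(2):
            img[y + i][x + j] = 1.0
    return img

def log_likelihood(rendered, observed, noise=0.2):
    """Stochastic likelihood model linking the renderer's output and
    the data: Gaussian pixel noise with a fixed tolerance."""
    sq_err = sum((r - o) ** 2
                 for rr, oo in zip(rendered, observed)
                 for r, o in zip(rr, oo))
    return -sq_err / (2 * noise ** 2)

def infer(observed, steps=2000):
    """General-purpose inference: Metropolis-Hastings with independent
    proposals from the prior, inverting the generative program."""
    scene = scene_prior()
    ll = log_likelihood(render(scene), observed)
    for _ in range(steps):
        proposal = scene_prior()
        ll_p = log_likelihood(render(proposal), observed)
        if math.log(random.random()) < ll_p - ll:
            scene, ll = proposal, ll_p
    return scene
```

Everything problem-specific above is the few lines of generative code; the sampler knows nothing about squares or images, which is the sense in which the approach depends only on general-purpose, automatic inference techniques.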