544 research outputs found
An Overview of Modern Global Illumination
Advancements in graphics hardware call for innovative solutions that improve the realism of computer-generated lighting by combining intelligent global illumination models with modern hardware. The solution described in this paper achieves global illumination by ray tracing over the geometry of a 3D scene from distributed light field probes, and then shades the scene with a deferred renderer. This approach provides the flexibility and robustness that many other global illumination models have previously lacked, while still achieving realistic lighting representative of the capabilities of the host hardware.
Real-Time Global Illumination for VR Applications
Real-time global illumination in VR systems enhances scene realism by incorporating soft shadows, reflections of objects in the scene, and color bleeding. The Virtual Light Field (VLF) method enables real-time global illumination rendering in VR. The VLF has been integrated with the Extreme VR system for real-time GPU-based rendering in a Cave Automatic Virtual Environment.
The Impact of Surface Normals on Appearance
The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - amount of surface mesostructure (small-scale surface orientation) or translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in their overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure and translucency to an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors determining an object's translucency: mean free path and scattering albedo. We exhaustively vary the settings of these parameters within realistic bounds, examining the subsequent blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that a translucent input material has on the quality of the resultant estimated mesostructure. In the next project, we discuss an optimization technique both for refining the estimated surface orientation of translucent objects and for determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the normals blurred by the effect observed in the previous study can be recovered.
The key to this is the observation that the normalization factor for recovered normals is proportional to the error in the accuracy of the blur kernel created from estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We discuss our low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the natural image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from a photograph. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
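As context for the shape estimation technique the dissertation studies, Lambertian photometric stereo can be sketched in a few lines: given several images of a surface under known directional lights, the scaled normal at each pixel is recovered by least squares. The function name and the synthetic one-pixel setup below are illustrative, not taken from the dissertation; a real translucent input would blur these intensities and degrade the estimate, which is the effect studied above.

```python
import numpy as np

def photometric_stereo(I, L):
    """Recover normals and albedo from intensities.

    I: (k, m) observed intensities for m pixels under k lights.
    L: (k, 3) unit light directions.
    For an opaque Lambertian surface, I = L @ (albedo * n),
    so least squares gives the scaled normal per pixel.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, m) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals, albedo

# Synthetic test: one pixel with a known normal and albedo.
n_true = np.array([0.0, 0.0, 1.0])
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
I = (L @ n_true)[:, None] * 0.8                 # albedo 0.8, no blur
n_est, a_est = photometric_stereo(I, L)
```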
Web-Based Dynamic Paintings: Real-Time Interactive Artworks in Web Using a 2.5D Pipeline
In this work, we present a 2.5D pipeline approach to creating dynamic
paintings that can be re-rendered interactively in real-time on the Web. Using
this 2.5D approach, any existing simple painting such as portraits can be
turned into an interactive dynamic web-based artwork. Our interactive system
provides most global illumination effects such as reflection, refraction,
shadow, and subsurface scattering by processing images. In our system, the
scene is defined only by a set of images. These include (1) a shape image, (2)
two diffuse images, (3) a background image, (4) one foreground image, and (5)
one transparency image. The shape image is either a normal map or a height map.
The two diffuse images are usually hand-painted and are interpolated using
illumination information. The transparency image is used to define the
transparent and reflective regions that can reflect the foreground image and
refract the background image, both of which are also hand-drawn. This
framework, which mainly uses hand-drawn images, provides qualitatively
convincing painterly global illumination effects such as reflection and
refraction. We also include parameters to provide additional artistic controls.
For instance, using our piecewise linear Fresnel function, it is possible to
control the ratio of reflection and refraction. While this system builds on a
long line of research contributions, the art-directed Fresnel function, which
provides physically plausible compositing of reflection and refraction with
artistic control, is completely new. The art-directed warping equations that
provide qualitatively convincing refraction and reflection effects with
linearized artistic control are also new. You can try our web-based system for
interactive dynamic real-time paintings at http://mock3d.tamu.edu/.
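The piecewise linear Fresnel control described above might be sketched roughly as follows. This is a guess at the general idea (a reflection weight interpolated linearly in the cosine of the viewing angle, rather than Schlick's power curve); the function and parameter names (w0, w1) are invented for illustration and are not from the paper.

```python
def linear_fresnel(cos_theta, w0=0.05, w1=1.0):
    """Reflection weight, linear in cos(theta):
    w1 at grazing incidence (cos_theta = 0),
    w0 at normal incidence (cos_theta = 1).
    The artist controls both endpoints directly."""
    t = max(0.0, min(1.0, cos_theta))
    return w1 + (w0 - w1) * t

def composite(reflected, refracted, cos_theta, w0=0.05, w1=1.0):
    """Blend reflected and refracted colors by the Fresnel weight."""
    w = linear_fresnel(cos_theta, w0, w1)
    return w * reflected + (1.0 - w) * refracted
```

Because the blend is linear in cos(theta), the ratio of reflection to refraction responds predictably to the artist's two endpoint parameters, which is the kind of linearized control the abstract mentions.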
Virtual light fields for global illumination in computer graphics
This thesis presents novel techniques for the generation and real-time rendering of globally illuminated
environments with surfaces described by arbitrary materials. Real-time rendering of globally illuminated
virtual environments has for a long time been an elusive goal. Many techniques have been developed
which can compute still images with full global illumination, and this remains an area of active
research. Other techniques have dealt with only certain aspects of global illumination in order to speed
up computation and thus rendering. These include radiosity, ray-tracing and hybrid methods. Radiosity,
due to its view-independent nature, can easily be rendered in real-time after pre-computing and storing
the energy equilibrium. Ray-tracing, however, is view-dependent and requires substantial computational
resources in order to run in real-time.
Attempts at providing full global illumination at interactive rates include caching methods, fast rendering
from photon maps, light fields, brute force ray-tracing and GPU accelerated methods. Currently,
these methods either apply only to special cases, are incomplete, exhibit poor image quality, and/or
scale so badly that only modest scenes can be rendered in real-time on current hardware.
The techniques developed in this thesis extend upon earlier research and provide a novel, comprehensive
framework for storing global illumination in a data structure - the Virtual Light Field - that is
suitable for real-time rendering. The techniques trade memory usage and pre-computation time for rapid
rendering. The main weaknesses of the VLF method are targeted in this thesis: the expensive
pre-compute stage, with best-case O(N^2) performance, where N is the number of faces, makes the
light propagation impractical for all but simple scenes. This is analysed, and greatly superior alternatives
are presented and evaluated in terms of efficiency and error. Several orders of magnitude improvement
in computational efficiency is achieved over the original VLF method.
A novel propagation algorithm running entirely on the Graphics Processing Unit (GPU) is presented.
It is incremental in that it can resolve visibility along a set of parallel rays in O(N) time and can
produce a virtual light field for a moderately complex scene (tens of thousands of faces), with complex illumination
stored in millions of elements, in minutes and for simple scenes in seconds. It is approximate
but gracefully converges to a correct solution; a linear increase in resolution results in a linear increase in
computation time. Finally, a GPU rendering technique is presented which can render from Virtual Light
Fields at real-time frame rates on high-resolution VR presentation devices such as the CAVE™.
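Resolving visibility along a set of parallel rays in a single O(N) pass over the faces is, in spirit, a depth-buffer pass: each face is rasterized into the ray grid once, and a per-ray depth test keeps the nearest hit, which is also why the propagation maps well to the GPU. A minimal CPU sketch follows, with axis-aligned rectangles standing in for scene faces (an assumption made to keep the example short; the thesis's actual propagation is not this code).

```python
import numpy as np

def resolve_visibility(faces, res):
    """Nearest-hit visibility for a res x res bundle of parallel rays.

    faces: list of (x0, y0, x1, y1, depth, face_id), with the
    rectangle footprint given in [0, 1)^2 across the ray grid.
    Returns per-ray nearest depth and the id of the visible face.
    """
    depth = np.full((res, res), np.inf)
    hit = np.full((res, res), -1, dtype=int)
    for x0, y0, x1, y1, z, fid in faces:          # one pass: O(N) faces
        i0, i1 = int(x0 * res), int(np.ceil(x1 * res))
        j0, j1 = int(y0 * res), int(np.ceil(y1 * res))
        tile = depth[j0:j1, i0:i1]
        mask = z < tile                           # per-ray depth test
        tile[mask] = z
        hit[j0:j1, i0:i1][mask] = fid
    return depth, hit

# A full-grid far face partially occluded by a nearer small face.
faces = [(0.0, 0.0, 1.0, 1.0, 2.0, 0),
         (0.25, 0.25, 0.5, 0.5, 1.0, 1)]
depth, hit = resolve_visibility(faces, 4)
```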
Photon Splatting Using a View-Sample Cluster Hierarchy
Splatting photons onto primary view samples, rather than gathering from a photon acceleration structure, can be a more efficient approach to evaluating the photon-density estimate in interactive applications, where the number of photons is often low compared to the number of view samples. Most photon splatting approaches struggle with large photon radii or high resolutions due to overdraw and insufficient culling. In this paper, we show how dynamic real-time diffuse interreflection can be achieved by using a full 3D acceleration structure built over the view samples and then splatting photons onto the view samples by traversing this data structure. Fully dynamic lighting and scenes are possible by tracing and splatting photons, and rebuilding the acceleration structure, every frame. We show that the number of view-sample/photon tests can be significantly reduced and suggest further culling techniques based on the normal cone of each node in the hierarchy. Finally, we present an approximate variant of our algorithm where photon traversal is stopped at a fixed level of our hierarchy, and the incoming radiance is accumulated per node and direction, rather than per view sample. This improves performance significantly with little visible degradation of quality.
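The splat-through-a-hierarchy idea can be illustrated with a toy version: a median-split tree built over view-sample positions, and a photon splatted by traversing the tree with a sphere-vs-box cull so that whole subtrees of samples outside the photon radius are skipped. The class and function names below are illustrative, not the paper's actual (GPU, per-frame) data structure.

```python
import numpy as np

class Node:
    def __init__(self, idx, lo, hi, left=None, right=None):
        self.idx, self.lo, self.hi = idx, lo, hi   # leaf indices, AABB
        self.left, self.right = left, right

def build(points, idx=None, leaf=4):
    """Median-split tree over view-sample positions."""
    if idx is None:
        idx = np.arange(len(points))
    lo, hi = points[idx].min(0), points[idx].max(0)
    if len(idx) <= leaf:
        return Node(idx, lo, hi)
    axis = int(np.argmax(hi - lo))                 # split widest axis
    order = idx[np.argsort(points[idx, axis])]
    mid = len(order) // 2
    return Node(None, lo, hi,
                build(points, order[:mid], leaf),
                build(points, order[mid:], leaf))

def splat(node, points, accum, p, radius, power):
    """Deposit photon power onto all samples within its radius."""
    # Cull: squared distance from photon to the node's box.
    d = np.maximum(node.lo - p, 0) + np.maximum(p - node.hi, 0)
    if d @ d > radius * radius:
        return
    if node.idx is not None:                       # leaf: exact tests
        for i in node.idx:
            if np.sum((points[i] - p) ** 2) <= radius * radius:
                accum[i] += power
        return
    splat(node.left, points, accum, p, radius, power)
    splat(node.right, points, accum, p, radius, power)

points = np.array([[0.0, 0, 0], [0.3, 0, 0], [2, 2, 2], [3, 3, 3]])
accum = np.zeros(4)
tree = build(points, leaf=2)
splat(tree, points, accum, np.array([0.0, 0.0, 0.0]), 0.5, 1.0)
```

With many photons of small radius, most traversals terminate at the root or a shallow node, which is the source of the culling win the abstract describes.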
Light Transport Simulation on Special-Purpose Hardware (Lichttransportsimulation auf Spezialhardware)
It cannot be denied that the developments in computer hardware and in computer algorithms strongly influence each other, with new instructions added to help with video processing, encryption, and in many other areas. At the same time, the current cap on single threaded performance and wide availability of multi-threaded processors has increased the focus on parallel algorithms. Both influences are extremely prominent in computer graphics, where the gaming and movie industries always strive for the best possible performance on the current, as well as future, hardware.
In this thesis we examine the hardware-algorithm synergies in the context of ray tracing and Monte-Carlo algorithms. First, we focus on the very basic element of all such algorithms - the casting of rays through a scene, and propose a dedicated hardware unit to accelerate this common operation. Then, we examine existing and novel implementations of many Monte-Carlo rendering algorithms on massively parallel hardware, as full hardware utilization is essential for peak performance. Lastly, we present an algorithm for tackling complex interreflections of glossy materials, which is designed to utilize both powerful processing units present in almost all current computers: the Centeral Processing Unit (CPU) and the Graphics Processing Unit (GPU). These three pieces combined show that it is always important to look at hardware-algorithm mapping on all levels of abstraction: instruction, processor, and machine.Zweifelsohne beeinflussen sich Computerhardware und Computeralgorithmen gegenseitig in ihrer Entwicklung: Prozessoren bekommen neue Instruktionen, um zum Beispiel Videoverarbeitung, Verschlüsselung oder andere Anwendungen zu beschleunigen. Gleichzeitig verstärkt sich der Fokus auf parallele Algorithmen, bedingt durch die limitierte Leistung von für einzelne Threads und die inzwischen breite Verfügbarkeit von multi-threaded Prozessoren. Beide Einflüsse sind im Grafikbereich besonders stark , wo es z.B. für die Spiele- und Filmindustrie wichtig ist, die bestmögliche Leistung zu erreichen, sowohl auf derzeitiger und zukünftiger Hardware.