
    Visually pleasing real-time global illumination rendering for fully-dynamic scenes

    Get PDF
    Global illumination (GI) rendering plays a crucial role in the photo-realistic rendering of virtual scenes. With the rapid development of graphics hardware, GI has become increasingly attractive even for real-time applications. However, computing physically correct global illumination is time-consuming and cannot reach real-time, or even interactive, performance. Although real-time GI is possible using precomputation-based solutions, such solutions cannot handle fully-dynamic scenes. This dissertation focuses on solving these problems by introducing visually pleasing real-time global illumination rendering for fully-dynamic scenes. To this end, we develop a set of novel algorithms and techniques for rendering global illumination effects on graphics hardware. All of these algorithms not only achieve real-time or interactive performance, but also produce quality comparable to previous work in off-line rendering. First, we present a novel implicit visibility technique that circumvents expensive visibility queries in hierarchical radiosity by evaluating visibility implicitly. We then focus on rendering visually plausible soft shadows, the most important GI effect caused by visibility determination. Based on pre-filtering shadow-mapping theory, we successively propose two real-time soft shadow mapping methods: "convolution soft shadow mapping" (CSSM) and "variance soft shadow mapping" (VSSM). Furthermore, we successfully apply our CSSM method to computing shadow effects for indirect lighting. Finally, to explore GI rendering in participating media, we investigate a novel technique for interactively rendering volume caustics in single-scattering participating media.
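    Both soft-shadow methods in this entry rest on shadow tests that remain valid after the shadow map has been pre-filtered. As a rough illustration of that idea (not the dissertation's own algorithms), the sketch below implements the Chebyshev-based test used by variance shadow maps, which VSSM builds on: filter the first two depth moments once, then bound the lit fraction per pixel from the filtered mean and variance. Function and parameter names, the box filter, and the toy scene are assumptions for illustration; SciPy is used only for the filtering step.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter  # box filter stands in for the pre-filtering step

    def chebyshev_visibility(shadow_depth, receiver_depth, kernel=7, min_variance=1e-4):
        """Visibility estimate in [0, 1] from pre-filtered depth moments (1 = fully lit)."""
        # Filter the first two depth moments; because the test below only needs the
        # filtered mean and variance, the shadow map can be pre-convolved once.
        m1 = uniform_filter(shadow_depth, size=kernel)
        m2 = uniform_filter(shadow_depth * shadow_depth, size=kernel)
        variance = np.maximum(m2 - m1 * m1, min_variance)

        # One-sided Chebyshev inequality: an upper bound on the fraction of filtered
        # blocker samples lying in front of the receiver.
        d = receiver_depth - m1
        p_max = variance / (variance + d * d)
        return np.where(receiver_depth <= m1, 1.0, p_max)

    # Toy usage: a flat blocker at depth 0.3 shadowing a receiver plane at depth 0.8;
    # pixels near the blocker's silhouette receive smoothly varying visibility.
    shadow_map = np.full((128, 128), 1.0)
    shadow_map[32:96, 32:96] = 0.3
    receiver = np.full((128, 128), 0.8)
    visibility = chebyshev_visibility(shadow_map, receiver)
    ```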

    NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination

    Full text link
    Recent advances in implicit neural representation have demonstrated the ability to recover detailed geometry and material from multi-view images. However, the use of simplified lighting models such as environment maps to represent non-distant illumination, or using a network to fit indirect light modeling without a solid basis, can lead to an undesirable decomposition between lighting and material. To address this, we propose a fully differentiable framework named neural ambient illumination (NeAI) that uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way. Together with integral lobe encoding for roughness-adaptive specular lobes and leveraging the pre-convoluted background for accurate decomposition, the proposed method represents a significant step towards integrating physically based rendering into the NeRF representation. The experiments demonstrate superior novel-view rendering performance compared to previous works, and the capability to re-render objects under arbitrary NeRF-style environments opens up exciting possibilities for bridging the gap between virtual and real-world scenes. The project and supplementary materials are available at https://yiyuzhuang.github.io/NeAI/.
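    The paper's pre-convoluted representation is specific to its NeRF-based lighting model, but the underlying idea (filtering an environment against specular lobes of increasing width so that shading can look up the level matching surface roughness) can be sketched on an ordinary lat-long map. Everything below, including the names, the tiny map size, and the roughness-to-exponent mapping, is an illustrative assumption rather than the paper's encoding.

    ```python
    import numpy as np

    def latlong_dirs(h, w):
        """Unit directions and solid-angle weights for the texels of a lat-long map."""
        theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle
        phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth
        t, p = np.meshgrid(theta, phi, indexing="ij")
        dirs = np.stack([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p),
                         np.cos(t)], axis=-1)
        weights = np.sin(t) * (np.pi / h) * (2.0 * np.pi / w)
        return dirs.reshape(-1, 3), weights.reshape(-1)

    def preconvolve(env, dirs, weights, exponent):
        """Convolve the environment with a normalized cosine-power lobe.
        Lower exponents correspond to rougher (wider) specular lobes."""
        flat_env = env.reshape(-1, env.shape[-1])
        out = np.empty_like(flat_env)
        for i, d in enumerate(dirs):
            lobe = np.maximum(dirs @ d, 0.0) ** exponent * weights
            out[i] = (lobe[:, None] * flat_env).sum(axis=0) / lobe.sum()
        return out.reshape(env.shape)

    # Build a small stack of pre-convolved maps keyed to roughness, so a shader-like
    # lookup only has to pick (or blend) the level matching the surface roughness.
    H, W = 16, 32
    env = np.random.rand(H, W, 3).astype(np.float32)      # stand-in radiance map
    dirs, weights = latlong_dirs(H, W)
    roughness_levels = [0.1, 0.3, 0.6]
    stack = [preconvolve(env, dirs, weights, exponent=2.0 / (r * r) - 2.0)
             for r in roughness_levels]                   # assumed roughness-to-exponent mapping
    ```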

    An Investigation of How Lighting and Rendering Technology Affects Filmmaking Relative to Arnold’s Transition to a GPU-Based Path-Tracer

    Get PDF
    Computer-generated imagery (CGI) technology enables artists to explore a broad spectrum of approaches and styles, from photorealistic to abstract, expanding the boundaries of traditional aesthetic choices. Recent years have witnessed a shift in 3D-CGI production towards greater physical fidelity, driven by technological developments as well as consumer demand for realistic visuals; this trend can be found across creative fields such as film, video games, and virtual reality, where high-quality textures, lighting, rendering, and physics simulations provide enhanced levels of immersion. Arnold is one of the best-known rendering engines helping artists be more creative while producing photorealistic images. As one of the main path-tracing renderers, it contributes significantly to photorealistic productions. Arnold supports not only CPU rendering but also GPU rendering, taking advantage of faster computation times and real-time interactivity, among other benefits. This study therefore investigates how new technology, such as modern GPUs, helps artists and filmmakers better understand the 3D rendering solutions that shape their workflows. Philosophically exploring the relationship between creative decision-making and technology within 3D photorealistic rendering reveals an intricate yet dynamic interplay that informs the creative processes of independent artists and small studios alike. This interaction serves as a reminder that art is driven forward by its creator's creative energy rather than by technological capability alone; by embracing this dialogue between creativity and technology, artists and studios can continue pushing limits and opening new paths in digital art's fast-evolving realm.

    Doctor of Philosophy

    Get PDF
    Real-time global illumination is the next frontier in real-time rendering. In an attempt to generate realistic images, games have followed the film industry into physically based shading and will soon begin integrating global illumination techniques. Traditional methods require too much memory and too much time to compute for real-time use. With Modular and Delta Radiance Transfer we precompute a scene-independent, low-frequency basis that allows us to perform complex indirect lighting calculations in a much lower-dimensional subspace, with a reduced memory footprint and real-time execution. The results are then applied as a light map on many different scenes. To improve the low-frequency results, we also introduce a novel screen-space ambient occlusion technique that allows us to generate a smoother result with fewer samples. Together, these low- and high-frequency techniques provide a viable indirect lighting solution that can be run in milliseconds on today's hardware, offering a useful new approach to indirect lighting in real-time graphics.
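    The central claim, that a precomputed low-frequency basis turns runtime indirect lighting into cheap subspace arithmetic, can be illustrated with a toy example. This is not the dissertation's actual transfer operator; the sizes and names below are made up. Once a transfer matrix has been precomputed for a module offline, each frame only needs a small matrix-vector product to update the module's light-map contribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes only: k basis functions for incoming direct light,
    # m receiver samples (e.g. light-map texels) inside one precomputed module.
    k, m = 16, 1024

    # Offline step: a full transport operator relating direct light to indirect
    # light at every receiver would be huge; projecting it once into the
    # low-frequency basis leaves only an m x k matrix for runtime use.
    transfer = rng.standard_normal((m, k)) * 0.01   # stand-in for the precomputed operator

    def indirect_lighting(direct_light_coeffs):
        """Runtime step: one small matrix-vector product per module yields the
        indirect contribution that gets accumulated into the light map."""
        return transfer @ direct_light_coeffs

    direct_coeffs = rng.standard_normal(k)          # direct lighting projected into the basis
    lightmap_delta = indirect_lighting(direct_coeffs)
    print(lightmap_delta.shape)                     # (1024,): one value per receiver texel
    ```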

    A simplified HDR image processing pipeline for digital photography

    Get PDF
    High Dynamic Range (HDR) imaging has revolutionized digital imaging. It allows the capture, storage, manipulation, and display of the full dynamic range of a scene. As a result, it has spawned whole new possibilities for digital photography, from photorealistic to hyper-real. With these advantages, the technique is expected to replace conventional 8-bit Low Dynamic Range (LDR) imaging in the future. However, HDR results in an even more complex imaging pipeline, including new techniques for capturing, encoding, and displaying images. The goal of this thesis is to bridge the gap between the conventional imaging pipeline and the HDR pipeline in as simple a way as possible. We make three contributions. First, we show that a simple extension of gamma encoding suffices as a representation to store HDR images. Second, we show that gamma, as a control for image contrast, can be 'optimally' tuned on a per-image basis. Lastly, we show that a general tone curve, with detail preservation, suffices to tone-map an image (there is only limited need for expensive spatially varying tone mappers). All three of our contributions are evaluated psychophysically. Together they support our general thesis that an HDR workflow similar to that already used in photography can be adopted. This said, we believe the adoption of HDR into photography is, perhaps, less difficult than it is sometimes posed to be.
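    The three contributions amount to a very small pipeline: gamma-encode the HDR data into a limited number of code values, then apply a single global tone curve for display. The sketch below is a minimal stand-in under stated assumptions: a fixed gamma of 2.2 instead of the thesis's per-image 'optimal' gamma, and a Reinhard-style curve instead of its detail-preserving tone curve; all function and parameter names are illustrative.

    ```python
    import numpy as np

    def gamma_encode(hdr, gamma=2.2, bits=10):
        """Store HDR radiance with a plain power-law (gamma) code word."""
        peak = hdr.max()
        code = (hdr / peak) ** (1.0 / gamma)        # compress highlights before quantization
        levels = 2 ** bits - 1
        return np.round(code * levels).astype(np.uint16), peak

    def gamma_decode(code, peak, gamma=2.2, bits=10):
        levels = 2 ** bits - 1
        return ((code / levels) ** gamma) * peak

    def global_tone_curve(hdr, key=0.18):
        """A simple global (spatially uniform) tone curve in the Reinhard style,
        standing in for the thesis's general detail-preserving curve."""
        lum = hdr.mean(axis=-1, keepdims=True) + 1e-6
        log_avg = np.exp(np.log(lum).mean())        # scene 'key' from the log-average luminance
        scaled = key / log_avg * hdr
        return scaled / (1.0 + scaled)              # maps [0, inf) into [0, 1)

    # Round-trip a synthetic HDR image through the simplified pipeline.
    hdr = np.random.rand(64, 64, 3).astype(np.float32) ** 4 * 100.0   # wide dynamic range
    code, peak = gamma_encode(hdr)
    recovered = gamma_decode(code, peak)
    ldr = global_tone_curve(recovered)
    ```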