
    Temporal light field reconstruction for rendering distribution effects

    Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand. In this paper, we describe a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor. We show that our technique can be applied in situations that are challenging or impossible for previous anisotropic reconstruction methods, and that it can yield good results with very sparse inputs. We demonstrate our method for simultaneous motion blur, depth of field, and soft shadows.
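    As a rough illustration of the sample-reuse idea described above, the sketch below reprojects time-stamped samples from neighboring pixels along their screen-space velocities before weighting them, which is the anisotropy the reconstruction exploits. The record layout (x, y, t, vx, vy, radiance) and the fixed Gaussian weight are illustrative assumptions, not the paper's actual filter.

```python
import numpy as np

def reproject_and_filter(samples, pixel_center, t_reconstruct):
    """Reuse samples from nearby pixels by sliding each one along its
    screen-space velocity to the reconstruction time, then weighting by
    distance to the target pixel. Hypothetical record layout:
    (x, y, t, vx, vy, radiance)."""
    value, weight_sum = 0.0, 0.0
    for x, y, t, vx, vy, radiance in samples:
        dt = t_reconstruct - t
        # Shear the sample along its trajectory to the reconstruction time.
        x_r, y_r = x + vx * dt, y + vy * dt
        d2 = (x_r - pixel_center[0]) ** 2 + (y_r - pixel_center[1]) ** 2
        w = np.exp(-0.5 * d2)  # simple Gaussian pixel filter
        value += w * radiance
        weight_sum += w
    return value / max(weight_sum, 1e-8)
```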

    Factored axis-aligned filtering for rendering multiple distribution effects

    Monte Carlo (MC) ray tracing for photo-realistic rendering often requires hours to render a single image due to the large sampling rates needed for convergence. Previous methods have attempted to filter sparsely sampled MC renders, but these methods have high reconstruction overheads. Recent work has shown fast performance for individual effects, like soft shadows and indirect illumination, using axis-aligned filtering. While some components of light transport, such as indirect or area illumination, are smooth, they are often multiplied by high-frequency components such as texture, which prevents their sparse sampling and reconstruction. We propose an approach to adaptively sample and filter for simultaneously rendering primary (defocus blur) and secondary (soft shadows and indirect illumination) distribution effects, based on a multi-dimensional frequency analysis of the direct and indirect illumination light fields. We describe a novel approach of factoring texture and irradiance in the presence of defocus blur, which allows for pre-filtering noisy irradiance when the texture is not noisy. Our approach naturally allows for different sampling rates for primary and secondary effects, further reducing the overall ray count. While the theory considers only Lambertian surfaces, we obtain promising results for moderately glossy surfaces. We demonstrate a 30x sampling rate reduction compared to equal-quality noise-free MC. Combined with a GPU implementation and low filtering overhead, we can render scenes with complex geometry and diffuse and glossy BRDFs in a few seconds. Funding: National Science Foundation (U.S.) (Grant CGV 1115242); National Science Foundation (U.S.) (Grant CGV 1116303); Intel Corporation (Science and Technology Center for Visual Computing).
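    The factoring step mentioned above can be pictured as dividing out the (noise-free) texture so that only the smooth but noisy irradiance is filtered, then multiplying the texture back in. The sketch below assumes a Lambertian decomposition, radiance roughly equal to texture times irradiance, and a fixed-width Gaussian in place of the paper's frequency-derived per-pixel filter widths.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def factored_prefilter(noisy_radiance, texture, sigma=2.0):
    """Filter noisy irradiance without blurring texture detail.
    Assumes per-pixel radiance ~= texture * irradiance (Lambertian);
    `sigma` stands in for the frequency-analysis-derived filter width."""
    eps = 1e-6
    irradiance = noisy_radiance / (texture + eps)    # factor out texture
    irradiance = gaussian_filter(irradiance, sigma)  # smooth the noisy factor
    return texture * irradiance                      # recombine
```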

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow the computation to be performed at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first is targeted at rendering screen-space anti-aliased images and reducing the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budgets.
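    For orientation, the sketch below shows the traditional single-scattering ray marcher that such work takes as its starting point: it steps along the ray, accumulating in-scattered light attenuated by Beer-Lambert transmittance. The `density_fn` and `light_fn` callbacks are placeholders for the volume data and illumination model being optimized, not an interface from the thesis.

```python
import numpy as np

def ray_march(origin, direction, density_fn, light_fn, step=0.1, t_max=10.0):
    """Minimal ray marcher for a heterogeneous participating medium.
    `density_fn(p)` returns extinction at point p, `light_fn(p)` the
    in-scattered radiance; both are illustrative placeholders."""
    radiance, transmittance, t = 0.0, 1.0, 0.0
    while t < t_max and transmittance > 1e-3:
        p = origin + t * direction
        sigma_t = density_fn(p)
        # In-scattered light, attenuated by what the ray already traversed.
        radiance += transmittance * sigma_t * light_fn(p) * step
        # Beer-Lambert absorption over the step.
        transmittance *= np.exp(-sigma_t * step)
        t += step
    return radiance
```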

    The Role of Light and Shadow in the Perception of Photographs

    The photographer's awareness of the light in the scene is the key to good exposure, whether it is technically correct or an expression of creativity. The article deals with the role of light and its absence, shadow, in the field of photography. Several studies from history and the present are presented, linking the perception of light and its shadows from philosophical, artistic, and technological points of view, illustrating the vastness and applicability of this field of research. Summarized studies and an eye-tracking analysis of shadow perception support the author's claims about the role of light and shadow in the perception of Henri Cartier-Bresson's photographs. It is shown how shadows can reveal more to the viewer about the space surrounding the observed scene, function as individual objects, or even be used creatively to enhance the detection of focus within a photograph.

    Foveated Path Tracing with Fast Reconstruction and Efficient Sample Distribution

    Photo-realistic offline rendering is currently done with path tracing, because it naturally produces many real-life light effects, such as reflections, refractions, and caustics, that are hard to achieve with other rendering techniques. However, path tracing in real time is complicated by its high computational demand. Therefore, current real-time path tracing systems can only generate a very noisy estimate of the final frame, which is then denoised with a post-processing reconstruction filter. A path tracing-based rendering system capable of meeting the high-resolution, low-latency requirements of mixed reality devices would generate a very immersive user experience. One possible solution for fulfilling these requirements could be foveated path tracing, wherein the rendering resolution is reduced in the periphery of the human visual system. The key challenge is that the foveated path tracing result in the periphery is both sparse and noisy, placing high demands on the reconstruction filter. This thesis proposes the first regression-based reconstruction filter for path tracing that runs in real time. The filter is designed for highly noisy one-sample-per-pixel inputs. The fast execution is accomplished with blockwise processing and a fast implementation of the regression. In addition, a novel Visual-Polar coordinate space, which distributes the samples according to the contrast sensitivity model of the human visual system, is proposed. The specialty of Visual-Polar space is that it reduces both path tracing and reconstruction work, because both can be done at a smaller resolution. These techniques enable a working prototype of a foveated path tracing system and may work as a stepping stone towards wider commercial adoption of photo-realistic real-time path tracing.
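    To give a concrete flavor of how a foveated sample distribution can be generated, the sketch below places screen-space samples densely near the gaze point and exponentially more sparsely towards the periphery. It uses a plain log-polar falloff as a stand-in for the Visual-Polar mapping; the actual space in the thesis follows a contrast sensitivity model of the eye, which this simplified version does not reproduce.

```python
import numpy as np

def foveated_sample_positions(gaze, n_radial=64, n_angular=128, max_radius=960.0):
    """Screen-space sample positions that are dense near `gaze` and sparse
    in the periphery (log-polar stand-in for the Visual-Polar mapping)."""
    # Radii grow exponentially, so sample density drops with eccentricity.
    radii = max_radius * (np.exp(np.linspace(0.0, 1.0, n_radial)) - 1.0) / (np.e - 1.0)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    r, a = np.meshgrid(radii, angles)
    xs = gaze[0] + r * np.cos(a)
    ys = gaze[1] + r * np.sin(a)
    return np.stack([xs.ravel(), ys.ravel()], axis=1)
```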

    Frequency Analysis and Sheared Reconstruction for Rendering Motion Blur

    Motion blur is crucial for high-quality rendering but is also very expensive. Our first contribution is a frequency analysis of motion-blurred scenes, including moving objects, specular reflections, and shadows. We show that motion induces a shear in the frequency domain, and that the spectrum of moving scenes is usually contained in a wedge. This allows us to compute adaptive space-time sampling rates to accelerate rendering. For uniform velocities and standard axis-aligned reconstruction, we show that the product of spatial and temporal bandlimits or sampling rates is constant, independent of velocity. Our second contribution is a novel sheared reconstruction filter that tightly packs the wedge of frequencies in the Fourier domain and enables even lower sampling rates. We present a rendering algorithm that computes a sheared reconstruction filter per pixel, without any intermediate Fourier representation. This often permits synthesis of motion-blurred images with far fewer rendering samples than standard techniques require.
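    The shear statement can be made concrete with the standard Fourier argument for a pattern translating at uniform velocity; the derivation below is a textbook computation consistent with the abstract, not an excerpt from the paper's analysis.

```latex
% Spectrum of a pattern g translating at uniform velocity v: f(x,t) = g(x - vt).
\begin{align*}
F(\Omega_x,\Omega_t)
  &= \int\!\!\int g(x - vt)\, e^{-i(\Omega_x x + \Omega_t t)}\, dx\, dt \\
  &= \int \Big[\int g(u)\, e^{-i\Omega_x u}\, du\Big]
       e^{-i(\Omega_x v + \Omega_t)\,t}\, dt && (u = x - vt) \\
  &= 2\pi\, G(\Omega_x)\, \delta(\Omega_t + v\,\Omega_x).
\end{align*}
% The spectrum therefore lies on the sheared line \Omega_t = -v\,\Omega_x;
% for velocities bounded by v_{\max} it is contained in the wedge
% |\Omega_t| \le v_{\max}\,|\Omega_x|, which is what the adaptive sampling
% and the sheared reconstruction filter exploit.
```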