200 research outputs found

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models of surface reflection and lighting have been described, including models for solid surfaces and participating media, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance, but they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only a limited impact on the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow the computation to run at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated along rasterized lines and edges; we expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with little performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly, a process known to be one of the heaviest burdens on any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments aimed at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
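    As a point of reference for the ray marching baseline mentioned in this abstract, a minimal single-scattering marcher can be sketched as below (a toy illustration only, not the thesis' implementation; the `density` field, the constant light term, and all coefficients are hypothetical). The early-termination test is one example of the kind of optimization that pushes such a loop toward interactive frame rates.

```python
import numpy as np

def ray_march(density, origin, direction, step, n_steps,
              sigma_a=1.0, sigma_s=0.5, light=1.0):
    """Minimal single-scattering ray marcher through a participating medium.

    `density(p)` returns the medium density at a 3-D point; lighting is a
    constant ambient term for brevity (a real renderer would march a
    secondary ray toward each light source).
    """
    transmittance = 1.0
    radiance = 0.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        rho = density(p)
        sigma_t = rho * (sigma_a + sigma_s)        # extinction at this sample
        transmittance *= np.exp(-sigma_t * step)   # Beer-Lambert attenuation
        radiance += transmittance * rho * sigma_s * light * step  # in-scattering
        p = p + d * step
        if transmittance < 1e-4:                   # early termination: medium is opaque
            break
    return radiance, transmittance
```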

    Solutions to aliasing in time-resolved flow data

    Avoiding aliasing in time-resolved flow data obtained through high-fidelity simulations while keeping the computational and storage costs at acceptable levels is often a challenge. Well-established solutions such as increasing the sampling rate or low-pass filtering to reduce aliasing can be prohibitively expensive for large data sets. This paper provides a set of alternative strategies for identifying and mitigating aliasing that are applicable even to large data sets. We show how time-derivative data, which can be obtained directly from the governing equations, can be used to detect aliasing and to turn the ill-posed problem of removing aliasing from data into a well-posed problem, yielding a prediction of the true spectrum. Similarly, we show how spatial filtering can be used to remove aliasing for convective systems. We also propose strategies to avoid aliasing when generating a database, including a method tailored for computing nonlinear forcing terms that arise within the resolvent framework. These methods are demonstrated using large-eddy simulation (LES) data for a subsonic turbulent jet and a nonlinear Ginzburg-Landau model.
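    To see why joint access to a signal and its time derivative makes de-aliasing well-posed, consider a 1-D toy version (an illustration of the general principle only, not the paper's method as applied to LES data; the signal and rates below are made up). Sampling at rate fs folds the true frequencies f and f - fs onto the same DFT bin; the derivative samples contribute a second, differently weighted equation per bin, so each bin becomes a solvable 2x2 system.

```python
import numpy as np

fs, n = 100.0, 512                        # sampling rate (Hz) and sample count
t = np.arange(n) / fs
f1, f2 = 12.5, 62.5                       # f2 > fs/2: aliased if only x were stored
x  = np.sin(2*np.pi*f1*t) + 0.5*np.sin(2*np.pi*f2*t)
dx = (2*np.pi*f1*np.cos(2*np.pi*f1*t)     # exact derivative, as the governing
      + 0.5*2*np.pi*f2*np.cos(2*np.pi*f2*t))  # equations would provide

Y = np.fft.fft(x) / n                     # aliased spectrum of the samples
D = np.fft.fft(dx) / n                    # spectrum of the derivative samples

# Bin k mixes the true spectrum at f_k and at f_k - fs:
#   Y[k] =                X(f_k) +                  X(f_k - fs)
#   D[k] = i*2*pi*f_k   * X(f_k) + i*2*pi*(f_k-fs)* X(f_k - fs)
# Two equations per bin: the ill-posed de-aliasing becomes a 2x2 solve.
f_k  = np.arange(n) * fs / n
X_lo = (D - 1j*2*np.pi*(f_k - fs) * Y) / (1j*2*np.pi*fs)  # true band [0, fs)
X_hi = Y - X_lo                                            # true band [-fs, 0)
```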

    Frequency Based Radiance Cache for Rendering Animations

    We propose a method to render animation sequences with direct distant lighting that shades only a fraction of the total pixels. We leverage frequency-based analyses of light transport to determine shading and image sampling rates across an animation using a sample cache. To do so, we derive frequency bandwidths that account for the complexity of distant lights, visibility, BRDF, and temporal coherence during animation. We finally apply a cross-bilateral filter when rendering our final images from sparse sets of shading points placed according to our frequency-based oracles (generally fewer than 25% of the pixels per frame).
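    The reconstruction stage described here is a cross-bilateral filter over sparse shading points; a minimal sketch of such a filter (with depth as the only guide feature and fixed, made-up bandwidths rather than the paper's frequency-derived ones) might look as follows.

```python
import numpy as np

def cross_bilateral_fill(shaded, mask, depth, radius=4, sigma_s=2.0, sigma_d=0.05):
    """Fill unshaded pixels from sparse shading samples.

    shaded : HxW radiance, valid only where `mask` is True
    mask   : HxW bool, True at the sparsely shaded pixels
    depth  : HxW guide feature steering the edge-stopping weight
    Weights combine spatial distance and depth similarity, so shading is
    not smeared across geometric edges.
    """
    h, w = shaded.shape
    out = np.zeros_like(shaded)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            m = mask[y0:y1, x0:x1]
            if not m.any():
                continue  # no sample in reach; a real system widens the search
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            w_d = np.exp(-((depth[y0:y1, x0:x1] - depth[y, x])**2) / (2 * sigma_d**2))
            wgt = w_s * w_d * m
            out[y, x] = (wgt * shaded[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```

    In a production version the sigma values would be driven by the derived frequency bandwidths; here they are constants for brevity.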

    Efficient global illumination for dynamic scenes

    The production of high-quality animations featuring compelling lighting effects is computationally very heavy when traditional rendering approaches are used, where each frame is computed separately. Because most of the computation must be restarted from scratch for each frame, much of it is redundant. Since temporal coherence is typically not exploited, temporal aliasing problems are also more difficult to address. Many small errors in the lighting distribution cannot be perceived by human observers as long as they are coherent in the temporal domain; when such coherence is lost, however, the resulting animations suffer from unpleasant flickering. In this thesis, we propose global illumination and rendering algorithms designed specifically to combat these problems. We achieve this goal by exploiting temporal coherence in the lighting distribution between subsequent animation frames. Our strategy relies on extending well-known global illumination and rendering techniques such as density estimation, path tracing, photon mapping, ray tracing, and irradiance caching, which were originally designed for static scenes only, into the temporal domain. Our techniques mainly focus on the computation of indirect illumination, which is the most expensive part of global illumination modelling.
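    As an illustration of what extending such a technique into the temporal domain can look like, the sketch below reuses irradiance-cache records across frames and ages them out instead of recomputing everything per frame (a simplified stand-in with hypothetical `lookup` and `compute_irradiance` helpers; the thesis' actual invalidation criteria are more elaborate).

```python
from dataclasses import dataclass

@dataclass
class CacheRecord:
    position: tuple    # world-space position of the record
    irradiance: float
    age: int = 0       # frames since the record was (re)computed

MAX_AGE = 8            # hypothetical refresh interval, in frames

def shade_frame(records, query_points, compute_irradiance, lookup):
    """One frame of a temporally coherent irradiance cache (sketch).

    lookup(records, p) returns a reusable nearby record or None;
    compute_irradiance(p) is the expensive hemispherical gather.
    Reusing records across frames keeps small errors coherent between
    frames, avoiding flicker; aging bounds the error that can
    accumulate in dynamic scenes.
    """
    result = []
    for p in query_points:
        rec = lookup(records, p)
        if rec is None:
            rec = CacheRecord(p, compute_irradiance(p))  # miss: full gather
            records.append(rec)
        elif rec.age >= MAX_AGE:
            rec.irradiance = compute_irradiance(p)       # stale: refresh in place
            rec.age = 0
        result.append(rec.irradiance)
    for rec in records:
        rec.age += 1
    return result
```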

    Perception-based global illumination, rendering, and animation techniques

    Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames

    Recent work has shown that distributing Monte Carlo errors as a blue noise in screen space improves the perceptual quality of rendered images. However, obtaining such distributions remains an open problem with high sample counts and high-dimensional rendering integrals. In this paper, we introduce a temporal algorithm that aims at overcoming these limitations. Our algorithm is applicable whenever multiple frames are rendered, typically for animated sequences or interactive applications. It locally permutes the pixel sequences (represented by their seeds) to improve the error distribution across frames. Our approach works regardless of the sample count or the dimensionality, significantly improves the images in low-varying screen-space regions under coherent motion, and adds negligible overhead compared to the rendering times.
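    A minimal version of the per-tile seed permutation could look like the sketch below (a simplification under assumed inputs: a scalar per-pixel value from the previous frame standing in for the error, and a precomputed blue-noise mask; the paper's actual permutation strategy is more involved).

```python
import numpy as np

TILE = 8  # permutations stay inside small tiles so image structure is preserved

def permute_seeds(seeds, prev_values, mask):
    """Reassign per-pixel sampler seeds so errors distribute as blue noise.

    seeds       : HxW integer seeds driving each pixel's sample sequence
    prev_values : HxW scalars from the previous frame (proxy for the error)
    mask        : HxW precomputed blue-noise dither mask in [0, 1)
    Within each tile, the seed that produced the k-th smallest value is
    moved to the pixel holding the k-th smallest mask value, so the
    screen-space ordering of errors follows the blue-noise mask.
    """
    h, w = seeds.shape
    out = seeds.copy()
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            block = np.s_[ty:ty + TILE, tx:tx + TILE]
            s = seeds[block].ravel()
            v = prev_values[block].ravel()
            m = mask[block].ravel()
            perm = np.empty_like(s)
            perm[np.argsort(m)] = s[np.argsort(v)]  # rank-match values to mask
            out[block] = perm.reshape(seeds[block].shape)
    return out
```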