7,201 research outputs found

    Deferred Shading

    Get PDF
    This work deals with the design and implementation of an educational application for demonstrating the deferred shading technique and its possibilities. In an intuitive and interactive way it explains the principle of lighting and shading only the visible pixels of the image, based on an attribute map created during rasterization of the geometry. Deferred shading thus determines the final pixel colour only after the geometry of the entire scene has been rasterized; in other words, geometry processing is completely separated from the shading process.
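
    A minimal CPU-side sketch of the two-pass structure described in this abstract (a geometry pass writes a per-pixel attribute map, then a lighting pass shades only the visible pixels) is given below. The buffer layout, light model and all names are illustrative assumptions, not taken from the thesis.

        # A minimal CPU sketch of the two-pass idea behind deferred shading.
        # All names (gbuffer fields, light parameters) are illustrative assumptions.
        import numpy as np

        W, H = 4, 4  # tiny "screen" so the example stays readable

        # --- Geometry pass: rasterization writes per-pixel attributes, no lighting yet.
        gbuffer = {
            "albedo": np.full((H, W, 3), 0.8),              # surface colour
            "normal": np.tile([0.0, 0.0, 1.0], (H, W, 1)),  # world-space normals
            "depth":  np.ones((H, W)),                      # could be used to reject empty pixels
        }

        # --- Lighting pass: shade only the pixels stored in the attribute map.
        light_dir = np.array([0.3, 0.4, 0.866])
        light_dir /= np.linalg.norm(light_dir)

        n_dot_l = np.clip(np.einsum("hwc,c->hw", gbuffer["normal"], light_dir), 0.0, 1.0)
        color = gbuffer["albedo"] * n_dot_l[..., None]      # simple Lambertian term

        print(color[0, 0])  # shaded value of one pixel

    The point of the split is visible in the structure: the lighting loop never touches scene geometry, only the per-pixel attributes produced by rasterization.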

    Implementació de deferred shading

    Get PDF
    The objective of this project is to implement and compare some of the different shading techniques used to render scenes with dynamic lighting, and to examine the advantages and drawbacks each of them entails. The techniques compared are: Forward Rendering, Deferred Shading and Forward
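
    A back-of-the-envelope sketch of the trade-off such a comparison measures: forward rendering shades every rasterized fragment for every light, while deferred shading shades each visible pixel once per light. The scene figures below are invented for illustration only.

        # Rough comparison of shading work for forward vs. deferred rendering.
        # Resolution, light count and overdraw factor are made-up example numbers.
        num_lights = 32
        pixels = 1920 * 1080
        overdraw = 3.0            # average fragments rasterized per screen pixel

        # Forward rendering: every rasterized fragment is shaded for every light,
        # including fragments later overwritten by closer geometry.
        forward_shading_ops = int(pixels * overdraw * num_lights)

        # Deferred shading: the lighting pass runs once per visible pixel per light,
        # regardless of the scene's depth complexity.
        deferred_shading_ops = pixels * num_lights

        print(f"forward : {forward_shading_ops:,} shading evaluations")
        print(f"deferred: {deferred_shading_ops:,} shading evaluations")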

    Training and Predicting Visual Error for Real-Time Applications

    Full text link
    Visual error metrics play a fundamental role in the quantification of perceived image similarity. Most recently, use cases for them in real-time applications have emerged, such as content-adaptive shading and shading reuse to increase performance and improve efficiency. A wide range of different metrics has been established, with the most sophisticated being capable of capturing the perceptual characteristics of the human visual system. However, their complexity, computational expense, and reliance on reference images to compare against prevent their generalized use in real-time, restricting such applications to using only the simplest available metrics. In this work, we explore the abilities of convolutional neural networks to predict a variety of visual metrics without requiring either reference or rendered images. Specifically, we train and deploy a neural network to estimate the visual error resulting from reusing shading or using reduced shading rates. The resulting models account for 70%-90% of the variance while achieving up to an order of magnitude faster computation times. Our solution combines image-space information that is readily available in most state-of-the-art deferred shading pipelines with reprojection from previous frames to enable an adequate estimate of visual errors, even in previously unseen regions. We describe a suitable convolutional network architecture and considerations for data preparation for training. We demonstrate the capability of our network to predict complex error metrics at interactive rates in a real-time application that implements content-adaptive shading in a deferred pipeline. Depending on the portion of unseen image regions, our approach can achieve up to 2× performance compared to state-of-the-art methods.
    Comment: Published at Proceedings of the ACM in Computer Graphics and Interactive Techniques. 14 pages, 16 figures, 3 tables. For the paper website and higher quality figures, see https://jaliborc.github.io/rt-percept
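
    The sketch below only illustrates the kind of input assembly the abstract mentions: G-buffer channels that a deferred pipeline already provides, combined with shading reprojected from the previous frame, stacked into a feature tensor for a convolutional predictor. Channel names, shapes and the validity mask are assumptions; the actual architecture and training procedure are described in the paper.

        # Hedged sketch of assembling network inputs from deferred-shading data.
        # Channel selection and resolution are assumptions, not the paper's exact setup.
        import numpy as np

        H, W = 270, 480  # a reduced-resolution estimation grid (assumed)

        gbuffer_channels = {
            "normal":    np.zeros((H, W, 3)),
            "depth":     np.zeros((H, W, 1)),
            "roughness": np.zeros((H, W, 1)),
        }
        reprojected_prev_shading = np.zeros((H, W, 3))   # previous-frame colour warped to this frame
        reprojection_valid = np.ones((H, W, 1))          # 0 where no history exists (unseen regions)

        features = np.concatenate(
            [*gbuffer_channels.values(), reprojected_prev_shading, reprojection_valid],
            axis=-1,
        )
        print(features.shape)  # (270, 480, 9): per-pixel input to the convolutional predictor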

    Decoupled deferred shading for hardware rasterization

    Full text link

    Practical morphological antialiasing on the GPU

    Get PDF
    The subject of antialiasing techniques has been actively explored for the past 40 years. The classical approach involves computing the average of multiple samples for each final sample. Graphics hardware vendors implement various refinements of these algorithms. Computing multiple samples (MSAA) can be very costly depending on the complexity of the shading, or in the case of raytracing. Moreover, image-space techniques like deferred shading are incompatible with hardware implementations of MSAA, since the lighting stage is decorrelated from the geometry stage. A filter-based approach called Morphological Antialiasing (MLAA) was recently introduced [2009]. This technique does not need multiple samples and can be implemented efficiently on the CPU using vector instructions. However, this filter is not linear and requires deep branching and image-wise knowledge, which can be very inefficient on graphics hardware. We introduce an efficient adaptation of the MLAA algorithm running flawlessly on medium-range GPUs.
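
    A much-simplified sketch of the MLAA idea summarized above: detect colour discontinuities between neighbouring pixels and blend across them. Real MLAA additionally classifies edge patterns (L, Z and U shapes) to derive coverage-based blending weights; the threshold and the fixed 0.5 weight below are illustrative assumptions.

        # Greatly simplified illustration of the MLAA principle: edge detection
        # followed by blending across the detected discontinuities.
        import numpy as np

        img = np.zeros((8, 8))
        img[:, 4:] = 1.0                      # a hard vertical edge to smooth
        threshold = 0.1                       # assumed discontinuity threshold

        # 1) Edge detection: flag pixels whose right neighbour differs noticeably.
        edge_right = np.abs(img[:, :-1] - img[:, 1:]) > threshold

        # 2) Blending: average each flagged pixel with its right neighbour
        #    (a stand-in for MLAA's pattern-based coverage weights).
        smoothed = img.copy()
        rows, cols = np.where(edge_right)
        smoothed[rows, cols] = 0.5 * (img[rows, cols] + img[rows, cols + 1])

        print(smoothed[0])  # the edge column now holds a blended intermediate value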