60 research outputs found

    Ray Tracing Gems

    Get PDF
    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU
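
    As a concrete anchor for the operation these APIs and this hardware accelerate, the sketch below shows the single step at the heart of any ray tracer: finding the nearest hit of a ray against a primitive. It is a generic CPU illustration, not code from the book; all names are ours.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 origin, dir; };             // dir is assumed normalized
struct Sphere { Vec3 center; float radius; };

// Nearest positive intersection parameter t, if the ray hits the sphere.
std::optional<float> intersect(const Ray& r, const Sphere& s) {
    Vec3 oc = r.origin - s.center;
    float b = dot(oc, r.dir);                 // half the linear coefficient
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;     // ray misses the sphere
    float t = -b - std::sqrt(disc);           // nearer of the two roots
    if (t <= 0.0f) return std::nullopt;       // intersection behind the origin
    return t;
}
```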

    Splatting multiresolution volume data using the feature graph

    Get PDF
    We propose to represent classified datasets as a feature graph that stores different graphical models and attributes for each feature. This graph allows us to render each feature according to its own characteristics. In addition, we show that various features of the graph storing volume information at different resolution levels can be rendered together using a view-aligned splatting method. Moreover, we propose a 2D kernel function for splats that is easy to tune and generates smaller footprints, reducing render time. Our algorithm produces images with less blur: it enhances the boundaries of features while avoiding the subdivision of homogeneous regions of the volume.
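
    The paper's exact 2D kernel is not reproduced here; as a hedged illustration of a tunable, compact splat footprint, the following stand-in uses a truncated Gaussian whose single sharpness parameter trades footprint size against smoothness.

```cpp
#include <cmath>

// Weight of a splat at normalized footprint radius r in [0, 1].
// k is an assumed tuning knob: larger k gives a steeper falloff,
// a smaller effective footprint, and hence less blur.
float splatWeight(float r, float k) {
    if (r >= 1.0f) return 0.0f;                  // truncate outside the footprint
    return std::exp(-k * r * r) - std::exp(-k);  // shifted so the edge reaches 0
}
```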

    Real-time rendering of large surface-scanned range data natively on a GPU

    Get PDF
    This thesis presents research carried out for the visualisation of surface anatomy data stored as large range images, such as those produced by stereo-photogrammetric and other triangulation-based capture devices. As part of this research, I explored the use of points rather than polygons as the rendering primitive, and the use of range images as the native data representation. Using points as the display primitive required the creation of a pipeline that solved problems associated with point-based rendering. The problems investigated were scattered-data interpolation (a common problem with point-based rendering), multi-view rendering, multi-resolution representations, anti-aliasing, and hidden-point removal. In addition, an efficient real-time implementation on the GPU was carried out.
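
    To illustrate what treating range images as the native representation buys (a generic sketch under assumed pinhole intrinsics, not the thesis pipeline): every valid pixel of a range image unprojects directly to a 3D point, so the scan itself serves as the point set.

```cpp
#include <vector>

struct Point3 { float x, y, z; };

// Unproject a depth/range image into a point cloud. fx, fy, cx, cy are
// assumed pinhole camera intrinsics; zero depth marks a hole in the scan.
std::vector<Point3> unprojectRangeImage(const std::vector<float>& depth,
                                        int width, int height,
                                        float fx, float fy, float cx, float cy) {
    std::vector<Point3> points;
    points.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float z = depth[v * width + u];
            if (z <= 0.0f) continue;             // skip holes
            points.push_back({ (u - cx) * z / fx, (v - cy) * z / fy, z });
        }
    }
    return points;
}
```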

    Optimization techniques for computationally expensive rendering algorithms

    Get PDF
    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance, but they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results.
    The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray-marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates.
    This thesis will also analyze two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact.
    A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly; this process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budgets.
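
    The traditional ray marching that the first part takes as its starting point can be sketched as follows. This is a minimal illustration with a homogeneous stand-in medium, not the thesis's optimized version: step along the ray, accumulate the source term weighted by transmittance, and attenuate by Beer-Lambert absorption.

```cpp
#include <cmath>

struct MediumSample { float sigma_t; float emission; }; // extinction, source term

// Homogeneous stand-in; a real implementation samples a 3D volume texture.
MediumSample sampleMedium(float /*t*/) { return {0.5f, 1.0f}; }

// March from tNear to tFar, returning the radiance reaching the ray origin.
float rayMarch(float tNear, float tFar, float stepSize) {
    float transmittance = 1.0f;
    float radiance = 0.0f;
    for (float t = tNear; t < tFar && transmittance > 0.001f; t += stepSize) {
        MediumSample m = sampleMedium(t);
        radiance += transmittance * m.emission * stepSize; // accumulate source
        transmittance *= std::exp(-m.sigma_t * stepSize);  // absorb over the step
    }
    return radiance;
}
```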

    Image-space visibility ordering for cell projection volume rendering of unstructured data

    Full text link

    Real-Time deep image rendering and order independent transparency

    Get PDF
    In computer graphics some operations can be performed in either object space or image space. Image-space computation can be advantageous, especially with the high parallelism of GPUs, improving speed, accuracy and ease of implementation. For many image-space techniques the information contained in regular 2D images is limiting. Recent graphics hardware features, namely atomic operations and dynamic memory location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass, in what we call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, providing new possibilities in image-based rendering techniques. This thesis investigates deep images and their growing use in real-time image-space applications. A focus is new techniques for improving the performance of fundamental operations, including construction, storage, fast fragment sorting and sampling. A core and driving application is order-independent transparency (OIT). A number of deep image sorting improvements are presented, through which an order-of-magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering, we look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment connectivity approach. Using these ideas, a more computationally complex application is investigated: image-based depth of field (DoF). Deep images are used to provide partial occlusion, and in particular a form of deep image mipmapping allows a fast approximate defocus blur of up to full screen size.
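
    The OIT core that the sorting improvements accelerate can be sketched as follows (a hedged CPU illustration; the fragment layout and names are ours, and the thesis's GPU sorting is far more elaborate): gather a pixel's fragments from the deep image, sort by depth, and composite front-to-back with the "over" operator.

```cpp
#include <algorithm>
#include <vector>

struct Fragment { float depth; float rgba[4]; }; // premultiplied alpha assumed

// Sort a pixel's fragments front-to-back and composite them into out[4].
void compositePixel(std::vector<Fragment>& frags, float out[4]) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });
    out[0] = out[1] = out[2] = 0.0f;
    out[3] = 0.0f;                               // accumulated alpha
    for (const Fragment& f : frags) {
        float vis = 1.0f - out[3];               // visibility behind nearer fragments
        for (int c = 0; c < 3; ++c) out[c] += vis * f.rgba[c];
        out[3] += vis * f.rgba[3];
    }
}
```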

    A shading reuse method for efficient micropolygon ray tracing

    Full text link

    Real-time photon mapping: applying indirect illumination to dynamic environments in real time

    Get PDF
    The focus of this thesis is to provide better methods to simulate the behaviour of light in the synthesis of photo-realistic images for real-time applications. The improvements introduced in this work relate to the indirect component of illumination, also known as global illumination, in which the contributing light has already been reflected from a surface at least once. While there are a number of effective global illumination techniques based on precomputation that work well with static scenes, including global illumination for scenes with dynamic lighting and dynamic geometry remains a challenging problem. In this thesis, we describe a real-time global illumination algorithm based on photon mapping that evaluates several bounces of indirect lighting without any precomputed data in scenes with both dynamic lighting and fully dynamic geometry. To make photon mapping possible within the performance limitations of real-time rendering, we utilize and expand on several optimization methods, such as reflective shadow maps, stratified sampling and Russian roulette. Furthermore, we introduce an improved distribution kernel for the screen-space irradiance estimation of the photon mapping. Finally, we present a new filtering solution for photon mapping.
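
    Of the optimizations named above, Russian roulette is the simplest to show in isolation. The sketch below is a generic illustration (the names and survival probability are assumptions, not the thesis's code): a photon bounce continues with probability p, and survivors are reweighted by 1/p so the estimator stays unbiased.

```cpp
#include <random>

// Decide whether a photon path continues. On survival, the photon's power
// is divided by the survival probability to keep the expectation unchanged.
bool russianRoulette(float& photonPower, float survivalProb, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    if (uniform(rng) >= survivalProb)
        return false;                  // terminate this photon path
    photonPower /= survivalProb;       // reweight the surviving photon
    return true;                       // continue tracing the next bounce
}
```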