    Voxel-Based 3D Visualization in OpenGL (original title: "Voxelbasert 3D visualisering i OpenGL")

    This thesis deals with volume rendering in OpenGL and surveys the different areas in which 3D modeling and visualization are used. It focuses on voxel rendering, its advantages and drawbacks, and the implementation of such a renderer. The aim of the thesis was to develop a voxel renderer in OpenGL from scratch into a fully functional application that can visualize 3D data sets generated by, for example, Diffpack. The data sets are scalar fields, which are visualized by associating transparency and color with voxels according to the values in the data sets. There are multiple ways to visualize voxels; I have mainly used a method that maps textures onto 2D planes, which are then assembled into a 3D voxel set. This method is supported by common 3D hardware. To get maximum performance out of the various 3D graphics cards on the market, different graphics libraries can be used. For PCs there are two low-level graphics libraries to choose from: OpenGL and DirectX. OpenGL is developed by Silicon Graphics and is compatible with a number of operating systems, whereas DirectX is developed by Microsoft and is supported only on Microsoft Windows. For this thesis I chose OpenGL. OpenGL is a powerful software library that exploits modern graphics hardware, and through it you get access to most of a graphics card's functions. It is, however, a low-level library and demands considerable knowledge at a fundamental level to visualize complex objects and scenes. I therefore take a closer look at how OpenGL works and at the theory on which it is built. I also examine the opportunities, advantages, and drawbacks of voxel rendering, and the requirements the hardware must meet to solve these tasks efficiently. I conclude the thesis by comparing my OpenGL voxel renderer with other available voxel renderers, such as The Visualization Toolkit (VTK).
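
    The slice-based technique the abstract describes is compact enough to sketch. The following is a minimal, hedged illustration of texture-mapped 2D planes assembled into a voxel set, written against legacy fixed-function OpenGL (the API generation the thesis targets). The grid size N, the placeholder field(), and the transfer function are illustrative assumptions, not taken from the thesis, and a current GL context (e.g. from GLUT or GLFW) is assumed.

    // Sketch: axis-aligned 2D-texture slicing of a scalar field, the
    // hardware-friendly volume rendering method described above. Assumes a
    // current OpenGL context; N, field() and transfer() are stand-ins.
    #include <GL/gl.h>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    const int N = 64;  // voxels per axis (illustrative)

    // Placeholder scalar field in [0,1]; a real data set (e.g. from
    // Diffpack) would be loaded from file instead.
    float field(int x, int y, int z) {
        float dx = x - N / 2.0f, dy = y - N / 2.0f, dz = z - N / 2.0f;
        float r = std::sqrt(dx * dx + dy * dy + dz * dz) / (N / 2.0f);
        return r < 1.0f ? 1.0f - r : 0.0f;
    }

    // Transfer function: associate color and transparency with a voxel
    // from its scalar value, as the abstract describes.
    void transfer(float s, uint8_t* rgba) {
        rgba[0] = uint8_t(255 * s);        // red grows with the value
        rgba[1] = 0;
        rgba[2] = uint8_t(255 * (1 - s));  // blue fades with it
        rgba[3] = uint8_t(64 * s);         // larger values more opaque
    }

    // Build one 2D RGBA texture per z-slice of the volume.
    std::vector<GLuint> makeSliceTextures() {
        std::vector<GLuint> tex(N);
        glGenTextures(N, tex.data());
        std::vector<uint8_t> img(N * N * 4);
        for (int z = 0; z < N; ++z) {
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x)
                    transfer(field(x, y, z), &img[4 * (y * N + x)]);
            glBindTexture(GL_TEXTURE_2D, tex[z]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, N, N, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, img.data());
        }
        return tex;
    }

    // Draw the slices as alpha-blended quads, far to near (viewer assumed
    // on the +z axis); together they reassemble the 3D voxel set.
    void drawVolume(const std::vector<GLuint>& tex) {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        for (int z = 0; z < N; ++z) {
            float zc = -1.0f + 2.0f * z / (N - 1);
            glBindTexture(GL_TEXTURE_2D, tex[z]);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex3f(-1, -1, zc);
            glTexCoord2f(1, 0); glVertex3f( 1, -1, zc);
            glTexCoord2f(1, 1); glVertex3f( 1,  1, zc);
            glTexCoord2f(0, 1); glVertex3f(-1,  1, zc);
            glEnd();
        }
    }

    This single axis-aligned stack only looks right while the view stays roughly along the z axis; a common remedy in 2D-texture slicing is to keep three slice stacks, one per axis, and draw whichever is most parallel to the view plane.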

    Decoupled Sampling for Real-Time Graphics Pipelines

    We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.
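
    The hash-and-memoize core of the method is simple enough to sketch. Below is a minimal illustration of the many-to-one mapping from visibility to shading samples and the memoization buffer that exploits reuse; ShadingKey, shade(), mapToShadingSample() and the hash constants are hypothetical names chosen for illustration, not interfaces from the paper.

    // Sketch: a many-to-one map from visibility samples to shading samples,
    // plus a cache so each shading sample is shaded once and reused.
    #include <cstdint>
    #include <unordered_map>

    struct Color { float r, g, b, a; };

    // Identifies a shading sample: primitive id plus shading-space
    // coordinates quantized at the shading rate. (Hypothetical layout.)
    struct ShadingKey {
        uint32_t prim;
        int32_t  u, v;
        bool operator==(const ShadingKey& o) const {
            return prim == o.prim && u == o.u && v == o.v;
        }
    };
    struct KeyHash {
        size_t operator()(const ShadingKey& k) const {
            return (size_t(k.prim) * 73856093u) ^ (size_t(k.u) * 19349663u)
                 ^ (size_t(k.v) * 83492791u);
        }
    };

    // Stand-in for the expensive part: one fragment-shader execution.
    Color shade(const ShadingKey&) { return {0.8f, 0.2f, 0.1f, 1.0f}; }

    // Quantize shading-space coordinates; coarser quantization makes the
    // map "more many-to-one", i.e. a lower shading rate.
    ShadingKey mapToShadingSample(uint32_t prim, float su, float sv,
                                  float rate) {
        return {prim, int32_t(su / rate), int32_t(sv / rate)};
    }

    class MemoBuffer {
        std::unordered_map<ShadingKey, Color, KeyHash> cache_;
    public:
        // Called once per visibility sample. Samples that map to the same
        // shading sample (e.g. along a motion-blur trail) reuse one result.
        Color lookupOrShade(const ShadingKey& key) {
            auto it = cache_.find(key);
            if (it != cache_.end()) return it->second;  // hit: reuse
            Color c = shade(key);                       // miss: shade once
            cache_.emplace(key, c);
            return c;
        }
    };

    A hardware pipeline would bound this buffer and evict entries like a cache (the architectural study in the paper concerns exactly such costs); the unbounded map here just keeps the sketch short.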

    Decoupled Sampling for Graphics Pipelines

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.
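
    To make the controllable quality/performance trade-off concrete, here is a small standalone demonstration, with made-up numbers, of how a coarser shading rate shrinks the set of unique shading samples behind a fixed set of visibility samples. It complements the memoization sketch above; none of the values come from the paper.

    // Sketch: coarser quantization of shading coordinates => fewer unique
    // shading samples => fewer shader runs for the same visibility samples.
    #include <cstdio>
    #include <set>
    #include <utility>

    int main() {
        // Pretend 16 stochastic visibility samples landed on one primitive
        // smeared by motion blur, with slowly drifting shading coordinates.
        float su[16], sv[16];
        for (int i = 0; i < 16; ++i) { su[i] = 10.0f + 0.3f * i; sv[i] = 5.0f; }

        for (float rate : {1.0f, 2.0f, 4.0f}) {    // coarser rate each pass
            std::set<std::pair<int, int>> unique;  // distinct shading samples
            for (int i = 0; i < 16; ++i)
                unique.insert({int(su[i] / rate), int(sv[i] / rate)});
            std::printf("rate %.0f: %zu shader runs for 16 visibility samples\n",
                        rate, unique.size());
        }
    }

    Running this prints 5, 3, and 2 shader runs for the three rates: the same 16 visibility samples cost progressively less shading as the map becomes more many-to-one, at a corresponding cost in shading quality.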

    Real-time Global Illumination by Simulating Photon Mapping
