150 research outputs found

    Tessellated Voxelization for Global Illumination using Voxel Cone Tracing

    Modeling believable lighting is a crucial component of computer graphics applications, including games and modeling programs. Physically accurate lighting is complex and is not currently feasible to compute under real-time constraints. Therefore, much research focuses on efficient ways to approximate light behavior within those constraints. In this thesis, we implement a general-purpose algorithm for real-time applications to approximate indirect lighting. Based on voxel cone tracing, we use a filtered representation of the scene to efficiently sample ambient light at each point in the scene. We present an approach to scene voxelization using hardware tessellation and compare it with an approach utilizing hardware rasterization. We also investigate possible methods of warped voxelization. Our contributions include a complete and open-source implementation of voxel cone tracing along with both voxelization algorithms. We find that both voxelization algorithms deliver similar performance and quality.
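
    As background for the approach described above, the following C++ sketch shows the cone-marching loop that voxel cone tracing builds on: samples are taken from a prefiltered (mip-mapped) voxel representation at a level of detail matching the growing cone footprint and composited front to back. The sampleVoxels function, parameter names, and constants are placeholders, not the thesis implementation.

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Vec4 { float r, g, b, a; };
        static Vec3 add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

        // Hypothetical lookup into the prefiltered voxel hierarchy at mip level 'lod',
        // returning premultiplied-alpha radiance/occlusion; stubbed so the sketch compiles.
        static Vec4 sampleVoxels(Vec3 /*worldPos*/, float /*lod*/) { return {0.1f, 0.1f, 0.1f, 0.05f}; }

        // March one cone from 'origin' along unit direction 'dir', accumulating radiance
        // front-to-back until the cone is (nearly) fully occluded or leaves the scene.
        Vec4 traceCone(Vec3 origin, Vec3 dir, float tanHalfAperture,
                       float voxelSize, float maxDistance) {
            Vec4 acc = {0, 0, 0, 0};
            float t = voxelSize;                        // start one voxel out to avoid self-sampling
            while (t < maxDistance && acc.a < 0.99f) {
                float radius = tanHalfAperture * t;     // cone footprint grows with distance
                float lod = std::log2(std::max(radius / voxelSize, 1.0f));
                Vec4 s = sampleVoxels(add(origin, mul(dir, t)), lod);
                float w = 1.0f - acc.a;                 // front-to-back compositing weight
                acc.r += w * s.r;  acc.g += w * s.g;  acc.b += w * s.b;  acc.a += w * s.a;
                t += std::max(radius, voxelSize);       // step proportional to the footprint
            }
            return acc;
        }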

    GPU voxelization

    Given a triangulated model, we want to identify which voxels of a voxel grid are intersected by the boundary of this model. There is another branch of voxelization methods in which not only the boundary but also the interior of the model is detected. Often these voxels are cubes, but this is not a restriction: other techniques have been presented in which the voxel grid is the view frustum and the voxels are prisms. There are different kinds of voxelization depending on the rasterization behavior. Approximate rasterization is the standard way the GPU rasterizes fragments: only those fragments whose center lies inside the projection of the primitive are identified. Conservative rasterization (Hasselgren et al., 2005) involves a dilation operation over the primitive, performed on the GPU so that in the rasterization stage every intersected fragment has its center inside the dilated primitive. However, this can produce spurious fragments, i.e. pixels that are not actually intersected. Exact voxelization detects exactly the intersected voxels and nothing more.
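
    For concreteness, here is a CPU-side sketch of the exact case: every voxel in the triangle's bounding box is tested with a separating-axis triangle-box overlap test, so only truly intersected voxels are kept. The GPU approaches discussed above replace this brute-force loop with (conservative) hardware rasterization. All names here are illustrative and not taken from the paper.

        #include <algorithm>
        #include <array>
        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
        static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Project the box-centred triangle vertices and the box half-extents onto 'axis'
        // and report whether the two projections overlap.
        static bool axisOverlaps(Vec3 axis, const Vec3 v[3], Vec3 h) {
            float p0 = dot(v[0], axis), p1 = dot(v[1], axis), p2 = dot(v[2], axis);
            float r = h.x * std::fabs(axis.x) + h.y * std::fabs(axis.y) + h.z * std::fabs(axis.z);
            return std::max({p0, p1, p2}) >= -r && std::min({p0, p1, p2}) <= r;
        }

        // Separating-axis test between a triangle and an axis-aligned box (centre c, half-size h):
        // 3 box face normals, the triangle normal, and 9 edge-edge cross products.
        static bool triBoxOverlap(const Vec3 tri[3], Vec3 c, Vec3 h) {
            Vec3 v[3] = { sub(tri[0], c), sub(tri[1], c), sub(tri[2], c) };
            Vec3 e[3] = { sub(v[1], v[0]), sub(v[2], v[1]), sub(v[0], v[2]) };
            const Vec3 axes[3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
            for (Vec3 a : axes)
                if (!axisOverlaps(a, v, h)) return false;
            if (!axisOverlaps(cross(e[0], e[1]), v, h)) return false;
            for (Vec3 a : axes)
                for (Vec3 ed : e)
                    if (!axisOverlaps(cross(a, ed), v, h)) return false;
            return true;
        }

        // Exact boundary voxelization of one triangle: visit only the voxels inside the
        // triangle's bounding box and keep those whose cell the triangle actually touches.
        void voxelizeTriangle(const Vec3 tri[3], Vec3 gridOrigin, float voxelSize,
                              std::vector<std::array<int, 3>>& outCells) {
            Vec3 lo = { std::min({tri[0].x, tri[1].x, tri[2].x}),
                        std::min({tri[0].y, tri[1].y, tri[2].y}),
                        std::min({tri[0].z, tri[1].z, tri[2].z}) };
            Vec3 hi = { std::max({tri[0].x, tri[1].x, tri[2].x}),
                        std::max({tri[0].y, tri[1].y, tri[2].y}),
                        std::max({tri[0].z, tri[1].z, tri[2].z}) };
            int x0 = (int)std::floor((lo.x - gridOrigin.x) / voxelSize), x1 = (int)std::floor((hi.x - gridOrigin.x) / voxelSize);
            int y0 = (int)std::floor((lo.y - gridOrigin.y) / voxelSize), y1 = (int)std::floor((hi.y - gridOrigin.y) / voxelSize);
            int z0 = (int)std::floor((lo.z - gridOrigin.z) / voxelSize), z1 = (int)std::floor((hi.z - gridOrigin.z) / voxelSize);
            Vec3 h = { 0.5f * voxelSize, 0.5f * voxelSize, 0.5f * voxelSize };
            for (int x = x0; x <= x1; ++x)
                for (int y = y0; y <= y1; ++y)
                    for (int z = z0; z <= z1; ++z) {
                        Vec3 c = { gridOrigin.x + (x + 0.5f) * voxelSize,
                                   gridOrigin.y + (y + 0.5f) * voxelSize,
                                   gridOrigin.z + (z + 0.5f) * voxelSize };
                        if (triBoxOverlap(tri, c, h)) outCells.push_back({x, y, z});
                    }
        }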

    Many-Light Real-Time Global Illumination using Sparse Voxel Octree

    Global illumination (GI) rendering simulates the propagation of light through a 3D volume and its interaction with surfaces, dramatically increasing the fidelity of computer-generated images. While off-line GI algorithms such as ray tracing and radiosity can generate physically accurate images, their rendering speeds are too slow for real-time applications. The many-light method is one of several emerging real-time global illumination algorithms, but it requires many shadow maps to be generated for Virtual Point Light (VPL) visibility tests, which reduces its efficiency. Prior solutions restrict either the number or the accuracy of shadow map updates, which may lower the accuracy of indirect illumination or prevent the rendering of fully dynamic scenes. In this thesis, we propose a hybrid real-time GI algorithm that uses an efficient Sparse Voxel Octree (SVO) ray marching algorithm for visibility tests instead of the shadow map generation step of the many-light algorithm. Our technique achieves high rendering fidelity at about 50 FPS, is highly scalable, and can support thousands of VPLs generated on the fly. A survey of current real-time GI techniques as well as details of our implementation using OpenGL and Shader Model 5 are also presented.
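
    A schematic C++ sketch of the gathering step described above: each Virtual Point Light's contribution to a shading point is weighted by a geometry term and a visibility factor, and the visibility factor is obtained by marching a ray through the sparse voxel octree instead of looking it up in a per-VPL shadow map. svoVisibility and the VPL fields are placeholders standing in for the authors' GPU implementation.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
        static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

        struct VPL { Vec3 position, normal, flux; };   // virtual point light

        // Placeholder visibility oracle: 1 if the segment from p to q is unoccluded.
        // A real implementation ray-marches the sparse voxel octree on the GPU.
        static float svoVisibility(Vec3 /*p*/, Vec3 /*q*/) { return 1.0f; }

        // Diffuse indirect illumination at point x (normal n, albedo) gathered from all VPLs.
        Vec3 gatherIndirect(Vec3 x, Vec3 n, Vec3 albedo, const std::vector<VPL>& vpls) {
            Vec3 out = {0, 0, 0};
            for (const VPL& l : vpls) {
                Vec3 d = sub(l.position, x);
                float dist2 = dot(d, d);
                if (dist2 < 1e-8f) continue;
                Vec3 w = mul(d, 1.0f / std::sqrt(dist2));
                float cosReceiver = std::max(dot(n, w), 0.0f);
                float cosEmitter  = std::max(-dot(l.normal, w), 0.0f);
                // Clamp the squared distance to limit the singularity near the VPL.
                float geom = cosReceiver * cosEmitter / std::max(dist2, 0.01f);
                float vis  = svoVisibility(x, l.position);
                out.x += albedo.x * l.flux.x * geom * vis;
                out.y += albedo.y * l.flux.y * geom * vis;
                out.z += albedo.z * l.flux.z * geom * vis;
            }
            return out;
        }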

    3D mesh voxelization

    Final project for the Bachelor's Degree in Video Game Design and Development (Grau en Disseny i Desenvolupament de Videojocs). Code: VJ1241. Academic year: 2018/2019. This paper explores the use of voxels as the basic element of a videogame agent, in terms of both visualization and interaction, through a Unity game engine plug-in.

    OpenFab: A programmable pipeline for multimaterial fabrication

    Figure 1: Three rhinos, defined and printed using OpenFab. For each print, the same geometry was paired with a different fablet, a shader-like program which procedurally defines surface detail and material composition throughout the object volume. This produces three unique prints by using displacements, texture mapping, and continuous volumetric material variation as a function of distance from the surface.
    3D printing hardware is rapidly scaling up to output continuous mixtures of multiple materials at increasing resolution over ever larger print volumes. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, and simply modeling and describing the input with spatially varying material mixtures at this scale is challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for synthesis of multi-material 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, allowing models to be specified easily and efficiently. We describe a streaming architecture for OpenFab; only a small fraction of the final volume is stored in memory and output is fed to the printer with little startup delay. We demonstrate it on a variety of multi-material objects.
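
    To make the idea of a fablet concrete, the short sketch below evaluates a material mixture as a function of distance from the object surface, in the spirit the abstract describes: a shader-like program run over the print volume. It is not OpenFab's actual fablet language or API, just an illustration of the concept.

        #include <algorithm>

        // Fractions of two printable materials to deposit at a query point; they sum to 1.
        struct MaterialMix { float soft, rigid; };

        // Illustrative "fablet": material composition as a continuous function of the
        // point's distance to the object surface, giving a soft shell over a rigid core.
        MaterialMix softShellHardCore(float distanceToSurfaceMm) {
            float t = std::clamp(distanceToSurfaceMm / 2.0f, 0.0f, 1.0f);   // blend over the first 2 mm
            float rigid = t * t * (3.0f - 2.0f * t);                        // smoothstep falloff
            return { 1.0f - rigid, rigid };
        }

        // A streaming backend would evaluate such a function per voxel slab, so only a
        // small fraction of the (potentially trillions of voxels) volume is ever in memory.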

    Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX

    A vital component of photo-realistic image synthesis is the simulation of indirect diffuse reflections, which still remain a quintessential hurdle that modern rendering engines struggle to overcome. Real-time applications typically pre-generate diffuse lighting information offline using radiosity to avoid performing costly computations at run-time. In this thesis we present a variant of progressive refinement radiosity that utilizes Nvidia's RTX technology to accelerate the process of form-factor computation without compromising on visual fidelity. Through a modern implementation built on DirectX 12 we demonstrate that offloading radiosity's visibility component to RT cores significantly improves the lightmap generation process and potentially propels it into the real-time domain.
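
    As a sketch of the algorithm being accelerated, the C++ below implements one pass of classic progressive refinement radiosity: the patch holding the most unshot energy distributes it to every other patch, and the visibility part of each form factor is delegated to a ray-cast oracle, which is the work the thesis offloads to RT cores. The scalar (single-band) radiosity, the disk form-factor approximation, and all names are illustrative simplifications, not the thesis code.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        struct Patch {
            Vec3 center, normal;
            float area, reflectance;
            float radiosity = 0.0f, unshot = 0.0f;   // single-band for brevity
        };

        // Placeholder visibility oracle; with RTX this becomes a hardware ray query.
        static bool rayVisible(Vec3 /*from*/, Vec3 /*to*/) { return true; }

        // Point-to-disk form factor from patch i towards patch j, gated by visibility.
        static float formFactor(const Patch& i, const Patch& j) {
            Vec3 d = sub(j.center, i.center);
            float r2 = std::max(dot(d, d), 1e-6f), r = std::sqrt(r2);
            Vec3 w = { d.x / r, d.y / r, d.z / r };
            float ci = std::max(dot(i.normal, w), 0.0f);
            float cj = std::max(-dot(j.normal, w), 0.0f);
            if (ci * cj <= 0.0f || !rayVisible(i.center, j.center)) return 0.0f;
            return ci * cj * j.area / (3.14159265f * r2 + j.area);
        }

        // One progressive refinement step: shoot from the patch with the most unshot energy.
        void shootOnce(std::vector<Patch>& patches) {
            Patch& src = *std::max_element(patches.begin(), patches.end(),
                [](const Patch& a, const Patch& b) { return a.unshot * a.area < b.unshot * b.area; });
            for (Patch& dst : patches) {
                if (&dst == &src) continue;
                float delta = dst.reflectance * src.unshot * formFactor(dst, src);
                dst.radiosity += delta;
                dst.unshot    += delta;
            }
            src.unshot = 0.0f;   // this patch's energy has now been distributed
        }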

    Real-time screen space reflections and refractions using sparse voxel octrees

    This thesis explores the data structure known as the sparse voxel octree and how it can improve the performance of real-time ray tracing. While ray tracing is an excellent way of producing realistic effects in computer graphics, it is also very computationally heavy. Its use in real-time applications such as games and simulators is therefore limited, since the hardware must be able to render enough frames per second to satisfy the user. The purpose of the octree is to significantly reduce the number of intersection tests each ray requires. This thesis explains the many challenges of implementing and using an octree, and how to solve them. This includes how to build the tree using various tests, and then how to use it with a ray tracer to produce reflections and refractions in real time.
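
    In skeletal form, the traversal works roughly as sketched below: a ray is first tested against a node's bounding box, and only on a hit does it descend into the (sparsely allocated) children, so most voxels are never examined for a given ray. A production traversal would also visit children front to back along the ray; the node layout and names here are illustrative, not the thesis code.

        #include <algorithm>

        struct Vec3 { float v[3]; };

        struct SvoNode {
            Vec3 bmin, bmax;          // axis-aligned bounds of this octant
            SvoNode* child[8] = {};   // null pointers mark empty octants (the "sparse" part)
            bool leaf = false;        // a leaf is an occupied voxel
        };

        // Slab test: does origin + t * dir hit the box for some t in [0, tMax]?
        // (Assumes no exactly-zero direction components, for brevity.)
        static bool rayHitsBox(const Vec3& o, const Vec3& invDir,
                               const Vec3& bmin, const Vec3& bmax, float tMax) {
            float t0 = 0.0f, t1 = tMax;
            for (int k = 0; k < 3; ++k) {
                float a = (bmin.v[k] - o.v[k]) * invDir.v[k];
                float b = (bmax.v[k] - o.v[k]) * invDir.v[k];
                t0 = std::max(t0, std::min(a, b));
                t1 = std::min(t1, std::max(a, b));
            }
            return t0 <= t1;
        }

        // Depth-first traversal: descend only into children whose boxes the ray hits,
        // returning the first occupied leaf found (nullptr if the ray misses everything).
        const SvoNode* traceSvo(const SvoNode* node, const Vec3& origin,
                                const Vec3& invDir, float tMax) {
            if (!node || !rayHitsBox(origin, invDir, node->bmin, node->bmax, tMax)) return nullptr;
            if (node->leaf) return node;
            for (const SvoNode* c : node->child)
                if (const SvoNode* hit = traceSvo(c, origin, invDir, tMax)) return hit;
            return nullptr;
        }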

    GPU propagation and visualisation of particle collisions with ALICE magnetic field model

    The ALICE Collaboration at CERN developed a 3D visualisation tool capable of displaying a representation of collected collision data (particle trajectories, clusters and calorimeter towers) called the Event Display. The Event Display runs constantly in the ALICE Run Control Center as part of the Quality Assurance system, providing the monitoring personnel with visual cues about possible problems in both hardware and software components during periods of data gathering. In the software, particle trajectories (which are curved due to the presence of the magnetic field inside the detector) are generated from physical parameters of detected particles, such as electrical charge and momentum. Previously, this process in the Event Display used a uniform, constant magnetic field for these calculations, which ignores the spatial variations of the real magnetic field and does not model one of the two magnets used in the detector. Recently, a detailed model of the ALICE magnetic field was made available as a shader program for execution on the GPU. In this work we attempt to implement the reconstruction algorithm in shader form as well, allowing us to combine it with the detailed field model to create a full solution for rendering trajectories from collision event data directly on the GPU. This approach has several possible advantages, such as better performance and the ability to alter the magnetic field properties in real time. This has not previously been done for ALICE and as such could be used in the future to upgrade the Event Display.
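
    For context on the reconstruction step, the sketch below shows a generic numerical form of trajectory propagation: the magnetic part of the Lorentz force, a = (q/m) v x B(x), is integrated with a classical fourth-order Runge-Kutta step, with B(x) supplied by a field model. The uniform placeholder field, units, and names are illustrative; in the Event Display the detailed ALICE field model evaluated on the GPU takes that role, and a real tracker would integrate momentum relativistically rather than velocity.

        struct Vec3 { double x, y, z; };
        static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
        static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }

        // Placeholder field model: a uniform 0.5 T field along z.
        // The detailed ALICE field model would be queried here instead.
        static Vec3 magneticField(Vec3 /*pos*/) { return {0.0, 0.0, 0.5}; }

        struct State { Vec3 pos, vel; };   // metres, metres per second

        // Acceleration from the magnetic Lorentz force, a = (q/m) v x B(x)
        // (non-relativistic here, purely for illustration).
        static Vec3 accel(const State& s, double chargeOverMass) {
            return mul(cross(s.vel, magneticField(s.pos)), chargeOverMass);
        }

        // One classical RK4 step of the coupled position/velocity system.
        State rk4Step(State s, double chargeOverMass, double dt) {
            auto deriv = [&](const State& q) { return State{ q.vel, accel(q, chargeOverMass) }; };
            State k1 = deriv(s);
            State k2 = deriv({ add(s.pos, mul(k1.pos, dt / 2)), add(s.vel, mul(k1.vel, dt / 2)) });
            State k3 = deriv({ add(s.pos, mul(k2.pos, dt / 2)), add(s.vel, mul(k2.vel, dt / 2)) });
            State k4 = deriv({ add(s.pos, mul(k3.pos, dt)),     add(s.vel, mul(k3.vel, dt)) });
            s.pos = add(s.pos, mul(add(add(k1.pos, mul(add(k2.pos, k3.pos), 2.0)), k4.pos), dt / 6));
            s.vel = add(s.vel, mul(add(add(k1.vel, mul(add(k2.vel, k3.vel), 2.0)), k4.vel), dt / 6));
            return s;
        }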