141 research outputs found

    Master of Science in Computing

    This document introduces the Soft Shadow Mip-Maps technique, which consists of three methods for overcoming the fundamental limitations of filtering-oriented soft shadows. Filtering-oriented soft shadowing techniques filter shadow maps with varying filter sizes determined by the desired penumbra widths. Different varieties of this approach have been commonly applied in interactive and real-time applications. Nonetheless, they share some fundamental limitations. First, the soft shadow filter size is not always guaranteed to be the correct size for producing the right penumbra width based on the light source size. Second, filtering with large kernels for soft shadows requires a large number of samples, thereby increasing the cost of filtering; stochastic approximations for filtering introduce noise, and prefiltering leads to inaccuracies. Finally, calculating shadows based on a single blocker estimation can produce significantly inaccurate penumbra widths when the shadow penumbras of different blockers overlap. We discuss three methods to overcome these limitations. First, we introduce a method for computing the soft shadow filter size for a receiver from its blocker distance. Then, we present a filtering scheme based on shadow mip-maps. Mipmap-based filtering uses shadow mip-maps to efficiently generate soft shadows using a constant-size filter kernel for each layer and linear interpolation between layers. Third, we introduce an improved blocker estimation approach. With the improved blocker estimation, we account for the shadow contribution of every blocker by calculating the light occluded by potential blockers, so the calculated penumbra areas correspond correctly to their blockers. Finally, we discuss how to select filter kernels for filtering. These approaches successively solve issues regarding shadow penumbra width calculation apparent in prior techniques. Our results show that we can produce correct penumbra widths, as evident in our comparisons to ray-traced soft shadows. Nonetheless, the Soft Shadow Mip-Maps technique suffers from light bleeding issues, because our method only calculates shadows using the geometry that is available in the shadow depth map; occluded geometry is not taken into consideration, which leads to light bleeding. Another limitation of our method is that using lower-resolution shadow mip-map layers limits the resolution of the shadow placement. As a result, when a blocker moves slowly, its shadow follows it in discrete steps whose size is determined by the corresponding mip-map layer resolution.
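
    As a concrete illustration of the first two steps, the following is a minimal sketch assuming a PCSS-style parallel-plane penumbra estimate and an illustrative base kernel width; the function names and constants are not taken from the thesis.

        import math

        def penumbra_width(light_size, receiver_depth, blocker_depth):
            """PCSS-style parallel-plane estimate of the penumbra width on the receiver.
            Wider lights and larger receiver-blocker gaps give softer shadows."""
            return light_size * (receiver_depth - blocker_depth) / blocker_depth

        def mip_filter(filter_texels, base_kernel_texels=3.0):
            """Map a desired filter size (in level-0 shadow-map texels) to a mip level
            so that a constant-size kernel covers it, plus a blend factor for linearly
            interpolating between the two neighbouring mip layers."""
            level = max(0.0, math.log2(filter_texels / base_kernel_texels))
            lower = math.floor(level)
            return lower, level - lower   # (integer layer, interpolation weight)

        # Example: a 0.5-unit light, receiver at depth 10, blocker at depth 4,
        # with the resulting penumbra assumed to span 24 texels of the base shadow map.
        w = penumbra_width(0.5, 10.0, 4.0)
        layer, t = mip_filter(24.0)
        print(w, layer, t)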

    Gigavoxels: ray-guided streaming for efficient and detailed voxel rendering

    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse octree approach. Our algorithm achieves real-time to interactive rates on volumes exceeding the GPU memory capacities by far, thanks to an efficient streaming scheme based on a ray-casting solution. Basically, the volume is only used at the resolution that is needed to produce the final image. Besides the gain in memory and speed, our rendering is inherently anti-aliased.
    We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled to an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly by information extracted during rendering. Our data structure exploits the fact that in CG scenes, details are often concentrated on the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, like the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), or of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps and respect the limited GPU memory budget. This is the author's version of the paper; the definitive version has been published in the I3D 2009 conference proceedings.
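
    The following is a rough sketch of the ray-guided refinement idea described above, not the GigaVoxels implementation; the Node fields and the request_queue are illustrative stand-ins for the paper's sparse-octree nodes and streaming requests.

        # Schematic ray-guided octree refinement: a ray descends the sparse octree
        # only until the node's voxel size matches the pixel footprint at that
        # distance; missing bricks are queued for streaming instead of being
        # loaded up front.

        class Node:
            def __init__(self, size, children=None, brick=None):
                self.size = size          # world-space edge length of the node
                self.children = children  # list of 8 children, or None if not refined
                self.brick = brick        # voxel brick resident in GPU memory, or None

        def select_node(node, distance, pixel_angle, request_queue):
            """Walk down until the node resolution matches the cone footprint.
            If finer data is needed but not resident, emit a streaming request."""
            footprint = distance * pixel_angle       # approx. pixel size at this depth
            while node.size > footprint and node.children:
                node = node.children[0]              # real code picks the child hit by the ray
            if node.brick is None:
                request_queue.append(node)           # produce/stream this brick for later frames
            return node                              # sample node.brick (or a coarser ancestor)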

    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene basically depends on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials can be found, with very different properties. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features, so that the acquisition results can be re-used to easily and quickly create deviations of the original material. These deviations may be subtle or substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric comprises features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics do not simplify.
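
    As an illustration of the kind of analytic microfacet model such a decomposition builds on, here is a standard Cook-Torrance/GGX specular evaluation in its common textbook form; it is not the thesis' specific decomposition or texture encoding.

        import numpy as np

        def ggx_specular(n, l, v, roughness, f0):
            """Generic Cook-Torrance/GGX microfacet specular term.
            n, l, v are unit vectors (normal, light, view); f0 is the Fresnel
            reflectance at normal incidence."""
            h = l + v
            h = h / np.linalg.norm(h)
            nl = max(np.dot(n, l), 1e-4)
            nv = max(np.dot(n, v), 1e-4)
            nh = max(np.dot(n, h), 0.0)
            vh = max(np.dot(v, h), 0.0)
            a2 = roughness ** 4                                       # alpha = roughness^2
            d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)      # GGX normal distribution
            k = (roughness + 1.0) ** 2 / 8.0
            g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k)) # Schlick-GGX visibility
            f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                     # Schlick Fresnel
            return d * g * f / (4.0 * nl * nv)

        # Example with unit vectors: light and view about 37 degrees off the normal.
        n = np.array([0.0, 0.0, 1.0])
        l = np.array([0.0, 0.6, 0.8])
        v = np.array([0.0, -0.6, 0.8])
        print(ggx_specular(n, l, v, roughness=0.3, f0=0.04))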

    NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering

    Recent advances in neural implicit fields enable rapidly reconstructing 3D geometry from multi-view images. Beyond that, recovering physical properties such as material and illumination is essential for enabling more applications. This paper presents a new method that effectively learns a relightable neural surface using pre-integrated rendering, simultaneously learning geometry, material, and illumination within the neural implicit field. The key insight of our work is that these properties are closely related to each other, and optimizing them in a collaborative manner leads to consistent improvements. Specifically, we propose NeuS-PIR, a method that factorizes the radiance field into a spatially varying material field and a differentiable environment cubemap, and jointly learns them together with the geometry represented by a neural surface. Our experiments demonstrate that the proposed method outperforms state-of-the-art methods on both synthetic and real datasets.
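
    Pre-integrated rendering is commonly realized with the split-sum approximation, where the environment map is pre-filtered per roughness level and combined with a 2D BRDF lookup table. The sketch below assumes those pre-integrated lookups are supplied as callables; all names are illustrative and this is not the NeuS-PIR code.

        import numpy as np

        def shade_pre_integrated(albedo, roughness, metallic, n, v,
                                 irradiance_map, prefiltered_env, brdf_lut):
            """Schematic split-sum shading: diffuse uses an irradiance map, specular uses
            an environment map pre-filtered per roughness plus a 2D BRDF lookup table.
            The three *_map arguments stand for pre-integrated lookups supplied elsewhere."""
            n_dot_v = max(float(np.dot(n, v)), 1e-4)
            r = 2.0 * n_dot_v * n - v                      # reflection direction
            f0 = 0.04 * (1.0 - metallic) + albedo * metallic
            diffuse = albedo * (1.0 - metallic) * irradiance_map(n)
            a, b = brdf_lut(n_dot_v, roughness)            # pre-integrated BRDF scale and bias
            specular = prefiltered_env(r, roughness) * (f0 * a + b)
            return diffuse + specular

    Because the expensive environment integrals are baked into the lookups, shading reduces to a few texture fetches, which is what makes the illumination term cheap enough to optimize jointly with geometry and material.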

    Shallow waters simulation

    Realistic simulation and rendering of water in real time is a challenge within the field of computer graphics, as it is very computationally demanding. A common simulation approach is to reduce the problem from 3D to 2D by treating the water surface as a 2D heightfield. When simulating 2D fluids, the Shallow Water Equations (SWE) are often employed, which work under the assumption that the water's horizontal scale is much greater than its vertical scale. Several methods have been developed or adapted to model the SWE, each with its own advantages and disadvantages. A common solution is to use grid-based methods, where there is the classic approach of solving the equations on a grid, but also the Lattice-Boltzmann Method (LBM), which originated from the field of statistical physics. Particle-based methods have also been used for modeling the SWE, namely as a variation of the popular Smoothed-Particle Hydrodynamics (SPH) method. This thesis presents an implementation for real-time simulation and rendering of a heightfield surface water volume. The water's behavior is modeled by a grid-based SWE scheme with an efficient single-kernel compute shader implementation. When it comes to visualizing the water volume created by the simulation, there is a variety of effects that can contribute to its realism and provide visual cues for its motion. In particular, when considering shallow water, certain features can be highlighted, such as the refraction of the ground below with the corresponding light attenuation, and the caustics patterns projected onto it. Using the state produced by the simulation, a water surface mesh is rendered, on which a set of visual effects is explored. First, the water's color is defined as a combination of reflected and transmitted light, using a Cook-Torrance Bidirectional Reflectance Distribution Function (BRDF) to describe the Sun's reflection. These results are then enhanced by data from a separate pass, which provides caustics patterns and improved attenuation computations. Lastly, small-scale details are added to the surface by applying a normal map generated using noise. As part of the work, a thorough evaluation of the developed application is performed, providing a showcase of the results, insight into some of the parameters and options, and performance benchmarks.
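
    To make the grid-based SWE idea concrete, here is a minimal sketch of one explicit time step of a simplified height-field shallow-water update on a regular grid; it omits advection and boundary handling, and it is not the thesis' compute-shader kernel.

        import numpy as np

        def swe_step(h, u, v, dt, dx, g=9.81):
            """One explicit step of a simplified shallow-water update on a regular grid.
            h is the water height, (u, v) the depth-averaged horizontal velocity."""
            # accelerate the fluid down the height gradient
            dhdx, dhdy = np.gradient(h, dx)
            u = u - dt * g * dhdx
            v = v - dt * g * dhdy
            # update the height from the divergence of the volume flux h*(u, v)
            flux_x, _ = np.gradient(h * u, dx)
            _, flux_y = np.gradient(h * v, dx)
            h = h - dt * (flux_x + flux_y)
            return h, u, v

        # Example: a small Gaussian bump spreading out over a 64x64 grid.
        n = 64
        x = np.linspace(-1, 1, n)
        xx, yy = np.meshgrid(x, x, indexing="ij")
        h = 1.0 + 0.1 * np.exp(-(xx**2 + yy**2) / 0.02)
        u = np.zeros_like(h)
        v = np.zeros_like(h)
        for _ in range(100):
            h, u, v = swe_step(h, u, v, dt=0.002, dx=2.0 / n)

    In a real-time implementation the same update would live in a single compute-shader dispatch over the grid, with the time step kept below the CFL limit set by the gravity-wave speed.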

    Practical line rasterization for multi-resolution textures

    Draping 2D vector information over a 3D terrain elevation model is usually performed by real-time rendering to texture. In the case of linear feature representation, there are several specific problems with the texturing approach, especially when using multi-resolution textures. These problems are related to visual quality, aliasing artifacts, and rendering performance. In this paper, we address the problems of 2D line rasterization on a multi-resolution texturing engine from a pragmatic point of view; some alternative solutions are presented, compared, and evaluated. For each solution we have analyzed the visual quality, the impact on the rendering performance, and the memory consumption. The study performed in this work is based on an OpenGL implementation of a clipmap-based multi-resolution texturing system, and is oriented towards the use of inexpensive consumer graphics hardware.
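
    To make the setting concrete, here is a toy sketch (not the paper's OpenGL approach) of rasterizing one world-space segment into a stack of texture levels whose texel size doubles per level; the coarser levels are where the quality and aliasing trade-offs discussed above arise.

        import numpy as np

        def rasterize_segment(tex, p0, p1, meters_per_texel, origin=(0.0, 0.0)):
            """Toy DDA rasterization of one world-space segment into a single texture
            level; one texel per step, no anti-aliasing or line-width handling."""
            a = (np.asarray(p0) - origin) / meters_per_texel
            b = (np.asarray(p1) - origin) / meters_per_texel
            steps = int(max(abs(b - a).max(), 1))
            for t in np.linspace(0.0, 1.0, steps + 1):
                x, y = (a + t * (b - a)).astype(int)
                if 0 <= x < tex.shape[1] and 0 <= y < tex.shape[0]:
                    tex[y, x] = 1.0

        # A clipmap-like stack: every level covers twice the extent of the previous
        # one with the same texel count, so the segment is re-rasterized per level.
        levels = [np.zeros((256, 256)) for _ in range(4)]
        for i, tex in enumerate(levels):
            rasterize_segment(tex, (10.0, 5.0), (200.0, 180.0), meters_per_texel=2.0 ** i)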

    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim at capturing light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
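
    For context, the following is a generic diffuse virtual point light (VPL) gather of the kind many-light methods build on; the clamping threshold and the per-VPL visibility argument are illustrative, and this is not the paper's reformulated integration scheme.

        import numpy as np

        def gather_vpls(x, n, albedo, vpls, clamp=10.0):
            """Generic diffuse many-light gather: sum the contribution of every virtual
            point light at a surface point x with normal n. Visibility is assumed to be
            evaluated elsewhere (shadow maps or shadow rays) and passed per VPL; the
            cosine at the VPL is assumed to be folded into its flux."""
            x, n, albedo = map(np.asarray, (x, n, albedo))
            total = np.zeros(3)
            for p, flux, visible in vpls:        # VPL position, flux (RGB), visibility in [0,1]
                p, flux = np.asarray(p), np.asarray(flux)
                d = p - x
                r2 = float(np.dot(d, d)) + 1e-6
                wi = d / np.sqrt(r2)
                cos_x = max(float(np.dot(n, wi)), 0.0)
                # clamp the 1/r^2 term to limit the spiky artifacts near VPLs
                geom = min(cos_x / r2, clamp)
                total += visible * geom * flux * albedo / np.pi
            return total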