
    Shadow mapping algorithms: Applications and limitations

    This study provides an overview of popular and widely used algorithms and techniques for shadow map generation. The well-known techniques are described in detail, along with a discussion of the advantages and drawbacks of each. The basic ideas, improvements, and future work of the techniques are also comprehensively summarized and analyzed in depth. Programmers often have difficulty selecting a shadow generation algorithm appropriate to their purpose, so we have classified and systematized these techniques. The main goal of this paper is to give researchers background on a variety of shadow mapping techniques so as to make it easier for them to choose the method best suited to their aims. It is also hoped that our analysis will help researchers find solutions to the shortcomings of each technique. © 2015 NSP Natural Sciences Publishing Co.
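
    For context, the basic two-pass shadow-map test that the surveyed techniques build on can be sketched as follows; the array layout and helper names are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def build_shadow_map(occluder_depths, resolution):
    """First pass: for each shadow-map texel, keep the depth of the nearest
    occluder as seen from the light (illustrative layout: texel coords + depth)."""
    shadow_map = np.full((resolution, resolution), np.inf)
    for (u, v, depth) in occluder_depths:
        shadow_map[v, u] = min(shadow_map[v, u], depth)
    return shadow_map

def in_shadow(shadow_map, u, v, fragment_depth, bias=1e-3):
    """Second pass: a fragment is shadowed if something is closer to the light
    than it is; the bias counters self-shadowing ('shadow acne')."""
    return fragment_depth - bias > shadow_map[v, u]

# Tiny usage example: one occluder texel at (2, 2) with light-space depth 1.0.
sm = build_shadow_map([(2, 2, 1.0)], resolution=4)
print(in_shadow(sm, 2, 2, fragment_depth=2.0))   # True  (behind the occluder)
print(in_shadow(sm, 1, 1, fragment_depth=2.0))   # False (no occluder on that texel)
```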

    Scalable Interactive Volume Rendering Using Off-the-shelf Components

    This paper describes an application of a second-generation implementation of the Sepia architecture (Sepia-2) to interactive volumetric visualization of large rectilinear scalar fields. By employing pipelined associative blending operators in a sort-last configuration, a demonstration system with 8 rendering computers sustains 24 to 28 frames per second while interactively rendering large data volumes (1024x256x256 voxels and 512x512x512 voxels). We believe interactive performance at these frame rates and data sizes is unprecedented. We also believe these results can be extended to other types of structured and unstructured grids and a variety of GL rendering techniques, including surface rendering and shadow mapping. We show how to extend our single-stage crossbar demonstration system to multi-stage networks in order to support much larger data sizes and higher image resolutions. This requires solving a dynamic mapping problem for a class of blending operators that includes the Porter-Duff compositing operators.
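
    As an illustration of why pipelined, associative blending suits a sort-last configuration, the sketch below applies the Porter-Duff "over" operator to premultiplied RGBA pixels: because "over" is associative, partial images from the rendering nodes can be combined pairwise in any grouping along the pipeline. The array layout is an assumption, not the Sepia-2 interface.

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' on premultiplied RGBA arrays of shape (..., 4).
    Associative: over(a, over(b, c)) == over(over(a, b), c)."""
    alpha_f = front[..., 3:4]
    return front + (1.0 - alpha_f) * back

# Three partial images (e.g. one per rendering node), back-to-front order known
# from the sort-last configuration. Any pairwise grouping gives the same result.
a = np.array([[0.2, 0.0, 0.0, 0.5]])   # premultiplied red, 50% coverage
b = np.array([[0.0, 0.3, 0.0, 0.4]])
c = np.array([[0.0, 0.0, 0.4, 1.0]])   # opaque background

left  = over(over(a, b), c)
right = over(a, over(b, c))
assert np.allclose(left, right)         # associativity lets a pipeline regroup the blends
print(left)
```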

    Soft bilateral filtering shadows using multiple image-based algorithms

    This study introduces a Soft Bilateral Filtering Shadows method for dynamic scenes, which uses multiple matrices for the light sample points to address the lack of realism in real-time soft shadow generation. While geometry-based shadow algorithms require one time-consuming pass per polygon to render shadows, the adopted shadow map algorithm needs only a single rendering pass per sample point of the light source to generate shadows at low cost. This method renders complex scenes while accurately eliminating the inherent deficiencies of shadow maps. To compute the shadow maps, a view matrix is used for each sample point of the extended light source. The penumbra region is then interpolated using bilateral filtering to create the soft shadows, relying on multiple shadow maps that provide antialiased shadows. The method uses a fragment shader to render the multiple shadow maps with penumbra and umbra regions. The main contribution of this article is the bilateral interpolation of image-based shadows, which concentrates most of the computation at the edges of the penumbra region. Furthermore, the filtering allows soft shadows to be obtained with the lowest possible number of light sample points. The generated soft shadows have good performance and high quality; therefore, they are suitable for interactive applications. © 2016 Springer Science+Business Media New York
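
    To illustrate the general shape of such an approach (not the authors' exact implementation), the sketch below averages hard shadow tests from several light sample points into a fractional visibility and then smooths it with a simple bilateral-style filter that respects depth discontinuities; the sample count, kernel size, and depth-similarity weight are illustrative assumptions.

```python
import numpy as np

def visibility_from_samples(shadow_tests):
    """Average per-sample hard shadow tests (one shadow map per light sample)
    into a fractional visibility in [0, 1]; intermediate values form the penumbra."""
    return np.mean(shadow_tests, axis=0)

def bilateral_filter(visibility, depth, radius=2, sigma_s=1.5, sigma_d=0.05):
    """Smooth the visibility image, weighting neighbours by both spatial distance
    and depth similarity so shadows do not bleed across silhouettes."""
    h, w = visibility.shape
    out = np.zeros_like(visibility)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        w_d = np.exp(-((depth[y, x] - depth[ny, nx]) ** 2) / (2 * sigma_d ** 2))
                        acc += w_s * w_d * visibility[ny, nx]
                        wsum += w_s * w_d
            out[y, x] = acc / wsum
    return out

# Usage: 4 light samples over an 8x8 image; shadow_tests[i] is 1 where sample i sees the light.
tests = np.random.randint(0, 2, size=(4, 8, 8)).astype(float)
depth = np.ones((8, 8))
soft = bilateral_filter(visibility_from_samples(tests), depth)
print(soft.min(), soft.max())
```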

    Shadow Generation in Augmented Reality: A Complete Survey

    This paper provides an overview of the issues and techniques involved in shadow generation in mixed reality environments. Shadow generation techniques in virtual environments are explained briefly. The key factors characterizing the well-known techniques are described in detail, and the pros and cons of each technique are discussed. The conceptual perspective, the improvements, and future techniques are also investigated, summarized, and analysed in depth. This paper aims to provide researchers with a solid background on the state-of-the-art implementation of shadows in mixed reality, making it easier to choose the method most appropriate to their aims. It is also hoped that this analysis will help researchers find solutions to the problems facing each technique.

    Differentiable Shadow Mapping for Efficient Inverse Graphics

    We show how shadows can be efficiently generated in differentiable rendering of triangle meshes. Our central observation is that pre-filtered shadow mapping, a technique for approximating shadows based on rendering from the perspective of a light, can be combined with existing differentiable rasterizers to yield differentiable visibility information. We demonstrate on several inverse graphics problems that differentiable shadow maps are orders of magnitude faster than differentiable light transport simulation with similar accuracy, while differentiable rasterization without shadows often fails to converge. Comment: CVPR 2023, project page: https://mworchel.github.io/differentiable-shadow-mappin
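
    Pre-filtered shadow mapping can be illustrated with a variance-shadow-map style test: the light pass stores depth moments, and shading evaluates Chebyshev's inequality, which is a smooth function of its inputs and therefore amenable to differentiation. The sketch below is a generic NumPy illustration under those assumptions, not the authors' code.

```python
import numpy as np

def chebyshev_visibility(mean_depth, mean_depth_sq, receiver_depth, min_variance=1e-4):
    """Variance shadow mapping: the filtered shadow map stores E[d] and E[d^2];
    Chebyshev's inequality gives a smooth upper bound on the probability that the
    receiver is lit, so the whole test is differentiable in its inputs."""
    variance = np.maximum(mean_depth_sq - mean_depth ** 2, min_variance)
    d = receiver_depth - mean_depth
    p_max = variance / (variance + d ** 2)
    return np.where(d <= 0.0, 1.0, p_max)   # fully lit if in front of the mean occluder

# Usage: filtered depth moments from the light pass, one receiver behind the occluders.
print(chebyshev_visibility(mean_depth=1.0, mean_depth_sq=1.02, receiver_depth=1.5))
```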

    Soft Textured Shadow Volume

    Efficiently computing robust soft shadows is a challenging and time-consuming task. On the one hand, the quality of image-based shadows is inherently limited by the discrete nature of their framework. On the other hand, object-based algorithms do not exhibit such discretization issues, but they can only efficiently deal with triangles having a constant transmittance factor. This paper addresses this limitation. We propose a general algorithm for the computation of robust and accurate soft shadows for triangles with spatially varying transmittance. We then show how this technique can be efficiently included into object-based soft shadow algorithms. This results in a unified object-based framework for computing robust direct shadows for both standard and perforated triangles in fully animated scenes.
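
    The key ingredient is letting each occluding triangle attenuate light by a spatially varying transmittance rather than a constant factor; the sketch below multiplies transmittance values sampled at every triangle a shadow ray crosses. The texture lookup and data layout are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def transmittance_along_ray(crossings, textures):
    """Light reaching the receiver is the product of the transmittance sampled on
    every perforated triangle the shadow ray crosses; a constant-alpha triangle
    is just the special case of a uniform texture."""
    t = 1.0
    for tri_id, (u, v) in crossings:
        tex = textures[tri_id]                      # per-triangle transmittance map in [0, 1]
        h, w = tex.shape
        t *= tex[int(v * (h - 1)), int(u * (w - 1))]
    return t

# Usage: a ray crossing one perforated leaf texture and one half-transparent gauze.
leaf  = np.array([[1.0, 0.0], [0.0, 1.0]])          # holes transmit fully, solid parts block
gauze = np.full((2, 2), 0.5)
print(transmittance_along_ray([(0, (0.1, 0.1)), (1, (0.7, 0.3))], [leaf, gauze]))
```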

    Real-time Shadows for Gigapixel Displacement Maps

    Shadows convey helpful information in scenes. From a scientific visualization standpoint, they help to add data without unnecessary clutter. In video games they add realism and depth. In common graphics pipelines, shadows are difficult to achieve because geometric primitives are rendered independently and in parallel: objects require knowledge of each other, so multiple renders are needed to collect the necessary data, and collecting this data comes with its own set of trade-offs. Our research involves adding shadows to a lunar rendering framework developed by Dr. Robert Kooima. The NASA-collected data contains a multi-gigapixel displacement map describing the lunar topology. This map does not fit entirely into main memory, so out-of-core paging is used to achieve real-time speeds. Current shadow techniques do not attempt to generate occluder data at such a scale, so we have developed a novel approach to fit this situation. Using a chain of pre-processing steps, we analyze the structure of the displacement map and calculate horizon lines at each vertex. This information is saved into several images and used to generate shadows in a single pass, maintaining real-time speeds. The algorithm is even capable of generating soft shadows without extra information or loss of speed. We compare our algorithm with common approaches in the field as well as with two forms of ground truth: one from ray tracing and the other from the gigapixel lunar texture data, which shows real shadows at the time it was collected.
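
    The precomputed-horizon idea can be sketched on a 1-D heightfield: each sample stores the maximum elevation angle toward the light's azimuth, and at run time a single comparison against the sun's elevation decides shadowing. The layout and names below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def horizon_angles(heights, spacing=1.0):
    """Precomputation: for each sample of a 1-D heightfield, find the maximum
    elevation angle to any sample further along the light's azimuth."""
    n = len(heights)
    angles = np.full(n, -np.pi / 2)
    for i in range(n):
        for j in range(i + 1, n):
            angle = np.arctan2(heights[j] - heights[i], (j - i) * spacing)
            angles[i] = max(angles[i], angle)
    return angles

def lit(horizon, sun_elevation):
    """Run-time test: a vertex is lit when the sun is above its stored horizon angle."""
    return sun_elevation > horizon

heights = np.array([0.0, 0.0, 3.0, 0.0, 0.0])       # a single ridge
hz = horizon_angles(heights)
print(lit(hz, sun_elevation=np.radians(30)))         # samples before the ridge are shadowed
```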

    Scalable ray tracing with multiple GPGPUs

    Rapid development in the field of computer graphics over the last 40 years has brought forth different techniques to render scenes. Rasterization is today’s most widely used technique, which in its most basic form sequentially draws thousands of polygons and applies texture to them. Ray tracing is an alternative method that mimics light transport by using rays to sample a scene in memory and render the color found at each ray’s scene intersection point. Although mainstream hardware directly supports rasterization, ray tracing would be the preferred technique due to its ability to produce highly crisp and realistic graphics, if hardware were not a limitation. Making an immediate hardware transition from rasterization to ray tracing would have a severe impact on the computer graphics industry, since it would require redevelopment of existing software that employs 3D graphics, so any transition to ray tracing would be gradual. Previous efforts to perform ray tracing on mainstream rasterizing hardware platforms with a single processor have performed poorly. This thesis explores how a multi-GPGPU system can be used to render scenes via ray tracing. A ray tracing engine and API groundwork was developed using NVIDIA’s CUDA (Compute Unified Device Architecture) GPGPU programming environment and was used to evaluate performance scalability across a multi-GPGPU system. This engine supports triangle, sphere, disc, rectangle, and torus rendering. It also allows independent activation of graphics features including procedural texturing, Phong illumination, reflections, translucency, and shadows. Correctness of rendered images validates the ray-traced results, and timing of rendered scenes benchmarks performance. The main test scene contains all object types, has a total of 32 objects, and applies all graphics features. Ray tracing this scene using two GPGPUs outperformed the single-GPGPU and single-CPU systems, yielding respective speedups of up to 1.8 and 31.25. The results demonstrate how much potential exists in treating a modern dual-GPU architecture as a dual-GPGPU system in order to facilitate a transition from rasterization to ray tracing.
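
    As a toy illustration of the ray-tracing core and of how the image can be split across devices, the sketch below intersects camera rays with a single sphere and divides the image rows between two workers; it is a CPU NumPy stand-in under assumed scene parameters, not the thesis's CUDA engine.

```python
import numpy as np

def ray_sphere_hit(origins, dirs, center, radius):
    """Per-ray hit mask and distance t for a single sphere (dirs must be normalized)."""
    oc = origins - center
    b = np.einsum('ij,ij->i', oc, dirs)
    c = np.einsum('ij,ij->i', oc, oc) - radius ** 2
    disc = b * b - c
    hit = disc >= 0
    t = -b - np.sqrt(np.where(hit, disc, 0.0))
    return hit & (t > 0), t

def render_rows(rows, width, height):
    """Trace one horizontal slice of the image; each worker would get its own slice."""
    ys, xs = np.meshgrid(rows, np.arange(width), indexing='ij')
    dirs = np.stack([(xs - width / 2) / width,
                     (ys - height / 2) / height,
                     np.ones_like(xs, dtype=float)], axis=-1).reshape(-1, 3)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origins = np.zeros_like(dirs)
    hit, _ = ray_sphere_hit(origins, dirs, center=np.array([0.0, 0.0, 3.0]), radius=1.0)
    return hit.reshape(len(rows), width)

# Two workers split the 64x64 image by rows; in the thesis this split is across GPGPUs.
h, w = 64, 64
image = np.vstack([render_rows(np.arange(0, h // 2), w, h),
                   render_rows(np.arange(h // 2, h), w, h)])
print(image.sum(), "pixels hit the sphere")
```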
