471 research outputs found

    Integrating Occlusion Culling and Hardware Instancing for Efficient Real-Time Rendering of Building Information Models

    This paper presents an efficient approach for integrating occlusion culling and hardware instancing. The work is primarily targeted at Building Information Models (BIM), which typically exhibit the characteristics addressed by these two acceleration techniques separately: a high level of occlusion and frequent reuse of building components. Together, the two techniques complement each other and allow large and complex BIMs to be rendered in real time. Specifically, the proposed method takes advantage of temporal coherence and uses a lightweight data transfer strategy to provide an efficient hardware instancing implementation. Compared to using occlusion culling alone, additional speedups of 1.25x-1.7x are achieved for rendering large BIMs obtained from real-world projects. These speedups are measured at viewpoints that represent the worst-case scenarios in terms of rendering performance when only occlusion culling is utilized.
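    The combination described above can be pictured with a minimal CPU-side sketch, assuming an instanced renderer that keeps the previous frame's occlusion-test result per object; the types and function names below (Instance, buildVisibleBatches) are illustrative assumptions, not the paper's implementation.

    // Instances that were visible last frame are grouped per mesh and drawn
    // with one instanced call; only their transforms are uploaded this frame
    // (the lightweight data transfer idea). The rest are re-tested with
    // occlusion queries so the visibility set is updated for the next frame.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Mat4 { float m[16]; };             // per-instance world transform

    struct Instance {
        uint32_t meshId;                      // shared geometry (e.g. a window type)
        Mat4     world;                       // placement of this building component
        bool     visibleLastFrame;            // result of last frame's occlusion test
    };

    // Group last-frame-visible instances by mesh so each group maps to a single
    // instanced draw call.
    std::unordered_map<uint32_t, std::vector<Mat4>>
    buildVisibleBatches(const std::vector<Instance>& scene)
    {
        std::unordered_map<uint32_t, std::vector<Mat4>> batches;
        for (const Instance& inst : scene)
            if (inst.visibleLastFrame)
                batches[inst.meshId].push_back(inst.world);
        return batches;
    }

    int main()
    {
        std::vector<Instance> scene;  // filled from the BIM in a real application

        auto batches = buildVisibleBatches(scene);
        for (const auto& [meshId, transforms] : batches) {
            // 1. Upload `transforms` to a per-instance buffer, touching only the
            //    visible subset (e.g. glBufferSubData).
            // 2. Issue one instanced draw call for `meshId`
            //    (e.g. glDrawElementsInstanced with transforms.size() instances).
            (void)meshId; (void)transforms;
        }
        // 3. Render bounding volumes of the remaining instances with hardware
        //    occlusion queries; the results update visibleLastFrame for the
        //    next frame (temporal coherence).
        return 0;
    }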

    CGAMES'2009


    Efficient algorithms for occlusion culling and shadows

    The goal of this research is to develop more efficient techniques for computing visibility and shadows in real-time rendering of three-dimensional scenes. Visibility algorithms determine what is visible from a camera, whereas shadow algorithms solve the same problem from the viewpoint of a light source. In rendering, considerable computational resources are often spent on primitives that are not visible in the final image. One visibility algorithm for reducing this overhead is occlusion culling, which quickly discards objects or primitives that are obstructed from the view by other primitives. A new method is presented for performing occlusion culling using silhouettes of meshes instead of triangles. Additionally, modifications are suggested to occlusion queries in order to reduce their computational overhead.
    The performance of currently available graphics hardware depends on the ordering of input primitives. A new technique, called delay streams, is proposed as a generic solution to order-dependent problems. The technique significantly reduces the pixel processing requirements by improving the efficiency of occlusion culling inside graphics hardware. Additionally, the memory requirements of order-independent transparency algorithms are reduced.
    A shadow map is a discretized representation of the scene geometry as seen by a light source. Typically the discretization causes difficult aliasing issues, such as jagged shadow boundaries and incorrect self-shadowing. A novel solution is presented for suppressing all types of aliasing artifacts by providing the correct sampling points for shadow maps, thus fully abandoning the previously used regular structures. Also, a simple technique is introduced for limiting shadow map lookups to the pixels that project inside the shadow map. The fill-rate problem of hardware-accelerated shadow volumes is greatly reduced with a new hierarchical rendering technique. The algorithm performs per-pixel shadow computations only at visible shadow boundaries, and uses lower-resolution shadows for the parts of the screen that are guaranteed to be either fully lit or fully in shadow.
    The proposed techniques are expected to improve rendering performance in most real-time applications that use 3D graphics, especially computer games. More efficient algorithms for occlusion culling and shadows are important steps towards larger, more realistic virtual environments.
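    To make the shadow-map aliasing discussion concrete, here is a minimal sketch of a plain shadow-map test, not the thesis's alias-free method: scene depth as seen from the light is stored in a regular grid, each shaded point is reprojected into light space and compared against the stored depth, and a small bias (an assumed constant here) is the usual workaround for incorrect self-shadowing. The ShadowMap type and isLit function are illustrative names.

    #include <algorithm>
    #include <vector>

    struct ShadowMap {
        int width = 0, height = 0;
        std::vector<float> depth;             // light-space depth per texel
        float sample(float u, float v) const  // nearest-neighbour lookup
        {
            int x = std::clamp(static_cast<int>(u * width),  0, width  - 1);
            int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
            return depth[y * width + x];
        }
    };

    // Returns true if the point (given in light clip space, already divided by w)
    // is lit. The regular discretization is what causes jagged shadow boundaries,
    // and the bias works around self-shadowing ("shadow acne").
    bool isLit(const ShadowMap& sm, float lx, float ly, float lightDepth)
    {
        const float bias = 0.0015f;                        // assumed constant
        float u = lx * 0.5f + 0.5f, v = ly * 0.5f + 0.5f;  // to [0,1] texture space
        if (u < 0.f || u > 1.f || v < 0.f || v > 1.f)
            return true;                                   // outside the map: assume lit
        return lightDepth - bias <= sm.sample(u, v);
    }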

    Conservative occlusion culling for urban visualization using a slice-wise data structure

    In this paper, we propose a framework for urban visualization using a conservative from-region visibility algorithm based on occluder shrinking. The visible geometry in a typical urban walkthrough mainly consists of partially visible buildings. Occlusion-culling algorithms in which the granularity is buildings process these partially visible buildings as if they were completely visible. To address the problem of partial visibility, we propose a data structure, called the slice-wise data structure, that represents buildings in terms of slices parallel to the coordinate axes. We observe that the visible parts of the objects usually have simple shapes, and this observation establishes the basis for occlusion culling in which the occlusion granularity is individual slices. The proposed slice-wise data structure has minimal storage requirements. We also propose to shrink general 3D occluders in a scene to obtain volumetric occlusion. Empirical results show that a significant increase in frame rates and a significant decrease in the number of processed polygons can be achieved using the proposed slice-wise occlusion culling, as compared to an occlusion-culling method in which the granularity is individual buildings.
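    As a rough illustration of the slice-wise idea, and not the authors' actual data structure, the sketch below buckets a building's triangles into slabs along one coordinate axis so that a visibility pass can mark and render individual slices of a partially visible building. Slice, Building and buildSlices are assumed names, and triangles straddling slab boundaries are handled naively here.

    #include <vector>

    struct Triangle { float v[3][3]; };

    struct Slice {
        std::vector<Triangle> tris;   // geometry falling into this slab
        bool visible = false;         // set by the occlusion-culling pass
    };

    struct Building {
        float minX = 0.f, maxX = 0.f; // extent along the slicing axis
        std::vector<Slice> slices;    // slabs parallel to one coordinate plane

        int sliceIndex(float x, int count) const {
            if (maxX <= minX) return 0;                   // degenerate extent
            float t = (x - minX) / (maxX - minX);
            int i = static_cast<int>(t * count);
            return i < 0 ? 0 : (i >= count ? count - 1 : i);
        }
    };

    // Bucket triangles by the x coordinate of their first vertex (a simplification;
    // straddling triangles would need splitting or duplication in practice).
    void buildSlices(Building& b, const std::vector<Triangle>& tris, int count)
    {
        b.slices.assign(count, Slice{});
        for (const Triangle& t : tris)
            b.slices[b.sliceIndex(t.v[0][0], count)].tris.push_back(t);
    }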

    Doctor of Philosophy

    Ray tracing is an efficient rendering algorithm for scientific visualization: it scales to increasingly large geometry counts within common visualization tools, allows accurate physically based rendering and analysis, and enables enhanced rendering and new visualization techniques. Interactivity is of great importance for data exploration and analysis in order to gain insight into large-scale data, yet increasingly large data sizes are pushing the limits of the brute-force rasterization algorithms present in the most widely used visualization software. Interactive ray tracing presents an alternative rendering solution that scales well on multicore shared-memory machines and multinode distributed systems, and copes with growing geometry counts through logarithmic acceleration-structure traversal. Ray tracing within existing tools also provides enhanced rendering options over current implementations, giving users additional insight from better depth cues while also enabling publication-quality rendering and new models of visualization, such as replicating photographic visualization techniques.
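    The logarithmic scaling mentioned above comes from hierarchical acceleration structures; the sketch below is a generic bounding volume hierarchy traversal, not the code of any particular visualization tool, showing how a ray skips entire subtrees whose bounding boxes it misses. The Node layout, hitBox and traverse are assumed names, and zero ray-direction components are ignored for brevity.

    #include <algorithm>
    #include <vector>

    struct Ray  { float o[3], d[3]; };            // origin, direction (nonzero components assumed)
    struct AABB { float lo[3], hi[3]; };

    struct Node {
        AABB box;
        int  left = -1, right = -1;               // child indices, -1 marks a leaf
        int  firstTri = 0, triCount = 0;          // leaf payload (triangle range)
    };

    // Standard slab test: does the ray hit the box within [0, tMax]?
    bool hitBox(const Ray& r, const AABB& b, float tMax)
    {
        float t0 = 0.f, t1 = tMax;
        for (int a = 0; a < 3; ++a) {
            float inv = 1.f / r.d[a];
            float tn  = (b.lo[a] - r.o[a]) * inv;
            float tf  = (b.hi[a] - r.o[a]) * inv;
            if (tn > tf) std::swap(tn, tf);
            t0 = std::max(t0, tn);
            t1 = std::min(t1, tf);
            if (t0 > t1) return false;
        }
        return true;
    }

    // Subtrees whose boxes are missed are never visited, so the number of nodes
    // touched per ray grows roughly with the logarithm of the scene size.
    void traverse(const std::vector<Node>& nodes, int idx, const Ray& r,
                  std::vector<int>& hitLeaves)
    {
        const Node& n = nodes[idx];
        if (!hitBox(r, n.box, 1e30f)) return;
        if (n.left < 0) { hitLeaves.push_back(idx); return; }  // leaf: intersect its triangles
        traverse(nodes, n.left,  r, hitLeaves);
        traverse(nodes, n.right, r, hitLeaves);
    }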

    Real-time rendering of cities at night

    In image synthesis, determining the final color of a surface at a given image pixel must consider all potential light sources and evaluate whether they contribute to the illumination. Since such evaluation is slow, real-time renderers traditionally do not evaluate every light source and instead preemptively choose locally important light sources for which to evaluate visibility. A city at night is such a scene, containing so many light sources that modern real-time renderers cannot afford to evaluate every one of them at every frame. We present a technique exploiting the spatial coherence of cities and the temporal coherence of real-time walkthroughs to reduce visibility evaluations in such scenes. Our technique uses the natural and predominant occluders of a city to efficiently reduce the number of light sources to evaluate. To further accelerate the evaluation, we project the bounding boxes of buildings instead of their detailed models (these boxes are assumed to be oriented mostly along a few dominant directions), and fuse adjacent occluders on an occlusion plane to form larger, conservative occluders. Our technique also integrates results from camera visibility to further reduce the number of visibility evaluations executed per frame, and evaluates light-source visibility only for facades visible from the camera's point of view.
    Finally, we integrate an offline rendering technique, Lightcuts, by adapting it to real-time GPU rendering to further save on rendering time. Even though our technique does not achieve real-time frame rates in complex scenes, it reduces the complexity of the problem enough that we can hope to reach such frame rates one day.
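    A hedged sketch of the occluder test at the heart of such an approach (generic, not the paper's exact algorithm): each candidate light is kept only if the segment from the shaded point to the light misses every merged axis-aligned occluder box. Occluder, segmentHitsBox and lightVisible are assumed names.

    #include <algorithm>
    #include <vector>

    struct Vec3     { float x, y, z; };
    struct Occluder { Vec3 lo, hi; };   // merged, conservative building box

    // Slab test for the segment p -> l against an axis-aligned box.
    bool segmentHitsBox(Vec3 p, Vec3 l, const Occluder& b)
    {
        float d[3]  = { l.x - p.x, l.y - p.y, l.z - p.z };
        float o[3]  = { p.x, p.y, p.z };
        float lo[3] = { b.lo.x, b.lo.y, b.lo.z };
        float hi[3] = { b.hi.x, b.hi.y, b.hi.z };
        float t0 = 0.f, t1 = 1.f;                  // restrict to the segment
        for (int a = 0; a < 3; ++a) {
            if (d[a] == 0.f) {                     // segment parallel to this slab
                if (o[a] < lo[a] || o[a] > hi[a]) return false;
                continue;
            }
            float tn = (lo[a] - o[a]) / d[a];
            float tf = (hi[a] - o[a]) / d[a];
            if (tn > tf) std::swap(tn, tf);
            t0 = std::max(t0, tn);
            t1 = std::min(t1, tf);
            if (t0 > t1) return false;
        }
        return true;
    }

    // A light contributes only if no occluder box blocks the segment to it.
    bool lightVisible(Vec3 surfacePoint, Vec3 lightPos,
                      const std::vector<Occluder>& occluders)
    {
        for (const Occluder& b : occluders)
            if (segmentHitsBox(surfacePoint, lightPos, b))
                return false;
        return true;
    }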

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with considerable functional overlap, yet little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than just its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Triangle Dropping: An occluded-geometry predictor for energy-efficient mobile GPUs

    This article proposes a novel micro-architecture approach for mobile GPUs aimed at removing occluded geometry early by leveraging frame-to-frame coherence, thus reducing overall energy consumption. Mobile GPUs commonly implement a Tile-Based Rendering (TBR) architecture with two main phases: the Geometry Pipeline, where all the geometry of a scene is processed, and the Raster Pipeline, where primitives are rendered into a framebuffer. After the Geometry Pipeline, only non-culled primitives inside the camera's frustum are stored in the Parameter Buffer, a data structure kept in DRAM. However, a significant fraction of these non-culled primitives are processed but never visible, resulting in useless computation; on average, 60% of them are completely occluded in our benchmarks. Although TBR architectures use on-chip caches for the Parameter Buffer, about 46% of the DRAM traffic still comes from accesses to this buffer. The proposed Triangle Dropping technique leverages the visibility information computed along the Raster Pipeline to predict the primitives' visibility in the next frame and discard early those that will be totally occluded, drastically reducing Parameter Buffer accesses. On average, our approach achieves 14.5% overall energy savings, 28.2% energy-delay product savings, and a speedup of 20.2%. This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant no. 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00 (AEI/FEDER, EU), and the ICREA Academia program. D. Corbalán-Navarro has also been supported by a PhD research fellowship from the University of Murcia's "Plan Propio de Investigación".
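    As a heavily simplified software analogue of the predictor (the actual proposal is a hardware micro-architecture inside the GPU), the sketch below keeps per-primitive visibility from the previous frame and lets the geometry stage skip primitives predicted to remain occluded. VisibilityHistory and the primitive-id scheme are assumptions, and a real design also needs a recovery path for primitives that become visible again.

    #include <cstdint>
    #include <unordered_set>

    struct VisibilityHistory {
        std::unordered_set<uint64_t> visibleLastFrame;  // primitive ids that reached the screen
        std::unordered_set<uint64_t> visibleThisFrame;

        // Geometry stage: predict that a primitive occluded last frame stays occluded.
        // (A real predictor must also handle mis-predictions for newly visible primitives.)
        bool predictVisible(uint64_t primId) const {
            return visibleLastFrame.count(primId) != 0;
        }

        // Raster stage: called whenever a primitive produces visible fragments.
        void markVisible(uint64_t primId) { visibleThisFrame.insert(primId); }

        // End of frame: this frame's results become next frame's prediction.
        void endFrame() {
            visibleLastFrame.swap(visibleThisFrame);
            visibleThisFrame.clear();
        }
    };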

    Methods for Automated Creation and Efficient Visualisation of Large-Scale Terrains based on Real Height-Map Data

    Real-time rendering of large-scale terrains is a difficult problem and remains an active field of research. The massive scale of these landscapes, where the ratio between the size of the terrain and its resolution spans multiple orders of magnitude, requires an efficient level-of-detail strategy. It is crucial that the geometry, as well as the terrain data, is represented seamlessly at varying distances while maintaining constant visual quality. This thesis investigates common techniques and previous solutions to the problems associated with rendering height-field terrains and discusses their benefits and drawbacks. Subsequently, two solutions to the stated problems are presented, which build and expand upon state-of-the-art rendering methods. A seamless and efficient mesh representation is achieved by the novel Uniform Distance-Dependent Level of Detail (UDLOD) triangulation method. This fully GPU-based algorithm subdivides a quadtree covering the terrain into small tiles, which can be culled in parallel and are morphed seamlessly in the vertex shader, resulting in a dense, temporally consistent triangulated mesh. The proposed Chunked Clipmap combines the strengths of both quadtrees and clipmaps to enable efficient out-of-core paging of terrain data. This data structure allows constant-time view-dependent access, degrades gracefully if data is unavailable, and supports trilinear and anisotropic filtering. Together, these otherwise independent techniques enable the rendering of large-scale real-world terrains, which is demonstrated on a dataset encompassing the entire Free State of Saxony at a resolution of one meter, in real time.
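    A generic sketch of the distance-dependent quadtree refinement that such tile-based schemes build on (not the thesis's exact UDLOD criterion): a tile is split while the camera is closer than a multiple of its size, otherwise it is emitted at its current level. Tile, selectTiles and the splitFactor constant are illustrative assumptions.

    #include <cmath>
    #include <vector>

    struct Tile { float cx, cz, size; int level; };

    static float distanceTo(const Tile& t, float camX, float camZ)
    {
        // Distance from the camera's ground position to the tile centre.
        float dx = t.cx - camX, dz = t.cz - camZ;
        return std::sqrt(dx * dx + dz * dz);
    }

    // Recursively select tiles: near tiles are refined, far tiles stay coarse.
    void selectTiles(const Tile& t, float camX, float camZ, int maxLevel,
                     std::vector<Tile>& out)
    {
        const float splitFactor = 2.5f;  // assumed threshold multiplier
        if (t.level < maxLevel && distanceTo(t, camX, camZ) < splitFactor * t.size) {
            float h = t.size * 0.5f, q = t.size * 0.25f;
            for (int i = 0; i < 4; ++i) {
                Tile child { t.cx + ((i & 1) ? q : -q),
                             t.cz + ((i & 2) ? q : -q),
                             h, t.level + 1 };
                selectTiles(child, camX, camZ, maxLevel, out);
            }
        } else {
            out.push_back(t);            // render this tile at its current level
        }
    }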