155 research outputs found

    Conservative occlusion culling for urban visualization using a slice-wise data structure

    In this paper, we propose a framework for urban visualization using a conservative from-region visibility algorithm based on occluder shrinking. The visible geometry in a typical urban walkthrough mainly consists of partially visible buildings. Occlusion-culling algorithms whose granularity is buildings process these partially visible buildings as if they were completely visible. To address the problem of partial visibility, we propose a data structure, called the slice-wise data structure, that represents buildings in terms of slices parallel to the coordinate axes. We observe that the visible parts of the objects usually have simple shapes, and this observation forms the basis for occlusion culling in which the granularity is individual slices. The proposed slice-wise data structure has minimal storage requirements. We also propose shrinking general 3D occluders in a scene to find volumetric occlusion. Empirical results show that a significant increase in frame rates and a decrease in the number of processed polygons can be achieved with the proposed slice-wise occlusion culling, compared to an occlusion-culling method whose granularity is individual buildings. © 2007 Elsevier Inc. All rights reserved.
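
    The occluder-shrinking idea admits a compact illustration: if every occluder is eroded by the radius of the view cell, visibility sampled from the cell center becomes valid for every viewpoint in the cell. A minimal sketch for axis-aligned box occluders (the struct and function names are ours, not the paper's):

```cpp
// Axis-aligned box given by its min/max corners.
struct AABB {
    float min[3];
    float max[3];
};

// Occluder shrinking for from-region visibility: erode the box by the
// view-cell radius r. Anything hidden behind the shrunken box from the
// cell center is hidden behind the original box from every viewpoint
// within distance r of that center. Returns false if the box vanishes.
bool shrinkOccluder(const AABB& in, float r, AABB& out) {
    for (int axis = 0; axis < 3; ++axis) {
        out.min[axis] = in.min[axis] + r;
        out.max[axis] = in.max[axis] - r;
        if (out.min[axis] >= out.max[axis])
            return false; // too thin to remain a useful occluder
    }
    return true;
}
```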

    Real-time rendering of cities at night

    In image synthesis, determining the final color of a surface at a given image pixel requires considering every potential light source and evaluating whether it contributes to the illumination. Because this evaluation is slow, real-time renderers traditionally do not evaluate each light source, and instead preemptively choose locally important light sources for which to evaluate visibility. A city at night is such a scene: it contains so many light sources that modern real-time renderers cannot afford to evaluate every one of them at every frame. We present a technique exploiting the spatial coherence of cities and the temporal coherence of real-time walkthroughs to reduce visibility evaluations in such scenes. Our technique uses the natural and predominant occluders of a city to efficiently reduce the number of light sources to evaluate. To further accelerate the evaluation, we project the bounding boxes of buildings instead of their detailed models (these boxes should be oriented mostly along a few dominant axes), and fuse adjacent occluders on an occlusion plane to form larger, conservative occluders. Our technique also integrates results from camera visibility to further reduce the number of visibility evaluations executed per frame, evaluating light-source visibility only for facades visible from the camera. Finally, we integrate an offline rendering technique, Lightcuts, adapting it to real-time GPU rendering to further reduce rendering time. Even though our technique does not achieve real-time frame rates in a complex scene, it reduces the complexity of the problem enough that we can hope to achieve such frame rates one day.
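
    The core blocker test can be sketched as a segment-versus-box intersection: a light source is conservatively dropped when the segment from the shaded point to the light passes through a building's bounding box. This is an illustrative reduction of the thesis's pipeline (it omits occluder fusion and the camera-visibility coupling); all names are ours:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { float min[3], max[3]; };

// Slab test: does the segment from p to q pass through the box?
bool segmentHitsBox(const Vec3& p, const Vec3& q, const AABB& b) {
    const float po[3] = { p.x, p.y, p.z };
    const float qo[3] = { q.x, q.y, q.z };
    float t0 = 0.0f, t1 = 1.0f; // parametric clip interval along the segment
    for (int a = 0; a < 3; ++a) {
        const float d = qo[a] - po[a];
        if (d == 0.0f) {
            if (po[a] < b.min[a] || po[a] > b.max[a]) return false;
        } else {
            float tA = (b.min[a] - po[a]) / d;
            float tB = (b.max[a] - po[a]) / d;
            if (tA > tB) std::swap(tA, tB);
            t0 = std::max(t0, tA);
            t1 = std::min(t1, tB);
            if (t0 > t1) return false; // slabs do not overlap: no hit
        }
    }
    return true;
}

// Keep only the lights that no building box blocks from the shaded point.
std::vector<Vec3> visibleLights(const Vec3& shaded,
                                const std::vector<Vec3>& lights,
                                const std::vector<AABB>& blockers) {
    std::vector<Vec3> kept;
    for (const Vec3& light : lights) {
        bool blocked = false;
        for (const AABB& b : blockers)
            if (segmentHitsBox(shaded, light, b)) { blocked = true; break; }
        if (!blocked) kept.push_back(light);
    }
    return kept;
}
```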

    Efficient Real-Time Rendering of Building Information Models

    A Building Information Model (BIM) is a powerful concept, since it allows both 2D drawings and 3D models of buildings or facilities to be extracted from the same source of data. Compared to a general 3D CAD model, a BIM is a different kind of representation, since it defines not only geometric data but also information regarding spatial relations and semantics. However, because of the large number of individual objects and the high geometric complexity, 3D data obtained from a BIM is not easily used for real-time rendering without further processing. In this paper we present a culling system specifically designed for efficient real-time rendering of BIMs. By utilizing the unique properties of a BIM, we can build the required data structures without manual modification or expensive preprocessing of the input data. Using hardware occlusion queries together with additional mechanisms based on BIM-specific data, the presented system achieves good culling efficiency for both indoor and outdoor scenes.
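
    A typical hardware-occlusion-query round trip, of the kind such a culling system builds on, looks roughly like the following OpenGL 3.3+ sketch. It assumes a current GL context with loader headers included; drawBounds and drawFull are hypothetical callbacks for the proxy and full geometry, and the blocking readback is simplified relative to production systems:

```cpp
// Sketch of one hardware occlusion query (OpenGL 3.3+); assumes a GL
// context is current. drawBounds() rasterizes cheap bounding geometry,
// drawFull() the real object; both are caller-supplied stand-ins.
template <class DrawBounds, class DrawFull>
void drawIfVisible(DrawBounds drawBounds, DrawFull drawFull) {
    GLuint q = 0;
    glGenQueries(1, &q);

    // Rasterize only the bounding geometry, with all writes disabled,
    // and ask the GPU whether any fragment passed the depth test.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_ANY_SAMPLES_PASSED, q);
    drawBounds();
    glEndQuery(GL_ANY_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Blocking readback for clarity; real systems poll
    // GL_QUERY_RESULT_AVAILABLE or reuse last frame's result instead.
    GLuint anyVisible = 0;
    glGetQueryObjectuiv(q, GL_QUERY_RESULT, &anyVisible);
    if (anyVisible)
        drawFull();
    glDeleteQueries(1, &q);
}
```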

    Emergency crowd simulation for outdoor environments

    We simulate virtual crowds in emergency situations caused by an incident such as a fire, an explosion, or a terrorist attack. We use a continuum-dynamics-based approach to simulate the escaping crowd, which produces more efficient simulations than agent-based approaches. Only the close proximity of the incident region, which includes the crowd affected by the incident, is simulated. We use a model-based rendering approach in which a polygonal mesh is rendered for each agent according to the agent's skeletal motion. To speed up the animation and visualization, we employ an offline occlusion-culling technique: a pedestrian model is animated and rendered only if it is visible according to the precomputed static visibility information. In the preprocessing stage, the navigable area is decomposed into a grid of cells and the from-region visibility of these cells is computed with the help of hardware occlusion queries. © 2009 Elsevier Ltd. All rights reserved.
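
    The runtime side of such precomputed from-region visibility can be as small as a bitset lookup: each navigable cell stores one bit per cell marking possible visibility, and an agent is animated and rendered only if its cell's bit is set for the camera's cell. A sketch under those assumptions (the bit layout is ours; the paper's encoding may differ):

```cpp
#include <cstdint>
#include <vector>

// Precomputed from-region visibility over a grid of navigable cells.
// visibleFrom[c] is a bitset with one bit per cell, set when that cell
// is possibly visible from cell c. The bits would come from the offline
// hardware-occlusion-query pass; here they are plain data.
struct GridPVS {
    int cellCount = 0;
    std::vector<std::vector<uint64_t>> visibleFrom;

    bool isVisible(int fromCell, int targetCell) const {
        const std::vector<uint64_t>& bits = visibleFrom[fromCell];
        return (bits[targetCell / 64] >> (targetCell % 64)) & 1u;
    }
};

// Runtime use: animate and render an agent only when its cell is in the
// PVS of the camera's cell, e.g.
//   for (const Agent& a : agents)
//       if (pvs.isVisible(cameraCell, cellOf(a)))
//           animateAndRender(a);
```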

    Implementing software occlusion culling for real-time applications

    The visualization of complex virtual scenes can be significantly accelerated by applying occlusion culling. In this work we introduce a variant of the Hierarchical Occlusion Map method for use in real-time applications. To avoid using the real objects' geometry, we generate specialized conservative occluders based on axis-aligned bounding boxes, which are converted into coplanar quads and then rasterized on the CPU into a downscaled depth buffer. We implement this method in a 3D scene using a software occlusion-map rasterizer module specifically optimized to rasterize occluder quads into a depth buffer. We demonstrate that this approach effectively increases the number of occluded objects without generating significant runtime overhead.
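
    The heart of such a software rasterizer is a small depth buffer with asymmetric rules: occluders are written with their farthest depth and objects are tested with their nearest, so the answer stays conservative. A simplified sketch that rasterizes screen-space rectangles rather than arbitrary projected quads (a deliberate simplification of the paper's rasterizer; names are ours):

```cpp
#include <algorithm>
#include <vector>

// Downscaled software depth buffer; depth values grow with distance.
struct DepthBuffer {
    int w, h;
    std::vector<float> z;
    DepthBuffer(int w_, int h_) : w(w_), h(h_), z(w_ * h_, 1e30f) {}

    // Write an occluder with its FARTHEST depth so the buffer never
    // claims an occluder is closer than it really is.
    void writeOccluder(int x0, int y0, int x1, int y1, float farZ) {
        for (int y = std::max(0, y0); y < std::min(h, y1); ++y)
            for (int x = std::max(0, x0); x < std::min(w, x1); ++x)
                z[y * w + x] = std::min(z[y * w + x], farZ);
    }

    // Test an object with its NEAREST depth: it is occluded only if
    // every covered texel already holds something strictly closer.
    bool isOccluded(int x0, int y0, int x1, int y1, float nearZ) const {
        for (int y = std::max(0, y0); y < std::min(h, y1); ++y)
            for (int x = std::max(0, x0); x < std::min(w, x1); ++x)
                if (nearZ < z[y * w + x]) return false;
        return true;
    }
};
```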

    Galactica, a digital planetarium that explores the solar system and the Milky Way

    This paper describes a new digital planetarium system that allows interactive visualization of astrophysical data and phenomena in an immersive virtual reality (VR) setting. Taking advantage of the Cave Hollowspace at Lousal infrastructure, we have created a large-scale immersive VR experience, adopting its OpenSceneGraph (OSG) based VR middleware as the basis for our development. Since our goal was to create an underlying system that could scale to arbitrarily large astrophysical datasets, we split our architecture into offline and runtime subsystems: the former is responsible for parsing the available data sources into an SQL database, which is then used by the runtime system to generate the entire VR scene-graph environment for the interactive user experience. Real-time computer graphics requirements led us to adopt several visualization optimization techniques, namely GPU calculation of textured billboards representing stars, view-frustum culling with an octree organization of scene objects, and object occlusion culling, to keep the user experience within interactivity limits. We built a storyboard (the “Galactica” storyboard), which describes and narrates a visual and aural user experience while navigating through the Solar System and the Milky Way, and which was used to measure and evaluate the performance of our visualization acceleration algorithms. The system was tested with an available dataset of the complete Milky Way (including the Solar System), featuring 100,639 textured billboards representing stars and an additional 104,328 polygons representing constellations and planets of the Solar System. We computed the frame rate, GPU traverse time, cull traverse time, and draw traverse time for three visualization conditions: (A) using the standard OSG view-frustum culling technique; (B) using view-frustum culling with our octree organizing the scene's objects; (C) using view-frustum culling with our octree organizing the scene's objects and our occlusion-culling algorithm. We generally concluded that our octree organization, with and without the additional object occlusion culling, outperforms the standard OSG view-frustum culling when around half or less of the dataset is in view of the virtual camera.
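
    Hierarchical view-frustum culling over an octree follows a standard pattern: a node classified fully inside admits its whole subtree without further plane tests, and a node fully outside prunes it. A minimal sketch with the plane test abstracted behind a classifier (names and structure are ours, not the paper's OSG implementation):

```cpp
#include <functional>
#include <vector>

struct AABB { float min[3], max[3]; };
enum class Cull { Outside, Intersects, Inside };

struct OctreeNode {
    AABB bounds;
    std::vector<int> objects;   // indices of objects stored at this node
    OctreeNode* child[8] = {};  // null pointers below leaf nodes
};

// Once a node tests fully inside, the whole subtree is accepted with no
// further plane tests; once outside, the subtree is skipped entirely.
void collectVisible(const OctreeNode* n,
                    const std::function<Cull(const AABB&)>& classify,
                    std::vector<int>& visible,
                    bool inside = false) {
    if (!n) return;
    const Cull c = inside ? Cull::Inside : classify(n->bounds);
    if (c == Cull::Outside) return;
    visible.insert(visible.end(), n->objects.begin(), n->objects.end());
    for (const OctreeNode* ch : n->child)
        collectVisible(ch, classify, visible, c == Cull::Inside);
}
```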

    Distributed visibility servers

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (leaves 54-55). This thesis describes techniques for computing conservative visibility that exploit viewpoint prediction, spatial coherence, and remote visibility servers to increase the rendering performance of a walkthrough client. Identifying visible (or partially visible) geometry from an instantaneous viewpoint in a 3D computer graphics model in real time is an important problem in interactive computer graphics. Since rendering is an expensive process (due to transformations, lighting, and scan conversion), successfully identifying the exact set of visible geometry before rendering increases the frame rate of real-time applications. However, computing this exact set is computationally intensive and prohibitive in real time for large models. For many densely occluded environments that contain a small number of large occluding objects (such as buildings, billboards, and houses), efficient conservative visibility algorithms have been developed to identify a set of occluded objects in real time. These algorithms are conservative because they do not identify the exact set of occluded geometry. While visibility algorithms that identify occluded geometry are useful for increasing the frame rate of interactive applications, previous techniques have not attempted to utilize a set of workstations connected via a local area network as an external compute resource. We demonstrate a configuration with one local viewer and two remote servers. (Eric A. Brittain, S.M.)
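
    The viewpoint-prediction ingredient can be illustrated with simple linear extrapolation: the client predicts where the camera will be after the network round trip, so the server's visibility answer is still valid on arrival. A hypothetical sketch, not the thesis's actual protocol:

```cpp
struct Vec3 { float x, y, z; };

// Linear viewpoint prediction: extrapolate the camera by the expected
// round-trip latency so the remote visibility server computes a result
// for where the client will be when the reply arrives.
Vec3 predictViewpoint(const Vec3& pos, const Vec3& vel, float latencySec) {
    return { pos.x + vel.x * latencySec,
             pos.y + vel.y * latencySec,
             pos.z + vel.z * latencySec };
}
```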

    Visualization of urban environments

    Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2007. Thesis (Ph.D.), Bilkent University, 2007. Includes bibliographical references (leaves 108-118). Modeling and visualization of large geometric environments is a popular research area in computer graphics. In this dissertation, a framework for modeling and stereoscopic visualization of large and complex urban environments is presented. Occlusion culling and view-frustum culling are performed to eliminate most of the geometry that does not contribute to the user's final view. For the occlusion-culling process, the shrinking method is employed, but performed using a novel Minkowski-difference-based approach. In order to represent partial visibility, a novel building representation method, called the slice-wise representation, is developed. This method is able to represent the preprocessed partial visibility with huge reductions in the storage requirement. The resultant visibility list is rendered using a graphics-processing-unit-based algorithm, which fits naturally into the proposed slice-wise representation. The stereoscopic visualization depends on the eye positions calculated during the walkthrough, and the visibility lists for both eyes are determined using the preprocessed occlusion information. The view-frustum culling operation is performed once instead of twice for the two eyes. The proposed algorithms were implemented on personal computers. Performance experiments show that the proposed occlusion-culling method and the slice-wise representation increase the frame rate by 81%; the graphics-processing-unit-based display algorithm increases it by an additional 315% and decreases the storage requirement by 97%, compared to occlusion culling at building-level granularity without the graphics hardware. We show that smooth, real-time visualization of large and complex urban environments can be achieved using the proposed framework. (Türker Yılmaz, Ph.D.)
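
    The single-pass stereoscopic culling can be made conservative with a simple construction: for parallel-axis stereo, each eye sits at most half the eye separation from the cyclopean viewpoint, so pushing every plane of the central frustum outward by that amount yields one frustum containing both eye frusta. A sketch under those assumptions (the dissertation's exact construction is not reproduced here):

```cpp
// Frustum plane n·x + d >= 0 for points inside; n is unit length.
struct Plane { float nx, ny, nz, d; };

// Widen a central (cyclopean) frustum so it conservatively contains
// both eye frusta of a parallel-axis stereo pair: each eye is at most
// eyeSeparation / 2 away, so pushing every plane outward by that
// amount suffices. Culling then runs once for both eyes.
void widenForStereo(Plane planes[6], float eyeSeparation) {
    for (int i = 0; i < 6; ++i)
        planes[i].d += 0.5f * eyeSeparation;
}
```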

    Conservative From-Point Visibility

    Visibility determination has been an important part of computer graphics research for several decades. The first studies of visibility were hidden-line removal algorithms, followed later by hidden-surface removal algorithms. Today's visibility determination is mainly concerned with conservative, object-level techniques. Conservative methods are used to accelerate the rendering process when an exact visibility determination algorithm is present. The Z-buffer is a typical exact visibility determination algorithm, and it is implemented in practically every modern graphics chip. This thesis concentrates on a subset of conservative visibility determination techniques, sometimes called from-point visibility algorithms, which attempt to estimate the set of visible objects as seen from the current viewpoint. These techniques are typically used in real-time graphics applications such as games and virtual environments. The focus is on view-frustum culling, which discards objects outside the viewable volume, and occlusion culling, which identifies objects that are not visible because they are behind other objects. The spatial data structures behind efficient implementations of both are also reviewed, including the maintenance of dynamic scenes and the exploitation of spatial and temporal coherence.
    Contents:
    1. Introduction
    2. Visibility Problem
    3. Scene Organization
       3.1. Bounding Volume Hierarchies and Scene Graphs
       3.2. Spatial Data Structures
       3.3. Regular Grids
       3.4. Quadtrees and Octrees
       3.5. KD-Trees
       3.6. BSP-Trees
       3.7. Exploiting Spatial and Temporal Coherence
       3.8. Dynamic Scenes
       3.9. Summary
    4. View Frustum Culling
       4.1. View Frustum Construction
       4.2. View Frustum Test
       4.3. Hierarchical View Frustum Culling
       4.4. Optimizations
       4.5. Summary
    5. Occlusion Culling
       5.1. Fundamental Concepts
       5.2. Occluder Selection
       5.3. Hardware Occlusion Queries
       5.4. Object-Space Methods
       5.5. Image-Space Methods
       5.6. Summary
    6. Conclusion
    References
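
    The view-frustum test the thesis surveys is commonly implemented with the "positive vertex" optimization: for each frustum plane, only the box corner farthest along the plane normal needs testing. A minimal sketch, assuming inward-facing unit-length plane normals (names are ours):

```cpp
// Frustum plane n·x + d >= 0 for points inside; n is unit length.
struct Plane { float nx, ny, nz, d; };
struct AABB  { float min[3], max[3]; };

// "Positive vertex" test: for each plane, check only the box corner
// farthest along the plane normal. If even that corner is behind the
// plane, the whole box lies outside the frustum.
bool boxOutsideFrustum(const AABB& b, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        const Plane& p = planes[i];
        const float px = (p.nx >= 0.0f) ? b.max[0] : b.min[0];
        const float py = (p.ny >= 0.0f) ? b.max[1] : b.min[1];
        const float pz = (p.nz >= 0.0f) ? b.max[2] : b.min[2];
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
            return true; // completely outside this plane
    }
    return false; // inside or intersecting: kept conservatively
}
```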