45 research outputs found

    Efficient geometric sound propagation using visibility culling

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with a simultaneously moving source and moving receiver (MS-MR), which incurs less than 25% overhead compared to the static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenarios.
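
    The image-source construction used here for specular reflections can be illustrated with a few lines of vector math. The following is a minimal sketch of a first-order reflection path off a single infinite plane; the names are illustrative, and the thesis's FastV/AD-Frustum visibility tests, which decide whether such a path is actually unoccluded in a full scene, are not shown.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror the source S across the plane through point P with unit normal N.
static Vec3 imageSource(Vec3 S, Vec3 P, Vec3 N) {
    return sub(S, mul(N, 2.0 * dot(sub(S, P), N)));
}

int main() {
    Vec3 S = {0.0, 2.0, 0.0};   // sound source
    Vec3 R = {4.0, 1.0, 0.0};   // receiver
    Vec3 P = {0.0, 0.0, 0.0};   // point on the reflecting plane (the floor y = 0)
    Vec3 N = {0.0, 1.0, 0.0};   // unit plane normal

    // First-order specular path: reflect S to obtain the image source S',
    // then intersect the segment S'->R with the plane to find the reflection
    // point. The total path length equals |S' - R|. A real system would also
    // check that t lies in [0,1] and that the hit point is on the finite
    // reflector and unoccluded.
    Vec3 Si = imageSource(S, P, N);
    double t = dot(sub(P, Si), N) / dot(sub(R, Si), N);
    Vec3 hit = add(Si, mul(sub(R, Si), t));

    Vec3 d = sub(R, Si);
    double pathLength = std::sqrt(dot(d, d));
    std::printf("reflection point: (%g, %g, %g), path length: %g\n",
                hit.x, hit.y, hit.z, pathLength);
    return 0;
}
```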

    Conservative From-Point Visibility.

    Visibility determination has been an important part of computer graphics research for several decades. The first studies of visibility were hidden-line removal algorithms, followed later by hidden-surface removal algorithms. Today, visibility determination is mainly concerned with conservative, object-level techniques. Conservative methods are used to accelerate rendering when some exact visibility determination algorithm is present; the Z-buffer is a typical exact visibility algorithm and is implemented in practically every modern graphics chip. This thesis concentrates on a subset of conservative visibility determination techniques, sometimes called from-point visibility algorithms, which estimate the set of visible objects as seen from the current viewpoint. These techniques are typically used in real-time graphics applications such as games and virtual environments. The focus is on view frustum culling and occlusion culling (a minimal sketch of the basic frustum test is given after the contents list below). View frustum culling discards objects that are outside the viewable volume; occlusion culling algorithms try to identify objects that are not visible because they are behind other objects. The spatial data structures behind efficient implementations of both techniques are also reviewed, including the handling of dynamic scenes and the exploitation of spatial and temporal coherence.
    Contents:
    1. Introduction
    2. Visibility Problem
    3. Scene Organization
        3.1. Bounding Volume Hierarchies and Scene Graphs
        3.2. Spatial Data Structures
        3.3. Regular Grids
        3.4. Quadtrees and Octrees
        3.5. KD-Trees
        3.6. BSP-Trees
        3.7. Exploiting Spatial and Temporal Coherence
        3.8. Dynamic Scenes
        3.9. Summary
    4. View Frustum Culling
        4.1. View Frustum Construction
        4.2. View Frustum Test
        4.3. Hierarchical View Frustum Culling
        4.4. Optimizations
        4.5. Summary
    5. Occlusion Culling
        5.1. Fundamental Concepts
        5.2. Occluder Selection
        5.3. Hardware Occlusion Queries
        5.4. Object-Space Methods
        5.5. Image-Space Methods
        5.6. Summary
    6. Conclusion
    References
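
    The view frustum test (Section 4.2 above) is the part that is easiest to show in a few lines. The following is a minimal sketch of the standard plane/AABB rejection test, assuming the six frustum planes are stored with inward-facing normals; it is illustrative only and not the thesis's exact implementation.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Plane in the form n.x*x + n.y*y + n.z*z + d >= 0 for points inside the frustum.
struct Plane { Vec3 n; float d; };

struct AABB { Vec3 min, max; };

// A box is outside the frustum if it lies entirely on the negative side of any
// plane. For each plane we test only the box corner farthest along the plane
// normal (the "positive vertex"); if even that corner is outside, the whole
// box is outside. Boxes that merely intersect a plane are conservatively kept.
bool intersectsFrustum(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        Vec3 v;
        v.x = (p.n.x >= 0.0f) ? box.max.x : box.min.x;
        v.y = (p.n.y >= 0.0f) ? box.max.y : box.min.y;
        v.z = (p.n.z >= 0.0f) ? box.max.z : box.min.z;
        if (p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f)
            return false;  // fully outside this plane: cull
    }
    return true;  // inside or intersecting: keep (conservative)
}
```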

    OpenGL-assisted Visibility Queries of Large Polygonal Models

    Publication of the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen.

    Efficient Real-Time Rendering of Building Information Models

    A Building Information Model (BIM) is a powerful concept, since it allows both 2D drawings and 3D models of buildings or facilities to be extracted from the same source of data. Compared to a general 3D CAD model, a BIM is a different kind of representation, since it defines not only geometrical data but also information regarding spatial relations and semantics. However, because of the large number of individual objects and the high geometric complexity, 3D data obtained from a BIM are not easily used for real-time rendering without further processing. In this paper we present a culling system specifically designed for efficient real-time rendering of BIMs. By utilizing the unique properties of a BIM we can form the required data structures without manual modification or expensive preprocessing of the input data. Using hardware occlusion queries together with additional mechanisms based on specific BIM data, the presented system achieves good culling efficiency for both indoor and outdoor cases.
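
    The hardware occlusion queries mentioned above follow a standard OpenGL pattern: render a cheap proxy (typically the object's bounding box) with a query active and ask how many samples passed the depth test. The sketch below shows only this generic pattern, not the paper's culling system; it assumes a current OpenGL 1.5+ context loaded through GLEW, and drawBoundingBox is a hypothetical helper.

```cpp
#include <GL/glew.h>

struct Object;                               // scene object (details irrelevant here)
void drawBoundingBox(const Object& obj);     // hypothetical helper: renders the object's AABB

// Ask the GPU whether any fragment of the object's bounding box would pass the
// depth test against the occluders already rendered this frame.
bool isPotentiallyVisible(const Object& obj)
{
    GLuint query = 0;
    glGenQueries(1, &query);

    // Render only proxy geometry, without writing color or depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox(obj);
    glEndQuery(GL_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);  // blocks until the result is ready
    glDeleteQueries(1, &query);
    return samplesPassed > 0;
}
```

    In practice the result is not fetched right after each query; queries for many candidate objects are issued in a batch and collected later (or reused from the previous frame) so the CPU does not stall waiting for the GPU.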

    Implementing software occlusion culling for real-time applications

    The visualization of complex virtual scenes can be significantly accelerated by applying occlusion culling. In this work we introduce a variant of the Hierarchical Occlusion Map method to be used in real-time applications. To avoid using the real object geometry, we generate specialized conservative occluders based on axis-aligned bounding boxes, which are converted into coplanar quads and then rasterized on the CPU into a downscaled depth buffer. We implement this method in a 3D scene using a software occlusion map rasterizer module specifically optimized to rasterize occluder quads into a depth buffer. We demonstrate that this approach effectively increases the number of occluded objects without generating significant runtime overhead. Track: Computer Graphics, Images and Visualization Workshop (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
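
    As an illustration of the occluder rasterization step described above, here is a minimal sketch that splats one screen-space, constant-depth occluder quad into a downscaled software depth buffer. The names, the data layout, and the "fully covered texels only" rule are illustrative assumptions, not the paper's implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Downscaled software depth buffer (e.g. a fraction of the screen resolution).
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;  // per texel, nearest occluder depth; 1.0f = far plane
    DepthBuffer(int w, int h) : width(w), height(h), depth(w * h, 1.0f) {}
};

// Axis-aligned, constant-depth occluder quad already projected into the
// downscaled buffer's texel coordinates.
struct OccluderQuad {
    float x0, y0, x1, y1;  // screen-space rectangle in texel units
    float z;               // conservative (farthest) depth of the occluder
};

// Rasterize the quad into the depth buffer. Only texels fully covered by the
// quad are written (partially covered border texels are skipped), so the
// buffer remains a conservative representation of the occluder.
void rasterizeOccluder(DepthBuffer& buf, const OccluderQuad& q) {
    int xs = std::max(0, static_cast<int>(std::ceil(q.x0)));
    int ys = std::max(0, static_cast<int>(std::ceil(q.y0)));
    int xe = std::min(buf.width,  static_cast<int>(std::floor(q.x1)));
    int ye = std::min(buf.height, static_cast<int>(std::floor(q.y1)));
    for (int y = ys; y < ye; ++y)
        for (int x = xs; x < xe; ++x) {
            float& d = buf.depth[y * buf.width + x];
            d = std::min(d, q.z);  // keep the nearest occluder depth per texel
        }
}
```

    An occludee can then be declared hidden when every texel covered by its screen-space bounds stores a depth nearer than the occludee's own nearest depth.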

    Interactive ray tracing of massive and deformable models

    Ray tracing is a fundamental algorithm used for many applications such as computer graphics, geometric simulation, collision detection and line-of-sight computation. Even though the performance of ray tracing algorithms scales well with model complexity, the high memory requirements and the use of static hierarchical structures pose problems with massive models and dynamic data-sets. We present several approaches to address these problems based on new acceleration structures and traversal algorithms. We introduce a compact representation for storing the model and hierarchy while ray tracing triangle meshes that can reduce the memory footprint by up to 80% while maintaining high performance. As a result, we can ray trace massive models with hundreds of millions of triangles on workstations with a few gigabytes of memory. We also show how to use bounding volume hierarchies for ray tracing complex models with interactive performance. In order to handle dynamic scenes, we use refitting algorithms and also present highly parallel GPU-based algorithms to reconstruct the hierarchies. In practice, our method can construct hierarchies for models with hundreds of thousands of triangles at interactive speeds. Finally, we demonstrate several applications that are enabled by these algorithms. Using deformable BVHs and fast data-parallel techniques, we introduce a geometric sound propagation algorithm that can run on complex deformable scenes interactively and orders of magnitude faster than comparable previous approaches. In addition, we also use these hierarchical algorithms for fast collision detection between deformable models and for GPU rendering of shadows on massive models by employing our compact representations for hybrid ray tracing and rasterization.
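
    The refitting step mentioned above keeps a bounding volume hierarchy usable after the geometry deforms without rebuilding the tree. Below is a generic recursive refit over an AABB-based BVH with an assumed, simplified node layout; it illustrates the idea only and is not the thesis's GPU reconstruction algorithm.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 lo{ 1e30f,  1e30f,  1e30f};
    Vec3 hi{-1e30f, -1e30f, -1e30f};
    void grow(const Vec3& p) {
        lo.x = std::min(lo.x, p.x); lo.y = std::min(lo.y, p.y); lo.z = std::min(lo.z, p.z);
        hi.x = std::max(hi.x, p.x); hi.y = std::max(hi.y, p.y); hi.z = std::max(hi.z, p.z);
    }
    void grow(const AABB& b) { grow(b.lo); grow(b.hi); }
};

// One BVH node: a leaf references a triangle, an inner node has two children.
struct BVHNode {
    AABB bounds;
    int left = -1, right = -1;  // child indices, -1 for a leaf
    int triangle = -1;          // triangle index, valid only for leaves
};

struct Triangle { Vec3 v0, v1, v2; };

// After the vertices have been deformed, recompute the bounds bottom-up while
// keeping the tree topology fixed. Bound quality degrades under large
// deformation, which is why refitting is combined with periodic reconstruction.
AABB refit(std::vector<BVHNode>& nodes, const std::vector<Triangle>& tris, int idx) {
    BVHNode& n = nodes[idx];
    AABB box;
    if (n.left < 0) {                      // leaf: bound the deformed triangle
        const Triangle& t = tris[n.triangle];
        box.grow(t.v0); box.grow(t.v1); box.grow(t.v2);
    } else {                               // inner node: merge the child bounds
        box.grow(refit(nodes, tris, n.left));
        box.grow(refit(nodes, tris, n.right));
    }
    n.bounds = box;
    return box;
}
```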

    Fast and Accurate Visibility Preprocessing

    Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing algorithms is to use a form of approximate solution, known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm also has the features of output sensitivity (to what is visible), and a logarithmic dependency in the size of the camera space partition. These advantages come at the cost of image error. We present a heuristic-guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm also has the advantage of output sensitivity. This is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we had to first solve a more general form of the standard stabbing problem. An efficient solution to this problem is presented independently.
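
    The abstract does not spell out the sampling heuristic, so the sketch below is only an illustrative stand-in for the adaptive-sampling idea: it recursively subdivides a 2D rectangle of candidate viewpoints and keeps sampling until neighbouring samples roughly agree. The subdivision criterion, the depth limit, and the sampleVisibleSet callback are assumptions for illustration, not the thesis's method.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <set>

// A visible set is the set of object ids seen from one sampled viewpoint;
// sampleVisibleSet(x, y) is assumed to render (or ray cast) from (x, y) and
// return that set. How it is produced is outside the scope of this sketch.
using VisibleSet = std::set<int>;
using SampleFn = std::function<VisibleSet(double, double)>;

static std::size_t symmetricDifferenceSize(const VisibleSet& a, const VisibleSet& b) {
    std::size_t diff = 0;
    for (int id : a) if (!b.count(id)) ++diff;
    for (int id : b) if (!a.count(id)) ++diff;
    return diff;
}

// Accumulate a potentially visible set (PVS) for the rectangle of viewpoints
// [x0,x1] x [y0,y1]. If the corner samples disagree by more than `tolerance`
// ids, split the rectangle into four and recurse; otherwise accept the union.
// (A real implementation would cache corner samples instead of re-evaluating them.)
void samplePVS(const SampleFn& sample, double x0, double y0, double x1, double y1,
               std::size_t tolerance, int depth, VisibleSet& pvs) {
    VisibleSet c[4] = { sample(x0, y0), sample(x1, y0), sample(x0, y1), sample(x1, y1) };
    std::size_t worst = 0;
    for (int i = 0; i < 4; ++i)
        for (int j = i + 1; j < 4; ++j)
            worst = std::max(worst, symmetricDifferenceSize(c[i], c[j]));

    if (worst > tolerance && depth > 0) {
        double mx = 0.5 * (x0 + x1), my = 0.5 * (y0 + y1);
        samplePVS(sample, x0, y0, mx, my, tolerance, depth - 1, pvs);
        samplePVS(sample, mx, y0, x1, my, tolerance, depth - 1, pvs);
        samplePVS(sample, x0, my, mx, y1, tolerance, depth - 1, pvs);
        samplePVS(sample, mx, my, x1, y1, tolerance, depth - 1, pvs);
    } else {
        for (const VisibleSet& s : c) pvs.insert(s.begin(), s.end());
    }
}
```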

    Conservative Visibility Preprocessing Using Extended Projections

    Visualisation of very complex environments can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient occlusion culling with respect to all viewpoints within a cell and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep) and thus accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation of our preprocessing algorithm demonstrating significant speedup with respect to view-frustum culling alone, without the computational overhead of on-line occlusion culling.
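
    The occludee test against an occlusion map can be pictured as follows: a candidate object is culled only if every map pixel touched by its projection is covered by aggregated occluders and the stored occluder depth there is nearer than the occludee's nearest possible depth with respect to the cell. The sketch below is a simplified, axis-aligned stand-in; it omits the extended projection and reprojection machinery that the paper actually builds.

```cpp
#include <vector>

// One pixel of an occlusion map built on a projection plane: whether it is
// covered by aggregated occluders, and a conservative (farthest) depth of the
// occluders covering it.
struct OcclusionPixel {
    bool covered = false;
    float occluderDepth = 0.0f;  // larger value = farther from the viewing cell
};

struct OcclusionMap {
    int width = 0, height = 0;
    std::vector<OcclusionPixel> pixels;
    const OcclusionPixel& at(int x, int y) const { return pixels[y * width + x]; }
};

// The occludee is described by the pixel rectangle its projection covers and
// by the nearest depth it can have as seen from anywhere in the viewing cell.
// It may be culled only if it is hidden at every pixel it touches.
bool isOccluded(const OcclusionMap& map, int x0, int y0, int x1, int y1,
                float occludeeNearestDepth) {
    if (x0 < 0 || y0 < 0 || x1 > map.width || y1 > map.height)
        return false;  // partially outside the map: keep it, to stay conservative
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            const OcclusionPixel& p = map.at(x, y);
            if (!p.covered || occludeeNearestDepth <= p.occluderDepth)
                return false;  // a gap in the occluders, or the occludee may be in front
        }
    return true;
}
```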

    Accelerating Virtual Walkthrough with Visual Culling Techniques

    A virtual walkthrough application allows users to navigate and immerse themselves in a generated 3D environment with the aid of computer graphics. The 3D environment requires a large amount of geometry to look realistic, but as the amount of geometry increases, the performance of the application decreases. Consequently, there is a conflict between the need for realism and the need for real-time performance. In this paper, we discuss the implementation of visual culling techniques such as view frustum culling, back-face culling and occlusion culling in a virtual walkthrough application. We render only what can be seen at runtime and cull away unnecessary geometry, which accelerates the performance of the system. Without culling techniques, a virtual reality application such as a virtual walkthrough has to allocate a large amount of memory to store the geometry data. We have tested these techniques on the Ancient Malacca data. With the visual culling techniques implemented, the virtual walkthrough system can run in real time without sacrificing realism.
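
    Of the three techniques listed, back-face culling is the simplest to show in isolation: a triangle facing away from the camera cannot be seen and can be skipped. A minimal sketch, assuming counter-clockwise front faces; the names are illustrative and this is not the paper's code.

```cpp
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A triangle with counter-clockwise winding (v0, v1, v2) faces the camera at
// position `eye` when its normal points toward the eye. If the normal points
// away, the triangle is back-facing and can be culled before rasterization.
bool isBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
    Vec3 toEye  = sub(eye, v0);
    return dot(normal, toEye) <= 0.0f;
}
```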

    Interactive Sound Propagation for Massive Multi-user and Dynamic Virtual Environments

    Hearing is an important sense, and it is known that rendering sound effects can enhance the level of immersion in virtual environments. Modeling sound waves is a complex problem, requiring vast computing resources to solve accurately. Prior methods are restricted to static scenes or limited acoustic effects. In this thesis, we present methods to improve the quality and performance of interactive geometric sound propagation in dynamic scenes, and precomputation algorithms for acoustic propagation in enormous multi-user virtual environments. We present a method for finding edge diffraction propagation paths on arbitrary 3D scenes for dynamic sources and receivers. Using this algorithm, we present a unified framework for interactive simulation of specular reflections, diffuse reflections, diffraction scattering, and reverberation effects. We also define a guidance algorithm for ray tracing that responds to dynamic environments and reorders queries to minimize simulation time. Our approach works well on modern GPUs and can achieve more than an order of magnitude performance improvement over prior methods. Modern multi-user virtual environments support many types of client devices, and current phones and mobile devices may lack the resources to run acoustic simulations. To provide such devices the benefits of sound simulation, we have developed a precomputation algorithm that efficiently computes and stores acoustic data on a server in the cloud. Using novel algorithms, the server can render enhanced spatial audio in scenes spanning several square kilometers for hundreds of clients in real time. Our method provides the benefits of immersive audio to collaborative telephony, video games, and multi-user virtual environments.
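
    One common way precomputed acoustic data is turned into audible output is to store an impulse response per source/listener region and convolve the dry source signal with it at playback time. The thesis's actual server-side pipeline is not described here, so the direct time-domain convolution below is only a conceptual sketch; real-time systems typically use FFT-based, partitioned convolution instead.

```cpp
#include <cstddef>
#include <vector>

// Convolve a dry (anechoic) source signal with a precomputed room impulse
// response. The output has drySignal.size() + impulseResponse.size() - 1
// samples and contains the direct sound plus its reflections and reverberation.
std::vector<float> convolve(const std::vector<float>& drySignal,
                            const std::vector<float>& impulseResponse) {
    if (drySignal.empty() || impulseResponse.empty()) return {};
    std::vector<float> out(drySignal.size() + impulseResponse.size() - 1, 0.0f);
    for (std::size_t i = 0; i < drySignal.size(); ++i)
        for (std::size_t j = 0; j < impulseResponse.size(); ++j)
            out[i + j] += drySignal[i] * impulseResponse[j];
    return out;
}
```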