
    OpenGL-assisted Visibility Queries of Large Polygonal Models

    Publication of the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen.

    Massive model visualization: An investigation into spatial partitioning

    The current generation of visualization software is incapable of interactively rendering arbitrarily large models. While many solutions have been proposed for massive model visualization, very few achieve the full set of capabilities needed for a computer visualization solution. In most cases this is due to overly complex approaches that, while achieving impressive frame rates, make it virtually impossible to implement features like part manipulation. What is needed is a simple approach whose rendering performance is bounded by screen complexity rather than model size, with primitive traceability to the original model to facilitate part manipulation, and the capability to be modified in near real time. This thesis introduces MMDr, a simple system that achieves interactive frame rates on extremely large data sets while retaining support for most, if not all, of the features required for a computer visualization solution.

    Development and Application of Computer Graphics Techniques for the Visualization of Large Geo-Related Data-Sets

    The goal of this work was to develop and improve algorithms that allow large geographic and other geo-related data sets to be visualized using computer graphics techniques. One focus was the development of new camera-adaptive data structures for digital elevation models and raster images. The thesis first defines a novel multiresolution model for height fields. This model requires very little additional memory and is suitable for guaranteeing interactive adaptation rates. Approaches for quickly determining the visible and occluded parts of a computer graphics scene are then discussed, in order to accelerate navigation through large and extended scenes such as city models or buildings. Following this, several problems related to texture mapping are examined; for example, a new viewer-dependent data structure for texture data and a new approach to texture filtering are presented. Most of these algorithms and methods were integrated into an interactive terrain visualization system, code-named 'FlyAway', which is described in the final chapter of the thesis.
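    Camera-adaptive refinement of this kind is typically driven by a screen-space error test: a terrain block is subdivided only while its world-space geometric error, projected into the image, exceeds a pixel tolerance. The following is a minimal sketch of such a test, not the data structure defined in the thesis; `TerrainBlock`, `needsRefinement`, and the precomputed error values are illustrative assumptions.

```cpp
#include <cmath>

// Hypothetical terrain block: world-space center and a precomputed
// geometric error (max deviation from the full-resolution mesh).
struct TerrainBlock {
    float centerX, centerY, centerZ;
    float geometricError;   // world-space error of this LOD level
};

// Decide whether a block must be refined for the current viewpoint:
// project the geometric error to pixels and refine while it exceeds
// the tolerance (e.g. one pixel).
bool needsRefinement(const TerrainBlock& b,
                     float eyeX, float eyeY, float eyeZ,
                     float screenHeightPx, float verticalFovRad,
                     float pixelTolerance = 1.0f)
{
    float dx = b.centerX - eyeX;
    float dy = b.centerY - eyeY;
    float dz = b.centerZ - eyeZ;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    if (dist <= 0.0f) return true;   // camera inside the block

    // Pixels per world unit at this distance for a perspective camera.
    float pxPerUnit = screenHeightPx
                    / (2.0f * dist * std::tan(verticalFovRad * 0.5f));
    return b.geometricError * pxPerUnit > pixelTolerance;
}
```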

    Conservative occlusion culling for urban visualization using a slice-wise data structure

    In this paper, we propose a framework for urban visualization using a conservative from-region visibility algorithm based on occluder shrinking. The visible geometry in a typical urban walkthrough consists mainly of partially visible buildings. Occlusion-culling algorithms whose granularity is buildings process these partially visible buildings as if they were completely visible. To address the problem of partial visibility, we propose a data structure, called the slice-wise data structure, that represents buildings in terms of slices parallel to the coordinate axes. We observe that the visible parts of the objects usually have simple shapes. This observation establishes the basis for occlusion culling where the occlusion granularity is individual slices. The proposed slice-wise data structure has minimal storage requirements. We also propose shrinking general 3D occluders in a scene to find volumetric occlusion. Empirical results show that a significant increase in frame rates and a decrease in the number of processed polygons can be achieved using the proposed slice-wise occlusion culling, compared to an occlusion-culling method whose granularity is individual buildings.
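    As a rough illustration of the granularity change (the paper's concrete layout and its occluder-shrinking test are not reproduced here), a building can be stored as a list of axis-aligned slices, each with its own bounds and triangle range, and the culling stage then marks individual slices rather than whole buildings. `Slice`, `Building`, and `markVisibleSlices` are assumed names.

```cpp
#include <vector>

// Hypothetical slice-wise representation: a building split into
// axis-aligned slices, each with bounds and a triangle range.
struct Slice {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
    int   firstTriangle;    // range into the building's index buffer
    int   triangleCount;
    bool  visible;          // set per frame by the occlusion test
};

struct Building {
    std::vector<Slice> slices;   // slices parallel to a coordinate axis
};

// Occlusion culling with slice granularity: instead of rendering a
// partially visible building in full, render only its visible slices.
// 'isOccluded' stands in for any conservative visibility test.
template <typename OcclusionTest>
int markVisibleSlices(Building& b, const OcclusionTest& isOccluded)
{
    int visibleCount = 0;
    for (Slice& s : b.slices) {
        s.visible = !isOccluded(s);
        if (s.visible) ++visibleCount;
    }
    return visibleCount;    // 0 means the whole building is culled
}
```

    The point of the finer granularity is that a partially visible building contributes only its visible slices' triangle ranges to the render queue instead of its entire geometry.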

    Large Model Visualization: Techniques and Applications

    The size of datasets in scientific computing is rapidly increasing. This increase is caused by the boost of processing power in past years, which in turn was invested in increasing the accuracy and size of the models. A similar trend enabled a significant improvement of medical scanners; modern scanners routinely generate more than 1000 slices at a resolution of 512x512 in daily practice. Even in computer-aided engineering, typical models easily contain several million polygons. Unfortunately, data complexity is growing faster than the rendering performance of modern computer systems. This is due not only to the more slowly growing graphics performance of the graphics subsystems, but in particular to the significantly more slowly growing memory bandwidth for transferring geometry and image data from main memory to the graphics accelerator. Large model visualization addresses this growing divide between data complexity and rendering performance. Most methods focus on reducing geometric or pixel complexity, which in turn also reduces the memory bandwidth requirements. In this dissertation, we discuss new approaches from three different research areas, all of which target the reduction of processing complexity to achieve interactive visualization of large datasets. In the second part, we introduce applications of the presented approaches. Specifically, we introduce the new VIVENDI system for interactive virtual endoscopy, as well as other applications from mechanical engineering, scientific computing, and architecture.

    Efficient multiple occlusion queries for scene graph systems

    Image-space occlusion culling is a useful approach to reduce the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware now supports occlusion culling directly, but these hardware extensions incur fill-rate and latency costs. In this paper, we propose a new technique for scene graph traversal optimized for the efficient use of occlusion queries. Our approach uses several Occupancy Maps to organize the scene graph traversal. During traversal, hierarchical occlusion culling, view-frustum culling, and rendering are performed. The occlusion information is determined efficiently by asynchronous multiple occlusion queries with hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information in the Occupancy Maps. The presented technique is conservative and benefits from a partial depth ordering of the geometry.
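    For context, hardware occlusion queries of this kind are exposed in OpenGL through the standard query API. The sketch below shows the typical asynchronous pattern of issuing many bounding-box queries before reading any result back; the paper's Occupancy-Map ordering is not shown, and `drawBoundingBox` is an assumed helper.

```cpp
#include <GL/glew.h>
#include <vector>

// Assumed helper that renders cheap proxy geometry for a node.
void drawBoundingBox(unsigned nodeId);

// Issue one hardware occlusion query per candidate node, without
// reading any result yet, so CPU and GPU work can overlap
// (asynchronous multiple queries).
std::vector<GLuint> issueQueries(const std::vector<unsigned>& nodes)
{
    std::vector<GLuint> queries(nodes.size());
    glGenQueries(static_cast<GLsizei>(queries.size()), queries.data());

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
    glDepthMask(GL_FALSE);                                // no depth writes
    for (size_t i = 0; i < nodes.size(); ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawBoundingBox(nodes[i]);        // proxy geometry only
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    return queries;
}

// Later in the frame: a node is occluded if no samples passed.
bool isVisible(GLuint query)
{
    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // may stall
    return samples > 0;
}
```

    Reading GL_QUERY_RESULT can stall if the result is not yet available; real systems poll GL_QUERY_RESULT_AVAILABLE or interleave other work first, which is the latency cost the abstract refers to.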

    Generation of subdivision hierarchies for efficient occlusion culling of large polygonal models

    Publication of the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen.

    An Approach For Computing Intervisibility Using Graphical Processing Units

    In large-scale entity-level military force-on-force simulations, it is essential to know when one entity can see another. This visibility determination plays an important role in the simulation and can affect its outcome. When virtual Computer Generated Forces (CGF) are introduced into the simulation, these intervisibilities must be calculated by the virtual entities on the battlefield, and as the simulation size increases, so does the complexity of calculating visibility between entities. This thesis presents an algorithm for performing these visibility calculations using Graphical Processing Units (GPUs) instead of the Central Processing Units (CPUs) traditionally used in CGF simulations. The algorithm can be distributed across multiple GPUs in a cluster, and its scalability exceeds that of CGF-based algorithms. The poor correlation between the two visibility algorithms is demonstrated, showing that the GPU algorithm provides a necessary condition for a Fair Fight when paired with visual simulations.
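    The underlying test such simulations perform is a line-of-sight check against the terrain. The minimal CPU sketch below illustrates the baseline computation that the thesis offloads to GPUs (the GPU version itself is not shown); `terrainHeight` and the sampling step are assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>

// Assumed regular-grid heightfield lookup: terrain height at (x, y).
float terrainHeight(float x, float y);

// Basic entity-to-entity line-of-sight test: sample the sight line
// and report it blocked if the terrain rises above it anywhere
// between the two entities.
bool canSee(float ax, float ay, float az,    // observer position
            float bx, float by, float bz,    // target position
            float step = 1.0f)               // sample spacing (world units)
{
    float dx = bx - ax, dy = by - ay;
    float dist = std::sqrt(dx*dx + dy*dy);
    int samples = std::max(1, static_cast<int>(dist / step));

    for (int i = 1; i < samples; ++i) {
        float t = static_cast<float>(i) / samples;
        float x = ax + t * dx;
        float y = ay + t * dy;
        float sightZ = az + t * (bz - az);   // height of the sight line
        if (terrainHeight(x, y) > sightZ)
            return false;                    // terrain blocks the line
    }
    return true;
}
```

    Run once per entity pair, this test scales quadratically with entity count, which is why offloading it to one or more GPUs becomes attractive as simulations grow.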

    Tighter bounding volumes for better occlusion culling performance

    Bounding volumes are used in computer graphics to approximate the actual geometric shape of an object in a scene. The main intention is to reduce the costs associated with visibility or interference tests. The most commonly used bounding volumes have been axis-aligned bounding boxes and bounding spheres. In this paper, we propose the use of discrete orientation polytopes (k-DOPs) as bounding volumes specifically for visibility culling. Occlusion tests are computed more accurately using k-DOPs and, most importantly, they are also computed more efficiently. We illustrate this point through a series of experiments using a wide range of data models under varying viewing conditions. Although no bounding volume works best in every situation, k-DOPs are often the best and also work very well in the cases where they are not, so they provide good results without having to analyze applications and different bounding volumes.
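    For illustration, a k-DOP stores the minimum and maximum extent of an object's vertices along k/2 fixed directions, and two k-DOPs built over the same direction set are disjoint as soon as any slab pair is disjoint. A minimal sketch with an assumed 14-DOP direction set (the paper evaluates several direction sets; this is just one common choice):

```cpp
#include <array>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// Direction set for a 14-DOP: the 3 coordinate axes plus 4 diagonals
// (k/2 = 7 slabs). With the axes alone (k = 6) this degenerates to an
// axis-aligned bounding box; the extra directions tighten the volume.
static const std::array<Vec3, 7> kDirections = {{
    {1, 0, 0}, {0, 1, 0}, {0, 0, 1},
    {1, 1, 1}, {1, -1, 1}, {1, 1, -1}, {1, -1, -1}
}};

struct KDop {
    std::array<float, 7> minProj;
    std::array<float, 7> maxProj;
};

// Build the k-DOP of a vertex set: project every vertex onto each
// direction and keep the min/max. O(k*n), done once per object.
KDop buildKDop(const std::vector<Vec3>& verts)
{
    KDop d;
    d.minProj.fill( std::numeric_limits<float>::max());
    d.maxProj.fill(-std::numeric_limits<float>::max());
    for (const Vec3& v : verts) {
        for (size_t i = 0; i < kDirections.size(); ++i) {
            const Vec3& n = kDirections[i];
            float p = v.x*n.x + v.y*n.y + v.z*n.z;
            if (p < d.minProj[i]) d.minProj[i] = p;
            if (p > d.maxProj[i]) d.maxProj[i] = p;
        }
    }
    return d;
}

// Conservative overlap test: two k-DOPs with the same direction set
// are disjoint if any slab pair is disjoint.
bool overlap(const KDop& a, const KDop& b)
{
    for (size_t i = 0; i < kDirections.size(); ++i)
        if (a.maxProj[i] < b.minProj[i] || b.maxProj[i] < a.minProj[i])
            return false;
    return true;
}
```

    The test costs only k/2 interval comparisons, which is why the tighter fit can pay for itself: fewer false-positive overlaps reach the expensive per-polygon stage.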