6,666 research outputs found

    TetSplat: Real-time Rendering and Volume Clipping of Large Unstructured Tetrahedral Meshes

    We present a novel approach to interactive visualization and exploration of large unstructured tetrahedral meshes. These massive 3D meshes are used in mission-critical CFD and structural mechanics simulations, and typically sample multiple field values on several millions of unstructured grid points. Our method relies on the pre-processing of the tetrahedral mesh to partition it into non-convex boundaries and internal fragments that are subsequently encoded into compressed multi-resolution data representations. These compact hierarchical data structures are then adaptively rendered and probed in real-time on a commodity PC. Our point-based rendering algorithm, which is inspired by QSplat, employs a simple but highly efficient splatting technique that guarantees interactive frame-rates regardless of the size of the input mesh and the available rendering hardware. It furthermore allows for real-time probing of the volumetric data-set through constructive solid geometry operations, as well as interactive editing of color transfer functions for an arbitrary number of field values. Thus, the presented visualization technique allows end-users, for the first time, to interactively render and explore very large unstructured tetrahedral meshes on relatively inexpensive hardware.
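
    The abstract names the key ingredients (a QSplat-inspired hierarchy and a splatting pass with a bounded per-frame budget) but not the implementation. As an illustration only, the minimal C++ sketch below traverses a hypothetical bounding-sphere hierarchy and stops refining once a node's projected footprint falls below a pixel threshold; all names (SplatNode, renderSplats, the pixel budget) are invented for the example and are not taken from the paper.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical node of a QSplat-like bounding-sphere hierarchy.
struct SplatNode {
    float cx, cy, cz;            // sphere centre
    float radius;                // sphere radius
    float value;                 // sampled field value (for colouring)
    std::vector<SplatNode> kids;
};

// Projected size (in pixels) of a sphere at distance d, for a given
// focal length in pixels; a common small-angle approximation.
static float projectedPixels(const SplatNode& n, float camZ, float focalPx) {
    float d = std::fabs(n.cz - camZ);
    return d > 1e-6f ? focalPx * n.radius / d : 1e9f;
}

// Traverse the hierarchy; emit a splat whenever the node is a leaf or its
// footprint is at most 'pixelBudget' pixels, so the work per frame is bounded.
static void renderSplats(const SplatNode& n, float camZ, float focalPx,
                         float pixelBudget, int& emitted) {
    float px = projectedPixels(n, camZ, focalPx);
    if (n.kids.empty() || px <= pixelBudget) {
        ++emitted;               // a real renderer would draw a splat here
        return;
    }
    for (const SplatNode& k : n.kids)
        renderSplats(k, camZ, focalPx, pixelBudget, emitted);
}

int main() {
    // Tiny hand-built hierarchy: one parent with two children.
    SplatNode root{0, 0, 10, 2.0f, 0.5f, {}};
    root.kids.push_back({-1, 0, 10, 1.0f, 0.3f, {}});
    root.kids.push_back({ 1, 0, 10, 1.0f, 0.7f, {}});

    int emitted = 0;
    renderSplats(root, /*camZ=*/0.0f, /*focalPx=*/800.0f,
                 /*pixelBudget=*/100.0f, emitted);
    std::printf("splats emitted: %d\n", emitted);
    return 0;
}
```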

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
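
    The abstract does not spell out the runtime LoD selection criteria it reviews; as a hedged illustration of the general idea only, the sketch below switches a crowd character between polygon, point-based and impostor representations by distance bands. The enum, thresholds and helper names are assumptions, not taken from any surveyed method.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical representation levels of the kind the survey discusses.
enum class CrowdLod { PolygonMesh, PointSample, Impostor };

// Pick a representation from the character's distance to the camera.
// The band limits are arbitrary example values.
static CrowdLod selectLod(float distance) {
    if (distance < 15.0f) return CrowdLod::PolygonMesh; // close: full geometry
    if (distance < 60.0f) return CrowdLod::PointSample; // mid range: point-based
    return CrowdLod::Impostor;                          // far: image-based billboard
}

int main() {
    const float camX = 0.0f, camZ = 0.0f;
    // A few example character positions (x, z) in the crowd.
    const float chars[][2] = {{5, 3}, {30, 20}, {100, 80}};
    for (const auto& c : chars) {
        float d = std::hypot(c[0] - camX, c[1] - camZ);
        std::printf("distance %.1f -> lod %d\n", d, static_cast<int>(selectLod(d)));
    }
    return 0;
}
```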

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, volumetric datasets used by domain users have evolved from univariate to multivariate. The volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all. Finally, real-world multivariate volume datasets are usually also of large size, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
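
    For readers unfamiliar with transfer functions, the sketch below shows the classic 1D case the dissertation builds on: scalar values are mapped to colour and opacity by interpolating user-placed control points. This is generic background, not the GuideME method or the proposed multivariate spaces; the TfPoint structure and evaluate function are illustrative names only.

```cpp
#include <cstdio>
#include <vector>

// One control point of a 1D transfer function: scalar value -> RGBA.
struct TfPoint { float value, r, g, b, a; };

// Linearly interpolate RGBA between the two control points bracketing v.
// Control points are assumed to be sorted by 'value'.
static TfPoint evaluate(const std::vector<TfPoint>& tf, float v) {
    if (v <= tf.front().value) return tf.front();
    if (v >= tf.back().value)  return tf.back();
    for (size_t i = 1; i < tf.size(); ++i) {
        if (v <= tf[i].value) {
            const TfPoint& lo = tf[i - 1];
            const TfPoint& hi = tf[i];
            float t = (v - lo.value) / (hi.value - lo.value);
            return {v,
                    lo.r + t * (hi.r - lo.r),
                    lo.g + t * (hi.g - lo.g),
                    lo.b + t * (hi.b - lo.b),
                    lo.a + t * (hi.a - lo.a)};
        }
    }
    return tf.back(); // unreachable with sorted input
}

int main() {
    // Example: map low scalar values to transparent blue, high to opaque red.
    std::vector<TfPoint> tf = {
        {0.0f, 0.0f, 0.0f, 1.0f, 0.0f},
        {1.0f, 1.0f, 0.0f, 0.0f, 1.0f},
    };
    TfPoint s = evaluate(tf, 0.25f);
    std::printf("rgba = %.2f %.2f %.2f %.2f\n", s.r, s.g, s.b, s.a);
    return 0;
}
```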

    STV-based Video Feature Processing for Action Recognition

    In comparison to still image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end.
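
    The region-intersection idea can be illustrated with a plain voxel-overlap score between two binary spatio-temporal volumes, as sketched below; the paper's coefficient factor-boosted weighting is not reproduced, and the Stv structure and overlapScore function are invented for the example.

```cpp
#include <cstdio>
#include <vector>

// A binary spatio-temporal volume stored as a flat x*y*t occupancy grid.
struct Stv {
    int nx, ny, nt;
    std::vector<unsigned char> voxels; // 1 = occupied, 0 = empty
    unsigned char at(int x, int y, int t) const {
        return voxels[(t * ny + y) * nx + x];
    }
};

// Overlap ratio: |A intersect B| / |A union B| over a common grid extent.
// This is plain intersection-over-union; a boosted variant would weight
// the intersection term, which is omitted here.
static double overlapScore(const Stv& a, const Stv& b) {
    long inter = 0, uni = 0;
    for (int t = 0; t < a.nt; ++t)
        for (int y = 0; y < a.ny; ++y)
            for (int x = 0; x < a.nx; ++x) {
                bool va = a.at(x, y, t), vb = b.at(x, y, t);
                inter += (va && vb);
                uni   += (va || vb);
            }
    return uni ? static_cast<double>(inter) / uni : 0.0;
}

int main() {
    // Two tiny 2x2x2 volumes differing in one voxel.
    Stv a{2, 2, 2, {1,1,0,0, 1,1,0,0}};
    Stv b{2, 2, 2, {1,1,0,0, 1,0,0,0}};
    std::printf("overlap = %.3f\n", overlapScore(a, b)); // expect 3/4 = 0.750
    return 0;
}
```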

    Efficient Real-Time Rendering of Building Information Models

    A Building Information Model (BIM) is a powerful concept, since it allows both 2D drawings and 3D models of buildings or facilities to be extracted from the same source of data. Compared to a general 3D CAD model, a BIM is a different kind of representation, since it defines not only geometrical data but also information regarding spatial relations and semantics. However, because of the large number of individual objects and high geometric complexity, 3D data obtained from a BIM are not easily used for real-time rendering without further processing. In this paper we present a culling system specifically designed for efficient real-time rendering of BIMs. By utilizing the unique properties of a BIM, we can form the required data structures without manual modification or expensive preprocessing of the input data. Using hardware occlusion queries together with additional mechanisms based on specific BIM data, the presented system achieves good culling efficiency for both indoor and outdoor cases.
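
    A self-contained example cannot issue real GPU occlusion queries, so the sketch below only models the CPU-side bookkeeping one might pair with them: objects last reported visible are drawn and re-queried every frame, while hidden objects are re-tested only every few frames, with the hardware query abstracted behind a callback. This is an assumption-laden illustration of occlusion-query scheduling in general, not the paper's BIM-specific system; all names are hypothetical.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Per-object state for occlusion-culled rendering with temporal coherence:
// objects last seen visible are drawn immediately; hidden objects are
// re-queried only every 'requeryInterval' frames.
struct CulledObject {
    int  id;
    bool visible = true;     // assume visible until a query says otherwise
    int  lastQueryFrame = -1000;
};

// 'issueQuery' stands in for a hardware occlusion query (e.g. rendering the
// object's bounding box and reading back the passed-sample count).
static void renderFrame(std::vector<CulledObject>& objects, int frame,
                        int requeryInterval,
                        const std::function<bool(int)>& issueQuery) {
    for (CulledObject& o : objects) {
        if (o.visible) {
            std::printf("frame %d: draw object %d\n", frame, o.id);
            o.visible = issueQuery(o.id);   // query against the drawn geometry
            o.lastQueryFrame = frame;
        } else if (frame - o.lastQueryFrame >= requeryInterval) {
            o.visible = issueQuery(o.id);   // cheap re-test of the bounding box
            o.lastQueryFrame = frame;
        }
        // else: recently confirmed hidden, skip entirely this frame
    }
}

int main() {
    std::vector<CulledObject> scene = {{1}, {2}, {3}};
    // Fake query result: pretend object 2 is occluded, the others visible.
    auto fakeQuery = [](int id) { return id != 2; };
    for (int frame = 0; frame < 4; ++frame)
        renderFrame(scene, frame, /*requeryInterval=*/2, fakeQuery);
    return 0;
}
```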

    Integrating Occlusion Culling and Hardware Instancing for Efficient Real-Time Rendering of Building Information Models

    This paper presents an efficient approach for integrating occlusion culling and hardware instancing. The work is primarily targeted at Building Information Models (BIM), which typically share characteristics addressed by these two acceleration techniques separately – a high level of occlusion and frequent reuse of building components. Together, these two acceleration techniques complement each other and allow large and complex BIMs to be rendered in real-time. Specifically, the proposed method takes advantage of temporal coherence and uses a lightweight data transfer strategy to provide an efficient hardware instancing implementation. Compared to only using occlusion culling, additional speedups of 1.25x-1.7x are achieved for rendering large BIMs obtained from real-world projects. These speedups are measured at viewpoints that represent the worst-case scenarios in terms of rendering performance when only occlusion culling is utilized.
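
    As the abstract describes it, the method feeds only the currently visible, repeated building components to hardware instancing. The sketch below shows one plausible CPU-side step of such a scheme: grouping visible components that share the same geometry into per-type transform lists, i.e. the arrays a renderer would upload for an instanced draw call. The Component structure, the row-major 4x4 layout and the function names are assumptions, not the paper's implementation.

```cpp
#include <array>
#include <cstdio>
#include <unordered_map>
#include <vector>

using Mat4 = std::array<float, 16>;   // row-major 4x4 world transform

// A building component instance: which geometry it reuses plus its placement.
struct Component {
    int  geometryId;   // shared mesh (e.g. a window or door type)
    Mat4 transform;
    bool visible;      // result of the occlusion-culling pass
};

// Group the transforms of all visible components by geometry, producing one
// batch per reused mesh; each batch would become a single instanced draw call.
static std::unordered_map<int, std::vector<Mat4>>
buildInstanceBatches(const std::vector<Component>& components) {
    std::unordered_map<int, std::vector<Mat4>> batches;
    for (const Component& c : components)
        if (c.visible)
            batches[c.geometryId].push_back(c.transform);
    return batches;
}

int main() {
    Mat4 identity{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    std::vector<Component> scene = {
        {7, identity, true},   // two visible windows of the same type
        {7, identity, true},
        {9, identity, false},  // a culled door
    };
    for (const auto& [geo, xforms] : buildInstanceBatches(scene))
        std::printf("geometry %d: %zu instances\n", geo, xforms.size());
    return 0;
}
```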

    Techniques and algorithms for immersive and interactive visualization of large datasets

    Advances in computing power have made it possible for scientists to perform atomistic simulations of material systems that range in size from a few hundred thousand atoms to one billion atoms. An immersive and interactive walkthrough of such datasets is an ideal method for exploring and understanding the complex material processes in these simulations. However, rendering such large datasets at interactive frame rates is a major challenge. A visualization platform is developed that is scalable and allows interactive exploration in an immersive virtual environment. The system uses an octree-based data management system that forms the core of the application. This reduces the amount of data sent to the pipeline without a per-atom analysis. Secondary algorithms and techniques such as modified occlusion culling, multiresolution rendering and distributed computing are employed to further speed up the rendering process. The resulting system is highly scalable and is capable of visualizing large molecular systems at interactive frame rates on a dual-processor SGI Onyx2 with an InfiniteReality2 graphics pipeline.
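
    The octree-based data management is described only at a high level; the sketch below illustrates the general idea of culling and coarsening whole cells rather than testing individual atoms, sending distant cells as coarse proxies and near leaf cells at full detail. The node layout, the distance criterion and all names are assumptions made for the example, not the thesis's code.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical octree node over a block of atoms.
struct OctreeNode {
    float cx, cy, cz, halfSize;     // cubic cell bounds
    int   atomCount;                // atoms contained in this cell
    std::vector<OctreeNode> children;
};

// Decide per cell, not per atom, what reaches the rendering pipeline:
// distant cells are drawn as a single coarse proxy, near leaf cells send
// their atoms, and near interior cells are refined further.
static void selectForRendering(const OctreeNode& n, float camX, float camY,
                               float camZ, float detailDistance,
                               int& atomsSent, int& proxiesSent) {
    float dx = n.cx - camX, dy = n.cy - camY, dz = n.cz - camZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist > detailDistance) {
        ++proxiesSent;                       // coarse multiresolution proxy
    } else if (n.children.empty()) {
        atomsSent += n.atomCount;            // leaf: full-detail atoms
    } else {
        for (const OctreeNode& c : n.children)
            selectForRendering(c, camX, camY, camZ, detailDistance,
                               atomsSent, proxiesSent);
    }
}

int main() {
    // Root cell with one near and one far child.
    OctreeNode root{0, 0, 0, 100, 2000, {}};
    root.children.push_back({0, 0,  20, 50, 1000, {}});  // near the camera
    root.children.push_back({0, 0, 150, 50, 1000, {}});  // far from the camera
    int atoms = 0, proxies = 0;
    selectForRendering(root, 0, 0, 0, /*detailDistance=*/80.0f, atoms, proxies);
    std::printf("atoms sent: %d, proxies sent: %d\n", atoms, proxies);
    return 0;
}
```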