    Fast scalable visualization techniques for interactive billion-particle walkthrough

    This research develops a comprehensive framework for interactive walkthrough of one billion particles in an immersive virtual environment, enabling interrogative visualization of large atomistic simulation data. Combining scientific and engineering approaches, the framework is based on four key techniques: adaptive data compression based on space-filling curves, octree-based visibility and occlusion culling, predictive caching based on machine learning, and scalable data reduction based on parallel and distributed processing. For parallel rendering, the system combines functional, data, and temporal parallelism to improve interactivity. The visualization framework will be applicable not only to materials simulation but also to computational biology, applied mathematics, mechanical engineering, and nanotechnology.
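
    As an illustration of the first of these techniques, the sketch below shows Morton (Z-order) encoding, a standard space-filling-curve construction that interleaves quantized particle coordinates so that spatially nearby particles receive nearby indices. It is a minimal, self-contained example of the general idea, not the paper's actual implementation.

```cpp
#include <cstdint>

// Spread the low 21 bits of v so each bit is separated by two zero bits,
// ready to be interleaved into a 63-bit 3D Morton code.
static uint64_t spreadBits3(uint64_t v) {
    v &= 0x1FFFFFULL;
    v = (v | (v << 32)) & 0x001F00000000FFFFULL;
    v = (v | (v << 16)) & 0x001F0000FF0000FFULL;
    v = (v | (v << 8))  & 0x100F00F00F00F00FULL;
    v = (v | (v << 4))  & 0x10C30C30C30C30C3ULL;
    v = (v | (v << 2))  & 0x1249249249249249ULL;
    return v;
}

// Interleave quantized x/y/z coordinates (21 bits each) into one Z-order key.
// Sorting particles by this key gives strong spatial locality, which is what
// compression and out-of-core layouts along the curve exploit.
uint64_t mortonEncode(uint32_t x, uint32_t y, uint32_t z) {
    return spreadBits3(x) | (spreadBits3(y) << 1) | (spreadBits3(z) << 2);
}
```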

    Time-critical multiresolution rendering of large complex models

    Very large and geometrically complex scenes, exceeding millions of polygons and hundreds of objects, arise naturally in many areas of interactive computer graphics. Time-critical rendering of such scenes requires the ability to trade visual quality for speed. Previous work has shown that this can be done by representing individual scene components as multiresolution triangle meshes and performing, at each frame, a convex constrained optimization to choose the mesh resolutions that maximize image quality while meeting timing constraints. In this paper we demonstrate that the nonlinear optimization problem with linear constraints associated with a large class of quality estimation heuristics is efficiently solved using an active-set strategy. By exploiting the problem structure, Lagrange multiplier estimates and equality-constrained problem solutions are computed in linear time. Results show that our algorithms and data structures provide low memory overhead and smooth level-of-detail control, and guarantee, within acceptable limits, a uniform, bounded frame rate even under widely varying viewing conditions. Implementation details are presented along with the results of tests for memory needs, algorithm timing, and efficacy.
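
    To convey the flavor of this per-frame resolution selection, here is a simplified, discrete stand-in: greedily spend a triangle budget on whichever refinement offers the best quality gain per triangle. The paper itself solves a continuous constrained problem with an active-set method; all structures and names below are illustrative assumptions.

```cpp
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical stand-in for a multiresolution object: triangle cost and image
// quality benefit per resolution level (both increasing with level).
struct Object {
    std::vector<int>    cost;
    std::vector<double> benefit;
    int level = 0;  // currently selected resolution
};

// Greedy discrete analogue of the paper's constrained optimization: start all
// objects at their coarsest level, then repeatedly apply the refinement with
// the highest marginal benefit per triangle until the frame budget is spent.
void selectLevels(std::vector<Object>& objs, int triangleBudget) {
    using Cand = std::pair<double, std::size_t>;  // (gain per triangle, object index)
    std::priority_queue<Cand> pq;
    int used = 0;
    for (std::size_t i = 0; i < objs.size(); ++i) {
        used += objs[i].cost[0];
        if (objs[i].cost.size() > 1)
            pq.push({(objs[i].benefit[1] - objs[i].benefit[0]) /
                     (objs[i].cost[1] - objs[i].cost[0]), i});
    }
    while (!pq.empty()) {
        std::size_t i = pq.top().second;
        pq.pop();
        Object& o = objs[i];
        int next  = o.level + 1;
        int extra = o.cost[next] - o.cost[o.level];
        if (used + extra > triangleBudget) continue;  // refinement unaffordable
        used += extra;
        o.level = next;
        if (next + 1 < static_cast<int>(o.cost.size()))  // queue the next step
            pq.push({(o.benefit[next + 1] - o.benefit[next]) /
                     (o.cost[next + 1] - o.cost[next]), i});
    }
}
```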

    CGAMES'2009


    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generating predictive imagery remains an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
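
    As a hint of how compression makes real-time BTF rendering feasible, the sketch below evaluates a low-rank (PCA/SVD-style) factorization of BTF data, a widely used family of approaches; the data layout and names are assumptions for illustration, not the thesis' exact representation.

```cpp
#include <cstddef>
#include <vector>

// Low-rank BTF approximation: offline, the BTF matrix (texels x combined
// view/light directions) is factored, e.g. by truncated SVD/PCA, into K
// spatial weight maps U and K angular basis functions V. Layout and names
// here are illustrative assumptions, not the thesis' exact data structures.
struct CompressedBTF {
    std::size_t numTexels, numAngles, rank;
    std::vector<float> U;  // numTexels x rank, row-major
    std::vector<float> V;  // rank x numAngles, row-major
};

// Reconstruct one reflectance sample:
//   BTF(texel, angle) ~= sum_k U[texel][k] * V[k][angle]
// This inner product is the per-fragment work a real-time BTF shader does,
// with U and V stored as textures; the rank trades memory and shading cost
// against fidelity.
float evalBTF(const CompressedBTF& btf, std::size_t texel, std::size_t angle) {
    float r = 0.0f;
    for (std::size_t k = 0; k < btf.rank; ++k)
        r += btf.U[texel * btf.rank + k] * btf.V[k * btf.numAngles + angle];
    return r;
}
```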

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than just the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Massive Model Visualization: A Practical Solution

    The ever more complex designs emanating from various companies are leading to a data explosion that far outstrips the growth in computing power. The traditional large-model visualization approaches used for rendering these data sets are quickly becoming insufficient, leading to greater adoption of massive model visualization approaches designed to handle arbitrarily sized data sets. Most new approaches use GPU occlusion queries to limit the data loaded and rendered to only those parts that can potentially contribute to the final image. In doing so, they introduce disocclusion artifacts that often reduce the quality of the resulting visualization as the camera moves through the scene. The present research demonstrates that shader-based depth reprojection and OpenGL atomic writes not only increase the performance of an existing system based on OpenGL occlusion queries, but also reduce perceived disocclusion artifacts.
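
    For reference, the classic hardware occlusion-query pattern that such systems build on looks roughly like this, assuming an OpenGL 3.3+ context; the scene-graph type and drawing helpers are hypothetical placeholders.

```cpp
#include <GL/gl.h>  // assumes a current OpenGL 3.3+ context (GLEW/GLAD in practice)

struct Node;                         // application scene-graph node (placeholder)
void drawBoundingBox(const Node&);   // hypothetical helper: rasterizes the node's AABB
void drawNodeGeometry(const Node&);  // hypothetical helper: draws the real geometry

// Test a cheap proxy (the bounding box) against the current depth buffer and
// only fetch/draw the node's geometry if any proxy fragment passed the test.
bool drawIfVisible(const Node& node) {
    GLuint query = 0;
    glGenQueries(1, &query);

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // the proxy must not shade
    glDepthMask(GL_FALSE);                                // nor write depth
    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    drawBoundingBox(node);
    glEndQuery(GL_ANY_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Reading the result right away stalls the pipeline; real systems defer it
    // a frame (frame-to-frame coherence) or poll GL_QUERY_RESULT_AVAILABLE.
    GLuint anySamplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anySamplesPassed);
    glDeleteQueries(1, &query);

    if (anySamplesPassed) drawNodeGeometry(node);
    return anySamplesPassed != 0;
}
```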

    Massive model visualization: An investigation into spatial partitioning

    The current generation of visualization software is incapable of interactively rendering arbitrarily large models. While many solutions have been proposed for Massive Model Visualization, very few achieve the full capabilities needed for a computer visualization solution. In most cases this is due to overly complex approaches that, while achieving impressive frame rates, make it virtually impossible to implement features like part manipulation. What is needed is a simple approach whose rendering performance is bounded by screen complexity rather than model size, with primitive traceability to the original model to facilitate part manipulation, and the capability to be modified in near real time. This thesis introduces MMDr, a simple system that achieves interactive frame rates on extremely large data sets while retaining support for most, if not all, of the features required for a computer visualization solution.

    Triangle Dropping: An occluded-geometry predictor for energy-efficient mobile GPUs

    This article proposes a novel microarchitectural approach for mobile GPUs aimed at removing occluded geometry from a scene early by leveraging frame-to-frame coherence, thus reducing overall energy consumption. Mobile GPUs commonly implement a Tile-Based Rendering (TBR) architecture with two main phases: the Geometry Pipeline, where all the geometry of a scene is processed, and the Raster Pipeline, where primitives are rendered into a framebuffer. After the Geometry Pipeline, only non-culled primitives inside the camera's frustum are stored in the Parameter Buffer, a data structure kept in DRAM. However, a significant fraction of the non-culled primitives are rendered yet not visible at all, resulting in useless computation: on average, 60% of those primitives are completely occluded in our benchmarks. Although TBR architectures use on-chip caches for the Parameter Buffer, about 46% of DRAM traffic still comes from accesses to that buffer. The proposed Triangle Dropping technique leverages the visibility information computed along the Raster Pipeline to predict primitive visibility in the next frame and to discard early those primitives that will be totally occluded, drastically reducing Parameter Buffer accesses. On average, our approach achieves 14.5% overall energy savings, 28.2% energy-delay product savings, and a speedup of 20.2%. This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant no. 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00 (AEI/FEDER, EU), and the ICREA Academia program. D. Corbalán-Navarro has also been supported by a PhD research fellowship from the University of Murcia's "Plan Propio de Investigación".
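
    A rough software model of such a frame-to-frame visibility predictor is sketched below. It is a hedged illustration of the idea (a per-primitive visibility bit recorded by the raster stage, plus periodic re-tests), not the article's hardware design; RETEST_PERIOD is a hypothetical tuning knob.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Toy model of a frame-to-frame occlusion predictor: the raster stage records,
// per primitive, whether any fragment survived depth testing; the next frame's
// geometry stage then skips primitives predicted occluded, avoiding their
// Parameter Buffer traffic. Periodic forced re-tests heal stale predictions
// when visibility changes.
struct OcclusionPredictor {
    std::vector<std::uint8_t> visibleLastFrame;
    std::uint32_t frame = 0;
    static constexpr std::uint32_t RETEST_PERIOD = 8;

    explicit OcclusionPredictor(std::size_t numPrims)
        : visibleLastFrame(numPrims, 1) {}  // conservatively visible at start

    // Should this primitive go through the full pipeline this frame?
    bool shouldProcess(std::uint32_t primId) const {
        if ((primId + frame) % RETEST_PERIOD == 0) return true;  // rotating re-test
        return visibleLastFrame[primId] != 0;
    }

    // Feed back the raster pipeline's per-primitive visibility results.
    void endFrame(std::vector<std::uint8_t> rasterVisibility) {
        visibleLastFrame = std::move(rasterVisibility);
        ++frame;
    }
};
```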

    Conservative Visibility Preprocessing Using Extended Projections

    Visualization of very complex environments can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient occlusion culling with respect to all viewpoints within a cell and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and thus accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation of our preprocessing algorithm, demonstrating significant speedup with respect to view-frustum culling alone, without the computational overhead of online occlusion culling.
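
    The core of the extended projection operators can be illustrated with a 1D toy: project an interval onto a plane from each corner of a view cell, intersect the projections for occluders (they must block from everywhere in the cell), and union them for occludees (they are hidden only if hidden from everywhere). This simplified sketch assumes the corner viewpoints bound the cell's projections and that the occluder lies between the cell and the occludee; it is an illustration of the definitions, not the paper's hardware-accelerated implementation.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Interval {
    double lo, hi;
    bool empty() const { return lo > hi; }
};

struct Point { double x, z; };  // a viewpoint inside the view cell

// Perspective-project the interval [lo,hi] lying at depth z onto the plane
// z = planeZ, as seen from eye (assumes eye.z < z < planeZ).
Interval project(Point eye, double lo, double hi, double z, double planeZ) {
    double t = (planeZ - eye.z) / (z - eye.z);
    return { eye.x + t * (lo - eye.x), eye.x + t * (hi - eye.x) };
}

// Extended projection of an occluder: INTERSECTION over all cell viewpoints,
// i.e. the region it blocks from everywhere in the cell.
Interval extendedOccluder(const std::vector<Point>& cellCorners,
                          double lo, double hi, double z, double planeZ) {
    Interval r{ -std::numeric_limits<double>::infinity(),
                 std::numeric_limits<double>::infinity() };
    for (Point eye : cellCorners) {
        Interval p = project(eye, lo, hi, z, planeZ);
        r.lo = std::max(r.lo, p.lo);
        r.hi = std::min(r.hi, p.hi);
    }
    return r;
}

// Extended projection of an occludee: UNION over all cell viewpoints,
// i.e. everywhere it might be seen from some point of the cell.
Interval extendedOccludee(const std::vector<Point>& cellCorners,
                          double lo, double hi, double z, double planeZ) {
    Interval r{  std::numeric_limits<double>::infinity(),
                -std::numeric_limits<double>::infinity() };
    for (Point eye : cellCorners) {
        Interval p = project(eye, lo, hi, z, planeZ);
        r.lo = std::min(r.lo, p.lo);
        r.hi = std::max(r.hi, p.hi);
    }
    return r;
}

// The occludee is culled for the whole cell if its extended projection falls
// inside the occluder's (depth ordering occluder-before-occludee is assumed).
bool occludedForCell(const Interval& occluder, const Interval& occludee) {
    return !occluder.empty() &&
           occludee.lo >= occluder.lo && occludee.hi <= occluder.hi;
}
```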

    Spatially-encoded far-field representations for interactive walkthroughs

